
February 8, 2026 • 65 mins

The news this week highlights shifts in Linux from multiple angles. What's evolving, why it matters, and that moment where the future actually works.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:11):
Hello, friends, and welcome back to your weekly Linux talk show. My name is Chris.
My name is Wes.
And my name is Brent.
Hello, gentlemen. Coming up on the show, today we're digging into the Linux
news that's being shaped by delays and some interesting technical shifts that
might actually matter more than they look at first. I'll tell you about that.
Then there's a moment where the future actually showed up in Linux and everything

(00:33):
worked. We'll share that story.
And then we'll round it out with some great boosts, some picks,
and a lot more. So before we go any further, let's say time-appropriate greetings
to producer Jeff. Hey, PJ.
Hello.
It's the big game day, as they say. So we have a few up there quietly listening.
Hello, hello, hello, mumble room.
And PJ is the sole member brave enough to set the nachos aside long enough to say hello on the show.

(01:00):
So why don't you make it a Tuesday on a Sunday next week and join us in our
mumble room. And let's have a nice, vital, big mumble room. You know what I mean?
I mean, if you do, you get a whole extra show.
Yeah. Round of applause to people. Dijil just showed up. Round of applause to you guys.
And of course, a big good morning to our friends over at Defined Networking.

(01:20):
Go to defined.net slash unplugged. This is where you want to go to get 100 machines
absolutely free, no credit card required on the decentralized managed VPN.
They manage it for you. It runs Nebula VPN.
Unlike traditional VPNs, Nebula's decentralized design keeps your network resilient.
You can manage a home lab with it. I have a Nebula network that's just two nodes.

(01:41):
And you can have a Nebula network that has thousands of nodes,
entire global infrastructures across data centers, carrier-grade NAT, whatever it might be.
It's extremely resilient. Saved my butt the other day. I was able to get it
on the wife's machine and save a problem before she even knew it was going on.
It's so nice. You could go from
this teeny tiny lean infrastructure where there's no big tech company.

(02:03):
You don't need any sign on from Google or whoever to use your mesh net.
You can do these tiny setups, or you can go to these massive, Slack-sized scales, right?
And it's incredible. And if you want to try it out, you can start with Managed
Nebula. You can use 100 devices for free.
You can really get a sense of it. It's a great product. It's also a lot leaner on the system.
Shows up in multiple ways, on the CPU and on the network.

(02:26):
It's really great for that. And it's got best-in-class encryption,
and it is fantastic because
you control the keys, you control the lighthouses, so the redundancy,
the discoverability, all of it is under your control, or you let them run it
over at define.net slash unplugged.
Then if you wanted to move on, you could. You could self-host it too.
It's a really, really great product, because Nebula is open source,

(02:48):
something that we've been following for years.
Absolutely. I think maybe we started watching in late 2017, early 2018.
And so we knew there was something there. Now to see it really take off,
it's really impressive.
I just noticed on February 6th, we got an updated Android app.
So, you know, if you are using it, go check that out.
Check it out. Go over to defined.net slash unplugged, support the show,

(03:11):
and check out Nebula. It is fantastic.
Just around the corner, 25 days away, Planet Nix and Scale 23X.
We're working with our buddies over at Flox, who are focused on making reproducible
dev environments actually usable.
They're sending us once again to Planet Nix for the second year.
They're throwing a hell of an event.

(03:33):
It's looking good. Year two is looking really, really good.
And Brent, you have 19 days until you're going to be absolutely slammed to get down the road.
It's about a 46-hour drive for you, buddy, which is six days of hardcore driving.
How are you feeling about that?
I'm glad you've been doing the travel math on this one for me.
I appreciate it. Yeah. That sounds daunting.

(03:54):
Sure.
I think is the main emotion that comes across. And also, holy,
I probably should leave tomorrow, right? That's what I should do?
Yeah, well, if you want to have a nice drive. Here's another way to put it in
perspective. There will be three more unplugs until we are in Pasadena.
That's frightening. I mean, wonderful. It's coming soon.
Just keep that one in your head. I think it's easier to work with.

(04:15):
Mm-hmm. Yep. Three more LUPs. So there you go.
Check out planetnix.com for the details and then go get registered at scale.
That gets you to both events.
We have a link in the show notes for scale at socallinuxexpo.org.
And you can use our promo code, U-N-P-L-G, to get 40% off your registration. It's no joke.
And I've updated the meetup page a little bit as well. We, I think,

(04:37):
are locking in the yard house because the other two locations are no longer in business.
I was informed by a local listener, which I really appreciated. But it's
great, we already have a good showing, and if you are planning to be there at
our meetup, please go sign up so we can let the venue know.
Oh, great, already 25 potential attendees. Join the crowd.

(04:58):
Meetup.com slash jupiterbroadcasting for that, and we'd love it if you could
be there. Even if you're not going to one of the events, show up and say hi. We
like that. So there you go, that's all the housekeeping I have for you.
So we wanted to get everybody on the same page with a couple of stories that
have gone down, and the first one you may have already heard about,

(05:21):
It's not too surprising. We just wanted you to be aware. Valve has updated their
plans for their recently announced upcoming hardware lineup.
All three products announced last November are now expected to ship in the first
half of the year instead of in early 2026. So it's a bit of a delay here.
In a Steam community post, Valve explained the lack of firm release dates.
Basically, ongoing RAM and SSD shortages combined with the rising prices,

(05:46):
making it hard to lock in final pricing and also launch timelines.
That's not too surprising, is it?
No. Okay, but what are we talking about concretely? Well, it's the Steam machine,
the Steam Frame VR headset, and that new Steam controller.
In their statement, they did say this in a very Valve way, I thought.
They have more, quote, work to do to land on concrete pricing and launch dates.

(06:10):
It's like they don't even know.
No, and are still kind of figuring that out internally. I mean,
but they did then secondly emphasize, especially given how quickly hardware
market conditions are changing right now.
I think some of us maybe were holding on to this hope that they had maybe pre-bought stock.
Where things were already rolling and done, and you were just going to ship them. Yeah.
And then we recently saw these rumors floated that there was a possible bare

(06:34):
bones steam machine with no RAM or storage that they might ship.
And they didn't seem to really give much life to that rumor in this press release
and in the questions that they took. Do you think that would be a product?
I mean, it might be for our crowd. I don't know. It doesn't seem like it competes
as much in the console market.
For sure.
Yeah.

(06:54):
I think in normal market conditions, a bare-bones steam machine would be the one I would want.
But I can't buy RAM or storage any cheaper than Valve can.
Exactly.
So yeah, it's like, if I had the parts, yeah, then I would like a bare-bones machine.
I'm curious, if you're listening to this, could you boost and let me
know if you would buy a bare-bones Steam machine if they offered it?
And the advantage would be that they might be able to ship that sooner.

(07:16):
So if you had the money to spend, you could buy your own storage and your own RAM,
and you could get your hands on this a couple of months before others.
One of the questions I have is like, how long can they wait for prices to stabilize
or to secure hardware before the product just gets older and older and less worth releasing?
Why is it two grand for a three-year-old product?

(07:38):
You're right. Yeah, that's tricky. Maybe they know something we don't know.
Maybe they know some other vendors about to come online and start manufacturing RAM.
I don't know. But that's a good point. If they wait too long,
they're not going to be super competitive machines.
Yeah.
Huh. Yeah. Yeah, I'm not sure. I guess what we do know is that it's going to
be later than they originally suggested.

(07:59):
But they're still, for now, promising first half. So what, by June, July?
That's what I would take it to mean.
Assuming they don't change that again.
Right. Do we think it even ships in this year?
Jeff, how are you feeling about this? Were you going to buy one of these?
Yeah, I'm a little sad. I really, really, really want a Frame.
Yes, yes, me too. That's the thing for me, is the Frame. I'm surprised the controller's

(08:25):
delayed. That's interesting. Maybe, maybe there's memory in the controller.
All that RAM in the controller.
You feel for Valve, because when they announced this, maybe the writing
was on the wall at that point, but it wasn't obvious where this was all going
price-wise. Certainly wasn't
where we are now.
And now here we are, and it's like, you feel for them, don't you?
I think if they can get through the COVID stuff with the Steam Deck, I'm pretty

(08:47):
sure they're going to get through this too. They'll find some way. It won't be
as much as we hope. You know, I don't think they're going to do any better than
they did with the Steam Deck.
That's true, though. That was challenging. Got it out. Yeah.
And they are very smart. You know, they have a lot of smart people working there.
That's a good point. All right, PJ, you're making me feel better. I like that. All right,
here's another story that may suggest an interesting shift. VirtualBox

(09:10):
is finally learning to ride on
top of KVM. Some code changes are landing
in VirtualBox that could have big implications
long term for how people use it. It's just being tested now. They're beginning to support
a native KVM virtualization backend for the VirtualBox application. That's crazy.
I know. It's a long-standing ask from the Linux users out there who just want,

(09:33):
no, I don't want a little kernel module. I totally get that.
Very early, very opt-in, hard to get. We'll get more into that.
But this does seem to be a trend that we just keep seeing, is that hypervisors
over time are just adopting Linux's native virtualization stack and just saying,
ah, screw it, you can just use that.
I mean, VMware did a similar thing. I just think that's fascinating.

(09:57):
I mean, this is one of the main reasons I moved away from VirtualBox.
It was the first virtualization software that I used way back when I was stuck
on, let's just say, other operating systems.
Um, but then once I discovered, like, why there's
all these kernel modules and stuff, that was the reason
for me to move away from it. So I would assume, for other Linux

(10:18):
users, this is about reducing kernel friction. So less
reliance on those proprietary kernel drivers
that you have to load and that sometimes break. Better compatibility
with hardened kernels, Secure Boot, and distro
updates.
Right, right. That is a good point. Would be a great point. And I think,
from a privacy and freedom angle, basically, KVM is part of the kernel, so it's

(10:40):
audited, it's upstream, transparent. And running VirtualBox on KVM shifts that
trust towards the kernel rather than vendor-specific modules.
What, Brent, you don't trust Oracle?
Well, let's just say I have less reasons to trust Oracle than I do the kernel.
Fair.
Yeah, good point.
So Oracle's pretty much positioning this as a fallback,

(11:03):
so not the preferred path for VirtualBox.
So the messaging still centers Oracle's hypervisor as the, quote,
better choice, especially for legacy workloads.
Yeah, isn't that interesting? Like, I mean, I guess I get it.
They're very proud of what they've done, and they've specialized it over the
years to address their users.
They're not just chucking the old one on the ground, throwing it behind the bag.

(11:23):
But it's not often where somebody submits an upstream patch so that way their
software can take advantage of something in the kernel and then includes like
a five bullet point. But this is why ours is better, right?
Did anything in there stand out to you that was reasonable? I mean, there must be some.
Yeah, a lot of it is like legacy and exotic guests, right?
So if you think about KVM, when it came of age, which was after VirtualBox,

(11:46):
and it's been very Linux native, and it's been used a lot by hyperscalers to
run cloud businesses, running a lot of Linux guests,
whereas VirtualBox can run a whole bunch of stuff.
It's got accurate A20 gate emulation, which is important for some DOS stuff.
It's got advanced instruction emulation, ring zero device emulation tricks,
aggressive VM exit optimizations.

(12:08):
For modern guests, you really don't notice a ton of difference for most situations.
But if you do have some particular legacy workloads, you might find some areas
where the old driver would be better.
I still think even partial KVM support is, you know, a win for us.
It is, as you said, hard to get.
It's in the latest VirtualBox Git and Test Builds, Linux only for now.

(12:29):
So you can't, you know, obviously.
So you have to go build it yourself.
Okay. All right.
Or I'd be comfortable with getting one of those test builds from somewhere else.
You can opt into it explicitly if you want, or as Brent was saying,
I think what is probably an upgrade for folks and end users who maybe
don't know what a kernel module is, is this could be an easy fallback,

(12:50):
right? So it could try to run with the virtual box stuff.
It sees KVM's already loaded, or there's a kernel conflict, so just go use KVM.
Maybe you won't have all the features. Maybe it won't be quite the same, but it'll work.
That's going to be the main use case initially for this.
Probably would. I think it's going to convince them to do it, right?
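The fallback flow described here, check whether KVM already owns the host's virtualization and use it if so, amounts to probing the host for KVM. A rough sketch of such a probe, purely illustrative Python and not VirtualBox's actual detection logic:

```python
import os
from pathlib import Path

def kvm_probe() -> dict:
    """Illustrative check for whether KVM is usable on a Linux host."""
    dev = Path("/dev/kvm")
    mods_text = ""
    proc_modules = Path("/proc/modules")
    if proc_modules.exists():
        mods_text = proc_modules.read_text()
    # kvm_intel / kvm_amd are the vendor modules; "kvm" is the core module.
    loaded = [name for name in ("kvm_intel", "kvm_amd", "kvm")
              if any(line.split(" ", 1)[0] == name
                     for line in mods_text.splitlines())]
    return {
        "device_node": dev.exists(),                         # /dev/kvm present
        "usable": dev.exists() and os.access(dev, os.W_OK),  # openable by this user
        "loaded_modules": loaded,
    }
```

A VirtualBox-style fallback would then prefer its own module when it can load, and drop to KVM when `/dev/kvm` is present and writable.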
Yeah, yeah. I wonder if long-term, if there isn't some potential for VirtualBox

(13:10):
to essentially become one of the recommended user space VM managers for KVM.
That was my first thought is, oh, it would be a nice way to manage a bunch of
KVM systems, especially if I could remote connect from my desktop,
if I could have the VirtualBox VM management UI on my desktop and then I connect
to my KVM server and I could manage all my virtual machines at KVM with the VirtualBox UI, I mean,

(13:34):
I think I would at least give that a try.
It has been, right? Like it is in some of the best ways of open source,
even with its own licensing complications in Oracle and all the rest,
like it's been around for a long time.
It was early at having a good, consistent cross-platform experience.
It's just this like very available if you need virtualization software.
And that has a lot of utility, especially now if it can kind of adapt.

(13:55):
I do also think you kind of hit on the Linux side trend of folks using more
of this existing in-kernel infrastructure.
And I think that's a trend that is beyond Linux, right?
Like you've seen Apple offer a lot more robust virtualization primitives in their system.
Right, they provide the plumbing and then you write to the API, essentially.
And same on the Microsoft stack, right? You got Hyper-V side,

(14:16):
you've got the WSL stuff.
There's just a lot more primitives that you can do. There are still products,
especially Microsoft, but like there are still products built on it,
but you get a lot more of that base infrastructure that you can plumb yourself.
And it makes sense, right? They're in control of the kernel and all of that.
We've made and heard some wild theories in the past of Microsoft,
maybe Windows is just going to take this approach and become Linux under the

(14:37):
hood with a nice, fancy interface.
So my big question becomes, do all roads lead to Linux?
Yeah, it seems like it.
That's what it seems like here.
Here we are, right? I've been talking about Linux for 20 years.
And can you believe it's still relevant after 20 years? And it's more relevant than ever?
It's more relevant.
Yeah. There's not a lot of technology like that.

(15:00):
And it is this trend of all things just kind of moving to the kernel.
And they stop their own way of doing things.
Now, VirtualBox is going to keep going for a while with their own modules, to be clear.
Did you see this week, unrelated, but that someone proposed like a machine
learning framework offload for the kernel? So it really is everything in the kernel.
Whatever you want. Right? Use eBPF. I don't care. Well, speaking of things in

(15:24):
the kernel and sometimes outside the kernel, let's talk about BcacheFS,
big update in just the last couple of days over there.
And I have one particular feature that I am very excited about.
I'll hold that because I'm going to let you take the stage for a moment on this BcacheFS update.
Yeah, so this is on the heels of 1.36, which we had back at the end of January.

(15:48):
That had a lot of internal stuff.
This time we get a little bit of user-facing stuff with the new bcachefs fs timestats
command. It's an interactive TUI for monitoring various file system internals,
slow paths, and device performance, duration, frequency tracking for various events.
Wow.
Helpful for diagnosing performance issues.
Brent, did you catch the part in there that I'm excited about? Did you catch that?

(16:11):
Uh-huh. I think the light bulb turned on for me, too. A little TUI to do this stuff.
This is great.
It's good for the people like you and I.
A file system repair tool with a TUI? Are you kidding me? To look at some of
the internals and device performance and what's going on?
That is so up my alley.
There's also some other improvements around the output, like improved bcachefs

(16:33):
reconcile status output.
I saw on Reddit just some users in Kent chatting about, in general,
some of the interface and the outputs, because there's kind of an array of,
there's some up-to-date stuff, there's some older stuff, so there may be one
area of porcelain that gets some more attention.
But something we know is getting more attention is that in the release announcement,
Kent said 1.36.1 is out, which we were just talking about.

(16:57):
The next release will be Erasure Coding.
Yeah, so tell me about this.
So that would be the parity RAID kind of style stuff, right?
RAID 5/6, as in the Btrfs world, coming for BcacheFS.
That's massive.
Yeah.
And a lot of the base stuff has been there, but all the user side stuff,

(17:19):
and especially like you can make these file systems if you do experimental things
or enable flags, but there hasn't been a lot of tooling support for actually
doing anything if a disk dies.
So you could test it and use it, but you don't want to run a prod system on it.
And the last little BcacheFS bit for today is, it's not all good things.
It is still an experimental developing file system.

(17:39):
A little bit of a PSA here.
Indeed. 19 hours ago on r/bcachefs: early
reconcile had a serious bug in the data update path.
If an extent lives on devices that are all being evacuated, then while being evacuated
they're considered to have durability zero, and the old code for reconciling
the existing extent with what the data update path wrote would drop those replicas

(18:00):
too soon. Okay, so if you're on 1.33 through 1.35...
You need to upgrade.
You need to upgrade.
Good news is the new code is much more rigorous with how it decides when to
drop replicas. And so far, only like a handful of people have been hit by it.
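A toy model can make that failure mode concrete. None of this is the real bcachefs code; the function names, device labels, and logic are invented purely to illustrate the durability-zero accounting described above:

```python
def durability(replicas, evacuating):
    """Toy rule from the bug report: replicas on evacuating devices count as 0."""
    return sum(0 if dev in evacuating else 1 for dev in replicas)

def buggy_reconcile(existing, new_write, evacuating):
    """Old behavior (simplified): drop the old replicas once they count for
    nothing, even if the new write hasn't produced a durable copy yet."""
    if durability(existing, evacuating) == 0:
        return new_write  # old replicas dropped too soon
    return existing + new_write

def fixed_reconcile(existing, new_write, evacuating):
    """Safer behavior: only drop the old replicas once the new copies
    actually contribute durability."""
    if durability(new_write, evacuating) > 0:
        return new_write
    return existing + new_write
```

With every device holding the extent under evacuation, the buggy path returns an empty replica list while the new write is still in flight; the fixed path keeps the old copies until the new ones are durable.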
As usual, I have seen Kent doing a lot of on-the-ground support,

(18:20):
both in the IRC and on the subreddit.
So if you do have file system issues for your sake and for everyone else's sake
as this thing gets developed, don't be afraid to reach out.
Yeah, he's very engaged. I just realized, I told the story to the members recently,
but I'm just around the two-year mark of BcacheFS on one of my absolute most

(18:40):
important production systems, runs 24-7,
and it's been using BcacheFS for its critical data drive.
And I do that not necessarily saying that you should and not necessarily recommending
it, but so that way I can be kind of on the front line and report to you how it's going.
And so if we're five years down the road and I'm talking about BcacheFS,
you know, I've been using it for five, six years at that point.

(19:02):
Right. So I think there's some credibility in actually deploying it and testing
it with data that is literally putting my money where my mouth is.
But I don't know if everybody should do that yet, but I'm very impressed because
two years ago it was in a much different state than it is now.
And now it doesn't feel risky at all.
Yeah, definitely. And a lot of improvements, right? And we're still getting
some really nice ones, like being able to do more upgrades in the background, or

(19:22):
without, you know, having to have the drive offline to do them.
It's now a very robust file system in a lot of ways, which is great.
I mean, I, again, don't do as I do, but I'm at the point where if I can,
I'm going to make it my default file system on every root install for workstations
and laptops going forward. You already have been doing that.
You've been running it for a long time on that laptop.
Yeah, I think it's summer 2024.

(19:46):
Wow. And you've just been going through the kernel upgrades kind of regularly and just...
And then I've also got it running on my home router box at the moment.
I don't have any RAID systems.
So I do want to set that. I mean, I've dabbled with some, but no permanent ones.
I don't know. The router one tickles me the most, right, Brent?
It's like, that's the one.
Because, like, why?
Yeah. You can do ext4 on a router.

(20:07):
You can do anything on a router.
That was actually one of the first ones I built, I think. I think I was just...
Oh, yeah, why not?
I was, like, redoing the system.
Yeah.
It was there. And there was... That did bite me one time, I will admit,
with what wasn't a bug. It was just, it had been an old enough file system
that I had to go through some of those on-disk upgrades.

(20:28):
So I did have to do one update where my network was offline for like 20 minutes while it did that.
I want to ask right now, if you're listening, what is your router file system of choice?
If you're building a router or a system like that, boost in or send us an email.
What is your router file system of choice? Let us know.
Yeah, I have ridden the file system wave for a long time. So I think that's kind
of one of the reasons why I'm a little bit more comfortable using BcacheFS.

(20:50):
I switched to SUSE back in the day because they supported ReiserFS.
And I needed extended attributes for Samba shares. And I needed support for
a lot of little files because I was doing images of checks, JPEGs.
And so I went with ReiserFS way back in the day.
And then when Btrfs came around, I adopted it and got so burned early on.

(21:11):
Some of the early Linux Unplugged episodes are me ranting about losing my machine to Btrfs.
Yep.
And now I have it everywhere. And I'm starting to do that again with BcacheFS.
And then I have a lot of my scary raids around, which you guys remember my scary raid, right?
Oh, yeah. How could you forget?
Which is my, it's a raid zero of just a bunch of spinning rust.
Well, you forget everything if a disk fails.

(21:31):
That's true. And I have a scary raid here in the studio on the studio machine.
And I have a scary raid on my workstation upstairs.
And I name it slash scary raid. So I always remind myself this could blow up at any time.
Anything you put here is ephemeral. And so that's my little mental trick is scary raid.
And that right now on all my systems, all my scary raids are XFS.

(21:57):
Wow.
Yeah. All my Scary Raids are XFS. And I don't know, I just, it's legacy because
they've been around for years because I reload the boxes and then I just remount the Scary Raid.
You know, because I've always got the Scary Raid. It's just sitting right there.
So are you going to upgrade that to a BcacheFS Scary Raid?
Well, this is what I'm thinking, is the next generation.
The next generation of a Scary Raid would be like a bunch of used SSDs that

(22:21):
I just slam into one big volume and use BcacheFS for that.
I should say that was the other factor on the router box is it's pretty much
just a NixOS config in Git, so there wasn't a lot of data on there.
Yeah, yeah, yeah, yeah, yeah. I think if I were to, you know,
because all my scary raids are identical disks.
And if I were to do a mix of drive size and just sort of mush all that together,
I think I would call that messy raid. That would be a messy raid.

(22:48):
Well, I just want to take a moment and thank our members. We don't have an advertiser
for this spot yet, although we do have the world's best Linux audience,
the world's largest Linux audience.
We've been around for over 12 years doing this show. I've been doing podcasting for 20 years.
So if you would like to reach one of the best audiences with somebody that knows
how to do podcast ads, send me an email at chris at jupiterbroadcasting.com.
In the meantime, thank you, members, jupiterbroadcasting.com slash membership.

(23:12):
I don't know. I know you can get the LINUX Unplugged membership, or jupiter.party for the whole network.
I don't need to go through the whole thing. You guys know it,
so I'll just say thank you very much.
There's not a lot of big commercial demand for a Linux podcast that is talking
about file system nuances like this.
I don't know if that surprises you, but it turns out people that are selling

(23:34):
ads on podcasts and YouTube, they don't find file system discussion particularly
interesting, and it doesn't really reach their radar.
So we do have to lean more on listener support than a typical podcast that you might listen to.
Because what we do is we use that listener support to give us the runway to
actually nerd out on these topics and go for the stuff that is never going to

(23:56):
get us any play on YouTube.
We're never going to get a clip on TikTok. We're never going to show up in some
sort of advertiser's keyword search dashboard thing, right? It's never going to happen for us.
And that's okay. We're fine with that because we have listener support.
So there's a couple of ways you can do it.
We have the show membership. Those are our core contributors at linuxunplugged.com slash membership.
We have the jupiter.party membership that gives you access to all the shows

(24:19):
and their special features. Every show has special features.
And then you can boost us. And that is not only a signal, but it also supports
that particular production and gives us an idea of like that topic worked or didn't work.
And so there's several ways where you can participate. and we really do appreciate
it because we couldn't make this kind of content where we talk about these nerdy

(24:39):
esoteric things that actually do matter.
It's not our fault that the advertisers don't realize this stuff matters.
Right? It's not our fault.
This stuff does matter and it matters to you just like it matters to us.
So thank you very much for the support and you can find membership links in the show notes as well.

(25:00):
A little while ago, we took a moment on the show to plant our flag and say all
this assisted AI stuff was coming for Linux administration.
And the last few weeks, with all the OpenClaw excitement, might be proving that out.
But there's also been a lot of pushback to all these big tech commercial models.
And for some of us, it's made us more excited about local open source models.

(25:23):
Over on It's FOSS, Bhwan Mishra wrote about ditching Claude Code and using the
open-source Qwen model for real sysadmin work.
I liked hearing about this, guys.
Yeah, me too.
Yeah, so the author dropped Claude Code, went with local Qwen Code because it
behaves more like a proper Linux tool.
He says he could install it locally. It was open-source, and it shows every

(25:45):
command it's going to execute before it actually runs it.
And you can describe a task in plain English, and the Qwen model,
which is a free model that you can run on your own machine, will just turn that
into reviewable shell commands that you can then authorize, and then it'll execute them.
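That propose-review-execute loop is straightforward to sketch. Here, `propose_commands` is just a stand-in for whatever local model (Qwen Code or anything else) translates the plain-English task; nothing below is the actual Qwen Code implementation:

```python
import subprocess

def propose_commands(task):
    """Stand-in for a local model turning plain English into shell commands."""
    # A real implementation would query the model here.
    return [f"echo 'would handle task: {task}'"]

def review_and_run(task, approve=input):
    """Show every proposed command and execute it only after explicit approval."""
    results = []
    for cmd in propose_commands(task):
        print(f"Proposed: {cmd}")
        if approve("Run this? [y/N] ").strip().lower() == "y":
            out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            results.append(out.stdout.strip())
        else:
            results.append(None)  # declined, command skipped
    return results
```

The important property is the one the article highlights: nothing touches the shell until a human has seen the exact command.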
Yeah, it's pretty neat. So Qwen comes from Alibaba, so it is a Chinese model,

(26:07):
but it is also open-weight.
Some folks have found some kinds of censorship or other sort of things you might
expect from some of these Chinese models.
But it has also been optimized for agent stuff, and so has Qwen Code.
They have various specific models, especially like the code-focused models itself
sort of aimed at exactly these tasks.

(26:28):
And in fact, even the app itself is a fork of Gemini CLI. So it's under the
Apache 2 license. They look pretty much exactly the same.
Oh, interesting. Okay.
But the Qwen folks wanted to add in, like, Gemini CLI does a lot of stuff because
it's attached to this very broad Gemini product and stuff, and you can only use it with that.
But they added support for using local stuff. They also optimized it more for

(26:48):
doing these coding and agent tasks.
Very nice. Okay.
That's, I didn't realize that you could, I guess I never looked at Gemini CLI.
I didn't realize they had made that open source.
Yeah, isn't that nice? I mean, the back end isn't, but at least the front end is.
Yeah, yeah.
And, you know, it is like, maybe you don't do everything with one of these particular
local models, depending on what kind of system and GPU and what you can actually run.

(27:14):
But as these things get better and better, like writing shell scripts,
or especially, you know, even silly stuff like,
oh, dang, I downloaded a bunch of files with spaces in there,
and they got weird names, and I just want it cleaned up.
That happens to me all the time.
Even stuff you can run locally can definitely handle spitting out a Bash script to do that.
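As a point of comparison for what a model might generate for that exact task, here is a hand-written version of the rename-the-files-with-spaces cleanup (plain Python rather than Bash, and purely illustrative):

```python
from pathlib import Path

def clean_names(directory):
    """Rename files in `directory`, replacing spaces in names with underscores."""
    renamed = []
    for path in sorted(Path(directory).iterdir()):
        if path.is_file() and " " in path.name:
            target = path.with_name(path.name.replace(" ", "_"))
            if not target.exists():  # don't clobber an existing file
                path.rename(target)
                renamed.append((path.name, target.name))
    return renamed
```

Trivial, but exactly the kind of chore where reviewing a model's proposed script against something this simple keeps you honest.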
Yeah, it is really exciting to see where these local models are going.

(27:34):
I feel like I'm in this tortured position where I recognize the utility now
in a way that I kind of missed the boat on when hardware was reasonably priced.
Where's that open source time machine?
Now I'm so GPU starved, and I've been looking at Venice AI, which is a pretty
good privacy-focused model system that you can use with agents and whatnot.

(27:56):
But the tokens are more expensive because of the privacy layer.
And this gets really expensive very fast when you want to do more extensive things.
So it is with a particular eye we're watching these local models and seeing where they go.
Just a little background on something we tried last week. This was a very interesting
experiment, and we wanted to share it with you. We have wanted to stand up a

(28:16):
production Mattermost server for years.
We've spun up a few. We've toyed with them kind of as just experiments.
We've even ran a few project-specific Mattermost when we're working on something with people outside.
JB will spin up a Mattermost real quick, but they're really just throwaway Mattermosts,
and we don't really spend a lot of time with them.
We wanted something we could use long-term. So last week before the show,

(28:41):
I had been working with one of these OpenClaw agents, just going over best
system practices, focus on this, consider this, security.
And the stuff we normally want or don't want to see in a system.
Yeah, exactly. And so also I've been working to delegate some subtasks to agents and stuff like that.
So naturally, I thought, let's do something really stupid and give this thing

(29:05):
API access to Cloudflare.
I figured, why not? I had an old domain that I'd sat around for years.
I don't use. I could set up limited-scope API credentials.
I was about to say that we've already gotten to the point of maturity where that
side of the industry is really useful if you're trying to use agents.
Like, don't give it everything.
No. So I gave it this old domain I had sitting around that I bought on a lark.

(29:30):
Probably had too many beers and decided, I'm going to buy this domain.
And so before the show last week, I told the agent, go SSH into this VPS because
the host that it's running on has an SSH key.
SSH into this VPS. Go look around.
Learn the system. The Docker Compose files are over here. And then commit to
your memory what you've learned. That was before the show, just Sunday morning last week.

(29:53):
A little scouting mission. Reconnaissance.
After the show, when we were all done with our post-show duties, I...
I didn't even know you did that, by the way. You're sneaky.
Yeah. After the show, I said to Wes, how fast do you think an agent could deploy
a secure Mattermost server?
And so I brought up Telegram and I sent my agent the API key for its Cloudflare stuff.

(30:15):
And I gave it a clear goal. I said, go deploy a Mattermost server on the VPS
I told you about earlier,
and follow best security practices, and use Cloudflare to handle things like
DNS caching and other security best practices.
And that was the entire Telegram message. That's all I said. Five minutes later,

(30:36):
The agent had deployed Mattermost via Docker Compose, integrated a Cloudflare
tunnel sidecar, set up DNS with optimal caching based on upstream Cloudflare
docs for best security practices when deploying a Mattermost server,
which is really neat because it set up the sidecar.
Wes, talk about the sidecar tunnel. We talked about these sidecars before. You're a sidecar guy.

(30:57):
Oh, yeah. I mean, it's a handy pattern, right? It's just, because you're using
network namespaces with the containers, you can put two containers in the same
network namespace. And so they share a network.
And then you can have, say, Tailscale, Netbird, Nebula, any of these mesh VPNs.
Cloudflare Tunnel.
Or you can have a Cloudflare Tunnel. And so Cloudflare Tunnel's agent handles

(31:18):
binding a VPN, a tunneling connection to them.
And then because they're also controlling all of the front side,
right, the load balancing, the DNS, the caching, then they can automatically
just route it through that tunnel back to you.
And so the Docker and namespace layer make sure that you don't have to mess
with any of the host stuff.
But as long as the actual Cloudflare tunnel agent itself can get outside via

(31:40):
the Internet, then everyone else can connect.
And you don't have to have an open port. You don't have to worry about that kind of thing.
That's the key thing. It doesn't even talk to the host network.
It's not even talking to the VPS network at all. It's just talking over this tunnel.
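As a sketch of that sidecar pattern, a Compose file can pin the app into the tunnel container's network namespace. The image names are the public Docker Hub ones, the token variable is a placeholder, and cloudflared reads a scoped `TUNNEL_TOKEN` from its environment:

```yaml
services:
  tunnel:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      - TUNNEL_TOKEN=${CF_TUNNEL_TOKEN}   # scoped tunnel token, not an account-wide key
    restart: unless-stopped

  mattermost:
    image: mattermost/mattermost-team-edition:latest
    network_mode: "service:tunnel"        # share the tunnel's network namespace
    restart: unless-stopped
```

Because Mattermost shares the tunnel's namespace, nothing publishes a host port; the only connection is cloudflared's outbound one.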
So then, after it had stood up this entire thing,
I reviewed the Docker Compose. It was clean. It was lean.
I looked at the Mattermost config. It was pretty basic. So I said to the bot,

(32:03):
I said, I'm going to go create you an account.
I'll go create you a bot token for the agent. Then what I want you to do is go finish the work.
Go set up the rooms, set up the permissions, dial in a full configuration,
give the rooms the descriptions, set descriptions for all the individual accounts, everything.
And then two minutes later, it was back, and we had everything there.

(32:24):
A complete lounge, rooms for the bots to talk, room for the people to talk, permission structure.
The entire process was probably eight to ten minutes.
Also, having the bots fiddle with the actual admin settings is
so much nicer than doing it yourself.
Especially for, like, I'm not a Mattermost UI expert, right?
But, like, we needed to rename some accounts, or, like, I wanted to make sure

(32:45):
that when Brent got in, he was going to be an admin on the instance.
And bots handled that just fine.
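The kind of call the agent was making looks roughly like this against Mattermost's v4 REST API; the URL, token, and team ID here are placeholders, not the real setup:

```shell
# Sketch of driving Mattermost's v4 REST API from a script.
MM_URL="https://chat.example.com/api/v4"
MM_TOKEN="bot-token-from-the-system-console"

# Builds the curl call that creates a public channel.
# The leading echo makes it a dry run; drop it to actually fire the request.
mm_create_channel() {
  echo curl -s -X POST "$MM_URL/channels" \
    -H "Authorization: Bearer $MM_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"team_id\":\"$1\",\"name\":\"$2\",\"display_name\":\"$3\",\"type\":\"O\"}"
}

mm_create_channel TEAM_ID lounge "Lounge"
```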
Yeah. And so the point that I'm trying to make here is this is not going away
for Linux system administration.
And it didn't deploy some vibe-coded, insecure piece-of-crap setup either.
It actually did a fantastic expert job. We went through and reviewed it.

(33:06):
It's good. It's solid. It's a good setup.
And you have to kind of let that soak in for a moment because we have heard
these big tech CEOs promise all this ridiculous crap for the last three years.
But this is really working.
I did three things. In the morning, I told the bot to go check out the VPS.
In the afternoon, I gave it an API key, and I told it to go deploy Mattermost.

(33:30):
And then after that, I said, go set up all the rooms, the permissions,
the descriptions, give them emojis, all that crap.
And it did all of that. And it's all because these things all have API endpoints.
The documentation on how to set them up is extremely well documented and clear.
Yep. And in the case of open source stuff, you can also go have your bot spelunk

(33:53):
the source code and figure out exactly what it needs.
Yeah. And the API, to then figure out how to connect to it, so it can participate in the chat.
And Brent, we pulled you into some of that shenanigans. And you could see the
bots are actually coordinating with each other back and forth in our Mattermost chat.
Can I just say how much I wasn't expecting to be pulled into that kind of environment?

(34:13):
But it was impressive. Like, I don't know, this is a topic we've been talking
about for years to replace our sort of legacy internal communications tool.
And you guys just kind of pulled that out while you were doing the post-show
work? I kind of couldn't believe it.
But we were actually just playing with agents and not doing the post-show work,
but we were doing other stuff and not supervising this bot.

(34:35):
Like what a force multiplier if you look at it that way. And to stand up something
we've been wanting for quite a long time and having,
like you said, some confidence that it's been done with best practices better
than we could have in our little busy schedule.
Because this would have taken us longer to do it in a way that we were satisfied
that it was good enough to be a long-term tool for us.

(34:59):
And secure, and safe enough, and resilient.
That's just awesome.
Yeah, I agree. But I think there's another part I was thinking about,
which is like, this varies per person and group and all that,
but we weren't going to learn that much from standing up a Mattermost server.
Like, it's sidecar patterns, we've done it before. And there's this whole mass,
at least for me personally, of work where I want to do it,

(35:23):
but it's either not quite important enough, it's not going to rise to the top 10,
and I'm not especially motivated to do it, especially because I know it's kind
of just grinding it out. It's not a new thing.
I'm not learning some new application or some new way to do things.
And I'm totally happy to delegate that. Another thing I was doing just this
week was I had an old code base.
I ran a linter across it to get a bunch of lint errors. I know how to fix that

(35:46):
stuff. I don't need to do that again.
But a bot can fix it, and then I can just review the diff. And if it didn't
break anything, the tests all passed, like, totally fine.
Yeah. Yeah, and for me, I feel like it's in the experimental,
okay-I'm-learning-about-it stage. But it goes into the this-is-something-that-
I-will-use-for-years stage when I can run a model that's competent locally,
you know, and privately.
Yeah, where you don't have to pay an arm and a leg for the credits,

(36:08):
where you don't have to worry that every little detail of your life is being sucked up.
Yep, yep. But there is something here. It is not going away.
even if, like, OpenAI were to crash.
Like this is not going away. And what we're going to see, and we're already
seeing it, is services are going to have to develop and deliver agent-specific endpoints.

(36:31):
You hit on the importance of APIs, and I think that is...
going to be helpful for us open source people. Because
if you have an open API system that's documented, that's easy to
learn, these agents can figure it out. Like, when we
were playing around last week, I had my agent figure out how to use this WebSocket
IRC thing. But then, even just after we got the Mattermost stuff, I wanted

(36:52):
my agent, as a test, to send your agent a direct message. But the security on
OpenClaw would not let that happen, so my agent figured out how to use the Mattermost API directly.
Now, in that case, a little concerning, but in general, like in the old world, right?
Like the proprietary provider would provide the bindings.
So you had to rely on whatever set of bindings were going to be there,

(37:15):
but you don't have to rely on the default set.
If you can dynamically add the capability of new bindings yourself.
Yeah.
So, here's the thing: it doesn't matter if they only support Slack, we can add Mattermost if we need that.
Yes. This is the mindset shift that people need to get their heads around, and it's going to
impact open source projects, and they're not going to take it gracefully.

(37:35):
We're already seeing it happen. I'll give you an example.
These agents are going to become an extension of the user: whatever somebody can do online,
an agent will be tasked to do.
And that is from everything from hiring people to mow lawns,
to topping off its own API credits, to going through GitHub and looking for issues.
And we are going to see a lot of transitions between crap code and all that.

(37:58):
But let's focus on something that the open source projects need to think about.
And this is something that Debian's struggling with right now:
Debian's had a problem with their CI system getting overloaded with what
they say are LLM scrapers,
essentially going through their infrastructure via a web browser and kind of
just going through page by page and generating so much load by doing this that

(38:23):
it's making the service unavailable for their existing developers.
Yeah, it's not just hitting, say, like the actual output text file from the build run.
They're like walking through the whole interface to go browse through it.
And then as a result, I guess there's not that much caching and just,
you know, it's a volunteer project. It's open source infrastructure. It's Debian.
So then it's going and pulling a bunch of results for builds from, like, years ago.

(38:44):
So now the system's churning on that instead of focusing on, you know,
the actual builds for the next release.
And one of the things that's challenging about this is it's difficult to distinguish
an LLM scraper, which might be training (questionable),
from a bot that is just scraping this stuff, or an agent that somebody has tasked
on their behalf to help them with their development in Debian.

(39:05):
You might want to go pull in those logs, say, so that you have your own personal
dashboard of the upstream builds to keep track of issues.
Exactly. And so maybe you've tasked your agent to go do that.
And what's going to have to happen here, what Debian has done for now is they've
just essentially taken this out of the public.
And you have to have special access and special whitelists and all of that to
be able to get access to their CI system now.

(39:26):
But the solution is not going to be to block these agents that are operating
on behalf of other developers.
The solution is going to be to develop API and data pipelines and then for these
bots, agents, and LLMs to respect those and use those data pipelines and APIs
instead of just crawling the website.
Yeah, and there's certainly some operator responsibility here,

(39:47):
right, to respect those things, to respect stuff like robots.txt and other signals.
Like, you know, there is an element of you put it on the web and there is some,
you know, depending on your law and jurisdiction, but there's some right to
just go do a curl request and get the results.
But you also need to respect that that costs people money and can interfere
with other people's right to do the same access.

(40:08):
I don't know if people remember, maybe I'm old enough, but there was a time
when web search engines first came out and they were crawling the web.
This was before, like, the Internet
Archive and the Google cache stuff. And even though they weren't permanently caching,
there were lawsuits over the fact that the web indexers had a copy of the information

(40:28):
in RAM temporarily while they indexed the website.
Often copyrighted things. Yeah.
Yeah. So it takes a little bit to figure this out.
And I don't think projects like Debian are going to want to completely sit this
out, because there is a use case for these agents. And if they can be more intelligent
about how to manage a Debian system, that's good for Debian as a project and

(40:50):
it's good for Debian as adoption in the enterprise.
I do think, or suppose, we might see a rise in more structured outputs, outputs
designed specifically for LLMs, alongside the human-centric interface.
Just give them their own thing to ingest that is low-resource.
Don't have them mucking up the human side, yeah.
Honestly, it's less efficient for both sides.

(41:10):
Right, because they're spending a bunch of tokens to run that web browser to
browse the website like a human.
To parse out the HTML to go figure out where the links are.
And then they're probably just producing a JSON file on the back end anyway.
That's all they wanted.
So everybody wins, but we're not there yet. So what the Debian project has had
to do is go kind of private.
But I don't think that solves the problem long-term. But we really had a moment

(41:32):
with our Mattermost server.
We had a, you know, let's call it 10 minutes.
In 10 minutes, we had a fully working Mattermost server, like an expert had
deployed it, with clean configs, tight security, great performance,
because I sort of glossed over this.
But like you mentioned, it set up all of the caching and best practices with Cloudflare.
So this thing is running on a super old low-end VPS that is already doing other

(41:56):
stuff. And it is a faster chat experience than, say, Slack.
It was a really nice, responsive, performant system. And using this Cloudflare
tunnel made it really easy for security purposes. I didn't have to worry about
having a VPS address and all that kind of stuff.
Also makes it portable. You can move it around. It doesn't matter where it lives.
Especially the way this tunnel works. It's portable per machine, too.

(42:17):
So you could actually lift the entire Docker Compose, drop it on another machine,
and start it. And the tunnel would reconnect.
That kind of stuff is really impressive. And on a low-end VPS it's even better,
because you're getting more bang for your buck.
And you, you know, you don't have to use Cloudflare. You can tunnel
it over your existing mesh network.
You can do an Nginx proxy.
And the bots can help.
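If you went the Nginx route instead, a minimal reverse proxy for Mattermost is something like this sketch. The hostname is a placeholder, Mattermost listens on 8065 by default, and the WebSocket upgrade headers matter for live chat:

```nginx
server {
    listen 443 ssl;
    server_name chat.example.com;        # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:8065;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # WebSocket upgrade for live chat
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```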
Oh yeah, for sure. This is just because I was experimenting with using the API

(42:37):
to just do a complete, totally end-to-end solution. There's a lot of ways you
could solve it with networking. And, uh, you know, maybe it was two dollars and
fifty cents in tokens for the entire thing in ten minutes. And now we have a server
that we could use in production for years.
So there is something there. And when you get that kind of utility and that
kind of optimization, you know enterprises are going to be drawn to that.
And you know that's the kind of thing that Red Hat and SUSE are going to be,

(43:00):
and Canonical, are going to be leaning into. It isn't going away.
And it's important, I think, that the open source side stay engaged and pushing
on the values that we bring to technology generally.
And I think that's the most interesting and exciting thing about the last three
weeks is that it has been driven by open source.
We've been talking about AI, well, we haven't, but people have been talking

(43:22):
about AI for years now, sort of ad nauseam.
It's just, ugh. And this is the first time where we've actually been interested
because open source is really biting in and making a difference.
And you can see it, like, just if you go look on Hacker News or other spots,
like, OpenClaw came out, and now there's a bunch of different versions.
There's smaller versions, there's Rust versions.
And especially because the implementations are open source, you don't have to
figure out how to give the bot memory.

(43:43):
I mean, you might need to customize it or tweak it or change it,
but you can go follow the patterns that are emerging in the upstream open source.
So what Wes is saying there is one of the things that's different this time
around with these new agents is the memory and the soul and the tools and all
the things that it memorizes.
So, for example, last night I messaged my bot. I didn't say anything else other

(44:04):
than, can you go get the weather report from Home Assistant?
That's all I said. And it knows, well, OK, Home Assistant lives on this host. This is the API.
This is my API key. And it goes and gets all of that, and
it just comes back with a weather report from Home Assistant.
That's another one of the big changes here: the agent is the interface.
Right. And that memory and all of that is yours. It's on your file system. And

(44:28):
these other bots that are being built, these other agent frameworks like you
mentioned, can use this. So it is portable, because it's just text, and other agents
can ingest this and learn from it. So when you are building this,
you're taking something that is vendor agnostic, model agnostic,
and gateway implementation agnostic.
And you can change it. They can change it.

(44:50):
So that's a big difference. It's a big shift. And we're done talking about it,
but we wanted you to understand it because, man, does it matter.
Well, no spot, no ad right here. But thank you, members, and thank you,
boosters. We really do appreciate you for making this show possible.
You're doing the heavy lifting, that's for sure.

(45:12):
We got a few little fantastic pieces of feedback this week. Joe sent in this
awesome case for your ThinkCentre.
It's a ThinkBox V2 custom 4-bay serial ATA or SAS hard drive NAS that you can print yourself.
Yes, you heard me. It's a case for your system that already has a case.

(45:33):
It's basically a custom 4-bay NAS that the ThinkCentre small form factor PCs just slide into.
So, using an LSI HBA card
and the powerful brains of our community's much-loved
M720q or M920q by
Lenovo, it leverages many improvements over alternative

(45:54):
custom NAS builds by basically providing
a stable, direct disk pass-through with proper air handling and reliable drive identification.
The drive bays of the ThinkBox also support both Serial ATA and SAS drives
on a 12-gigabit-per-second backplane, and everything just slides right into this

(46:15):
thing.
This specifically is exactly what I've been thinking about for the last two weeks.
And I have to say, Joe, how did you get into my brain? Because I've been deeply
investigating the M720Qs and all of the possibilities that these little machines
can do for us, including a NAS.

(46:36):
And I was like, ah, geez, I really would just want to throw one of these into
a case that can support a couple hard drives. And here you come along with version two of this project.
Doesn't it? It looks OEM, right? It looks mint.
This is such a great find. Thank you, Joe. He says that he found this and thought
we would be interested, and boy, was he right.

(46:57):
Upgrade your pizza box to a cube.
We will link this in the show notes. I know what I want for Christmas.
The creator says Lenovo's use of the Coffee Lake generation of CPUs in the M720q
and M920q line gives access to native hardware transcoding through Intel Quick Sync.
Yeah, it just makes for such a nice home media server, but it is a little bit

(47:18):
tight on the storage. So that is a great find. Thanks for sending that in.
Our buddy Michael Dominick from the Coder Radio podcast has a great giveaway
going on right now for students. He's calling all students.
He has an Earth Day open source challenge. If you build something that helps
the planet, you could win a big prize.

(47:39):
Some System 76 hardware.
Oh, hey, we know that. We know that stuff.
Yeah, so if you've got a kid that's K-12 or even college students that are listening
to the podcast, the deadline is Earth Day.
I'll put a link to Mr. Dominick's post about it.
He's done this. His company, MadBotter Inc., has done this for a while now,
and they often give away some really nice System76 hardware for this.

(48:01):
So check that out. We'll have a link in the show notes.
Code for Climate 2026, the MadBotter Earth Day Open Source Challenge.
And the deadline is Earth Day.
You've got to code something up that helps the Earth. Go check it out.
Ooh, looks like we got a little feedback from our buddy, Olympia Mike.
Hey, Olympia Mike!
Guys, wow. I just wanted to say thanks for sharing my mini computer giveaway.

(48:25):
I received hundreds of emails from this amazing community with positive messages
and excitement for some free hardware.
I had all 35 units claimed in a couple of days and have mailed them all out as fast as I could.
I'm now on a first name basis with the post office employees.
I bet.
I bet.
I also wanted to say thanks to the many recipients that donated above the shipping

(48:47):
cost, allowing the Computer Upcycle project to have some extra funds for SSDs
and power cords to fix more computers.
Best audience ever.
This community is literally the best, and I'll be sure to do this again if,
when, I have more good home server hardware.
Thanks again, love community. I love you all and cannot wait to see you all
at LinuxFest Northwest.
Oh man, Mike, that's great.

(49:08):
Amen.
Thank you, Mike. Thank you for the update. Really, I love that.
I just love the growth of that too.
It's so great to see and appreciate the updates. You know what I mean? It's very good.
Well, the dude is definitely abiding this week, and he is coming in with our
baller boost, 77,777 sats.

(49:38):
He says, thanks for the wake-up call. I deployed Prometheus and Grafana,
and now I'm hooking up everything: my TrueNAS, my Proxmox, my UniFi.
That's right. And Home Assistant. I have no idea what I'll use all that data for.
The next step is to set up alerts, I guess. Great episode as always, and happy
belated birthday. Oh, thank you, The Dude. Yeah, uh, I have been finding it very useful

(50:00):
just to monitor disk space and the CPU load on my Home Assistant. Those are like
the main things. I have, I have basic sorts
of metrics and monitoring stuff.
Stuff where I'm like, is this box struggling, or is it okay?
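For that kind of is-this-box-struggling check, a Prometheus alerting rule is a small config fragment like this sketch; it assumes node_exporter metrics, and the threshold and mountpoint are just examples:

```yaml
groups:
  - name: homelab
    rules:
      - alert: DiskAlmostFull
        # fires when the root filesystem has under 10% free for 15 minutes
        expr: node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"} < 0.10
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Root filesystem under 10% free on {{ $labels.instance }}"
```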
Well, Anonymous comes in with 2,021 sats, but no message, just the value. We also
get a boost from Outdoor Geek: 5,000 sats.

(50:24):
Ooh, Scott, he's hot to trot.
OpenWrt as access point tip: by default, if you connect another router to a
LAN port of an OpenWrt Wi-Fi device, the OpenWrt device will operate as an AP.
However, since someone could fiddle with it and forget, or not know to use a LAN port
versus the WAN port, it's better to configure it as an AP for permanent installs.

(50:46):
Good to know. Outdoor geek coming in. Good tip.
Sounds like maybe you've seen that happen one or two times, huh?
Brent, did you do this? Are you okay?
I didn't even know this was possible. I like this little default behavior.
Although, you know, sometimes these kind of behind-the-scenes behaviors that
are trying to be helpful can be unhelpful if you don't know they're happening.
This didn't happen to me because I didn't know this was such a useful feature.

(51:09):
So maybe I'll try that next time.
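For reference, the permanent AP setup the tip points at looks roughly like this in OpenWrt's UCI config files; the addresses are examples for a 192.168.1.0/24 network:

```
# /etc/config/network (fragment): give the AP a static address on the main LAN
config interface 'lan'
        option proto 'static'
        option ipaddr '192.168.1.2'
        option gateway '192.168.1.1'
        option dns '192.168.1.1'

# /etc/config/dhcp (fragment): stop serving DHCP so the main router stays authoritative
config dhcp 'lan'
        option ignore '1'
```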
Now you know.
Well, Zach Attack came in with 4,500 Satoshis.
It's been a long time. I never stopped listening, but just haven't found funds to send over.
I'm curious as to how your AI adventures go. So I look forward to the continued reporting on that.

(51:31):
I've been back on the distro-hopping train with one laptop running Nixbook.
Ooh. And the other running Linux Mint, my main machine staying with Bluefin for now.
That's an interesting fleet you have there. I like that. Good, good testing around.
You know, so talking about the agent stuff just a bit more, just briefly: it ain't all great.
Because, you know.

(51:52):
Why does your wallet look so thin?
There's that. I'm basically an alpha tester, right? Because I installed it when it was called Clawdbot.
Then it became Moltbot. Then it became OpenClaw. And then they've had three releases of OpenClaw.
And the first two releases of OpenClaw completely blew out my sub agents,
blew out my Mattermost config on the second install.

(52:13):
I mean, just, like, the stereotypical thing where you're talking to the
agent like, I've got it figured out now, and then it kills its own gateway,
and then I'm in there repairing the gateway.
Not just once on that one.
No, I was getting pretty spicy about it. So it hasn't all been great.
I've definitely learned what I do and don't like about the OpenClaw infrastructure,
but, uh, ultimately,

(52:36):
you know, the reality is just that when you are learning
something, when it's really new, you learn a bunch, you
have to throw it out, you get burned, you learn a bunch, and then you throw
it out, and you learn again, and you learn again.
And being an early adopter has a bit of work to
it, but you then at least know the language,
you know the technology, you understand the direction, you know what's hype versus

(52:57):
what's actually just practically good. And it, it actually is valuable to experiment
with this stuff early on, even if it is somewhat personally expensive in time
and money. But I kind of consider it like this: this is my ongoing education.
And I don't go to college anymore, but I do invest in my ongoing education in

(53:17):
some of these ways within reasonable means, right?
I can't go crazy with it, and I sometimes just cut myself off.
And another reason, we need more local solutions.
We do. We do. And maybe somebody will even create a better agent framework one
day. Thank you, Zach Attack.
Gene Bean's back with 2020 Sats.
This is happy birthday to JB. Thank you, Gene. Always good to hear from you.

(53:40):
Hope you're doing good out there. It'd be wonderful to see you, Gene, in March.
I hope so.
I hope so. I hope so.
Well, Red 5D comes in with a row of ducks.
You mentioned the idea of having an agent read the show RSS feed for summaries
and stuff, but with the size of the RSS data, especially ours,
that would be a lot of tokens to process.

(54:01):
So I'd actually recommend using MCP tools to process and search the data first.
Here's an MCP server that I wrote to handle retrieving episode and transcript
data from podcasting 2.0 compatible RSS feeds.
I've really only tested it with JB feeds so far, though.
Well, that's a big one. The Linux Unplugged one's a little ridiculous, I will admit.
This is awesome. It's MIT licensed. It's written in Python over on GitHub.

(54:26):
We will definitely have this in the show notes.
Yeah, it lets your agent list shows, search episodes, get episodes, and get the transcript.
So my agent provides me a morning report and it surfaces certain boosts and
it put red 5Ds right to the top, right to the top.
And it said it had a specific call out.
Hey, he created this and this looks really useful. It saved tokens.

(54:49):
You should let me have it.
Yeah, that's exactly what Laura said. He's like, I think this would be really
useful. Let's set this up.
So it's funny, because I have it monitoring the boosts just so I can see what people are saying.
And, you know, it's good for signal and stuff like that, and a really good
sense of direction. But this is the first time where it started surfacing
interesting bits like this, and Red's came right to the top. So appreciate that

(55:11):
very, very much. Thank you, sir.
Well, Turd Ferguson boosts in with 13,333 sats.
So let's say you guys were right about agents managing Linux systems in the future.
Makes you wonder how the big cloud providers would take advantage of a technology like that.

(55:32):
Yeah, you do wonder, you know, what changes about fleet management and just
wrangling APIs. And there's already so much YAML to manage in like declarative systems.
Do they sit on top of like an Ansible and Kubernetes and then go out and deploy things?
Or do we develop other new agent first things or like whole systems to sort
of enforce more security layers or more review layers or adversarial layers

(55:58):
on top of that to make sure like that your change sets are really heavily scrutinized?
Do you think we would see cloud providers that lean into no AI agentic,
no agent stuff? Do you think that could be a market?
I doubt it. They want to take advantage of all of this.
They do, because it just sells more of their stuff. So they're going to lean in, you're right.

(56:21):
Well, I opt out of that. That's a good point.
Okay, all right. Well, there goes that dream. All right. Well,
that's an interesting question, Turd. Thank you.
Tomato comes in with a row of ducks, 2,222 sats.
I love the discussion on the bots and especially the talk with Abe.
Yeah, that was a lot of fun. Abe's been updating us, too, in the Matrix.
There's more going on over there in Abeverse.

(56:45):
Planet of the Abe.
Mm-hmm. Mm-hmm. I don't want to spoil it, but I'm on, I think,
the fifth book of the Bobiverse, or the fourth book.
Oh, fun.
And just a really, like a real, oh, crap moment just went down with AI.
Fascinating timing on that. Thank you, everybody who boosted the show.
Those of you who streamed them sats, we had 28 of you stream them,
and you stacked collectively 38,385 sats amongst y'all.

(57:09):
Very good job. It's a nice little showing right there. We really do appreciate that.
When you combine that with our boosters who sent a message, including even those
below the 2,000-sat cutoff, beloved as they are,
which we do for timing. But we read them all. We love them all.
So our grand total, when you combine it all together, for this episode was 149,531 sats.

(57:35):
Now, I will say we have some big stuff coming up. We have our trip to scale
and Planet Nix. We have LinuxFest Northwest.
There's always a ton of expenses around that.
And we're always doing it on a really super lean budget. So if you want to boost
the dip while the sats are cheap and support our upcoming events,
this could be a great time to get a message on the show.
Even if you don't have anything that profound to say, if you just want to send

(57:56):
the value, we'll stack that and use that towards our upcoming trips.
And we really do appreciate it very much.
Fountain FM makes it really easy to get started. But there are a bunch of ways,
including self-hosted ways, using things like Alby and Podverse and whatnot.
That's all open source, top to bottom, from the software to the payment infrastructure.
Thank everybody who supports us, including our members.

(58:19):
All right. So let's go through these picks because I found one and Brent found
one. And I think Brent's is going to be really handy for anybody that isn't using a stock ROM.
I sent this app immediately to Jeff because it fits his personality perfectly.
This app is called Plexus, and I just found it browsing the F-Droid repository,

(58:39):
as you should do from time to time.
I do.
I like that.
It's calming and fun, which is also why I have like 400 apps on my phone.
Yeah. You need to also, I think you need a rule. One in, one out, Chris.
That's a good idea. That's a good idea.
Plexus provides insights into
app compatibility with Google Play services. So it's a crowdsourced,

(59:05):
I guess, definition of which apps work really well without Google Play services,
or with the microG service, and which apps may or may not encounter issues
if you have those installed or not.
So if you're using a custom ROM like LineageOS or maybe something like Calyx

(59:25):
OS, you can see how compatible your current apps are if you are migrating from one system to another.
Can I ask you something?
Oh, yeah, sure.
Did you give it a go on your Pixel?
Well, I was really hoping Jeff would do that for me.
You didn't give it a go.
No, I didn't give it a go.
You're a paranoid guy when it comes to this kind of stuff. Not to use the word,

(59:47):
but you were kind of paranoid.
I have to report that using GrapheneOS has considerably changed my level
of anxiety about this particular problem,
because Graphene has sandboxed Google Play, and that has softened me.
And I don't know if that's good.
Right.

(01:00:07):
But I'm going to blame you, Chris, because you brought me on this bandwagon,
and it's made me way more willing to install apps I would have never done before.
So maybe the community could tell me whether that was a good idea or if I should
go back to being a little bit more paranoid.
I have felt pretty safe about the Graphene, you know, sandboxed Google stuff
as well, which is probably why I have 300 apps on my phone.

(01:00:30):
Is it going down? How many of them would work without it?
You said 400 last time.
I was exaggerating. It is, I think it's closer to... no, it is closer to 300.
You're updating? That must be terrible.
Oh no, it's the optimizing that's terrible. The updating's not so bad, because
that just happens in the background. Okay, so you guys know me, I love Markdown.
If I could think in Markdown, I would. And I often will write myself a little

(01:00:54):
to-do for the day just in a text document, and I'll just format it in Markdown.
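As a tiny illustration, the kind of file being described here is just a Markdown task list; the contents below are invented for the example:

```markdown
# Tuesday (on a Sunday)

- [x] Prep show notes
- [ ] Record the episode
- [ ] Publish chapters
```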
And I came across an app. I think it's German developers. Anybody have a guess
on how to pronounce this one? Maybe Brent, you have a guess.
I'm going to go with Reinschrift. Reinschrift To-Do.
Reinschrift To-Do. And it is a Rust to-do app. That's right. That's right.

(01:01:15):
That is a front end to very, very simple Markdown to-do management.
So if you like to manage your to-dos with Markdown, this is a front end
to it that is designed to connect to a WebDAV instance, particularly Nextcloud.
So you could have ongoing sync
of to-do notes formatted in Markdown that just save to any WebDAV share.

(01:01:37):
Oh, that's nice.
Yeah. Obviously, the idea is to lean into Nextcloud, but any WebDAV share would
work. You can make it versionable via Git.
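A minimal sketch of that Git angle: because the back end is just a plain Markdown file, ordinary Git commands are enough to version it. The file names and contents here are made up for illustration; the app itself syncs via WebDAV.

```shell
# Version a plain Markdown to-do file with Git.
# (Hypothetical file names; the app itself syncs via WebDAV/Nextcloud.)
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"   # local identity so commit works in any environment
git config user.name "Demo"

cat > todo.md <<'EOF'
# Today
- [x] Prep show notes
- [ ] Record the episode
EOF

git add todo.md
git commit -q -m "Add today's to-dos"
git log --oneline
```

Since the whole state is one text file, the history stays readable and the data stays portable if you stop using the app.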
And if you're so inclined, there's an optional download of a local open source
Whisper audio model, so you can dictate your tasks to the application.
And it will write them in Markdown for you.

(01:01:58):
Nice.
And it's lean, it's mean, it could fit on your machine no problem at all. And because
the back end is all text, should you decide to stop using it,
Bob's your uncle: you've got your files right there.
Looks like it's a CC BY-SA 4.0 license, with Rust, Python, JavaScript,
HTML, all in there. There is a Dockerfile.

(01:02:18):
And no Nix yet, but it's Cargo, so that should be easy.
It is a little bit of all of the above. And you don't often see an app these
days with the Creative Commons license, but it works.
So we'll chuck it in the show notes for you.
Whereas Plexus is GPL-3.
Yes. Plexus is a good one. Nice find, Brent.
Okay, I have to say I felt bad about not trying Plexus. So in the time that

(01:02:41):
you gave us a second pick, I tried it. And it's pretty fabulous, I've got to say.
You can just see the entire database of the apps that have been tracked or looked at.
Or you can just filter by installed apps. So Chris, you can look at your 300
apps and have a little homework when you're trying to fall asleep.
It is fabulous. And they rate, kind of like rating video games by how they work on Wine.

(01:03:03):
It has like silver ratings, gold ratings, that kind of thing for various apps,
depending on if you're using microG, or you're without Google Play services.
Oh, look, Audiobookshelf is gold. That's nice.
Mm-hmm.
Okay.
Alby Go? Not so great.
Yeah, well, that's fine by me. In a way, it's a badge of honor, I say.
The less compatible I am with Google Play, the better. It's a badge of honor.

(01:03:25):
So if you'd like to get links to our picks or anything else we've talked about,
linuxunplugged.com slash 653 is where you go for that.
In fact, we have a whole bunch of episodes over there. You could say a whole back catalog, perhaps.
In fact, maybe there's even hidden metadata that they can't see on the website,
but it lurks in the RSS feed.
Well, how else would that dope MCP server work if we didn't have fancy podcasting

(01:03:48):
2.0 namespace tags for chapters and transcripts?
That's right. So the chapters are available as high-resolution data. Let's call it that.
And the JSON.
Yep, as well as the transcript, also available for you, all in the RSS feed.
And often a video version of the show is tucked in that RSS feed as well if you go through there.
And if you've got a podcasting 2.0 app, it just exposes all of that for you.
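For the curious, those Podcasting 2.0 namespace tags look roughly like this inside an episode's RSS item; the URLs and values below are placeholders, not the show's real feed:

```xml
<!-- Illustrative Podcasting 2.0 tags inside an RSS <item>; URLs are placeholders -->
<item>
  <title>Example Episode</title>
  <!-- chapters as a JSON file (the "high-resolution data") -->
  <podcast:chapters url="https://example.com/653/chapters.json"
                    type="application/json+chapters"/>
  <!-- machine-readable transcript -->
  <podcast:transcript url="https://example.com/653/transcript.srt"
                      type="application/srt"/>
  <!-- the video version, tucked in as an alternate enclosure -->
  <podcast:alternateEnclosure type="video/mp4" length="123456789">
    <podcast:source uri="https://example.com/653/episode.mp4"/>
  </podcast:alternateEnclosure>
</item>
```

A Podcasting 2.0 app reads these tags straight out of the feed, which is how it can surface chapters, transcripts, and the video version automatically.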

(01:04:11):
Just whisper alternate enclosure to yourself at night.
And of course, we also stream live.
Yeah, join us on a Sunday, 10 a.m. Pacific, 1 p.m. Eastern.
Make it a Tuesday on a Sunday. Hang out with our Mumble crew,
our Matrix room. Give it that live vibe.
jblive.tv, jupiterbroadcasting.com slash calendar for your local time.

(01:04:36):
And remember, if you want more show, that's how you do it. You feel like you
need more, there's a bootleg, which is clocking in at a whole lot of extra show right now.
Oh, my gosh. That's a lot of show.
All right. Links to everything we talked about at linuxunplugged.com.
Matrix room, all of those details are there. It's a website.
Thanks so much for joining us on this week's episode of Your Unplugged Program.

(01:04:56):
And we'll see you back here next Tuesday.
As in Sunday.