
November 2, 2025 • 61 mins

After all the AI hype is over, one change for Linux will be sticking around; we put it to the test.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:11):
Hello, friends, and welcome back to your weekly Linux talk show. My name is Chris.
My name is Wes.
And my name is Brent.
Hello, gentlemen. Coming up on the show today, you know, after all the AI hype
is finally over, I think there will be one real game changer that stands and
sticks around, and it impacts Linux.
We're going to tell you what it is, and we're actually going to test it out

(00:33):
and see where it's at today.
And then we'll round it out with some great picks, some boosts,
some shoutouts, and a lot more. So before we get to all of that,
we've got to do the right thing and say time-appropriate greetings to our virtual
lug. Hello, Mumble Room.
Hello. Hey, Chris. Hello, Brent.
Hello. Thank you for being in there. We'd love to have you join us.
You can join us on a Sunday morning or whatever it is in your time.

(00:57):
jupiterbroadcasting.com slash mumble for that. And then, of course,
jblive.fm for the stream. It's a vibe. That's what I always say.
You don't think so?
No, I do think so. I'm here for the vibe.
You know so.
Well, I've, you know, it's got a different feel. You never know what we're going
to say, and we don't either.
You never know what Neil's going to say, you know?

(01:19):
Hi!
Also, good morning to our friends at defined.net slash unplugged.
Go get Managed Nebula from Defined Networking.
It's a decentralized VPN that's built on a true open source platform that you
completely self-host and own yourself, or you can lean into their managed system,
which makes it really easy.
You go to defined.net slash unplugged. You can try it on 100 devices for free.

(01:43):
So unlike traditional VPNs, Nebula is decentralized.
And that means there's a certain resiliency you can choose to build into this.
And there's also a community of resources to help you make it more resilient.
So this is great for a home lab, of course.
This is also fantastic for a global enterprise. Best in class encryption.
Fantastic community, and battle-tested.

(02:06):
You know, history being my teacher and all, when it comes to something
as foundational as how I network everything, I want to own that stack,
and I want to own that stack end-to-end because when I'm building something
for myself and my wife and my kids to use...
I'm trying to take like five-year views, 10-year views if I can.
And I'm not digging on anybody. It's just a fact. When you tap venture capital

(02:29):
over and over again, it gets mixed into the tools that you rely on.
And sometimes your core infrastructure tools go different directions over time
because the priorities start to shift as they tap more and more VC money.
And when I think about it long term, when I think about what I want to be running,
what I want to build on years from now for both JB and my personal infrastructure,

(02:49):
I want a project that's truly built around the value of ownership. It's a big
deal when you think about it long term. And there are a lot of options out there,
and some of them have, like, self-hosting options, kind of begrudgingly, like, you
know, there's like different variations of it.
Nobody does it like Nebula. And so, you know, you can make the mistake like I did.

(03:10):
I made a rookie mistake of linking a big tech login to my VPN provider.
And I don't like that at all. And now in retrospect, years later,
I wish I hadn't done that. That type of stuff is what I'm talking about.
Nothing does it like Nebula. And nothing has Nebula's level of resilience,
speed, and scalability.
Go get started with 100 hosts. Absolutely free. No credit card required.

(03:32):
Go to defined.net slash unplugged.
Well, we've got a wee bit of housekeeping this week, because some events are coming
up that we'd like you to know about. It's the 13th edition of SeaGL,
and that's happening next week, 8 a.m. to 6 p.m.,
Friday and Saturday, November 7th and 8th at the University of Washington.

(03:53):
What do you know about it, Wes?
Well, it's our local Seattle Linux Fest, you know?
Yeah, downtown.
Well, it has been kind of downtown or Capitol Hill area, but now it's over at
the University of Washington, which is kind of its own little corner.
Okay, that could be nice.
Yeah. So there should be lots of stuff nearby. I think they've got some meetups
and stuff going on after the fest. There's like a tea swap thing.

(04:13):
If you want to get some interesting teas, that sounds like it's a pretty good time.
Is that like gossip or is that the kind you drink?
Like I think the kind you drink.
Oh, okay.
Yeah, it's called TeaGL. That's like a little sub-conference.
That's so Seattle. That's great. All right. Okay, that's good to know.
So seagl.org slash schedule. We'll have a link in the show notes.
Over 50 sessions, or something around 50 sessions, are going to be there.

(04:34):
Yeah, it looks like some good stuff. Dev stuff, DevOps stuff,
of course, Linux stuff, and then just general community and open culture.
And then probably should be on people's radar, LinuxFest Northwest,
SCALE, Planet Nix 26, all putting out their calls for papers.
Yeah, and they all close sooner than you'd think. So if you're interested in
contributing, talking, speaking at any of those kinds of events, or volunteering

(04:56):
maybe too, you might consider taking a peek.
Yeah, we want to let you know now because what's going to happen to you is the
holidays are going to hit you.
Yes.
Right in the face and you're not going to think about the call for papers and
then the events coming around. And, you know, if you can get a talk accepted,
a lot of employers will pay for that to get you there to, you know,
do your talk and maybe promote the thing a little bit that you do.

(05:19):
It's totally worth it. So, SCALE, Planet Nix 26, and LinuxFest Northwest all
have calls for papers open. And SeaGL is next weekend.
And we hope to see you there.
I have a couple of other things to get to before we start the show.
We have decided we are going to do another config confessions.
So please send your configs in: either boost with a link, or go to our contact page and add a link.

(05:40):
And we'll start collecting those. We already have some in the bag.
If you already sent them in, we still have those. Most of those,
at least. Depends if Brent lost them.
And we'll get to config confessions very soon. So send those in.
And a great way to support the show would be send us a link with a boost and,
you know, two birds and all that.
Also, while you got that boost button hot, I got something I have to just level set with you guys.

(06:03):
I'm having a hard time understanding something. And I really want to take the
temperature here. When we first started talking about what everybody now calls
AI on Linux Unplugged, it was years ago.
And we really referred to it at the time as machine learning because the tools
that we were really looking at were machine learning tools.
And in the context of Linux, it really wasn't a huge topic, seemingly didn't

(06:27):
deserve its own episode.
And then three years ago, October of 2022, we finally did a dedicated episode on the topic.
And it was all about AI generation. Episode 481, the internet is going crazy
with AI-generated media.
What's the open source story and is Linux being left out?

(06:49):
And that's when we dedicated an episode to this thing called stable diffusion,
where people could generate images.
And we talked about the morality of it. Three years ago, we talked about the
power use issues in that episode. And of course, we talked about the Linux story around these tools.
And we even stood up a live instance of a web version of Stable Diffusion and

(07:11):
unleashed it on the live stream and let them crank out issues or images on our
VPS. Do you remember that?
Yeah, that was a lot of fun.
It was three years ago. So back then, it just didn't have the hype around the topic.
And it wasn't as charged as it is now.
And you didn't see the counter reaction to the hype that you see now.
And I cannot really, to be honest with everyone, wrap my noodle around how controversial

(07:35):
every little new technology is these days.
From programming languages to technology platforms to AI, it's just, it's remarkable.
And I'll explain my position on AI next week if people are interested,
but I wanted to take the colony's temperature on where you are at with AI. Do you hate it?
Explain yourself. Are you ambivalent? Explain yourself. Are you excited?

(07:57):
I want to know why. So help me, help us wrap our noodle around this because
as people that have been talking about this since like 2021-ish,
these were just tools and then all of a sudden they got really heated.
And I'd like to know where you're at on this stuff.
Because the premise of this episode this week is despite how much you hate it,
there's at the end of this, what we might call an AI bubble or AI hype session,

(08:21):
there's no doubt going to be a few tools that remain standing.
That's where we're going to find what worked and didn't work,
what was hype, what was silly, what LLMs were horrible at, and what LLMs were great at.
And that might be a little while from now.
But the people that hate this stuff have got to realize it's not going away.

(08:43):
Some of this stuff's going to stick around.
And what I've realized recently, and I'll get to why soon, is that Linux in
particular is going to be one of the most affected areas.
And you start to see hints of it this week when both Red Hat and SUSE made announcements
around their enterprise-grade distros.

(09:06):
So Red Hat has announced an AI assistant designed for application migration
and modernization tools on the RHEL platform.
They said the launch of the Red Hat developer Lightspeed platform,
a portfolio of AI solutions, will equip developer teams with,
quote, intelligent context-aware assistance through virtual assistants.
The company said this will help speed up non-coding-related tasks,

(09:29):
including development of test plans, troubleshooting applications,
and creating documentation.
And then even more recently, SUSE says, with the release of Enterprise 6,
it is the Enterprise Linux that, quote, integrates agentic AI.
Sorry, I can't help but laugh a little.
I know. I know. I agree. Go ahead, Brent.

(09:51):
A release of Enterprise 16. You mentioned Enterprise 6, but I think we're moving
a little further. 16. You did.
You did.
See, 6 AI can solve that for you, Chris.
Well, I'm reading with the LLMs. No, I'm kidding. Um, I,
I want, yeah, I agree with Wes, and probably a lot of you listening, like,
this stuff that integrates agentic AI, it's, it's,

(10:11):
okay, anyways. They say, quote, this is the industry's first enterprise Linux that
integrates agentic AI and reduces operational costs and complexity through AI
readiness. So this is the phase we're in: jargon-heavy, hype-heavy, gotta
slap AI on your product in order to sell it. Do you think it's a coincidence
that these two announcements were made so close to each other?

(10:32):
No, that's no coincidence at all. That was very intentional.
It is very much each position trying to jockey or each company trying to jockey
their position in the AI race.
And all of that is exhausting. And all of that is tiring. And all of it seems
unsustainable. And that's all true.
That is all true. But what we want to talk about today, I think,

(10:54):
is the stuff that will remain after all of this passes.
So as those of you who've been listening to the show for a while,
you know that I have been rolling my own distro that I call Hypervibe for a while.
I'm just at like the three-month mark now, if you can believe it.
On average, I've made two or three notable changes per week as I've used it.

(11:14):
Some weeks more, some weeks less, kind of averages out.
And all of this has been done as an experiment using an LLM.
Every little tiny change.
And I started as a joke because I thought it was going to be a total disaster.
And I somehow walked away with a working system.

(11:37):
And then, having refined this more as I actually start to use it day-to-day
to get work done, and start taking it seriously and less as a joke.
Actually depending on it to be your desktop.
I've realized that LLMs are good at text.
And everything I'm doing is a configuration file. And between the ability for
the LLM to do web searches and

(11:58):
to write configuration files and understand simple YAML or config files,
this is an area it's actually particularly strong in.
And I think we will see...
A future where if you don't take advantage of these tools, you're not going
to have your job replaced by AI, but you will have your job replaced by people

(12:19):
that are taking advantage of these tools.
Last week, I was having an issue with my network card dropping off the network.
And especially bad in the morning for some reason. I don't know what that's
about. I'd come in in the morning and I'd just have issues for a couple of hours every few minutes.
Looks like my NIC would just drop off the network. And one of the ways I'd
have to fix it is unplug and plug it back in.
And then it would come back to life and it would start working again.

(12:42):
And it was the morning before the launch. I was rushed to get to the
show live, because we do it,
we do it a little bit earlier, or no, we do it a little bit later, but it's just a
more complete, complicated show. So it feels like it comes earlier.
And, um, I needed to fix this issue because I had to get my job done.
And so while I was going about prepping the show, getting voicemails,
doing all that stuff, I opened up another application, and I just had a prompt.

(13:06):
I said, I want you to review my logs, look at my network driver and the recent
Linux kernel releases and figure out why my network card is dropping.
And then I completely forgot about it. And when I came back,
it had figured out what it was. There was a change upstream in the Linux kernel,
and now I needed to make an adjustment to my power settings for my particular Intel NIC.

(13:29):
And it did all of that and figured all of that out while I was going about my
other work. And I came back and it's like, OK, when you reboot,
the fix is in and you're good to go.
Has it been?
That's great.
And it went through, it read the log files, it read the kernel change log,
it did all of that work, and then it made that change to my system.
And because it's just changing the output in my Nix config, and it's apparently

(13:51):
very adept at Nix, it handles it just fine.
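To make that concrete, here's a hedged sketch of what a fix like that can look like in a NixOS config. The interface name, the driver detail, and the specific tweak (turning off Energy-Efficient Ethernet with ethtool) are all assumptions for illustration, not the actual change from the episode:

```nix
{ pkgs, ... }:
{
  # Illustrative only: a oneshot service that disables Energy-Efficient
  # Ethernet power saving on a hypothetical interface "enp4s0" once the
  # network target is reached.
  systemd.services.nic-disable-eee = {
    description = "Disable EEE power saving on the Intel NIC";
    wantedBy = [ "multi-user.target" ];
    after = [ "network.target" ];
    serviceConfig = {
      Type = "oneshot";
      ExecStart = "${pkgs.ethtool}/bin/ethtool --set-eee enp4s0 eee off";
    };
  };
}
```

Because the change lives in the declarative config, a rebuild applies it on every boot, which is exactly the property that makes this kind of LLM-generated fix easy to review before you commit to it.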
And you could extrapolate that out to a network engineer who needs to make a
modification so every machine uses a new IP for something, right?
Yeah, I mean, if you try to read through some of the buzzwords in the SUSE stuff, right?

(14:12):
They're making an MCP server, which is basically a JSON API to interface with
pieces of the OS, including, I guess, some more stuff on top of Cockpit to help
you do, you know, LLM-enabled changes and updates or check in on what your system's doing.
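For a sense of what "basically a JSON API" means here: MCP clients speak JSON-RPC 2.0, and tool invocations go through a `tools/call` method. This sketch only builds that request shape; the tool name and arguments are made up for illustration, not SUSE's actual interface:

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request in the shape an MCP client sends."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool: ask the OS-side server about a systemd unit.
req = mcp_tool_call(1, "systemd_service_status", {"unit": "sshd.service"})
print(json.dumps(req, indent=2))
```

The server answers with a matching JSON-RPC response, which is what lets any LLM frontend drive OS-level tooling without bespoke glue for each assistant.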
And just as an aside, doing all of this, it's given me an appreciation for how
much edge case solving distro builders have to deal with and maintainers.

(14:32):
Like, it's constantly, like, things change upstream.
Sometimes it's just the name of a package, but you have to go through and constantly
make those little tiny changes.
Yeah, the integration is real work, right?
Not for me. I just have the LLM do it. I do. I just have it check it,
and then it goes through, oh, yeah, these things have changed upstream.
Okay, I'll go through. I'll make sure I change these for you,
and then it's ready to go for the next build.

(14:53):
It makes maintaining my own distribution possible.
And I thought, okay, let's put this to the test. And so that was one of the
things we wanted to do this week is we wanted to put it to the test and see
how far we could push this and kind of figure out where it breaks and really
how realistic is this now.
But I think this isn't going to work for every Linux. This isn't going to work

(15:15):
for every problem. It's going to work specifically well for Ansible,
for Nix, those types of things, declarative systems. It already is here. It's here.
Yeah, especially where it's like a lot of the same common repeated patterns
that maybe just need to be squeezed into particular context.
It is fair to say that, you know, we're at a stage where it doesn't always produce
the cleanest system, you know?

(15:37):
Yeah, that is for sure.
There's sometimes things are a little hacky and a mess. I mean,
it works, but it might not be how you would do it.
And it often works kind of the best for things like that, where, in our testing,
you care more about it sort of from a black-box perspective, where you're just
like, okay, I can verify the things that I need do work.
I don't care that much about exactly how it's doing it or exactly how,

(15:57):
if it did it the way I would do it.
And I think the other thing to acknowledge at this point is it can be very expensive
if you're doing a lot of this, depending on how you pay for it.
We spent hours, we spent all day on a call yesterday working this out,
which we're about to get into.
And after we got off the call, I wanted to see how feasible it would be to just do it with a local LLM.

(16:21):
And there's a lot of ways to crack that nut. But to really make it simple that
anybody could do is I went and I downloaded the AppImage of LM Studio.
Oh, nice, yeah.
Classic, right? Which has Hugging Face integrated. And it supports downloading
a, well, a bunch of different models, obviously. But one of them it supports
downloading is DeepSeek.
And also, like, the Qwen ones, which are like 8 billion parameters. You

(16:44):
need a lot of parameters, and you can just download them, and
then in the system tray icon you can just enable the server.
And then you go over to something like Zed, the Zed editor,
which is great, and one of the options is just
a local connection to LM Studio. Zed just automatically connects to LM Studio,
and all of the prompting is done locally with whatever model you've loaded in

(17:05):
LM Studio, be it DeepSeek or Qwen or whatever it is. And it's slow on my system,
of course, but it's actually using the AMD Vulkan acceleration.
So if you're not in a rush, it's usable. So after we got off the call,
I had a couple of things to fix, which I'll talk about more.
I went and had dinner while it was doing stuff and came back and checked on it
like a half hour later and it was done.
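As a sketch of what that local setup exposes: LM Studio's server speaks an OpenAI-compatible HTTP API, by default on localhost port 1234, which is why editors like Zed can connect to it automatically. The model name and prompts below are illustrative assumptions, not what was used on the show:

```python
import json
from urllib import request

# LM Studio's default local endpoint (OpenAI-compatible).
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a Linux configuration assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,
    }

def ask_local_llm(prompt: str, model: str = "qwen2.5-7b-instruct") -> str:
    """POST the prompt to the local LM Studio server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = request.Request(LM_STUDIO_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires LM Studio's local server to be running):
# print(ask_local_llm("Why might my Intel NIC drop off the network?"))
```

Nothing leaves the machine: any tool that can speak the OpenAI wire format can point at that local URL instead of a hosted service.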

(17:27):
So you can absolutely do this depending on the system with local tools.
But there is still a quality gap. Like the big stuff, the big models hosted
by your ChatGPTs or your Claudes are still superior.
They're still faster and they're very expensive.
So these are real limitations. As of right now it works, but it'll build a system

(17:48):
maybe the way you wouldn't if you're not really on top of it.
And if you don't know to catch something... There was something you brought up
yesterday, Wes. It was like, oh, I'm really glad you said that, because I wouldn't
have caught it, and the AI wouldn't have caught, uh...
Maybe it was the bit about, like, all the extra work it was doing with, uh, a Rofi setup.
Yeah, it was something like it
introduced, like, basically an abstraction to deal with having to use multiple
Rofi packages that just no longer was needed.

(18:10):
Whatsoever. Yeah, it was a solution to a problem that no longer existed, and it
hadn't caught that. That's a great example. And so there's still that level of human engagement.
But we learned some lessons with this. And it is, in my mind,
sticking around. And I don't know what this is going to be called.
This is one of these things I always take a lot of crap for.

(18:31):
It's not obvious to everyone two to three years out. But as we get two or three
years down this road, it's going to be so obvious.
And it's going to be called something like prompt ops or DevOps prompts or Linux by prompt.
Really? Like, you know, like you're going to have a Linux machine as a system
administrator and you're just
you're going to have something on the command line that you work with.

(18:52):
If you go over to GitHub.com and you look at the CLI tag and you look at the
most popular projects in the CLI category, every other project practically is
some sort of LLM tool on the command line.
I mean, even back at Summit, Red Hat came out with Lightspeed and their little
C tool for the command line.
Yeah.
That kind of stuff is going to be the norm. It's just going to be one of the

(19:14):
tools on your, you know, like you have some stuff that can suggest commands.
There's going to be LLM tools that do that.
And there's just going to be a name for it. Like DevOps became a thing.
I mean, that's already one of the uses, just especially that like throwaway
scripts, quick little things to clean up files or like do a particular manipulation
that it's maybe not worth me scripting, but like a script would be better than
me doing it manually. I love that.
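As an illustration of that kind of throwaway script, here's the sort of thing you might ask an LLM to generate. The renaming rule and filenames are hypothetical, not anything from the show:

```python
from pathlib import Path

def normalized(name: str) -> str:
    """Lowercase a filename and replace spaces with dashes."""
    return name.lower().replace(" ", "-")

def clean_up(directory: Path) -> list[str]:
    """Rename every file in `directory` to its normalized form.

    Returns the new names of the files that were actually renamed.
    """
    renamed = []
    for path in sorted(directory.iterdir()):
        target = path.with_name(normalized(path.name))
        if target != path:
            path.rename(target)
            renamed.append(target.name)
    return renamed
```

It's exactly the class of task where a script beats doing it by hand, but writing the script yourself might not have been worth the time.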

(19:36):
And I actually think there is a future where small local micro LLMs or whatever
you want to call them are actually going to be better at this particular stuff
that we're about to talk about than the ChatGPTs or the Claudes.
There will be in the future models that you can run in your text editor that

(19:57):
will be possible on a desktop machine that is just super focused on config files
or PHP or whatever it is that you do. That's its whole world.
That's kind of some of what Red Hat is offering with some of this new development.
Like they have stuff targeted at enterprise migrations of legacy software or
migrate into containers and stuff like that.

(20:17):
But it's, you know, you have models you can self host that they've figured out,
optimized for these particular types of problems.
And right now, man, if you get LM Studio and you load DeepSeek or whatever the
hell you want and you open up Zed, it's all happening locally on your machine.
It's all completely self hosted with just two desktop applications.
You didn't have to start a single container or anything.

(20:38):
And it's got a nice GUI to help you find the new models. You don't even have
to know what the hell you're doing.
And it's approachable right the F now.
And so this is only going to accelerate. You're seeing Red Hat and SUSE lean into it.
It is going to be so obnoxious for a while. And I really sympathize for those
of you that are so sick and tired of this stuff.
But I'm trying to give you your medicine with a little bit of sugar this episode

(21:00):
because what we're about to talk about ain't ever going away.
1Password.com slash unplugged. That's the number 1, password.com, and then lowercase unplugged.
Take the first steps to better security for your team by securing credentials
and protecting every application, even the unmanaged ones you didn't know about.

(21:23):
There's more to secure than just passwords.
Managed and unmanaged SaaS applications, for instance, are a huge issue these days.
That's where Trellica by 1Password secures your apps without leaving your
employees behind, without creating that odd and difficult tension between IT and your end users.
And if you are an IT professional or if you're in IT security,

(21:45):
you know about the mountain of assets that's growing all the time and the sprawling
applications that are out there as a service that users are signing up for all the time.
It's a big problem. That's where Trellica will help you to discover and secure these applications.
It'll find out where you have redundancies, where maybe you could cut back on spend.
And Trellica by 1Password has pre-populated app profiles that'll assess the

(22:08):
SaaS risks, let you have a better understanding of really what you're dealing with.
And like I say, optimize that spend while you're enforcing best practices across every app.
It really is truly the missing piece, something that I have struggled with when
I was in IT because there wasn't anything like this on the radar.
When 1Password came along, getting better password hygiene seemed like a huge leap forward.

(22:30):
And of course, you know about their award-winning password manager,
but they're securing a lot more than just passwords now.
So check out 1Password Extended Access Management.
You get started by going to 1Password.com slash unplugged. That's where you'll find out more.
You'll learn if your employees are bypassing your best practices to use unapproved
apps, how you can get your hands around that, and how you can get one dashboard to manage it all.

(22:55):
So take a look at 1password.com slash unplugged. Take the first steps to better
security for your team by securing credentials and protecting every application,
even the unmanaged shadow IT.
Go right now, 1password.com slash unplugged. Support the show and learn more.
That's 1password.com slash unplugged, all lowercase.
1password.com slash unplugged.

(23:20):
Well, your three hosts yesterday got on a call because, well,
I had an inclination at least that Chris's dear Hypervibe needed some love on the back end.
We had a couple of listeners say, I think things could be better.
And we had some PRs come in and we figured we should have a look at this.
And Wes, what did we find?
Well, first off, as usual with our community, we found like just incredible

(23:44):
pull requests and issues and just a lot of wonderful engagement,
including someone had, like, sops-ified it with
sops-nix and written just an incredible README about
how to use it and work with it. So we'll have to look more into that for
sure. But, you know, Hypervibe has done
well. It's obviously been working well, from Chris's experience reports
here. But it's really, it kind of started as a

(24:05):
one-system thing that got extended to a two-system thing, right?
RVB and Nixstation are the names here. Yep. And
then at one point earlier we did try kind of a first round of
this, where we were like, well, you have a lot of shared functionality between these
two configs that you probably don't want duplicated, including a whole bunch
of janky activation script stuff that is sort
of a bunch of crappy Bash. Yes, I'll own that. Yeah, I guess you vibed it into existence

(24:27):
to not use Home... I did not want to use Home Manager. I will own that. Yeah, and
so each one had their own version of that, but it was pretty much identical,
right? So we took an early stab at trying to refactor some of that into
actual modules that you could then use in your host config.
I was doing a backwards approach of taking two Linux boxes that were totally
separate systems, set up years apart from each other.

(24:49):
With very different, you know, display setups. And I wanted to unify them into one experience.
You know, try to create one ultimate Linux desktop for myself that I use across all my machines.
So I essentially tried to backwards integrate them. And I got to a working point
for like those two systems.
But it was not in a very good state if you wanted to onboard a new system.

(25:09):
Yeah. Which eventually I do. And I wanted you to try it too. Right.
It also meant like if anyone else wanted to use it, right? There was a lot of
work you had to figure out and like copy your structure.
Not really do it the Nix-native way at all, or, I mean, not the easiest
way anyway. And to boot, right, we talked about the problems of the stuff that,
you know, we wanted to refactor, but there's also stuff we needed to make more
variable, just in that, like, you had hard-coded a few things, including your

(25:31):
username, you know, the wonderful chrisf. Yeah.
I mean, I thought everybody would just run as chrisf, but it turned out they didn't want to do that.
Yeah, so we needed to expose stuff there, and there were a couple pull requests
already to do that. So people had taken a few stabs at basically using the module
system and exposing those as configuration options for a Hypervibe module that
would say, like, what user are you using?

(25:53):
Because we need to, like, put that in some of the scripts and plan for that.
Yeah, so it's something you could add to an existing system.
It'd be a Hypervibe module that you could add to a Nix box that already exists,
and then you could define run as this user. Run as Wes instead of Chris F.
The goal was like you could import this module from Chris and then enable it
and set whatever required options like what your local username you're going

(26:16):
to use it with is and just have it work.
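A rough, hypothetical sketch of that module interface in NixOS terms; the option names and the shared pieces shown are illustrative assumptions, not Hypervibe's actual code:

```nix
{ config, lib, ... }:
let
  cfg = config.hypervibe;
in
{
  options.hypervibe = {
    enable = lib.mkEnableOption "the Hypervibe desktop experience";
    user = lib.mkOption {
      type = lib.types.str;
      example = "wes";
      description = "Local username the Hypervibe config applies to.";
    };
  };

  config = lib.mkIf cfg.enable {
    # Shared pieces every Hypervibe host would get, whatever the machine:
    programs.hyprland.enable = true;
    users.users.${cfg.user}.extraGroups = [ "video" "input" ];
    # ...theming, Waybar, kernel choice, etc. would live here too...
  };
}
```

A host config would then import this file, set `hypervibe.enable = true;` and `hypervibe.user = "wes";`, and rebuild.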
But to get there, we had the problem that in our first attempt,
well, one, we didn't have the optionality on username at all. That was hard coded.
But then in our attempt to simplify things before, it kind of pulled some stuff
out into shared modules, but it put it under a shared namespace instead of a
Hypervibe namespace. So we're going to have to replumb.
Yeah.

(26:36):
That's more of like a find and replace almost, so stuff that it should be pretty decent at.
It's, at first it wasn't obvious to me what should be machine-specific configuration
and what I want on all my systems.
Right, and that's where either you have to put your judgment or rely on it's...
Quote unquote judgment.
And it's something I hadn't thought a lot about. So it is an example of if you

(26:58):
don't think of it, it won't necessarily.
It doesn't mean you can't go back and refactor, which is what we decided to
do is essentially re-architect the way this thing is completely done.
In order to make it possible for, say, somebody like Wes to run it on their system,
we had to extract out machine-specific stuff and try to put the overall Hypervibe

(27:19):
experience of Hyprland, the chosen applications, the theming, the Waybar,
the performance optimizations, the Zen kernel.
We wanted all of that to be something that would be a shared thing across all systems.
Yeah, and that's where it's you as the crafter here who's trying to figure out
enough of what your vision is if you're going to have it be able to be executed by something else.
Part of the problem, too, was the first attempt had pulled some stuff out,

(27:43):
not the way we wanted, but it hadn't necessarily always deduped that, right?
It didn't always update all the configs to use that new functionality.
So then there was now, instead of deduping, it just duped again.
So we had even more cleanup to start with, really.
Yeah, that is true.
And that's just kind of stuff you got to, you know, watch for if you're going
to be using some of these tools sometimes.

(28:03):
The goal that we wanted to pull off was to task the machine to re-implement
this into a way that could be shared and not break the existing system while doing so.
Because we were basically switching. You were redefining and picking an interface
that would be Hypervibe and you needed your two existing machines to still use

(28:25):
that and not use their old hard-coded previous version.
That's kind of a tall ask. I mean, that is not managing a simple Nix OS config
for one workstation. That is a little more abstract.
And you're really trying to
change the plumbing of how this entire thing is built while it's in use.
And the more you realize, oh, I have to solve for this edge case.

(28:46):
Oh, I have to solve for this edge case because these machines have this resolution
and these machines have this resolution.
It just keeps increasing the scope. And as we were doing it,
I know, Brent, you were like, I don't think this is going to work.
This is, we're getting too far out there.
Oh, I was skeptical. I mean, just the expressions that you had watching some

(29:06):
of its progress go by and you're the quote unquote expert at this particular,
uh, let's call it an operating system.
Right. And you're like, well, I don't know if we should be doing that, but I'll say, okay.
And you were freaking, like, the higher the number-of-files-touched count climbed.
You're like, well, why is it touching that? Why that one?
And all-level-headed Wes was like, no, no, it's okay. It'll be okay. We'll just fix it later.

(29:29):
I was getting more, because it was getting more and more complicated.
And I thought the more complicated this gets, it feels like the more little
side edge pieces it won't think of.
Can you talk a little bit about the setup that we had to make all of this work?
What did you use to actually like make these modifications?
I know it seemed like a pretty cool setup from what I was looking at,

(29:50):
but give us the nitty gritty.
You know, there is a thousand ways to do this. So don't take this as the blessed
path, but this is what's worked for me, is I got the Cursor application and I connected it to GPT-5.
And then later I connected it to a local LLM after we got off the call.
But that's a whole other process and it was really a pain in the ass.
And I wouldn't recommend it with Cursor.

(30:12):
But back to my point, I wanted something that would work quickly for us while we were on the call.
And so what's great about Cursor is it's essentially a re-skinned, modified VS Code.
So if you have used VS Code before, you'll be at home in Cursor,
and you know how it can open up entire directories.
Well, that entire directory becomes the context for Cursor and the LLM.

(30:32):
And so it becomes, and that would be all my config files, and they're all in
a different hierarchy in one directory, and it knows all of them.
And it can read across all of them.
And when you ask it a question, it considers the entire scope.
And so one of the ways I've made this work is I've put all of this for all of
my hosts in a build directory in my home folder.

(30:53):
And so everything I'm doing is in that build folder. That's what gets checked in and out of GitHub.
And so it has the context of all of the machines because it's looking at that one directory.
And then you can just open up a chat session and say, tell me about this.
Or it can be something as simple as What key bind am I using for XYZ to,
all right, let's refactor this thing,

(31:14):
which we, I would say it took longer in the sense that I thought we'd be done in a couple of hours.
I thought we'd be done at noon and I think we wrapped up around 530.
Yeah. Maybe 430.
Okay. However, I think if we had done that manually, I think it would have taken us three days.

(31:35):
You know, possibly. What do you think?
I think, I think it would have taken me three days, maybe
you one day. Yeah, there we go. I think that's about right. Yeah. Yeah, I think
it probably would have taken about a day, maybe, regardless, just because there
was a certain amount of, like, I had to catch up on what had all changed, because I took
a snapshot look at it a couple months ago, but you've continued
vibing right along. So, like, enough stuff had changed, I needed
to do enough exploratory work to, like, wrap my head around what was happening, and

(31:59):
then just, like, some of the stuff is just kind of mechanical changes that you've
got to do, that are actually kind of perfect work for the LLM, because it
is kind of thoughtless, really. Um, and then some of it maybe would have gone
faster if we had used more human abstract reasoning for some of the, like,
stuff that was obvious to us that it was struggling with.
What really struck me, Chris, throughout was: would you even attempt something

(32:23):
like this without this tool? Because now that you're in it, you're like, okay, well,
I could probably have figured it out,
but I'm not so convinced you would have started the project in the first place.
I would have started over from scratch. I would have started over and just sort
of rebuilt the whole thing with this new model in mind.
And I don't think I would have ever had the time for that, so I don't think

(32:43):
that's actually what would have happened.
But if I were to take this on without a tool like this, that's how I would have had to go about it.
So beforehand, we kind of had like this scattered configuration,
activation scripts that were defined in different machines and sort of a lot
of duplicate efforts, duplicate paths for stuff. We had my username hard coded.
We made it really hard for somebody

(33:03):
to come along and just add it to their machine and just define a user.
And what we got to at the end of our call was a single declarative source of
truth for Hyprland, Waybar, shells, all the system fine-tuning,
and then per host overrides for certain config options,
users, maybe resolution monitor, 3D, you know, driver stuff.
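In NixOS terms, that shared-module-plus-per-host-overrides shape typically looks something like the sketch below. This is a rough illustration, not the actual Hypervibe module; the option names mirror the ones mentioned on air, and everything else is assumed:

```nix
{ config, lib, pkgs, ... }:
{
  # Options the per-host configs set; everything else lives in the shared module.
  options.hypervibe = {
    enable = lib.mkEnableOption "the shared Hypervibe desktop";
    user = lib.mkOption {
      type = lib.types.str;
      description = "Which user gets the Hypervibe session.";
    };
  };

  config = lib.mkIf config.hypervibe.enable {
    programs.hyprland.enable = true;
    users.users.${config.hypervibe.user}.isNormalUser = true;
    # Hosts can still override any of this locally, e.g. with lib.mkForce,
    # for monitor resolution or driver specifics.
  };
}
```

The point of the pattern is that the host file only flips options; the shared module owns the actual configuration.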

(33:23):
And there's definitely a lot of work that could still be done,
you know, like one, I think just in general would be good to kind of just audit
the whole thing and delete as much as you can without breaking it just as a, you know, cleanup item.
But then mostly, yeah, I think there's some improvements to be made on the interface
of like what stuff that we have as fallbacks and isn't required,
but it's more just like a good default because I had to override or comment

(33:46):
out some stuff in my code to make that work super easily, as easily as we'd like.
So we got really close to the interface we want, but it's not quite there.
But that said, you know, after 250 lines moved from per host stuff to a single
module or a couple of modules, we got a new namespace for Hypervibe.
Like, we got it pretty far. So the real test was, after we got it working in

(34:08):
a VM, could you get it working on physical hardware?
And I'm happy to report, Wes Payne, ladies and gentlemen, running it right now
on the laptop, switched during the pre-show.
How's it going so far? What do you think?
Yeah, it's been fast.
Does it feel faster?
You know, I got to do more tests.
It feels faster to me. That's all I know.

(34:29):
It was easy, though. Like, I did have to, like, there's some stuff you have
for, like, the garbage collection for the Nix daemon where I had slightly different settings.
So, like, that's a conflict, right? So, some things like that is what I mean.
Oh, yeah, yeah. I suppose because you weren't starting from, like, a fresh.
No, I had to put it on. It's just my existing NixOS configuration.
Right.
But even then, right, it's mostly, there's a couple of changes.
I don't know, under five things I had to come in on my config or adapt or override.

(34:52):
You could do a mkForce kind of thing, too. And then in the flake, you just,
kind of, you know, I had to add your Hypervibe as an input. I had that, uh, Hypervibe
module, and then I set enable equals true, user equals wes.
So it's four or five lines you added to your Nix config or your flake, and
now you have a Hypervibe system. That's pretty great, right? I mean, you took it,
you, it was a Plasma system for years, you added three or four lines to your flake.

(35:17):
Oh yeah, I did also comment out some stuff. I did also
comment out the Plasma stuff.
Yeah, but if you were on a fresh box, I mean, you wouldn't even have to do that. Yeah.
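Consuming it from a host flake really is only a few lines, roughly like this. The input URL and the module attribute name here are illustrative guesses, not the real repository:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    hypervibe.url = "github:example/hypervibe";  # placeholder URL
  };

  outputs = { nixpkgs, hypervibe, ... }: {
    nixosConfigurations.laptop = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./configuration.nix
        hypervibe.nixosModules.default  # assumed module attribute name
        {
          # The interface described on air: enable it and name the user.
          hypervibe.enable = true;
          hypervibe.user = "wes";
        }
      ];
    };
  };
}
```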
For our first test, we just got a VM going from, like, a default, uh, no graphical
environment NixOS install from the installer. So,
yeah, and
that worked pretty well, too.
The machine had produced us a successful, reproducible version of Hypervibe,

(35:37):
which we had not gotten to before. So that was very impressive, that it actually
did manage to properly refactor it into shared modules, into, you know, host overrides,
and then deliver us something that we could actually get to
a working state, yeah.
That, and it's... that's on your system right now. That's incredible.
To do that, we had to completely rip apart my home system, the system that was

(36:00):
working, sort of the original box.
Oh, did you want that to keep working?
And because we were on a call through that system, I couldn't build and reboot and see if it worked.
We could build it. We just couldn't boot.
I couldn't reboot because I would drop off the call. And so I had to wait till
we were all done and so find out if I had a working box or not.

(36:21):
And I don't know how to properly convey the amount of rearranging of the guts
of this system that we did.
I don't know if I โ€“ it's like an entirely new distribution.
Oh, yeah, we should definitely – I don't think we updated the readme at all, so that's – Oh.
I know.
Add that to the to-do list.
Yeah, I realized that as I was looking at it this morning. I had no idea if

(36:44):
anything was going to work anymore.
I didn't even expect it to boot. So I got off the call, went ahead,
did one last rebuild boot, hit the old reboot button.
And it came up. I think I have one error message during the activation phase.
But GDM launched, and it came up, and I thought, oh, my God,

(37:05):
I can't even believe I got this far.
Okay, that's positive.
I got GDM, and it booted.
And everything works fine except for one thing, and it's so funny.
GDM launches, and instead of saying my name, it now says Hypervibe.
It's still Chris F for the user.
Yeah, okay, that's another thing we need to set.

(37:26):
We're
setting that user description, and to make it work, I think we had it probably
uncommented, just like I did.
But you don't get
your username, you get a generic Hypervibe user.
Because all my dot files were the same, and all the software was the same, but
the system was completely reconfigured. That was the only thing that was wrong,
is that my full name for my username was Hypervibe,
and I was sitting there going, wow, I cannot, I cannot believe we just vibed our

(37:49):
way to a completely re-architected system, and that's the only problem I have?
That's the only problem I have? You've got to be kidding me.
And I sat there just like my jaw.
We ripped this thing apart and just walked away. I just couldn't believe it.
And so it's working just fine. It's working absolutely. I later on used DeepSeek to fix the username.

(38:13):
But we should fix it in a way where it's set for everybody. That would be the better way to go.
Oh, yeah, did you just override it locally?
Yeah, and I just, you can change the display name.
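For the curious, the name GDM shows on the login screen comes from the account's GECOS "full name" field, which NixOS exposes as a per-user option. A local override like this is likely what the fix amounts to (the username here is illustrative):

```nix
{
  users.users.chris = {
    isNormalUser = true;
    # The description becomes the GECOS full name, which GDM displays
    # instead of the generic name a shared module might set.
    description = "Chris";
  };
}
```

Setting it in the shared module, keyed off the configured user, would make it work for everybody by default.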
But what do you think, Wes? What do you think about sticking with it for a little
bit? See what you think trying out the Hypervibe lifestyle for a few days.
Okay.
Yeah? Are you going to do it?
Yeah, I'll do it.
I think if you and I try it out.
I might have to ask you some questions, though, that you can vibe for me.

(38:35):
What do we need? Yeah, I need to update the readme, put a cheat sheet on there
of all the key binds, because there's a lot of key binds to know about.
You and I battle test it for a little bit. Then, you know, the next stage,
Brent, we've got to get you to do a bug test on it. We've got to get you to
run it for a little while.
Well, that's really up to you when you want to pull me in if you're ready for it or not.
Right? Or are you ready for it?
Oh, I'm ready. Are you ready?
I don't know, actually. I don't know. I haven't been so good with keeping up

(38:59):
with the issues and PRs, but I do appreciate people that sent them in.
One of the things we were able to do is Cursor can also, you can add links to GitHub as context.
So where we actually started is we pulled in a few of the pull requests and
a couple of the issues and had it analyze those and compare it to the existing
system because there was a pretty big change since those issues were submitted.

(39:21):
Three months ago, almost. And we had it look at the differences
and see what would be practically possible to apply, and how we might
apply it to the existing configuration. And so we started with an analysis of
those pull requests and issues that people submitted, and were able to kind of
iterate from there. That's really how we even started down this direction. And
we just had it give us an overview: what they said, what we would need
to change. And that was kind of our launching-off point, which is pretty powerful, just that right there.

(39:45):
Yeah, thanks to the community, really.
Yeah, it was longer than I wanted, but we ended up with something that is much
more shareable, much more reproducible for me on a new system.
So if I one day finally do get a new laptop, this is now completely deployable for me.
In particular here, props to
FBit818, SAMH, and Shift because they've all made excellent looking PRs.

(40:07):
Yes. Thank you, everybody. Go check it out. If you want to see it,
it's at github.com slash chrislasset. We'll put a link in the show notes and
try to get the read me updated pretty soon to explain how to get it working.
It's been a lot of fun and I've learned a whole new appreciation for the people
out there that are maintaining distributions for us and how much of it is just
little chicken ass that they have to deal with on a week to week basis that

(40:28):
keeps them busy along with all the other stuff you have to think about with
the architecture level.
It's been a great experiment to kind of not put myself anywhere near their shoes,
but I'm in the same room as their shoes and I can smell them.
You know, and I've gotten I've got a much better appreciation for the smell of their shoes now.

(40:49):
JoinCrowdHealth.com, promo code UNPLUGGED.
The open enrollment is now so take your power back and join crowd health to
get started for just $99 for your first three months.
I struggled to solve healthcare as a small business owner with just a really
small team. There wasn't a great option for me. And I looked for years.
I tried everything but the cost just kept getting absolutely bonkers.

(41:12):
And I needed to make an informed decision.
And so I did a deep dive into crowd health. I have been a CrowdHealth member
for over three years, and it has been a peace of mind for myself and for my wife.
And we've participated in the Crowd helping others with their health needs, too.
Don't take my word for it. Trust yourself. Go take control of your future with
CrowdHealth. It is a health care alternative for people who want to make their own decisions.

(41:34):
So you don't have to play the insurance game. You join CrowdHealth,
which is a community of people like myself, funding each other's medical bills
directly. No middleman, no networks, no nonsense.
And I can tell you, it works better than I initially expected.
I was just hoping for anything that would be functional.
And I would say it's far beyond my expectations.

(41:55):
There's a great app to let you manage all of this, including looking at your
status in the community, seeing what the requests that come in,
where things are at, and also getting information.
Including, like, you know, taking care of things when they come up,
unfortunately, and all that kind of stuff.
It has dramatically saved my family so much money. This is CrowdHealth.

(42:16):
It's a health insurance alternative.
It's health care for under $100.
You get access to a team of health bill negotiators, low-cost prescriptions,
and lab testing tools, as well as a database of low-cost, high-quality doctors
that have been vetted by CrowdHealth.
And it works. And if something major happens, you pay the first $500 and the
crowd steps in and helps fund the rest.

(42:37):
It feels like everything has been messed up for the few years.
And with health care, it's just getting so much worse this year in particular.
So if you join the crowd, you take care of each other. You get outside that system.
That system is going to be overpriced. It's not really taking care of your health.
It doesn't incentivize you to take care of your health.
And it's so, so complicated now with all the subsidies and the things that are expiring.

(43:01):
It just is not something I even want to have to participate in anymore.
CrowdHealth has saved members over $40 million in healthcare expenses because
they just refused to overpay for healthcare.
They do it right, they figured it out, and it's working for me.
The open enrollment is now, so take your power back. Go join CrowdHealth.
Get started for just $99 for your first three months.
Use the promo code UNPLUGGED at joincrowdhealth.com. That's joincrowdhealth.com,

(43:26):
and then our promo code is UNPLUGGED.
CrowdHealth is not insurance. Opt out, take your power back.
This is how we win. JoinCrowdHealth.com, promo code UNPLUGGED.
Unraid.net slash unplugged. Unleash your hardware.
Go check out Unraid, the powerful, easy-to-use NAS operating system for those

(43:48):
of you that want control, flexibility, efficiency, and you just want to play
around real quick with the stuff we're talking about. Unraid is your gateway to that.
And Unraid 7.2.0 just landed. Yes, the new stable release of Unraid is here. New, fresh features.
First and foremost, the web GUI is now responsive. So it's going to look great on a lot of devices.

(44:09):
And then you ZFS users, you're going to love the fact that you can expand a RAID Zed pool one by one.
Whatever that means, I know you're going to love it. And it's here.
That's right. You now have solid NTFS support. If you have a bunch of disks,
like I have a couple of old Windows disks that I want to use,
but I want to copy the data off, boom!
NTFS support's in there. You would be blown away.

(44:31):
You would be blown away. Can I just mention how blown away you would be if you
knew about the just excellent file system support in Unraid?
I mean, we talk a lot about the awesome virtualization support for passing through
hardware and doing VMs and containers next to each other.
We talk about the luxurious community application catalog and the fact that
they're always maintaining this thing and putting out new versions and making

(44:54):
it super easy to upgrade and safe. Your data is always safe.
Like that's stuff I talk about.
But what I don't really mention enough is like, it's got you covered on file systems.
And one of the things that a lot of people who script are going to be happy
to see, they now have a built-in open source API.
And I've already seen the community working on some apps around this. It's chef's kiss.

(45:18):
And I don't know, I guess the community is at a point with maturity where these
applications are just bangers. It's just really impressive.
So the new Unraid's great. And if you haven't checked out Unraid yet,
we've got a deal for you Because not only can you support the show by going
to unraid.net slash unplugged, but you can check it out 30 days for free,
no credit card required, unraid.net slash unplugged.

(45:38):
And if you decide to pull the trigger, they got a lot of nice price options
at different points you're going to like.
And that's just kind of locking in the guaranteed maintenance,
the continued improvement.
I mean, they just hit 20 years and they're still going strong.
So this has got a long runway and it's something you can run for a very long
time with the hardware you have today. It's that great.
Unraid.net slash unplugged.

(46:02):
We have a little piece of mail here from our dear Olympia Mike. Hey, Mike, it's been a while.
Mike writes: Hey guys, um, I'd love to get
in on that roast-my-Nix-config action. This
isn't my personal config, of course, but it's the main
Nix module for the Nixbook project that I've been working
on for nearly a year. The Nixbook install script

(46:22):
basically just adds this base.nix as an
import. And before you jump all over this,
no, the project doesn't use flakes yet, mainly
because, one, technically flakes are still experimental
and I'm trying to be conservative here. It
also just complicates the installer. And, number two, flakes
seem to also be very host-specific, but

(46:45):
I won't know what hostname a user of Nixbook wants.
Either way, this has been running well for the most
part. Notable parts of this config are the
Nixbook config updating itself, sending notifications
to users when you don't know the usernames of the
users on the system, and the automatic way to switch channels when I bump the

(47:05):
channel version. Biggest issue Nixbook users are having currently is printing.
Oh yeah, yeah. I have Avahi, uh, enabled and it finds the printers, but for some reason,
the user still needs to go into CUPS, modify the printer, select the driver,
and then enter their password.
Curious how this can be more automatic, like the way Linux Mint or other distros do it.

(47:29):
That could be something anybody out there knows. Let us know,
because I don't think I have an answer for that one.
Please roast away, boys. Call me out on my jank and make the Nixbook project even better.
Well, I think this email seals the deal, gentlemen. Yeah.
Yeah, I think we're going to do another episode of Config Confessions.

(47:50):
It doesn't have to be a NixConfig.
It could be whatever you are working on, including a Docker Compose file,
maybe an Ansible. I'd love to see a few Docker Composes.
And, of course, your NixConfig. Send them in either via a Boost link or at the contact page.
There'll be a Mike's in there. We've got a few others in there we'll be talking
about. And, you know, of course, I think you better prepare yourself because
a lot of the answer is: flake it up, Mike.

(48:11):
Like, number one, the number one problem you listed you wouldn't even be having.
And number two, you need to knock that experimental stuff off.
But we'll get to that, we'll get to that. He does. I'm just telling it like it is. That is, that is, you
gotta listen to the episode.
That's big channel propaganda, is what it is. It's big channel propaganda, and
I'm not standing for it on this show. I just feel, I just feel strongly about

(48:33):
that. All right, but thank you, and prepare, prepare yourself.
We got some boosts into the show this week as well. KS Koba comes in as our baller
booster with 88,887 sats. That's...
I like it. I like it a lot. Thank you very much.

(48:55):
And I feel like that number means something, but what it really is is a bunch of ducks.
He says, I watched a great interview with DHH and the Primeagen about the perfect
storm of Windows 11 dropping the ball,
macOS getting stale, and Linux getting good, or at least good enough to finally
make the 2025-2026 the year of the Linux desktop, desktop, desktop.

(49:18):
Kind of echoes what Chris was saying recently about a new demographic of users.
I've been able to shift fully in my life now to Omarchy on all of my laptops and desktops.
Still some learning to do, like how do I install from a tarball?
But overall, it feels so fresh and, I'd say, unobtrusive. And the OS just gets out of the way.
That's great to hear. And thank you for the boost.

(49:40):
I really appreciate that. But I love knowing that it's working for more and more people out there.
I do think there has been a particular audience that Omarchy has locked in on.
And I think that's fantastic. As far as running from a tarball,
well, that's a little complicated.
It depends on what you downloaded. You might need to extract it and mark it

(50:01):
executable and then run it.
But generally, you want to try to install something from your package manager
if you can and not from a tarball.
Because you're not really installing it. You're just running it.
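The steps just described can be sketched in a few commands. This demo fabricates its own tiny tarball so it is self-contained; the archive and binary names are made up, and real tarballs vary (some ship install scripts, and you may prefer to put the binary somewhere on your PATH):

```shell
# Demo setup: fabricate a tiny "downloaded" tarball containing one program,
# so this example runs anywhere. A real tarball comes from the project's site.
mkdir -p demo
printf '#!/bin/sh\necho hello from the tarball\n' > demo/sometool
tar -czf sometool.tar.gz -C demo sometool

# The actual tip: extract it, mark it executable, and run it in place.
# Nothing is "installed" -- deleting the directory removes the app.
mkdir -p extracted
tar -xzf sometool.tar.gz -C extracted
chmod +x extracted/sometool
./extracted/sometool
```

Because it never touches the package manager, updates and removal are entirely on you, which is why the package manager route is preferred when available.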
So there's that little hot tip for you. But thank you, Kay. Appreciate that baller
boost. You're the best around.
Well, Turd Ferguson boosts in with 22,222 sats.

(50:24):
I already thought we were called the Jupiter Colony. We have colonyevents.com.
How quickly we forget. Ouch. That's
a good point. That's true. And the Matrix server is the Jupiter Colony. Yeah,
I'm going with it. I'm leaning in.
I'm leaning in. We're calling the audience a colony, and I am fine with that.

(50:45):
Colonized.
Yeah, I agree.
Well, Oppie 1984 boosted in 4,000 sats.
Here's a quote lifted from last episode's boosts. Quote, "I'm assuming Home
Assistant here." And Chris, that was your response to some feedback.
Well, Oppie says, insert Picard facepalm here.

(51:07):
I only happened to leave out the most important details in my feedback.
Yep, I'm switching to Home Assistant.
Little update on my mom using Mint, though. She's trying a live USB,
and so far it's positive about it.
That's an interesting way to
slip it in for old mom there, is give her a USB live stick. I like that.
She does, of course, like that it looks and feels just like Windows 7.

(51:30):
She's still not ready to take that plunge, though, but I'm on the lookout for
one of those Windows 10 laptops people are getting rid of when moving to Windows
11, and then I'll just do a full install so she can try the full experience
before making the switch.
She needs a new laptop anyways, so two netbirds with one gemstone.
Ah, that's big kidneys right there. I like that. Good thinking.
That's a perfect little snipe. You know, those laptops are still going to be plenty good.

(51:53):
Nice thinking. Thank you for the update, Oppie. It's good to hear from you.
Not the one comes in with 2,000 sats.
Should be LUP rats instead of lab rats. Oh, and plus one for another config
confession. I love the deep dives.
Even if it's a topic I don't have a use for. Now that is the perfect listener.
Thank you. We really appreciate that.

(52:16):
Hybrid Sarcasm boosts in with 10,000 sats.
Are you serious? Thank you, hybrid.
Love the recent baller boost. And just a reminder that a free Jupiter Party
membership goes to the listener that boosts the most total sats in 2025.
Right.
There's still time to get those boosts in.
Yeah. We're going to have to put something together for the end of the year
because we're not, well, we haven't, we technically ended the tuxies last year.

(52:39):
That doesn't mean we'd have to do something, but we should, you know,
get together, have a beer, and discuss what we're going to do.
And maybe eat some food too. You know what I mean? We should probably eat some food too.
I mean, do listeners have ideas of what we should do? I'd be open to that.
I mean, we could party, we could road trip, we can go to space, whatever you want.
What? Okay. All right. Okay.

(52:59):
The suggestions have to come with funding proposals.
Well, Moon and I boosted in 5,135 sats.
Now, this is a live boost from a train running under the San Francisco Bay,
135 feet below sea level. Is that a new record?

(53:21):
I wonder what the record is for lowest and highest elevation. Live boost, anyone?
That's got to be it. At least below sea level, 135 feet below sea level live during the show.
Impressive.
I bet you somebody could beat them on altitude for sure.
And, you know, we welcome all elevation boosts.
This is a great idea. If anybody is above 1,000 feet, we're at sea level right

(53:42):
now. So if you're above us, boost in and let us know your elevation.
I wonder if anybody's on a mountain out there. And can I come live with you?
Thank you, everybody who boosts. We have the 2000sat cutoff just for on-the-air
timing and all of that, but we save all of them in our show notes and we read them.
We also had a nice batch of you stream. 24 of you just streamed those sats as you listened.
You collectively stacked a nice humble 21,058 sats for the show.

(54:05):
It's not a strong week for us, but, you know, it's showing up,
and it's still appreciated.
The episode, which gets split between myself, Wes, Brent,
Editor Drew, the podcast index, and the creator of the app, we all
collectively stacked 153,802 sats
thanks to you. That's like an investment in the future of the show, and we really

(54:28):
appreciate that. And of course, it's also a signal. If you liked that episode, it gives
us an idea of what content really works for you. There's no better vote than
a boost. And you can use Fountain.FM to do that, or Alby Hub. There's lots of options there.

(54:53):
And thank you to our members, our core contributors, and the Jupiter.Party.
You put that support on Autopilot, and it's our foundation. We appreciate you
very much. Thank you, everyone.
And we do have some picks before we get out of here. And we've mentioned this on air once before.

(55:16):
It was a sly mention, but it's never made it into the pick category.
And I need to elevate it up because the team's done great work.
It's a very useful application, and it's got a great name.
They had a release on September 8th. It's called Duff, D-U-F,
and it is a free disk usage utility that is, I think, the best visualization,

(55:39):
especially if you've got some media shares or photo shares.
If you have a NAS and you need to kind of get your head around what on your
NAS is eating up a bunch of space, Duff is the way to go.
It's MIT licensed, and they've just been doing great stuff.
And so think less du, more df in terms of where its role is.
Because you get just a really nice breakdown of what file systems you have mounted,

(56:02):
including like it'll call out special devices, like places like slash dev and
slash run and slash sys differently.
So then you kind of get to see like your actual more physical real disks all in one place.
Handy rendering in the terminal, including like little progress style bars to
indicate how full your disk is, color coded. It's just really easy to read.
And a nice breakdown. Also, like the type of the file system is so handy to

(56:25):
have just right there. I just love that.
And you'll love this Wes, outputs to JSON, packaged for just about all the distros
you might possibly want, including Arch and Nix, Fedora and Ubuntu and others.
So Duff, D-U-F, that's the first pick.
But we've kind of fallen into this habit of having more than one pick because
there's so much good stuff these days.

(56:46):
Our cup runneth over.
It does. And this one's a little bit different. It's called cheat.sh,
and it bills itself as the only cheat sheet you will ever need.
And the idea is you install this on your machine, and you forget a command,
and you use cheat sheet to pull it up.
And one of the things that the project talks about here is they focus on crazy great performance.

(57:08):
Like, they want it to be back with an answer in, like, you know, two milliseconds
or something like that. And it covers 56 different programming languages, a thousand
of the most important Unix and Linux commands; a bunch of other stuff is in there.
And it's called cheat.sh. It's pretty nice, and I think it's probably one of the

(57:29):
handiest tools I've seen in a while. It's mostly written in Python.
And did I mention it's MIT licensed? I'm not sure if I did or not.
No.
You got the last one, though.
Yeah.
Now, Wes, on our call yesterday, I specifically remember you disabling,
was it man pages on the Hypervibe?
I was wondering, would you install this instead?

(57:50):
You know, I haven't. That's a good question. As long as you have a system that's
constantly connected, yeah, it might work pretty decently.
Yeah, you want to be able to look up. It's very fast, but it does require an internet
connection, so there's that.
It can be used on the command line for command completion. It could be used inside code editors.
They aim for sub 100 millisecond response times, so you don't have to sit there and wait for it.

(58:15):
I think there might be a way to do it. Yeah, there is a way to do it offline
as well. You can store it for offline.
Oh, that's nice.
I thought so. Yeah, it's pretty great. So it's called cheat.sh.
So two great apps we'll have linked in the show notes, D-U-F, Duff, and cheat.sh.
Yeah, I'll be honest. I've been leaning into these types of things recently

(58:35):
just because like fish showed me the way.
And if I can get another fish-like experience that makes it just,
oh, yeah, that command I do every six months.
That kind of stuff. Love that. Love things that make that simpler. I don't remember
all that stuff like I used to.
I want to print out a cheat sheet, actually, and put it on my monitor, just
like all that, for learning Neovim.

(58:56):
Well done, sir. Well done. Maybe one day, Wes. You never know. Could always be a challenge.
Well, maybe with the LLM helping you, it'll go a little
easier. Nice, nice. I think that's a burn. I think that was a sick burn, actually. Call
it a burn.
Yeah, but topic-relevant. Oh! Well, let's see. What should we tell people about?
Should we tell them about our fancy features that we have?

(59:17):
Yeah, absolutely. Not only do we have chapter markers, you can go right to the
stuff that you like or don't like or skip around or listen in reverse order. I don't know.
We also have transcripts of the whole thing with who said what dumb stuff.
Yeah, and a lot of that stuff is compatible with the OG podcast apps,
the 1.0 apps. We'll bake it into the MP3.
And if they support the standard for the...

(59:40):
And TenantPod does a great job with the transcripts.
Yeah, TenantPod's great for the transcripts. If they support the transcript
standard, like Apple, actually, Apple Podcasts of all apps does. They support the transcript standard.
So it just kind of depends on the player. And then if you have a nicer 2.0 player,
you get even more features.
You get better chapters. You get perhaps more features with the transcript depending on the client.
And then additionally, you get the lit support and potentially the boost support too.
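For the curious, the transcript standard being referenced is the Podcasting 2.0 `podcast:transcript` tag. Here's a minimal sketch of how a feed item might advertise one; the URLs and episode details are made up for illustration, and the namespace URI is my recollection of the spec:

```xml
<rss version="2.0" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <item>
      <title>Episode 639</title>
      <enclosure url="https://example.com/639.mp3" type="audio/mpeg" length="12345678"/>
      <!-- A 2.0-aware client fetches this file and renders the transcript -->
      <podcast:transcript url="https://example.com/639.vtt" type="text/vtt"/>
    </item>
  </channel>
</rss>
```

Clients that don't know the namespace simply ignore the tag, which is why the 1.0 apps keep working with the same feed.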

(01:00:01):
So check us out. Yes, we are live. We do a Sunday, Tuesday show.
Sundays, 10 a.m. Pacific, 1 p.m. Eastern over at jblive.tv.
We have that mumble room, too. You can join us. There's always more in that
mumble room. And if you're a member, be sure to get the bootleg version.
It's clocking in at over an hour and 42 minutes right now of content just for our members.

(01:00:24):
Now, links to everything we
talked about today, those are over at linuxunplugged.com slash 639. Woo!
Woo! Almost to 640. How about that? Also, our RSS feed, our contact form, all that good stuff.
Matrix room, all linked over there. You can find it. It's a website.
It's got links. You're going to love it. Thank you so much for joining us on
this week's episode of Your Unplugged Program.

(01:00:45):
And we're going to see you right back here next Tuesday, as in Sunday.