May 25, 2025 • 90 mins

Fresh off Red Hat Summit, Chris is eyeing an exit from NixOS. What's luring him back to the mainstream? Our highlights, and the signal from the noise from open source's biggest event of the year.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
I think we have officially come full circle. We are recording in the master bedroom of an Airbnb.
You know, we went around, did some scientific testing, and determined acoustically
this was the best location to record the show.
We don't want to get any lectures from Drew.
No. No. And thankfully, I don't think we had to tear apart any beds for this one.

(00:20):
But it's funny because the studio where we record is actually my former master
bedroom converted into a studio.
Yeah, I do think we'll have to let Brent tear apart a bed after this just to
get that energy out because he was ready to go.
I was planning ahead and we're only using one mattress.

(00:45):
Hello, friends, and welcome back to your weekly Linux talk show. My name is Chris.
My name is Wes.
And my name is Brent.
Hello, gentlemen. Well, coming up on the show today, we're reporting from Red
Hat Summit, and we're going to bring you the signal from the noise and why this
summit has me flirting with something new, plus your boost, a great pick, and more.

(01:05):
So before we go any further, let's say good morning to our friends at Tailscale.
Tailscale.com slash unplugged. They are the easiest way to connect your devices
and services to each other wherever they are. And when you go to tailscale.com
slash unplugged, you not only support the show, but you get 100 devices for free,
three user accounts, no credit card required.
Tailscale is modern, secure mesh networking protected by...

(01:30):
Easy to deploy, zero config, no fuss VPN. The personal plan stays free.
I started using it, totally changed the way I do networking.
Everything is essentially local for me now.
Tailscale has bridged multiple different complex networks. I mean,
I'm talking stuff behind carrier grade NAT, VPSs, VMs that run at the studio,
my laptop, my mobile devices, all on one flat mesh network.
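The flat-mesh setup described here boils down to a couple of commands per device. A minimal sketch using Tailscale's standard CLI; the subnet CIDR below is an illustrative example, and the `--advertise-routes` flag is an optional extra for bridging a LAN:

```shell
# Join this machine to your tailnet (opens a browser login the
# first time); after this, every device on the tailnet can reach
# it by its Tailscale IP or MagicDNS name:
sudo tailscale up

# Optionally advertise a LAN sitting behind this machine, e.g. a
# studio subnet behind carrier-grade NAT (CIDR is an example):
sudo tailscale up --advertise-routes=192.168.1.0/24

# List the devices on your mesh and their Tailscale IPs:
tailscale status
```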

(01:52):
It works so darn good that now we use it for the back end of Jupyter Broadcasting
server communication as well.
So do thousands of other companies like Instacart, Hugging Face, Duolingo.
They've all switched to Tailscale. So you can use enterprise grade mesh networking
for yourself or your business. Try it for free.
Go to tailscale.com slash unplugged. That is tailscale.com slash unplugged.

(02:17):
Well, we are here in our Airbnb.
Yeah, why are we doing housekeeping at someone else's Airbnb?
I know. How do they? You know, these Airbnbs, they just get more and more out
of you every single time. But maybe it's because we brought our own mess.
Well, actually, it's not so much of a mess. It's actually going really well.
People are getting excited about our Terminal User Interface Challenge.
We are still looking for feedback on the rules. We have it up on our GitHub.

(02:39):
We've seen some already good engagement, though, people talking about it in the Matrix room.
So we're getting really close to launching it when we get back.
It's the final week before we launch, essentially.
We're going to get back in the studio next Sunday. We're going to sort of set
up the final parameters of the challenge, give you one week,
and then the following episode is going to actually launch.
Get ready to uninstall Wayland.
Yeah, take the TUI Challenge with us. There's a lot there.

(03:02):
It's looking like it's going to be a lot of fun, and we're going to learn about
a bunch of new apps I never knew about.
We have a lot of work to do because the listeners, they're way ahead.
Also, a call out to the completionists. We're doing this for a couple of episodes.
We know a lot of you listen to the back catalog, listening in the past,
if you will, and then you're catching up.
We recently heard from somebody that was about 15 episodes behind,
and it got us thinking, how many of you out there are listening in the past?

(03:25):
So when you hear this, boost in and tell us where you're at in the back catalog and what the date is.
Until those darn scientists finish up, this is the closest thing we have to time travel, okay?
And then one last call out for feedback. This episode, I'm getting into why
I am switching off of NixOS.
And this isn't a negative thing about NixOS, but I thought I'd collect some information:

(03:45):
if you tried it and bounced off of NixOS, boost in and tell me why. I'll be sharing
my story later. But also, if you're sticking with NixOS,
I'd be curious to know what it is about it that's absolutely mandatory, that
you wouldn't give up. Boost that in as well, or linuxunplugged.com slash contact.
We'll have more information about that
down the road, because really it's just ancillary to what this episode is all

(04:07):
about, and that's Red Hat Summit.
So we were flown out to cover Red Hat Summit, as we have done for the past few years.
And the ones where there's a Red Hat Enterprise release are always really the most exciting.
And Red Hat Summit 2025, here in Boston at the Boston Convention and Exhibition
Center, was May 19th through the 22nd.

(04:30):
And they did something a little different this year. They decided to make what
they really referred to as Day Zero Community Day.
So this was a track that in the past sort of ran adjacent to Red Hat Summit,
and is now a dedicated entire day. And I thought I'd go check it out.

(04:54):
Welcome to Community Day at Red Hat Summit, day one. And it's all about,
you guessed it, artificial intelligence.
Well, okay, and Linux. But they made a pretty good call.
They said, hey, Red Hat is working to set the open standards for Red Hat and for data and for models.
And here at Summit, you can interact with us directly and inform how we participate in those.

(05:18):
So sort of like get involved in AI through Red Hat, a call to action,
as well as just general information about today's event.
You knew right from the beginning, okay, it's going to be another year where
we focus on AI quite a bit.
But this was a kind of a different call. It was, there's a lot of impact still
to be made for open source AI.
And we're really, as a company, Red Hat's really making a push.

(05:41):
So why don't you get on board with our open source initiatives and inform the
conversation there? And we'll push the wider industry based on your feedback.
I mean, I do think that's a trend we see play out over and over,
both between, you know, Red Hat interfacing with the industry,
but also really leveraging and in many cases, sometimes being driven by what's
available and what's happening in the open source side because they really have

(06:02):
skill sets and how to, you know, turn that into an enterprise product.
So the better the open source side gets, the better their product gets.
I wasn't really sure what the focus would be this year. I mean,
I knew RHEL 10 was coming, but I, you know, last year was really focused on local AIs.
Could you do two years of Summit on AI? And this was Brent's first Red Hat Summit.
Which is hard to believe, really.

(06:22):
And we wanted to capture his first impressions sort of right there after he'd
had a chance to walk around on what they're calling Day Zero.
Well, this is my first time here at Red Hat Summit. And I got to say,
you guys warned me about the scale of this thing.
Wow. Just the infrastructure and the number of booths and the number of people
and how organized it is to get everybody all here and doing the things they're supposed to be doing.

(06:47):
I am a little overwhelmed by just the size.
I bet you they spend more on hotel rooms than I probably make in five years.
I don't know, maybe more.
Oh gosh. I do have a nice hotel room, but even just the layout,
like how everything's so close. You don't have to go very far.
You don't have to travel.
Everybody knows sort of where to go. There's people just standing around helping with wayfinding.

(07:10):
Like it's super impressive.
The vibe's a bit different. At LinuxFest, they weren't even doing registration.
They weren't counting attendees.
There wasn't anything like that. And here, you have to get your badge scanned to
enter every area and every room. And the security is definitely a higher presence.
What impression does that leave on you?
Well, I guess there are relationships being built here that are very different

(07:32):
than the relationships being built at other conferences, right?
Like we saw some negotiation booths, didn't see those at LinuxFest.
So it's a bit of a different feel, but there's some real stuff happening here,
some real connections being made.
Day two should be even more interesting. Really, it feels like maybe things
are just kind of slow rolling today.

(07:52):
Did you get that impression that it's just sort of not quite started yet?
But it seems like people are still arriving and warming up to the whole situation,
getting the lay of the land.
So I'm excited to see tomorrow. That's when all the exciting stuff happens.
Yeah, just wait. You get to wake up real early for a bright and early keynote, Brent.
I forgot about how we have a time zone disadvantage.
So day zero, if you will, was sort of the ideal day to go see the expo hall.

(08:17):
These expo halls are just quite the spectacle. I mean, the crews that come in
and set these up in an amazing amount of time, they also have all of this racking
they do for the lighting.
I learned it took them two days to put all that together. And apparently that
was like quite a miracle.
Yeah. I mean, these booths are structures with like areas inside them and,

(08:41):
you know, massive displays and LED lighting embedded everywhere.
These are your highest of the high-end display booth type stuff.
I mean, this is really nice stuff. and we wanted to see it before it got too crowded.
Well, you can't do day one without doing the expo hall and it's an expo hall.
Let me tell you, it's a whole other scale than, well, LinuxFest Northwest or SCaLE.

(09:02):
Lots of production, lots of money, lots of lighting.
Right now, we're standing out front of the DevZone, which seems to be one of
the more popular areas, and in particular, the DevZone Theater.
And what do they seem to be going over, Wes?
Yeah, they're talking about the marriage of GitOps and Red Hat Enterprise Linux Image Mode.

(09:22):
And despite us just being in a packed talk, I think there might be more people
trying to watch this here on the Expo Hall floor.
I think there's a lot of excitement around image mode and the things you're
going to be able to do or can already do.
We're tying it to existing declarative workflows with patterns that developers

(09:43):
like that now can meet the infrastructure.
It does seem to be a real hunger for it here.
It's standing room only right now. And they're doing a live presentation too.
So there's a screen and everybody's trying to see it.
But there's so many people in the way. We're here in the back.
We can barely see the screen.
So Brent, what do you think of this expo hall compared to other experiences you've had?

(10:04):
It is very large. I got to say it's very well spaced. Like you can see a ton.
It's not like these little cubes. Many expo halls just feel closed in.
This is open and breathy and tons of people but doesn't feel squished together
and it's bright and I don't know. Innovative?
You know, it does feel squishy. This floor.
Why is this like this? So we're in the like dev room cloudy space?

(10:28):
I know, we're at app services, Brent.
Oh, see, I'm confused. But the flooring, they've added extra cush. It's like very cloudy.
Feels good on the tired feet.
Now, something that you caught in there was image mode, and there was buzz on
the expo hall floor about image mode, but RHEL 10 hadn't actually been announced yet.

(10:49):
So we hadn't officially heard the news about image mode, but staff were walking
around and literally asking, have you heard any leaks about RHEL 10?
You heard anything? Because there's some things going around.
And then we were like, and what, I can't remember what we said.
It was something like, why don't you tell us what the leak is,
and we'll tell you if we heard it, was our answer.
So there was some anticipation around day two and the keynote,

(11:11):
because that's where we expected to get the official news of RHEL 10, and Matt
Hicks, the CEO of Red Hat, kicked things off.
Welcome to Red Hat Summit 2025. This is our favorite week of the year,
and it's great to have so many customers and partners here with us in Boston.

(11:32):
There's so much to learn this week, and we hope that each of you can come away
with a new insight to improve your business,
yourself, and hopefully strengthen one of the things that brings many of us here, open source.
He had an analogy pretty quickly after that, where we all three looked at each

(11:55):
other in the dimly lit keynote room, and we're like, what?
So I wanted to play it again for us so we could actually have a conversation about it.
This isn't about replacing your expertise. This is about amplifying.
I recently had to explain this tension to my 10-year-old son who loves basketball.

(12:17):
This is how I explained it to him. Imagine a new sports drink comes out,
and when you drink it, every shot you take goes in.
How would this change the world of basketball?
Now, an extreme position is it would kill the world of basketball.

(12:38):
How can you have competition when a middle schooler could shoot better than Steph Curry?
But I don't think that is necessarily true.
Strength still matters. Just getting the ball to the rim from half court is no easy feat.

(12:59):
Defense still matters. Your shot can be blocked.
Speed still matters. You have to get open just to take a shot.
So yes, a sports drink like this would drastically affect one aspect of the game. Accuracy.

(13:19):
But how can we possibly understand the impact on a game just by removing one
factor when there are so many others in regards to height,
speed, endurance, athleticism, strength.
A change like that would fundamentally change the world of basketball that my son knows and loves.

(13:45):
It would change who could be great at the game.
It would change the focus of the game. It might change the rules of the game,
but it would not eliminate the game.
I believe we would take these factors, we would shape them into a new game,
and given just the inherent creativity in people, that new game would be better.

(14:12):
Right now, that's exactly where we are with AI.
We're in the moment of uncertainty between games, between worlds.
We have to simultaneously understand that while the fundamentals that we know
are changing, maybe beyond the point of recognition,

(14:36):
there are so many other factors that come into play in terms of creating true business value.
So there's a couple of things that jumped out at me during the keynote when
he said that. And I think the first one was, this is again, the CEO of Red Hat.
And I think he just gave us an analogy to what they view AI as,

(14:57):
as this almost magic sports drink that means that if they can get everything else to line up,
all the other supporting players in the game to line up, that then they have
this solution that's going to let them get nothing but net.
That is, that's essentially like a makeup company saying they have discovered
the fountain of youth and they're going to bottle it, right? I mean, that is the

(15:19):
biggest of the biggest statements. So I'm just starting there, before we even
get into the other aspect of the analogy. What are your impressions of that?
Well, I think it's a big deal. Like, AI is all
uncertainty currently, but this statement feels like: we know exactly the direction
we want to go in, we are already working towards it, and it's already doing things

(15:40):
for us. And there's still a lot of vision here from a company that otherwise
didn't work on AI, right, until recently.
It does seem like they have a lot of the supporting products in place to realize
this idea that he put out there, and we can get into some of that later, but
they have several product pieces that sit on top of RHEL that are trying to enable

(16:01):
this vendorless, accelerator-neutral,
backend-neutral AI system that's local or in the cloud.
Also, when it's in the cloud, it's completely vendor-neutral from Oracle to
Azure, or you can run it on your own infrastructure and pick your backend models.
So they're trying to put all the supporting players in place,
but to me it still feels like a real wild analogy.

(16:23):
See, I think I see it more as trying to
acknowledge the fears of folks around AI
and the uncertainty, but making a
pitch on the sort of human enablement side,
right? Like, kind of talking to the people who have to work with their products
and administer them and saying, we think this will make you more effective
in that goal. And then, to your point, on the other side, they're working to

(16:47):
make sure that their technology is ready to meet that and interface with
whatever AI power-up you are able to get.
The way I interpreted this, re-listening: I think the look we gave each other
live was like, what is this trying to say? And re-listening to it here live,

(17:08):
I got a little confused because at first he set it up as like, open source is great.
Here's the basketball, you know, sports drink thing that gives you superpowers.
And I thought, okay, open source is the sports drink and it allows all sorts
of new things to happen and all sorts of new technologies to flourish because
you've solved that problem in a way that is collaborative, et cetera, et cetera.

(17:28):
And then he's quickly shifted to the AI piece, which almost reflects,
for me, the trajectory of Red Hat.
Yeah, they very much came to the point of saying, we see the path that AI is
on right now as a similar path that open source was on and Linux was on 10 to 20 years ago.

(17:48):
While this might feel new for
many of us, this isn't the first time we've experienced this in software.
In fact, when open source emerged, there were a lot of people that felt the same way about it.
Open source challenged how software created value, even what competition meant.

(18:16):
It removed barriers that defined proprietary software. It even added a new factor
around collaboration being critical for success.
And in that challenge, it was feared, resisted, ridiculed, attacked.

(18:40):
And yet, last year, there were over 5 billion contributions made to open source software.
Despite the fear, despite the attacks, despite the disruption,
open source still changed the world of software.

(19:01):
I felt that potential in my first experience with open source.
It captured my imagination along with millions of others.
It defined my career along with millions of others. Where others saw fear or disruption.
I saw potential, along with millions of others.

(19:27):
That is exactly what we're experiencing with AI right now.
The world that many of us know is open source and software and IT.
We have shaped this world over decades, and now the rules are changing.

(19:47):
And while that can be scary and that can be disruptive, if we take a step back,
the potential is also undeniable.
I would be really interested in the audience's thoughts on the parallels and
analogies that Matt was drawing here.
Boosting with your thoughts, if you agree, if you strongly disagree,
I'd really like to hear that as well.

(20:08):
But I think the news we were sitting there waiting for was actually RHEL 10.
And so Matt steps off the stage for the first time, and we get into the news.
Please welcome Red Hat Senior Vice President and Chief Product Officer, Ashesh Badani.

(20:34):
Everywhere you turn, the world is running on Linux.
Tens of millions of people trust Linux to power the critical infrastructure.
And trillions of dollars a day is dependent on Linux.
For more than 20 years, Red Hat Enterprise Linux, or RHEL, has been the trusted

(20:59):
platform for organizations around the world.
It is the heart of Red Hat's portfolio and the foundation of our core technologies.
But Linux is often managed the same way it was 10 or 15 years ago.
Today, we're changing that.

(21:20):
We're giving Linux admins new superpowers that allow them to wait less and do more.
That's why I am so excited to announce RHEL 10.

(21:42):
This is the most impactful, most innovative release we've had in a long time.
And image mode is one of those reasons.
We'll get to that in a moment. But there was another announcement up on stage
that I wanted to include too, and that was something they're calling llm-d.
Reasoning models produce far more tokens as they think.

(22:04):
So just as Red Hat pioneered the open enterprise by transforming Linux into
the bedrock of modern IT, we're now poised to architect the future of AI inference.
Red Hat's answer to this challenge is llm-d, a new open-source project we've just launched today.

(22:33):
llm-d's vision is to amplify the power of vLLM to transcend single-server
limitations and enable distributed inference at scale for production.
Using the orchestration prowess of Kubernetes, llm-d integrates advanced inference
capabilities into existing enterprise IT fabrics.

(22:55):
We're bringing distributed inference to vLLM, where the output tokens generated
from a single inference request can now be generated by multiple accelerators
across the entire cluster.
So congratulations to all of you. You came here to learn about the future of
Linux, and now you know what disaggregated prefill and decode for autoregressive transformers is.

(23:18):
It's actually a really significant contribution. So you could think of it as
you submit a job to an LLM and then this system sort of sorts out the best backend
execution based on resources, the type of job, the accelerator you might need.
So it's taking something that is a real single pipeline and breaking it up with

(23:38):
all of this backend flexibility.
Here's how they describe it on the GitHub readme. llm-d is a Kubernetes-native
distributed inference serving stack,
a well-lit path for anyone to serve large language models at scale with the
fastest time to value and competitive performance per dollar for most models
across most hardware accelerators.
So bringing that home and, you know, what it actually means in a practical sense

(24:00):
for like a small business like myself,
it would be maybe we have a few jobs that run on Ollama locally on our LAN hardware,
but every now and then we have a big job, and we want to execute that out
on cloud infrastructure, and this can help us do all of that and the orchestration of it.
So it's actually a pretty significant contribution, and it works with vLLM,
which we'll talk about more later or now.
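At its simplest, the vLLM piece is single-node serving, which is exactly the limitation llm-d is built to transcend. A hedged sketch: the model name, port, and flags below are illustrative placeholders rather than anything Red Hat ships, though `vllm serve` and its OpenAI-compatible endpoint are vLLM's standard interface.

```shell
# Serve a model with vLLM's OpenAI-compatible HTTP server, splitting
# it across two local GPUs with tensor parallelism (model name and
# port are illustrative placeholders):
vllm serve ibm-granite/granite-3.0-8b-instruct \
  --tensor-parallel-size 2 \
  --port 8000

# Any OpenAI-style client can then submit inference requests;
# llm-d's job is to spread this same kind of request across many
# accelerators in a Kubernetes cluster instead of one box:
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ibm-granite/granite-3.0-8b-instruct",
       "prompt": "Explain disaggregated prefill and decode.",
       "max_tokens": 64}'
```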

(24:21):
No, no, I was just going to say it is a big contribution, and Red Hat's playing
a huge part, but they also list right here folks like CoreWeave,
Google, IBM Research, of course, as well as NVIDIA.
Yeah, yeah.
And there's been some news about AMD's interest and involvement as well.
And the NVIDIA involvement is particularly interesting to me because this doesn't

(24:41):
serve NVIDIA in selling more hardware.
This project actually enables people to distribute workloads to other things
that are not NVIDIA hardware, that are cheaper things when not needed.
And so it's pretty interesting to see NVIDIA actually engage in this process. I get why AMD is.
But it's interesting to see NVIDIA engaged, even though it kind of,
in a way, eats away at their hardware mode.

(25:02):
And I think it's exactly things like that that are maybe drawing some of the
parallels to the Linux evolution that we've been talking about.
Yeah, and so the behind-the-scenes conversations I had with Red Hat staff is
essentially, this is where the users are.
NVIDIA is doing this because their customers are asking them to,
just like their customers asked them to support Linux years ago.
So, yeah, that's the parallel there.

(25:22):
So it was a long keynote. I'm not going to lie. It was two hours.
And what we just shared with you were some of the highlights,
but there are also moments where they're trying to address multiple audiences.
You have your technical people there.
You have your sales people there. You have your chief technology officers there.
And so in one keynote, they're trying to speak to all of these different diverse
audiences that just don't really get the same messaging.

(25:44):
And so you'd often have guests come up on stage that kind of essentially say roughly the same thing.
And it gets really business jargon heavy because you're speaking to that audience.
So we sat there for a while listening to a lot of that and then also interspersed
with like these really interesting technical moments.
We just stepped out of the keynote. This was the big keynote. There will be a

(26:04):
keynote every day, but this was the big one. It was a two-hour chonker,
and we got Red Hat Enterprise Linux 10, which is pretty great, and image mode was a
big part of that. There were essentially four key things that they listed that
they're really excited about in RHEL 10, and I think image mode is what they led
with, and it's probably the one that stuck with me the most. They talked about
how vendors like Visa want to be able to update their infrastructure as if it was a smartphone,

(26:27):
and just flip a switch and they've got the new updates, and it'll streamline
updating security. And I'm actually here for it. I hope it makes RHEL a little
more maintainable for shops that are deploying it. But of course, for the second
year in a row, the big topic was artificial intelligence, and AI was really baked into

(26:49):
everything. And I'm just curious, Brent, as a first-timer,
what your impression of all of the AI talk was, because, I mean, you just can't
prepare a guy for this much AI talk. The scale of the AI.
I did notice that they
basically took each of their products and added AI on the end of it, which I
didn't expect, and nobody really addressed that, but they're just sort of spreading the AI

(27:13):
throughout. I think maybe that's more of a strategic plan to,
I don't know, be part of the future. But
I'm curious how that dilutes the current products, or where they're headed with it.
It is a lot of brands now to keep track of. And like I said, we're in year
two of this, and I'm not 100% convinced that all of the people watching in that

(27:35):
room actually have the needs they're addressing up on stage.
I think some people do, you know, airlines and visa, I think they do.
But I'm not sure everyone in that room was really feeling the urgent pressure
to deploy AI to get a return on, you know, investment or total cost of ownership
lowered for whatever they might have.
And that's not to say that Red Hat doesn't seem to have found a more refined

(27:56):
focus for their AI implementation.
I think year two of this AI focus is actually a lot more practical.
It's about shrinking the size of some of these models.
It does seem like they've found a few areas that they can bring some of the
special Red Hat sauce to.
Yeah, you know, okay, I think you're right that there are definitely questions
around, is there this low-hanging fruit of like, you got to meet this AI need,

(28:20):
AI can do it today, you just have to figure out how to deploy it?
Yes, for some, everywhere, maybe an open question. But I do think you have to give Red Hat credit.
Like if you were trying to, if you are solving that problem,
they have a lot of nice things in the works from like you're saying, right?
Like quantized and optimized models that you can just get from Hugging Face
or via catalog in your Red Hat integrated products.

(28:41):
They've also been talking a lot about vLLM and turning that, via the new llm-d,
into a distributed solution, right?
So now you can do inference that isn't just running from a single process.
It's doing inference across your whole cluster of GPUs.
And, you know, we saw folks today from Intel and AMD and, of course, NVIDIA.

(29:02):
But it's nice to see, at least, whether or not you're really using in your business,
if you were to, that you would, in the future at least, have real options not
only between different models, but also different accelerators, as they put it.
That distributed model stuff you were talking about, that was an opportunity
for them to bring Google up on stage.
And the comment was, Google was our partner in crime in creating this.

(29:23):
So really leaning into partners there. Microsoft Azure got a mention up on stage.
So they're trying to present themselves as a vendor-neutral AI solution.
And when I say trying to present, I think they are doing it.
They're doing it successfully.
So if someone out there is in this market, I mean, Red Hat is killing it.
But for me, as somebody who's looking at the more practicals,
RHEL 10 is it, right? You get...

(29:45):
Improved security, you get image mode. And the other thing that they talked
about, almost as if it was new, is virtualization.
RHEL 10 is clearly making a pitch to shops that want to migrate off VMware. Did you catch this too?
Oh, yeah. I mean, the whole product offering and really the rise of OpenShift
virtualization, you know, it's not necessarily net new.

(30:06):
And things like KubeVirt have been around for a while to let you run VMs as containers.
But they didn't quite come out and say the word Broadcom. But you got the feeling,
you could tell there were stories around like, oh, a year ago,
we really needed to modernize or look into our virtualization spend.
And last year, there was a lot of talk about the potentials,
I think, at Summit, right?
And folks were talking about OpenShift being well positioned.

(30:28):
And this year was a bit of a, let's show you all the successful customers now
deploying and have migrated or are in the process of successfully migrating
to an OpenShift and a Kubernetes-based virtualization platform.
And we even saw a variant of OpenShift announced that is basically just
OpenShift tailored for only running VMs. So that's full circle.

(30:50):
The Emirates Bank was up on stage. I think they mentioned they had somewhere
between 9,500 and 9,800 virtual machines running under OpenShift Virtualization.
And they also announced the OpenShift cloud virtualization is available on all
major cloud platforms, including Azure and Oracle.
Wow. So when you think you've got a solution that works on-premises and something

(31:11):
you can easily offload to the cloud, it actually kind of left me feeling like
we need to play around with OpenShift virtualization and just kind of wrap our heads around it.
Just give me your Oracle API key, we'll get started.
And I wasn't kidding either. I really felt like they made a good pitch for RHEL 10
and the OpenShift virtualization platform. I think it's something we are going
to experiment more with and get more hands-on experience.

(31:33):
It was actually a good, solid product. We got a hands-on demo for the press
that they went through the dashboard.
And it looked just as easy to use as Proxmox.
Or if anybody's familiar with the later iterations of VMware,
ESX, and things like that, it really sort of met those expectations as far as
management and dashboard went. It looked good.

(31:54):
Yeah, you can tell it works really well if you have an existing containers workflow
in OpenShift and you want to add virtualization.
But now they're even targeting it for folks that maybe haven't yet tried out
OpenShift, but they're looking for a virtualization solution.
And you can get yourself an OpenShift cluster pretty much just tailored to run virtualization.
And then maybe later you expand down to containers too.
So there was something that really got my attention. And I am thrilled to see

(32:17):
Red Hat pushing further down this path. And you see it also becoming really
popular with Fedora Silverblue.
You see it with Bluefin and Bazzite and the uBlue universe of operating systems.
It's using images to manage and deploy your infrastructure to get immutability.
And image mode is something that Red Hat is focused on. They're taking bootc

(32:39):
and they're bringing it even further.
And we had an opportunity to sit down with the product manager of image mode
for RHEL and we got all the inside deets.
Well, I'm standing here with Ben, and he's the product manager for image mode for RHEL.
And I asked him to try to give us the elevator pitch of what image mode is.
Yeah. Well, that's a great question. So, okay, we know containers,

(33:00):
right? We've been building containers for applications for a decade now.
All the same ways that you build containers and manage them,
we now can do that for full operating systems.
So we're going to change one important detail, right? We all know a Docker container,
it's going to share the kernel, right?
Well, these base images that we use for this, they're now bootable containers.
So the kernel is going to live and be versioned in that container, right?

(33:23):
And so now we're going to take that, we're going to write it to metal,
we're going to write it to the VM or cloud instance, whatever.
And now that server is going to update from the container registry.
So now all of your container build pipelines, whatever automation you're using
for testing verification, now you can do that for operating systems.
So it's really the same tooling, tool set, language, same everything for your

(33:45):
applications you can now use for your operating system.
The world we're living in is complicated enough. It's only getting more complicated.
So anything we can do to simplify and reuse and just get people to value faster is the way to do it.
And that's what you get with image mode for RHEL.
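Ben's elevator pitch maps onto a very small amount of tooling. Here's a minimal sketch of what "build the OS like a container" looks like; the base image below is the real public CentOS Stream bootc image, but the package choices and the registry name in the comments are illustrative assumptions, not something from the episode.

```shell
# Define a bootable OS the same way you'd define an app container.
# quay.io/centos-bootc/centos-bootc is a real public bootc base image;
# the packages and the registry in the comments are placeholders.
cat > Containerfile <<'EOF'
FROM quay.io/centos-bootc/centos-bootc:stream9
RUN dnf -y install tmux htop && dnf clean all
EOF

# Build and push it like any other container (requires podman; shown for context):
#   podman build -t registry.example.com/os/base:latest .
#   podman push registry.example.com/os/base:latest
cat Containerfile
```

From there, the host tracks the registry, so pushing a new image is effectively publishing a new OS version.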
That does sound very nice. So how are you booting an image? Is bootc involved here? Yes.

(34:09):
Bootc is the core of the technology; the name stands for boot container.
It's the magic that kind of closes the gap between the tarball that your container
image is and, like, the system.
It gives you like an A/B boot feel to the system, right?
So when you update, you stage the next one in the background and you can reboot
and now you're in the new one, right?

(34:29):
So bootc is the core of this and the core command line when you need to update
the image or switch to a different one or reprovision the system.
So yeah, and bootc went into the CNCF. It's a sandbox project now.
We're working on getting the incubator status.
So yeah, that's the core. My recollection is that we got bootc at the last summit.

(34:50):
Bootc was announced. So has this been kind of in the works since that announcement?
Yeah, exactly. So we did a big announcement last year.
Since then, we've been working with a lot of customers on getting them to production,
right? We just had one mentioned in the keynote. We had another one speaking yesterday.
I don't know if I can say names on this, so I'm going to leave it out.
But I don't know. It was great. We have another one speaking later today.

(35:13):
And then one of the hyperscalers is demoing it right now.
So yeah, I would say just the traction we're seeing has been awesome.
So it definitely feels like that fit to where it's the right tech at the right
time for people to be using it.
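The command line Ben mentioned a moment ago, for updating, switching, or reprovisioning, is small enough to sketch. The subcommand names below are real bootc subcommands; the image reference is a placeholder, and a stub function stands in for the binary so this sketch runs on machines that aren't bootc hosts.

```shell
# Stub so the sketch runs anywhere; on a real bootc-provisioned host you
# would delete this line and run the commands as-is.
bootc() { echo "demo: bootc $*"; }

bootc status                              # show the booted and staged images
bootc upgrade                             # stage the next image in the background, A/B style
bootc switch quay.io/example/os:stable    # rebase the host to a different image
bootc rollback                            # queue the previous image for the next boot
```

The upgrade stages in the background exactly as described in the interview: you reboot into the new image, and the old one stays available for rollback.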
Yeah, I'm curious. I felt like when we kind of heard stuff last year,
it was co-announced or at least sort of pitched a bit as being motivated by

(35:34):
like fast-moving problems specifically around AI workloads.
You know, like here's this new mode of operations we think would be a really good fit.
But I'm curious, last year we heard a lot of sort of like, okay,
we're starting there, but we think, you know, the applicability is a lot broader.
And I'm wondering if that's kind of showing out in customer adoption. Yeah, it's way broader.
I think I almost look at this just kind of an image flow is very general purpose,

(35:57):
right, is where you can get to quite quickly.
So yes, it's still very relevant for AI. RHEL AI actually ships as a bootc image,
right? And we run it that way.
I'll say the big, one of the big values is anytime you're connecting a complicated
stack, I'm versioning a kernel, kernel modules, different frameworks,
libraries, where it's a Jenga stack, which a lot of AI looks like these days.

(36:20):
Building with containers solves a huge amount of versioning problems.
We want to get people out of the state where I DNF update a package and oh,
my storage doesn't work because there's a lag over there and blah, blah, blah.
Like, no, if the build fails, it'll never hit your server.
This is, when you use containers, that just becomes so easy, right?

(36:41):
Again, it's about going back to simplifying all the complexity we have and getting
to value is the whole thing, right?
I'm just curious, what does it look like, you know, for folks maybe who have
never tried image mode but have experienced regular RHEL deployments,
how do you get started with, like, a new system that's full-on image ready?
Great question. So there's different paths.

(37:02):
It depends on your environment. So the answer may change a little bit depending
on what your needs are. But in general, I think Podman Desktop is probably the easiest tool.
It's no cost. It runs on any platform. So if you're working on a Mac or Windows,
we'd love to upgrade you to RHEL. But, you know, we get it, right?
So you can put this on. There's a Bootsy extension.
You can build containers. You can convert them to images. You can boot them

(37:24):
as a VM all from Podman Desktop. It's amazing.
I use that today. Now, I immediately then switch to versioning everything in Git.
And I have GitHub Actions do everything.
So my good buddy Matt here and some other colleagues put together templates
for all the big CI CD systems.
So if you want to just get started with, say, you do GitHub Actions,

(37:46):
GitLab CI, Jenkins, Tekton, Ansible, you get the idea. It's infrastructure agnostic,
right, is the whole thing.
We've got all the templates; clone the one you need. It's so easy.
So we kind of have a good path if you want to work locally or if you want to
work in, like, a Git model.
Those are the two paths I would steer you towards.
Given bootc and image mode are relatively new, what are the challenges coming

(38:09):
up that your team's going to be working on?
Well, we've got a big roadmap. We're adding more security capabilities.
You know, I mean, there's multiple ways to answer your question.
But let me talk about security, right? Because this is forward-looking stuff here.
We have all the pieces, and we're working on stitching them together.

(38:29):
Because what we want to do is, the way you sign applications with, like, cosign
for your container image, we can take that same basic key and
actually inject it into firmware, if it's UEFI, or inject it into the cloud image, right?
And then from there, we can have a post-process step on your container that
makes a UKI, a unified kernel image, right? That is signed, and we get full measured boot.

(38:51):
And then the root FS of that container, that digest is in the UKI as well.
So if your root file system gets modified at all, it fails verification. It's the holy grail
security story, that tamper-proof OS that we've been chasing.
So bootc gives us all the things we need to stitch that together in Linux and make it easy.

(39:11):
Because today this stuff is possible, but you have to be like,
there's like five people on earth that can do it today, right?
And I want like me to be able to do it, right? And so...
So my goal, again, it's forward-looking statements, so all that.
But I hope next year at Summit that's what we're talking about and everyone is like, wow.

(39:34):
That'd be great. I'd love to catch up at next Summit and see how it went. Thanks, Ben.
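The signing half of the pipeline Ben described is buildable today with sigstore's cosign; the UKI and measured-boot stitching is, as he says, forward-looking, so this sketch covers only the container-signing step. The key paths and image reference are placeholders, and a stub stands in for the cosign binary so the sketch runs without it installed.

```shell
# Stub so the sketch runs without cosign installed; with the real CLI you
# would delete this line. The subcommands shown are real cosign subcommands.
cosign() { echo "demo: cosign $*"; }

cosign generate-key-pair                       # one-time: writes cosign.key / cosign.pub
cosign sign --key cosign.key quay.io/example/os:stable
cosign verify --key cosign.pub quay.io/example/os:stable
```

The forward-looking part is feeding that same key into UEFI firmware or the cloud image, then embedding the root filesystem digest in a signed UKI, which is the tamper-proof story he's chasing for next year.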
I'm particularly interested in Red Hat adopting this further because it brings
a lot of what I like about NixOS and what I like about Bluefin and Bazzite.
But it brings it to the enterprise operating system and it could solve so many problems.
And you guys know I've talked about this, but the other reason why I kind of

(39:57):
like this approach that they're doing is while it is a top-down system,
it is leaning into workflows that people already understand.
They're already deploying containers. They're already using GitHub Actions or
whatever they're using locally.
There's tens of thousands of DevOps engineers out there that could start deploying

(40:17):
their own custom bespoke Linux systems.
And this is why I got into Gentoo back in the days, because I needed very bespoke
custom systems. And there was no tooling around this. There was nothing.
I didn't, you know, I didn't really have a lot of options. So I went with Gentoo
100 years ago to build these really bespoke custom systems that then I would
manage and orchestrate from like this crazy scripting thing that I had set up.

(40:38):
But this brings this to everybody using systems that are maintainable with Red
Hat's backing and their whole CYA when it comes to certifications,
licensing, compliance.
I mean, it just makes me think of the other ecosystems here. Think about setting
up a bootstrap system, just going from the base up, trying to get that going.
And then for an RPM style, it's going to be different. And for an Arch system,

(41:02):
it's like pacstrap or whatever.
And there's all these different things. And then in this new world,
you just change what base image you pull from. And it's just so much simpler.
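Chris's point, that bootstrapping collapses to a base-image choice, can be made concrete. Both FROM lines below are real public bootc base images; the installed package is an illustrative assumption.

```shell
# Same definition, two ecosystems: only the FROM line changes.
cat > Containerfile.centos <<'EOF'
FROM quay.io/centos-bootc/centos-bootc:stream9
RUN dnf -y install cockpit && dnf clean all
EOF

cat > Containerfile.fedora <<'EOF'
FROM quay.io/fedora/fedora-bootc:41
RUN dnf -y install cockpit && dnf clean all
EOF

# The difference is exactly the base image line; no debootstrap, no pacstrap,
# no kickstart, just a different FROM.
diff Containerfile.centos Containerfile.fedora || true
```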
As somebody who used to really, really get frustrated managing, you know, systems
where your only options were RPMs and maybe, you know, an RPM repo that got
you what you need, this is just such a huge landscape shift. And it was nice to

(41:23):
be able to pick Ben's brain.
One question I ended up having in all of this is how old these new packages
are. Like, RHEL 10 just came out.
But, you know, in enterprise, things are slightly more glacial than,
let's say, NixOS, which we visited last week.
So what are we looking at here, boys? Like, what does RHEL 10 actually have under the hood?

(41:46):
Well, I believe it was branched off from Fedora 41.
I think during the beta, maybe there was a 6.11 kernel, but it's shipping with Linux kernel 6.12.
And then I believe GNOME 47.0.
We also got DNF5 in Fedora 41.
Which is probably a big change.

(42:07):
When you look back at the Fedora releases, you can see, oh, Red Hat was trying
to get this pipe wire milestone in.
Red Hat was trying to get this DNF milestone in because ultimately that became RHEL.
And sometimes you see these things get packed into a Fedora release for that reason.
And DNF5 is great. So, you know, for the parts where you're maybe not doing
it with image mode, that will be killer.
And also bootc initially shipped in Fedora 41.

(42:29):
So there you go. See, to me, it's like, if you like Fedora 41,
well, now you get that in RHEL. It's basically Fedora 41 LTS,
which is kind of appealing.
You get GNOME 47 or KDE 6.2.
You know, I had just a quick thought here on image mode and if it sees wider deployment.
One small benefit of the approach, maybe it's a big benefit,
is the A/B style and rollbacks that this really easily enables.

(42:52):
And I was just thinking, you know, when we've seen recent issues,
big problems with Windows deployments in the enterprise, where maybe something
like a quick, easy boot, undo, boot into the last version rollback would have
saved just billions of dollars of agony.
And we know, right, like RHEL is deployed at or above the scale of Windows in
these types of backend enterprise applications. So this could be huge.

(43:16):
And I think so. I think it's so monumental that it's making me seriously consider
the Red Hat ecosystem for what I do, for what we do.
Whoa.
Yeah, we'll get into it.
Onepassword.com slash unplugged. Now, imagine your company's security kind of
like the quad of a college campus.

(43:36):
Okay, you've got these nice, ideal, designed paths between the buildings.
That's your company-owned devices and applications.
IT has managed all of it and curated it, even your employee identities.
And then you have these other paths. These are the ones people actually use,
the ones that are worn through the grass.

(43:57):
And actually, if we're honest with ourselves, they are the straightest line from
point A to point B. Those are your unmanaged devices, your shadow IT apps, your
non-employee identities. Like me, a contractor. I used to come in and be one of
those. I was always shocked, because they're not designed to work with the grass
paths; they're designed to work with the company-approved paths.
That's how these systems were built back in the day. And the reality is a lot

(44:19):
of security problems take place on the shortcuts, the paths users have created.
That's where 1Password Extended Access Management comes in.
It's the first security solution that brings all these unmanaged devices,
apps, and identities under your control.
It ensures that every user credential is strong and protected,
every device is known and healthy, and every app is visible.
The truth is 1Password Extended Access Management just solves the problems traditional

(44:42):
IAMs and MDMs weren't built to touch.
It is security for the way we actually work today, and it's generally available
for companies that have Okta, Microsoft Entra, and it's in beta for Google Workspace customers as well.
You know what a difference good password hygiene makes in a company.
Now imagine zooming out and applying
that to the entire organization with 1Password's award-winning recipe.

(45:02):
1Password is the way to go. Secure every app, every device, and every identity,
even those unmanaged ones. Go to 1Password.com slash unplugged.
That's all lowercase. It's the number 1Password.com slash unplugged.
Now, if we hadn't had enough of two days of interesting stuff,
that was a third day with a brand new keynote.

(45:24):
Well, here we go. It's day three. We're walking to the keynote right now.
I don't know what to expect because all the big announcements like RHEL 10 and
things like that were announced yesterday.
So I'm kind of going in blank, not sure what to expect. We'll find out together.
One thing they came back around to during the keynote on day three was the security
enhancements in Red Hat Enterprise Linux. And there is one particular area they really focused on.

(45:48):
Please welcome Red Hat Senior Vice President and Chief Product Officer, Ashesh Badani.
RHEL 10.

(46:08):
RHEL 10 is the biggest leap forward in Linux in over a decade.
And we didn't just get here accidentally. Two decades of server innovations.
Virtualization, containers, public clouds, and each and every stage,

(46:31):
RHEL has been the enterprise Linux standard.
And now, the AI era is here. And around the world, there are uncertainties.
But in a world of uncertainties, one thing is certain.
Yeah, that's good. That sells it. Now, there was, of course, the...

(46:57):
Just general positioning of RHEL, right? It's an AI-first distribution,
but also it is a post-quantum encryption distribution.
That's a mouthful. We've talked a little bit about post-quantum cryptography.
Let's go into that in some more detail.
Can you tell us about the impact of quantum computing, which I'm sure the audience
is really interested in, and why we need to prepare for a post-quantum future?

(47:22):
Sure. So in the not-so-distant future, quantum computers will be more readily available.
And they'll be leveraged by bad actors to break today's encryption technologies.
When that happens, sensitive data will no longer be considered safe.
But organizations like NIST and the IETF are already working on draft requirements

(47:43):
and standards of what will be needed in a post-quantum world.
And Red Hat is ahead of the game here.
We are leaders in post-quantum security, and we've been working on those requirements
to meet post-quantum cryptographic challenges for some time now,
because we know that we need to help our customers protect their data against

(48:03):
future attacks and fulfill future regulatory requirements.
RHEL 10 has the libraries, tools, and toolchains ready, so you can rely on us
when you're ready to transition and start into a post-quantum world.
This is obviously early days, right? You hear the wording there,
when you're ready to start transitioning.
To a post-quantum world. These standards are very early, obviously.

(48:27):
Yeah, I mean, we don't really even have the kind of quantum computers to really
sort of test these fully out.
So some very smart people have done some very clever math and devised,
so far, our best takes on how we might defend against this.
Yeah.
And Red Hat's there if you want to, you know, try to get ahead of the game.
There's two things here. So number one is they're kind of pegging to the standard.
So as the standard evolves, they will likely evolve their support for it,

(48:51):
right? So that's kind of what the real... That's a beachhead here.
The second thing is you have to realize... I mean, I know you guys do,
but you've got to just think...
It takes 10 years sometimes for these enterprise distributions to really
work their way out into the ecosystem.
And so 10, 20 years from now, this could be a problem. And so if you start in RHEL 10,

(49:14):
well, by the time people are running RHEL 13, hopefully it's baked in
and it's working.
The other thing that occurred to me yesterday is you have to think about the
information that you're storing today, that might get cracked, let's say, in
the future by quantum stuff, just because it's sitting on disk. Yeah. So getting
in early, I guess, is the name of the game in this case.

(49:34):
You know, and, uh, I'm not trying to trivialize it, but there is also, I think, real
value sometimes in just having one more checkbox that may get added to security
questionnaires that become standard in the coming years. That is true out of
the box; you're good to go.
You can say Linux covers this. This is not just something Microsoft is doing or whatever,
or Oracle or whatever it might be.
Yeah, there's a supported Linux platform that you can do that will be first

(49:55):
class in that ecosystem.
Now, day three, we wanted to just knock a couple of things off because we're at Red Hat Summit.
And so we had access to folks that you just normally wouldn't have access to in person.
And we wanted to chat with the outgoing Fedora Project Leader and the incoming
Fedora Project Leader, because both Matt, or Matthew as he likes to be
called, and Jeff were at Summit.

(50:16):
And so we went to the community section, found the Fedora booth,
and got these guys to sit down.
Well, I have two quite important folks here. Gentlemen, can you introduce yourselves?
I'm Matthew Miller. I am the current Fedora Project leader for about two more weeks.
Two weeks. And you? I'm Jeff Spaletta. I will be the Fedora Project leader in about two weeks.

(50:38):
So I see you guys are hanging around together. Is there like a transitional
period that you're spending together for this transition?
Yeah, basically. Jeff started at Red Hat two weeks ago, and now we're trying
to not scare him away, but maybe not doing it. I don't know. How's that going?
Yeah, I basically am looking at this as I am Matthew's shadow man,

(51:01):
as it were, as a callback to some previous branding.
But yeah, I'm here for the last couple of weeks.
There's a fire hose of just Red Hat onboarding. And this week,
I'm trying to meet as many stakeholders as I can who would like to leverage Fedora to
get some innovation done.
And instead of opining myself, I'm really in a mode where I'm taking in information from as many people as possible.

(51:27):
And part of that is getting as much headspace mapping from Matthew as I can.
Yeah, like literally just taking his brain and trying to shove it into mine
before Flock, when the actual handover happens. And is being here at the summit
the first time you spent time together in person? Well, for many years.
I've hung out with Jeff before. Jeff was active in the Fedora project.

(51:51):
At the beginning of time, as I was, and then he went off to do real jobs and stuff.
Jeff, I was gonna say, why Fedora, but it sounds like you've been involved for a long time.
Yeah, I was, you know, there the first, I mean,
eight years of the project. I mean, I was there before it was Fedora Linux,
when it was fedora.us, as a contributor,

(52:12):
so I was an external contributor through the first critical period when the
project was being spun up, and then I took one of those paths less traveled in life.
I went to Alaska to study the Aurora and then eventually got to the point where
I was off the grid for several weeks at a time doing research and I just couldn't

(52:34):
contribute anymore, and so I had to step away from the project. Which is actually
pretty interesting, because I have the deep project knowledge, the foundations,
I understand what the project's supposed to be, but I've also stepped away. And
after being an academic,
I've done three different startups, three different sizes.
Like I did a small startup with a telemetry project, actually a wearable project,

(52:56):
for a couple of years. I then worked for a company as a DevRel doing monitoring, Sensu.
They no longer exist. They were acquired. And then I worked for iSurveillance, and they got acquired.
And so it's really interesting. I was getting ready to move back east from Alaska
to follow my wife, who's got a job in Virginia. And it just so happened it lined

(53:17):
up when Matthew announced that he was stepping down.
So it was like the stars aligned, right?
So I come back east, basically pick up my life that I left when I went to Alaska,
and it's like I'm right back where I started, like back into Fedora,
and now this time as the project lead. It seems almost meant to be.
Did you get nominated by this gentleman, or how did that process work?

(53:40):
We had a lot of really good candidates, and it was a super, super hard decision,
And I think in the end, we agreed the stars aligned here for this to be the best.
Very nice. Matt, why the decision to change things?
Well, I've been, so it will have been 11 years as Fedora Project Leader when we do the handover.
It will be the beginning of June there. So that's a long time.

(54:03):
And I honestly, I love it and I really could keep doing it, but I think
it's good for the project to have someone else kind of looking over things.
And it's good for me to find something else to do. Although I'm not going to go very far.
I'm actually going to be still in the same group at Red Hat that does Fedora,
Linux, community things.

(54:24):
Does this just mean you get to play on things that are maybe less planned?
Or you get to just kind of spend your time somewhere that you would like to?
Well, I think planned is pretty ambitious for anything I've spent time on.
But the first thing I'm going to do is sleep for a week.

(54:45):
And then so I'm actually going to be a manager in there because I actually don't
have any experience as being a full-time people manager.
And I thought I'd see how that goes and see how that broadens,
you know, my view into working in open source world. And we'll see where we go from there.
And then gentlemen, how does the like, is there a mentorship process that's going on here?

(55:07):
I know you said you're spending two weeks together, but is there anything more formal or less formal?
Yeah. So that's also, I think a lot of times, I mean, it's been 10 years,
so we don't really have a process for FPL transition that's there,
but a lot of times it's been kind of thrown into the deep end.
Robyn Bergeron, my predecessor, helped me a lot, but was also very ready to

(55:30):
be done with the job at the time.
So I did a lot of like making things up as I was going along.
And I think Jeff will get to do a lot of that as well, but I want to make sure
I'm going to be there so I can share anything, my thoughts on things without trying to, you know,
I don't want to be one of those, I'm pulling the puppet strings behind the scenes kind of thing.

(55:51):
I'd be very respectful of the new role, but I also want to make sure that I'm accessible.
Because I do have a lot of knowledge about things that Jeff keeps telling me,
did you make slides for this? Did you write this down?
No, I have not. But I can tell you all about it.
So we'll try and get that transferred in a formal way rather than just,

(56:15):
oh, yeah, I should tell you this.
Nice. And Jeff, what are you looking forward to when you get your feet dirty here?
Well, I guess, like I said, I don't want to opine too much just yet.
But initially, what I'm really looking forward to is getting a sense of the health of the project.
Because I think Fedora is now at that time where it's now a generational project.

(56:37):
And as I tell people who meet me, if you remember my name and you're still involved
in the project, you may be a risk.
Like, you may be an institutional bus factor, or what's the better way of saying that?
Champagne factor? Or Desert Island factor. We talk about llama farming.

(57:00):
So I am concerned that people who are doing it for the full length of the project,
they probably have institutional knowledge that we don't have a process to change over.
And we may be relying on them too much to do what I consider hero work. And I want to find that.
I want to get a sense of where that is so we can have an appropriate process

(57:22):
to get mentor new contributors in.
So that's my first thing, not technology, just get a sense of the health of the project.
Because even though it is very stable in terms of output now,
which was not what it was when I was working on it.
And everyone says, yes, it's a rock-solid deliverable, I want to get a sense

(57:45):
of where the contributors are at and where the creaky bits are, right?
So we're not burning out some people to make sure that that deliverable is happening.
I mean, as I tell people this week, like my mental model for this job is I'm
the president of a weird university.
Right. Like this job to me is I'm not doing the work like the people in the

(58:08):
community or the faculty and the students doing the work in the university.
But Red Hat is sort of like the equivalent of the state legislature.
Like they are investing in funding. And so I have to bridge that.
And it's so it's important for me to get
face time with as many red hat stakeholders as
I can so that I can build bridges and

(58:28):
make sure that the community ethos and the process by
which technology works its way through from Fedora up is something that they're
getting the best value out of without disrupting the community right because
it's like I said like the university model in my head every time I say it I'm
like this is the right model for this job because it's like state legislature and faculty,

(58:49):
you know, are not on the same page all the time and that's where the president
of a university basically sits and that's what it feels like.
Matthew, Jeff, thank you so much for joining us, and come on Linux Unplugged anytime.
It's always nice to talk to you, and yeah, I'd be happy to talk more.
Even when I'm out of the role, I'll probably have more spare time for just sitting

(59:12):
around pontificating about things, so that'll be fun.
Sounds good. And Jeff, thanks for joining us, and we'll surely hear from you
in the future. Yeah, absolutely. Thanks for having me on.
Yeah, Matthew, that invitation to that mumble room is open all the time.
Come pontificate with us anytime.
I'm also really glad we made that connection. I think it's going to be interesting
to have Jeff on the show after he's got, you know, some time under his belt

(59:34):
at the helm of the Fedora project. I know you boys are looking forward to that too.
He just has such perspective if you think about all the time put in.
Yeah, yeah, really. I mean, it's pretty neat to have somebody who was originally connected
with the project, took some time away to really get some perspective, and came back.
And I like his model of a university. That's an interesting thought model, at least going in.
It'll be fascinating to follow up with him and find out if that played out for him.

(59:56):
I think the next few years should be fun for the Fedora folks.
So on our last day, you know, you have to knock out the fun stuff,
like seeing our buddies at the Fedora booth. And they had this machine that they were teasing.
I had to try it. It's called the AI Wish Machine.
Okay, so we have a little experience here. Wes, do you want to explain what we're about to do?
Yeah, it's the one thing I think so far at Summit that there's been a lot of

(01:00:18):
hype around. We saw it advertised at the keynote on stage.
And Chris has yet to try it. It's the spectacular AI wish machine.
Magic promises AI wish is granted.
Your wish, Chris, is its consideration.
Chris, what are your expectations here? I mean, it was featured in today's keynote.

(01:00:39):
Well, it was before the keynote started. You know, like when you go to the movie
theater and they have advertisements up on the screen?
This was up on the screen. It's something you've got to try.
So I've got a lot of questions.
You know, I've seen a lot of things here at Summit. So I assume this is going
to kind of connect a few dots for me.
And if nothing else, give me some advice on how perhaps OpenShift could help
revolutionize the JB infrastructure and really drive innovation and lower total

(01:01:01):
cost of ownership. So that's what I expect is going to tell me.
You know, the other thing, we've been to summits before. In particular,
last year, there was some pretty cool AI-powered stuff, you know,
like walls and visualizations and changing your photo kind of thing.
Could be something like that, maybe. So should we go over?
So I attempted, of course, but everybody wanted their token because after you

(01:01:23):
complete the vending machine experience, the AI Wish machine dispenses a token,
and everybody loves their little swag.
Okay, Chris, you've stepped up to the machine here, the AI Wish machine.
What's your first impression?
It's popular. Two different people cut in front of us to use this thing.
People apparently have questions.
So the first thing I've got to do is I've got to scan my badge to make an AI

(01:01:44):
Wish. I'm going to go ahead and do that.
Is it scanning? I don't think it's scanning. Try scanning harder.
I didn't see other people struggle with this. Why is it not working?
I got my badge in the hole. What is it? There we go. Right? Is it doing it now?
Yes. Okay. Hello, human. Hello, human.
She's rolling something. Scan your badge. Nope, it didn't get it. Oh, gosh.

(01:02:07):
Now we got it. Okay, you may now make your AI wish. Okay, I wish to be rich.
Oh, no, you have to actually choose from these options.
I wish to train models without compromising my private data.
I wish to build and deploy my AI wherever I need it.
I wish to easily scale my AI across my company. I wish to use my preferred AI models and hardware.
Well, clearly I wish to... None of these.

(01:02:29):
I'm going to scale it across my company because it's the last of the thing I
want to do, so I'm just going to pick that one.
Easily scale your AI across your company. Okay, that's what I wished.
And AI says, with some slow frames, I tried, but you'll need to insert a gazillion dollars.
What? Why is AI hustling me for money?

(01:02:51):
Processing your wish. Why is the frame rate like 15 frames per second?
If your AI solution won't work with you, it won't work for you.
When you need your AI to scale on your terms, yeah, you need Red Hat.
Thanks for playing.
That's it? Grab your pin and then visit the booth to talk to a Red Hatter. Well, where's my pin?

(01:03:16):
Oh. Oh, okay, let's get this.
Oh, it's a red hat with AI sparkles. Okay.
Well, Chris, come over here. I'm so excited to learn. How was your experience?
I'm not sure what was answered.
I think that just told me to go to a booth and I got a pin.

(01:03:36):
I like pins, I guess. But how was your AI experience? Bad, man.
That wasn't really the best experience. But one thing that was kind of low-key
talked about at the keynote that I think you picked up on, Wes,
as maybe going to have larger implications down the road is Red Hat seems to
be embracing MCP at all the different levels.

(01:03:58):
Yeah, definitely. This is something we had on our little buzzword bingo chart going into this.
I'm not sure if we'd see it or not because it's kind of relatively new even
in just the broader AI universe.
It's the model context protocol and it's a standard that came out of Anthropic
for sort of letting the AI systems interface with the rest of the world.

(01:04:18):
As you've heard, we believe that
openness leads to flexibility and flexibility leads to choice with AI.
And to ensure that, it's critical that we have industry-wide standards that
all companies can build around.
Now, as we discussed yesterday, MCP or Model Context Protocol is one of those

(01:04:42):
core standards that's just poised to take off.
Now, the letter P, protocol, is really important in this case.
Vint Cerf, the godfather of the internet,
describes protocol as a clearly delineated line that allows for independent

(01:05:02):
innovation on either side of that line, what he calls permissionless innovation.
Allowing anyone to experiment and innovate, no approvals required.
This is what we're striving for at Red Hat.
I like that messaging. I'm going to be curious to see what their actual rollout is.

(01:05:25):
It does sound like they're working on the back end to sort of have MCP implementations
for a lot of Red Hat products and services, right?
So if you want to be able to interface these things from a chatbot or hook it
into other agentic AI systems, Red Hat will be ready.
You could see maybe a practical use case of this is somewhere where you could
review your system resources utilization, disk usage, things like that from

(01:05:49):
a single interface. So you log into a dashboard, hey, what is the status of the web servers?
And the system just comes back with a whole sheet of information.
And even maybe down to like, you know, applications that are installed and their
usage and things like that.
And you could also then, they talked about hooking it up into the event-driven
side of the Ansible automation platform, right?
So from your AI-driven interface, whatever that may be, you can go trigger an

(01:06:10):
event that's going to go restart that server that the AI showed you was malfunctioning.
And this is, you know, the question I have: is this something that is
of interest to the RHEL base?
I mean, I'm not trying to typecast, but it seems like they're traditionally
a pretty conservative user base.
Is this something people are pushing for? And I was trying to get a sense of
that at the keynote, or after the keynote, as people were leaving.
And I also would like to get a sense of that from the audience because this

(01:06:31):
is an area they're clearly pushing on for two years straight.
And I think everyone maybe at this point has seen, you know,
AI shoved into interfaces in a poor way and also in an actually helpful way.
And so there's, you know, there's always the question of like,
does this actually make you more efficient in your tasks or is it a new way to do the same thing?
I think regardless of how you break it down though, it's nice to see a large, well-positioned,

(01:06:56):
well-known brand in this space really working hard to bring something that is
not vendor-locked. You know, I like a lot of the different solutions that
are out there, but it's like you're all in on the OpenAI ecosystem, or you go all in on Anthropic.
I was also impressed. I don't know how you guys
feel about this, but just, you know, every company is

(01:07:16):
talking about AI, it feels like, at least if you're even vaguely associated with
tech these days. But talking with some of the folks in a few different places
around the summit, it seems like Red Hat is very credible on AI. I mean, they have
a lot of people who are legitimate actors in various open source AI communities
working there, working with them. Like, they know what they're doing.
They also, to me, felt very well informed and very well connected with other businesses who are leading the way.

(01:07:41):
Yeah, I mean, we saw NVIDIA up there, AMD, Intel, you know, generally people that
are competing, all collaborating together on this stuff. And of course, it's always
fun for us to run into old friends of the show, and Carl was there at the community booth.
All right, Carl, what do you got for me right here? I got a little pocket meat,
a little bit of beef jerky and some beef and pork dried sausage.

(01:08:02):
Get a little pocket meat on the expo floor. Thanks, Carl.
I hit that pocket meat twice. I got to go to that pocket meat source twice while we were there.
This is now like conference tradition for us. If we go to a conference and don't
find Carl's special meat, then I think we're just going to feel like we left out.
We do have to be careful, though, because at some point, the event organizers
might get keyed off that Carl is competing with the catering.

(01:08:27):
Well, if you'd like to support the show, we sure would appreciate the support.
And you can become a member at linuxunplugged.com slash membership.
You get access to the ad-free version of the show or the bootleg,
which I'm very proud of. I think the bootleg is a whole other show in itself.
And so you get more content, stuff that didn't fit in the focus show.
And you also get to hang out with your boys as we're getting set up.

(01:08:48):
And then you get all the post-show stuff where we sort out all of the things.
But you can also support us with a boost, and that's a great way to directly
support in a particular episode or production.
Fountain.fm makes this the easiest because they've connected with ways to get
sats directly, but there's a whole self-hosted infrastructure as well.
You can get started at podcastapps.com.
I mention Fountain because it gets you in and it gets you boosting and supporting

(01:09:10):
the show that way pretty quickly.
So the two avenues, linuxunplugged.com slash membership or the boost,
or if you have a product you think you want to put in front of the world's best
and largest Linux audience.
Hit me up, chris at jupiterbroadcasting.com.
There's always a possibility that we might just be the audience you're looking
to reach. That's chris at jupiterbroadcasting.com.

(01:09:31):
Well, I felt a little bit of a reality shift going to this.
Whoa.
I did see you sweating a bit in your seat. That must explain it.
Well, we've been talking a lot about this behind the scenes.
And I have made the decision to switch my systems to Bluefin.
And the reason being is I'm going to, behind the scenes, start playing with image mode.

(01:09:54):
I'm going to start in Podman Desktop, and I'm going to start building my systems in image mode.
And then we're also going to start deploying some RHEL 10-based systems and
some open virtualization systems here just for us to learn and experience.
And I like a lot of what image mode is going to bring to RHEL and what's already
kind of there with Bluefin.
And that is immutability delivered in this image way that is accessible to all

(01:10:16):
kinds of administrators and DevOps people,
where I think Nix is extremely powerful, especially I like the building up from
the ground up approach, but we've clearly seen a lot of people bounce off of it.
So I want to try to jump into this mainstream that's going in a direction that I like anyways.
The rest of the world is kind of leaning into these immutable systems.
And I think there's a lot of value in learning a cloud native workflow outside of Nix OS.

(01:10:38):
Chris, this feels like such a massive shift for you. why now?
Because it's like getting in on the ground at the image-based workflow at this scale.
Will you stop if I just promise never to alias nano to Vim again?
I mean, I might bounce off it, but I really want to give it a go.
I've already got Bluefin downloaded and installed on one of my systems.

(01:10:59):
This is because you never figured out how to write a Nix function, isn't it?
Right, right. No, it's just the flakes, man. The flakes drove me away.
No, it's the idea of getting a lot of what I get with Nix OS,
but with, and you're going to hate it when I say this, but a standard file system
layout. I know, I'm sorry, I'm sorry.
This is why you wouldn't use Logseq.
But we have heard a lot of audience members say they really like these, uh, I

(01:11:23):
don't know, quote-unquote modern ways of deploying Linux, and Bluefin has been
the choice that I've seen float to the surface.
Yeah, and I think it's my starting point, you know. I'm going to give this
a go, I'm going to test-drive it, I'm going to rent this out before I make the switch.
And then at the same time, I'm going to be playing around with Podman Desktop,
seeing if I can build systems and what it's like to do that,

(01:11:43):
And then compare and contrast and move over. And at the same time,
also experiment with some of the OpenShift virtualization stuff.
Because I think that's really big.
That standalone OpenShift virtualization platform is going to be a contender. Or it is a contender.
I have a question about how long you're going to commit to this path.
Well, unless I, you know, drop off a cliff, I guess indefinitely.

(01:12:05):
I don't know. I don't really have any timeline on it.
Because I think it really depends on how the whole experiment goes. I've already started.
We've, you know, when we tried Bluefin last time, and I've played around
with Bazzite on my own, I've always really liked their general initial approach.
But I always thought it would be a little bit better if I could just take
and shift it a little bit and, you know, make it more specific to a podcasting

(01:12:25):
workflow. Because I'm not a developer, I'm a podcaster.
It makes me wonder about, like, some sort of challenge. Maybe not official, but, like,
you know, what are some things that you are used to doing, or like doing,
on your current Nix-based systems? And how can we see what it's like for you to try to port some of those?
Well, I thought I'd start with the TUI challenge. Like, I was going to try
to have it all, my main workstation, everything ready to go for the TUI challenge,
and then, you know, because I've got a bunch of TUI apps installed.

(01:12:48):
You know, I do like this, because then if you publish, maybe, you know, the container
files you're using, then I can bootstrap it and see how it is.
Chris, are you looking for advice from the audience?
People who've maybe gone down this path?
I guess so. I am curious, people that are running this as their daily driver, these
image-based immutables, your
Silverblues and your Bluefins and your uBlue universe.

(01:13:10):
We need your atomic habits.
Or people that bounced off of Nix and why, or people that tried and couldn't.
I mean, I'm curious too, the people that tried to switch away from Nix and it failed.
Because it seems like that could end up being me if I don't know what I'm doing.
So I'm a little nervous about that, especially because we're traveling and all of that.
But I'm willing to give it a go. I'm feeling adventurous.
Okay, so like after the show we pour one out and then we rm -rf?

(01:13:34):
I think that's it.
Well, we did get some boosts. It was a slightly shorter week because we recorded
early, but that doesn't mean people didn't support us.
And Nostaromo came in with our baller boost, which is a lovely 50,000 sats.

(01:13:55):
And he says, here is to some better SAT stats.
Thank you, Noster. You are helping bring that stat up all on your own right
there. Appreciate that.
My favorite type of self-fulfilling prophecy.
That's right. That's right. Appreciate the boost.
Kongaroo Paradox comes in with 34,567 SATs.
Not bad.
Just upgraded my Nix machines to 25.05. Yeah, we should mention 25.05 is out.

(01:14:22):
Congrats to the folks involved.
Officially out.
I run unstable on my main laptop, which is an M2 Air running NixOS Apple Silicon,
the stable release on most of my home lab, and maintain the options for the two inputs in my flake.
This was my second NixOS release since getting into Nix last year,
and this strategy made it really painless.
No surprises about deprecated options since I saw these cases slowly when these changes hit unstable.

(01:14:47):
What is your approach to NixOS releases?
Good question. Thank you, Kongaroo. What do you do, Wes?
I mean, you're kind of a flake-based system, so you're probably not really paying
too much attention to channel changes and updates.
I do think this can be a nice way to do it. You can do sort of test upgrades,
either on other systems where you do want to be in Unstable and see sort of

(01:15:07):
the overlap between your two configurations, or just do test builds on Unstable
with whatever existing configuration you have.
And yeah, if you think there might be cases where you do need specific versions,
you're more sensitive to version changes, then pre-plumb your flake with nixpkgs
versions ready to go. With those in place and the boilerplate

(01:15:27):
done, you can then more freely mix and match.
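A quick sketch of what that test-build workflow can look like in practice (the flake hostname here is just an example, not from the episode):

```shell
# Build the new generation from your flake without switching to it
# (".#laptop" is a hypothetical host name)
nixos-rebuild build --flake .#laptop

# Compare the freshly built closure against the running system
# to spot bumped versions and other surprises before committing
nix store diff-closures /run/current-system ./result
```

If the diff looks sane, a `nixos-rebuild switch` against the same flake applies it for real.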
Really, are you more or less likely to upgrade to the next release once the previous
release is no longer supported?
In other words, are you going to wait?
I usually wait, like, about a month, I would say, but then I'm all in.
So I like to give it a little bit of a transition period and then just dive right in. All right.

(01:15:51):
We will add a link here to Kongaroo's Nix files, too, for those who are curious
or maybe want to emulate the approach.
Oh, thank you for sharing that. I like that. Thank you for the boost, too.
Well, we've got a boost here. 23,000 sats from Grunerli.
Just in case nobody has already told you, it's called Da-wa-ish.

(01:16:16):
Da-wa-ish.
Which is German for I was there.
Da-wa-ish. So not Big D Witch? Oh.
Oh. We did redeploy a final iteration for ourselves, and it's been pretty fun
tracking everywhere we've gone, all around Boston and whatnot.
Been doing some tourism.
Yeah, it's actually to the point where Chris is kind of trying to choose his
itinerary based on, you know, getting fun new routes in the Dawarich.

(01:16:38):
He's been doing, like, route art. That's true.
That's really impressive. Yeah, I like to draw on the map. Thank you for the
boost. Appreciate it. Todd from Northern Virginia is here with 5,000 Sats.
Todd's just supporting the show. No message, but we appreciate the value.
Thank you very much, Todd.
Bravo boosts in with 5,555 sats.

(01:17:02):
Jordan Bravo here. I recommend the TUI file manager, Yazi. That's Y-A-Z-I.
Also, for folks who need to use Jira without the browser, check out Jira CLI.
Yeah, something tells me that's going to be way faster, too.
We got a boost here from TeviDog, 18,551 sats.

(01:17:24):
All the Nix service talk has me thinking of a new tool I recently found called
Browser Use. It's a tool that uses LLMs to control a web browser. Really interesting
to watch at work, and it integrates with all the common LLM APIs.
Oh, well, thank you, that's good to know. Also, a Postleitzahl boost. Well.

(01:17:46):
What? Post-leetzal? I'm sorry, that's my best. I'm sorry, leetzal. Do
we know what that means, Wes?
No, but I'm curious.
It has something to do with math.
Oh, that's why I thought maybe Wes would be calculating over there.
He missed this one.
It's surprising. And we all know that...
Oh.

(01:18:07):
We do know that.
Did you bring it?
You want to know if I packed the five-pound map in my carry-on?
Yeah. I brought the mixer and the microphones. Yes, I did. Oh, there it is.
Okay. Yeah, we can put it on the table here. Just move your laptop,

(01:18:29):
Brent. Don't spill the booch.
I'm already on the second table. Why do I get pushed off?
Okay. Alright, Teddy Dog. Tebby Dog, not Teddy. Tebby Dog.
He says it is, it is 18,551 sats, Wes.
Yeah, there we go. Thank you.
Can you get that dialed in?
I got a small paper cut, so I'm tending to that.
Yeah, there we go. Yeah, just grab one of Brent's Band-Aids.

(01:18:51):
He brought a whole bunch.
I also have a clothesline if you need it.
That actually would be helpful, because then we could string up the map, and I
could lay down, and then I could sort of read it that way. Okay. And take a little nap.
Do you need a headlamp?
Yeah, actually. Yeah, and some epoxy would be useful too.
Oh, I didn't bring it, darn. Oh my gosh, I did find some travel epoxy on our trip,
though. There's a cute little bottle of it you could just keep in your pocket. We

(01:19:14):
should definitely bring that then. Okay.
Well just put a little dab on the map for me, would you?
Okay. Right here?
Yeah, and a little to the left. Oh. Yeah, so where you just spilled the epoxy,
that is the German state of Mecklenburg-Vorpommern.
All right! Nailed it. That sounds like that's the name of it for sure.

(01:19:35):
On the island of Rugen.
Oh.
Whoa. That's pretty neat. That is pretty neat.
Thank you for the boost, and thank you for the fun zip code math.
Now I'm glad we actually packed that map. That was actually worth it.
Adversary17 is here with 18,441 sats. He says: I'm a bit behind, but the headsets

(01:19:59):
are sounding great. Regarding the Bang Bus adventures and getting pulled over,
if someone had offered their truck and trailer services, would you have taken it?
From what I know about you guys, I feel like you would have been more interested
in the sketchy route regardless. Well, you gotta test the van. It was as much of
a van test as anything; we needed to know if it didn't work, and the best way to find out was to drive it.
As an uninvolved third party I'm just going to say confirmed.

(01:20:21):
Yeah.
You know what I realize? Our audience knows us so well.
Yeah. Yeah. You got us, adversaries.
Thank you. Tomato boosts in with 12,345 sats.
I think that might be a Spaceballs boost.
It's been a minute. Thank you for the Spaceballs boost.

(01:20:43):
I'm looking forward to hearing your reports from Red Hat Summit.
I've started the two-week challenge early because I'll be on holiday most of next week.
I'm already having a blast, and
it reminds me of how much I enjoyed using Linux and BSD back in the day.
Right on.
Oh, and Mr. Mato also links us up here because my write-up, which I'm updating
as I go along, is at a link we'll have in the show notes.

(01:21:06):
That is great. I love that he's getting a head start. That's really nice.
In fact, if anybody else has any great TUI tools, now is your chance to send
them in either boost or email because we need to round them up.
We'll be doing that in the next episode before we launch the actual TUI challenge.
That's fantastic. Thank you for the boost.
Megastrikes here with 4,444 sats. That's a big old duck. He says,

(01:21:27):
hello. It's funny you bring up the back catalog listeners.
I just finished listening to every Jupiter broadcasting episode minus the launch
released since the beginning of the year in the last week and a half at 1x speed.
I feel like MegaStrike, you should give us some insights.
That's so crazy.
What have you learned in this journey?

(01:21:47):
MegaStrike is a mega listener, I'll tell you what.
Does this include this week in Bitcoin? Are you going to go back and catch the
launch at least since episode 10 because it's pretty good?
I wonder. I have so many questions.
What's the schedule like? What activities do you listen for?
Were you road tripping? How did you get that much time in? That's awesome to
hear and I have so many questions. Thank you for the boost.

(01:22:08):
Well, Turd Ferguson is here with 18,322 sats.
First of all, go podcasting. And second of all, did you boys soak up any culture
in Boston or was it all Ansible and OpenShift?
It was a lot of Ansible OpenShift. That is true.
I mean, Chris got in a fight at the package store.
There was that. There was that. We got to go to a ball game. We did that.

(01:22:31):
We saw Salem. We went to Salem, and we saw a very old grave site,
which was pretty cool, actually. Sounds a little weird, but it was actually pretty fun to do.
Some beautiful graveyards out here.
Famous witches, too.
Yeah. What else did we do? What else have we done that wasn't Summit-related?
We've done a few things. We're in our Airbnb now.
Well, we popped in to pay our appropriate respects at Cheers.
Oh, that's right. We went to Cheers. That was kind of all right.

(01:22:55):
It was all right. Norm had just passed, so it was kind of nice to be there right
after he passed. People were there paying their respects, and they had
pictures up and flowers and all of that.
They were very gluten friendly at Cheers, I gotta say.
Yeah, pretty good service. You know, it's not just a tourist hotspot, but the food is fine.
And of course, we did mention we got to go to a baseball game.
So that was pretty classic.
That was really nice. Yeah.

(01:23:16):
I thought we got pretty lucky here. Red Sox and the Mets is pretty like classic ball game.
Yeah.
And also Fenway Park. I'd always heard of it and how unique it was, but to see it in person.
Yeah, I'm not a sports ball guy, but that's just such a great opportunity. And it was a blast.
Well, as Wes knows, baseball has very strange rules around park shapes and sizes. Basically none.

(01:23:37):
So each one is a unique experience. But, you know, after that,
we kind of got our fill of the city and made our escape, which,
of course, meant encountering the native drivers.
That's true. I really... thank you both for letting me drive.
I really enjoyed it. At first, I was a little like,
wow, the lanes have no meaning here.
I mean, quite literally, lanes have no meaning here, but it's because the roads are old and narrow.

(01:24:00):
And so you just kind of weave, you do a weave, and you just trust that the other
driver is going to weave to your zig or whatever.
And so you zig and zag around everybody. And I really enjoyed it.
It's, it actually is a lot like driving the RV where it's down to last second
dodging another thing that's just barely sticking into your lane or you don't
have a complete lane and you have a very wide vehicle.

(01:24:20):
And so it was essentially taking
all my RV driving experience and applying it to a passenger vehicle.
But it worked great, and I enjoyed the heck out of that. So that was a treat
for me, because usually when we travel, I don't get to drive at all.
We also then got to see lighthouses and go to the ocean and get fresh seafood
out of the dirty Atlantic Ocean.
It's not as good as the Pacific, but you know, what do I know?

(01:24:40):
I feel like you're biased.
And we crossed off some new states, right? New Hampshire and Maine,
our cousin from another coast.
That's right. That's right. So thank you. Thank you, Turd, for that.
It's nice to reminisce about it. In fact, thank you, everybody,
who boosted into the show.
Even though it wasn't a full week, we had a decent showing, and we really appreciate it.
We had 30 of you just stream those sats as you enjoy the show,

(01:25:01):
and you stacked collectively 46,223 sats.
So when you bring that together with all of our boosts, everything that we read
above the 2,000-sat cutoff and below,
We stacked a grand total of 215,748 sats for this very humble but yet very appreciative
episode of the Linux Unplugged program.
Thank you everybody who supports us with a boost or the membership.

(01:25:24):
You are literally keeping us on the air and the best.
If you'd like to get in on the boosting fun, you can use Fountain.fm. It makes it really easy.
Or just a podcast app listed at podcastapps.com. Before we get out of here,

(01:25:45):
you know what we got? A pick.
This is one that we were tipped off to at the summit, and it's pretty neat.
It's MIT licensed, and Wes has it running on his laptop right now. What is it, Wes Payne?
It's Ramalama.
I love that name.
Say that again?
Ramalama. Once more: Ramalama.

(01:26:06):
Ramalama. Uh, yeah, okay. So
we've talked a bunch about Ollama on the show, but it
turns out, um, it's not really fully open
source, and so some folks are a little put off by this. And there's some feelings
like it's got some VC money. There's some, like, okay, right now they're totally
fine, but what might happen? And I guess the core part of it, and, like, some
of the model serving stuff, is not open source. And I think there's

(01:26:28):
some feelings like they're trying to be a bit like Docker in the early days,
where they want to be the standard, right?
They've got their own model catalog and protocol for fetching the models from them.
When there's also places like Hugging Face and, you know,
lots of other ways to get these models.
Absolutely, yeah.
Ramalama was created sort of as a more fully open alternative to Ollama.
It's also more powered by containerization. So,

(01:26:51):
whereas Ollama has, like, its own kind of stuff that it does to do acceleration
in its core, and it handles the model running, Ramalama starts with
a first initial step, which is a scripting layer that assesses your host
system for whatever capabilities might be available for running models efficiently.
And then the rest of it is all done with containers. So it'll spin up a Podman
container. It can use Docker too.

(01:27:11):
And that gets a standardized environment, which then gets piped in whenever
host-specific stuff is needed.
And then in there, you go download the model from Ollama or Hugging Face or
wherever else is supported.
Wherever you want.
And then using either llama.cpp or vLLM, you can then directly run as a chatbot
or serve that model via an OpenAI-compatible API.

(01:27:33):
So in other words, you've got a script, and even if you've just got a weak
CPU-based system, this thing will set up, identify you've got a CPU system,
launch the Podman containers, and inevitably give you an interface that looks
a lot like ChatGPT running on your local box.
But if you want to next-level that sucker, you can use vLLM to,
like, pipe the back end to, like, some serious GPU action or,

(01:27:55):
like, a cloud provider, whatever you might want.
Yeah, exactly. So you can kind of go from zero all the way to AI hero.
But, no, you can't actually. Like, I was just playing with it,
right? So it's OpenA compatible.
So, you know, you got open web UI or not so open web UI running locally.
You can, you can hit that right up just like you would for Ollama.
You can talk to Ramalama.
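As a rough sketch of that zero-to-serving flow from the terminal (the model name, port, and exact flags here are illustrative and may vary by Ramalama version):

```shell
# Pull and chat with a model; Ramalama probes the host (CPU/GPU),
# picks a matching container image, and runs it under Podman or Docker
ramalama run tinyllama

# Or serve it behind an OpenAI-compatible API instead
ramalama serve --port 8080 tinyllama

# Any OpenAI-style client can then talk to it, for example:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "tinyllama", "messages": [{"role": "user", "content": "Hello"}]}'
```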
That's right. Okay. So we'll have links and more information in the show notes for that.

(01:28:17):
I see here at the bottom of the Ramalama.ai, it says supported by Red Hat.
So I take it. Red Hat's all in.
Yeah. I think it's actually maybe even under the containers repo there.
So it's kind of a, you know, first-party citizen in Red Hat and in the wider Podman ecosystem too.
Boom. Power tip right there from Wes Payne and Mr. Brantley.
So we're getting back to our regular live schedule. We always try to keep the

(01:28:40):
calendar as up to date as we can at jupiterbroadcasting.com slash calendar.
And of course, if you got a podcasting 2.0 application, then we mark a live
stream pending, usually about 24 hours ahead of time in your app.
And then when we go live, you just tap it right there in your podcast app and you can tune in.
Also, just a friendly reminder in case you don't know, we have more metadata
than that too, because we also got chapters, right?
Stuff you really want to hear about and jump right to, chapter,

(01:29:02):
stuff maybe you don't want to hear about and you wouldn't rather skip, go to the next chapter.
And we also have transcripts on the show. So if you want even more details on
that or you just want to follow along, those are available in the feed.
Show notes are at linuxunplugged.com slash 616.
Big shout out to Editor Drew this week, who always makes our on-location audio

(01:29:25):
sound great. We really appreciate him.
And of course, a big shout out to our members and our boosters who help make
episodes like this possible so we can do on-the-ground reporting to try to extract
that signal from the noise.
Thank you so much for tuning this week's episode of your Linux Unplugged program.
We will, in fact, be right back here next week, and you can find the RSS feed
at linuxunplugged.com slash RSS.