Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
So imagine waking up tomorrow, you grab your phone, and
you notice your operating system has just inexplicably jumped from
version one all the way to version seventeen overnight.
Speaker 2 (00:10):
I mean, you'd probably think the phone was completely bricked, right.
Speaker 1 (00:13):
You'd think it's glitching out or I don't know, maybe
you just slept through an entire decade.
Speaker 2 (00:17):
Yeah, it would be terrifying.
Speaker 1 (00:19):
But back in early twenty seventeen, the developers behind Docker
actually did exactly that. They skipped sixteen major versions in
the blink of an eye.
Speaker 2 (00:29):
Which was a massive shock to the system for basically
anyone working in tech at the time. Oh I bet,
but you know it was also this stroke of absolute
genius that ended up solving a massive operational headache.
Speaker 1 (00:41):
Yeah, and welcome to the deep dive, by the way,
because we are going to get into the mechanics of
that crazy jump today. Getting Docker set up, it isn't
like installing a regular piece.
Speaker 2 (00:51):
Of software, No, definitely not.
Speaker 1 (00:53):
You don't just double click an icon, hit next three
times and suddenly you're working. Docker is this entire ecosystem, exactly.
Speaker 2 (01:00):
It's a whole environment, right.
Speaker 1 (01:02):
And if you're trying to get it running on your
local machine, whether you're using a Mac or a Windows PC,
you are stepping into this landscape full of specific additions
and unique release channels and some really intensive virtualization architecture.
Speaker 2 (01:18):
And the goal here is to make sure you have
the exact right setup for your specific hardware and your daily.
Speaker 1 (01:23):
Workflow, because if you mess it up early on.
Speaker 2 (01:25):
Oh yeah, making the wrong choice at the installation phase
can just cause massive friction down the road.
Speaker 1 (01:30):
So we are cutting right through the jargon today. By
the time we're done, you'll know exactly which version to download,
how to read Docker's cryptic version numbers, and how to
get a Linux native platform running perfectly on your Apple
or Microsoft hardware.
Speaker 2 (01:45):
It's going to be a fun one.
Speaker 1 (01:46):
Let's start by looking at the menu, because before you
even hit download, you have to choose your flavor. There
are two main paths here, right, Docker Community Edition which
we'll call CE, and Docker Enterprise Edition, or EE.
Speaker 2 (01:59):
Yeah. This choice really dictates the level of support and
the security you're bringing into your specific environment.
Speaker 1 (02:05):
Okay, so break down the Community Edition first.
Speaker 2 (02:08):
So Community Edition is the free, open source path. It's
heavily favored by individual developers. Startup companies, DIY operations teams.
Speaker 1 (02:16):
And I feel like there's this persistent myth in the
tech world that if something is free, it automatically means
it's like watered down or unstable somehow. Yeah, but if
I understand the underlying architecture here correctly, the core engine,
like the actual brain running the containers, that's identical in
both CE and
Speaker 2 (02:36):
EE. Right, that is a completely crucial point. There is
zero difference in the fundamental computing capability. Wow, okay, yeah,
I mean you can absolutely run serious, massive scale, production
grade systems entirely on the community edition the engine itself.
It doesn't arbitrarily throttle your performance just because you aren't
paying them a licensing fee.
Speaker 1 (02:56):
Okay, So the pivot to enterprise edition, then it isn't
really about raw power.
Speaker 2 (03:00):
Not at all. It's really about risk mitigation. When a
big organization decides they need mission critical guarantees, that's when
they shell out for EE.
Speaker 1 (03:08):
Right.
Speaker 2 (03:08):
And we're talking about a price tag somewhere between seven
hundred and fifty to two thousand dollars a year.
Speaker 1 (03:14):
Okay, let's unpack this. What physically changes when you pay
that premium, Like, what are you actually buying?
Speaker 2 (03:21):
You are basically buying peace of mind, and you get
some very specialized tooling like what so for instance, Enterprise
Edition provides access to certified Docker images and plugins.
Speaker 1 (03:31):
Okay, so they're vetted exactly.
Speaker 2 (03:33):
When you pull a certified image, you aren't just downloading
code from some random developer on the internet. You're getting
a package that has been rigorously tested and verified by
Docker itself.
Speaker 1 (03:45):
That's huge for security.
Speaker 2 (03:47):
It is, And you also get this deep vulnerability scanning.
The system actively dissects your containers to tell you, hey,
you're inadvertently shipping a known security flaw into production here.
Speaker 1 (03:57):
Plus I imagine you get someone to actually pick up
the phone when things break.
Speaker 2 (04:01):
Yes, you get official same day support. If your whole
infrastructure goes down at two am, you have a dedicated engineer.
Speaker 1 (04:07):
To call, which, honestly, in the context of Enterprise it
budgets where an hour of downtime can cost like hundreds
of thousands of dollars, a two thousand dollars annual fee
is basically a rounding error.
Speaker 2 (04:18):
Oh completely, It's an incredibly justifiable expense for those teams.
Speaker 1 (04:22):
It's kind of like comparing cars, right? Like Docker CE
is a high performance sports car you can just drive
right off the lot for free. It's incredibly fast, handles
beautifully, gets you to the finish line.
Speaker 2 (04:32):
Yeah, I like that.
Speaker 1 (04:32):
But then Docker EE is that exact same sports car,
but it comes bundled with armored plating, a twenty-four-seven
dedicated pit crew, and an ironclad warranty.
Speaker 2 (04:42):
That's a perfect framing and you know, knowing which car
you're driving brings us right back to that bizarre versioning
jump we mentioned at the start of the deep dive.
Speaker 1 (04:50):
Ah, the whole version one to seventeen thing.
Speaker 2 (04:52):
Right, because trusting a tool with your entire infrastructure means
knowing exactly how old that tool actually is.
Speaker 1 (05:00):
So put us in the mindset of a developer back
in early twenty seventeen. They were running Docker version one
point thirteen.
Speaker 2 (05:06):
Right, and seeing version one point thirteen tells you absolutely
nothing about the software itself other than the fact that
it came after one point twelve. Obviously, but if you
were auditing a fleet of say one hundred servers, and
you saw they were all running version one point thirteen,
you had zero immediate context.
Speaker 1 (05:24):
Unless you went and dug into the release notes on
some separate website.
Speaker 2 (05:27):
Exactly. Without doing that research, that software could be two
days old or it could be two years old.
Speaker 1 (05:33):
And if you're managing infrastructure, blind spots like that are
super dangerous. You need to know if your system is
aging out and becoming vulnerable. Right? So Docker just
abandoned the arbitrary numbering completely. They jumped straight to
version seventeen point zero three. And I'm guessing that wasn't just
a random number they pulled out of a hat.
Speaker 2 (05:50):
No, not at all. It was a complete shift to
a date based system. They adopted a YY.MM format. Oh okay,
so it's a two digit year followed by a dot
followed by a two-digit month. So seventeen point zero
three translates perfectly to the year twenty seventeen, in the
third month, March.
Speaker 1 (06:06):
Wow, that is brilliantly simple. So the release the following
month wouldn't be version eighteen, it would just be seventeen point
zero four.
Speaker 2 (06:13):
You got it. And if a critical bug is discovered
a week after seventeen point zero four drops, they don't
need a whole new major release.
Speaker 1 (06:20):
They just add a patch number.
Speaker 2 (06:22):
Yeah, they just append a third number for the patch,
making it seventeen point zero four point one.
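That decoding rule is simple enough to script. Here's a minimal shell sketch, assuming the YY.MM scheme just described; the helper name is ours for illustration, not a Docker command:

```shell
# decode_version: split a CalVer-style Docker version (YY.MM[.patch])
# into a full year and month. Illustrative helper, not part of the Docker CLI.
decode_version() {
  ver="$1"
  yy=${ver%%.*}            # "17" from "17.04.1"
  rest=${ver#*.}           # "04.1"
  mm=${rest%%.*}           # "04"
  echo "20${yy}-${mm}"
}

decode_version 17.03      # → 2017-03 (March 2017)
decode_version 17.04.1    # → 2017-04 (the .1 is just a patch)
```

The patch component is deliberately ignored: it tells you how many fixes landed, not when the release happened.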
Speaker 1 (06:27):
This gives anyone auditing a system immediate X ray vision.
Like if I log into a server in twenty nineteen
and I see version seventeen point zero three, smiling back
at me, I don't need to go look up a wiki. Nope,
I know without a shadow of a doubt that this
software is exactly two years out of date and desperately
needs an upgrade exactly.
Speaker 2 (06:47):
It makes server maintenance incredibly transparent. The predictability was just
a massive win for the industry.
Speaker 1 (06:52):
But predictability brings up this really interesting tension. If I'm
running the community edition, Doker asks me to choose between
two completely different release channels, right? Edge and Stable. Yes,
and this basically dictates how frequently my system updates.
Speaker 2 (07:07):
Yeah, And this choice defines your entire operational workflow. So
the Edge channel is designed for teams that really want
to live on the bleeding edge of development.
Speaker 1 (07:15):
Meaning they want everything immediately exactly.
Speaker 2 (07:17):
It pushes out a new release every single month. You
get the newest tweaks, the latest features, the very moment
they are viable.
Speaker 1 (07:23):
But the catch there is the support window right.
Speaker 2 (07:26):
Right Edge only receives security patches and bug fixes for
that single current month. Once the next month rolls around,
you are effectively on your own unless you upgrade.
Speaker 1 (07:37):
Okay, and the alternative is the Stable channel.
Speaker 2 (07:39):
Yes, this release is much slower, only once every quarter,
so every three months. You have to wait a lot
longer for the shiny new features.
Speaker 1 (07:46):
But the tradeoff is long term reliability.
Speaker 2 (07:49):
Correct. A stable release receives active patches for bug fixes
and security issues for four full months.
Speaker 1 (07:56):
Okay, let's unpack the mechanics of that trade off, because
I have to push back a little on the appeal
of the Edge channel. Sure. Is it really worth risking
your core infrastructure stability and dealing with potential monthly bugs
just to get a feature a few months early? I mean,
replacing the foundation of your server every thirty days sounds
like a total operational nightmare.
Speaker 2 (08:15):
Your hesitation is entirely valid, and honestly, for the vast
majority of users listening right now, stable is the default recommendation. Okay,
But let's look at a concrete scenario where Edge actually
becomes a life saver. Imagine your development team is completely stalled.
Speaker 1 (08:30):
Like they can't work at all.
Speaker 2 (08:31):
Right, You've hit a wall because of a deeply embedded
DNS routing bug in the Docker engine itself.
Speaker 1 (08:37):
Oh wow, so your team literally cannot ship their code.
Everything is totally blocked, exactly.
Speaker 2 (08:44):
And you know the Stable channel isn't scheduled to release
a patch for another two and a half months. Yes,
but tomorrow the Edge channel drops an update that contains
the exact networking fix you need. In that scenario, the
calculation completely flips.
Speaker 1 (08:57):
Oh, I see you gladly take the Edge update.
Speaker 2 (09:00):
You accept the risk of the shorter support window and
potential minor bugs because it unblocks your entire engineering team today.
It's about weighing the pain of a potential bug against
the very real, immediate pain of a stalled workforce.
Speaker 1 (09:14):
Okay, that makes total sense. You use Edge as a
tactical unblocker, not just as a way to play with
shiny new buttons.
Speaker 2 (09:21):
Exactly.
Speaker 1 (09:21):
And for the enterprise teams paying the big bucks, I
imagine their Stable channel is even more reinforced.
Speaker 2 (09:27):
Oh it's heavily fortified. Enterprise Edition releases quarterly just like Stable,
but each of those releases is supported and maintained for
an entire year.
Speaker 1 (09:35):
Wow, a full year.
Speaker 2 (09:36):
Yeah, And crucially, Docker will backport security fixes to older
supported EE versions so an enterprise doesn't have to rip
and replace their entire infrastructure just to patch one specific vulnerability.
Speaker 1 (09:51):
Right. So we've navigated the editions, the version numbers, the
release channels, But choosing the right software means absolutely nothing
if you can't even get it running on your actual machine.
Speaker 2 (10:01):
Right. The practical part.
Speaker 1 (10:02):
Which brings us to the massive technical elephant in the
room here. Docker natively requires a Linux operating system. The
entire concept of containerization basically relies on a Linux kernel.
Speaker 2 (10:15):
That is the fundamental architectural hurdle. If we connect this
to the bigger picture, the core of Docker is a
background process called the Docker daemon.
Speaker 1 (10:24):
And what does that do exactly?
Speaker 2 (10:25):
The daemon is this invisible engine that does all the
heavy lifting. It builds your containers, manages them, distributes them.
But that daemon only speaks Linux.
Speaker 1 (10:33):
So if you are sitting on a Mac or a
Windows PC, your hardware natively speaks a completely different language.
So how do we force Apple and Microsoft hardware to
play nice with a Linux daemon? Historically, there are two
distinct ways to build this bridge, right, the older Docker
toolbox method and the modern Docker Desktop method. That's right,
(10:53):
let's tackle the legacy approach first. The Docker Toolbox. I've
heard it's basically a giant Swiss Army knife.
Speaker 2 (11:02):
It really is. Because the toolbox knows your computer isn't
running Linux. It installs a suite of six different tools
to artificially create a Linux environment.
Speaker 1 (11:12):
Six tools. That's a lot.
Speaker 2 (11:13):
Yeah, And the most critical component it installs is a
program called VirtualBox.
Speaker 1 (11:17):
Okay. VirtualBox is what's known as a Type two hypervisor.
And to understand how this works, we need a quick timeout.
Can you explain the fundamental difference between these hypervisors in
plain English? If a Type two hypervisor sits on top
of my operating system, it's essentially acting like an emulator, right,
like running an old Super Nintendo emulator inside a window
on my Windows desktop.
Speaker 2 (11:38):
That's actually a highly accurate way to visualize it. Yeah,
your physical hardware is down at the bottom, Windows or
macOS sits on top of that hardware. VirtualBox
is just a regular application running inside Windows. It carves
out a chunk of your memory and processor power
and runs a tiny, isolated Linux virtual machine inside that
application window.
Speaker 1 (11:57):
Got it. So it's a computer pretending to be another computer,
and the toolbox installs the Docker daemon inside that tiny
Linux virtual.
Speaker 2 (12:06):
Machine, right, But that introduces a huge communication problem. Your
main Windows or Mac terminal can't easily talk to the
daemon hidden inside that emulator.
Speaker 1 (12:15):
It's blocked off exactly.
Speaker 2 (12:16):
So the toolbox installs a few more things. There's Docker Machine,
which automates the creation of that VirtualBox VM, and then
there's the Quickstart Terminal, which specifically configures your command
line to bridge the gap and actually talk to the
hidden daemon.
Speaker 1 (12:28):
It also throws in Kitematic, right, which I think is
a brilliant addition for visual learners.
Speaker 2 (12:33):
Oh, Kitematic is great.
Speaker 1 (12:35):
Instead of just staring at a blinking cursor in a terminal,
Kitematic gives you a graphical user interface. You get literal
buttons to click to start and stop your containers, which
makes the whole thing feel a lot less abstract.
Speaker 2 (12:47):
It definitely lowers the barrier to entry for beginners. But
as you noted with the emulator analogy, the type two
approach is heavy.
Speaker 1 (12:55):
Yeah.
Speaker 2 (12:56):
There is a massive middleman translating every single instruction between Windows,
the virtual machine, and the Linux daemon.
Speaker 1 (13:04):
Which naturally leads us to the second, more modern method,
Docker Desktop. Yeah, and Desktop throws VirtualBox completely out
the window. It bypasses the Type two emulator entirely because
it utilizes a Type one hypervisor. Correct. So if Type
two is a heavy emulator acting as a middleman, a
Type one must strip away the middleman entirely, right? It
(13:24):
must talk directly to the metal.
Speaker 2 (13:26):
You've hit the nail on the head. A Type one hypervisor
is integrated right down at the foundation of the operating system.
Instead of running as an application on top of Windows
or Mac, it sits parallel to them, communicating almost directly
with the hardware silicon. On a Mac, this native hypervisor
is called HyperKit, and on Windows it's called Hyper-V.
Speaker 1 (13:45):
Okay, so Desktop hooks directly into HyperKit or Hyper-V,
allowing the Linux daemon to run with incredible efficiency. From
my perspective, just sitting at the keyboard, it feels like
Docker is just a native Mac or Windows application exactly.
Speaker 2 (14:00):
You don't need a special Quickstart Terminal, you don't
have to manage VirtualBox. It's just beautifully integrated into
the system.
Speaker 1 (14:06):
It removes an immense amount of friction from the daily workflow.
Speaker 2 (14:09):
It really does.
Speaker 1 (14:10):
Okay, So a natively integrated Type one hypervisor sounds vastly superior.
Why would anyone ever choose to install the heavy, clunky
Type two toolbox. I mean, it's time for the showdown
desktop versus toolbox.
Speaker 2 (14:23):
Well, the truth is the choice is often made for
you by your hardware.
Speaker 1 (14:26):
What do you mean?
Speaker 2 (14:27):
Because Docker Desktop hooks so deeply into the operating system
using that type one hypervisor, your machine has to meet
incredibly strict criteria. For starters, you cannot have VirtualBox
or VMware running on your machine at all.
Speaker 1 (14:40):
Let me guess the hypervisors fight each other for control
of the hardware.
Speaker 2 (14:44):
Precisely, they clash violently. So if you have legacy projects
for work or school that absolutely require VirtualBox, Docker
Desktop is just going to break your whole setup.
Speaker 1 (14:54):
Oh that's frustrating.
Speaker 2 (14:56):
Yeah. And beyond that, the operating system requirements are super rigid.
Mac users need hardware built in twenty ten or newer
running at least OS X ten point ten, Yosemite.
Speaker 1 (15:05):
And for Windows users, the limitation is even more severe.
Speaker 2 (15:08):
It is a massive sticking point. To run Docker Desktop on
a PC, you absolutely must have Windows ten Professional, Enterprise,
or the Education edition. It simply will not install on
Windows ten Home Edition.
Speaker 1 (15:20):
Wait, really? Because Microsoft deliberately strips the Hyper-V hypervisor
out of the Home edition to cut costs exactly. So
if you're just sitting on your couch with a standard
Windows Home laptop, Docker Desktop is a completely locked door.
Speaker 2 (15:33):
And that is exactly where the Docker toolbox shines as
the ultimate fallback plan. Its requirements are incredibly forgiving. Because
it runs as a Type two application, it plays perfectly nicely
alongside other virtualization tools. It works on much older operating systems,
going back to OS X Mountain Lion and Windows seven, and crucially,
it fully supports Windows Home Edition.
Speaker 1 (15:55):
Okay, so toolbox is reliable even if it's a bit clunky.
But let's look at the actual day to day usability here.
How does it feel to actually write code on these
two systems.
Speaker 2 (16:04):
The difference in accessing your containers is pretty stark.
Speaker 1 (16:07):
So if I build a web app using Docker Desktop,
I just open my browser, type localhost, and boom,
my app appears, right, and I can use whatever terminal
I want, like standard PowerShell.
Speaker 2 (16:17):
Right, because the Type one hypervisor tricks the system into
treating the container network as native. But with the toolbox,
your container is trapped inside that VirtualBox VM. Oh right,
that virtual machine gets assigned its own separate local IP
address. It's usually something long and awkward, like one ninety
two point one sixty eight point ninety nine point one hundred.
Speaker 1 (16:38):
Okay, let me give you an analogy for this friction.
Using Docker Desktop is like having a sophisticated translator chip implanted
directly into your brain. You just think localhost and
the chip instantly translates it. You never even feel the
translation happening.
Speaker 2 (16:51):
Yeah, that's spot on.
Speaker 1 (16:52):
But using Docker Toolbox, however, is like having to pick
up a landline phone and dial a very specific, really
long foreign number, one ninety two point one sixty eight
point ninety nine point one hundred, every single
time you want a single word translated.
Speaker 2 (17:07):
That's exactly what it feels like.
Speaker 1 (17:09):
It gets the job done, but it is deeply tedious.
Speaker 2 (17:11):
The translator analogy perfectly captures the daily frustration of the toolbox,
But there's one more critical comparison we really need to
unpack: performance, specifically how these two architectures handle file mounting, right?
Speaker 1 (17:26):
Because when you're developing software, you don't just put code
in a container and leave it. You write code on
your Windows or Mac host machine, and you need the
Linux container to instantly recognize those file changes. So it
can recompile the app in real time exactly.
Speaker 2 (17:40):
That real time sync is called a file mount. And
for a long time there was this fascinating quirk in
the industry. Moving a file system event like hitting save
on a text document across a Type one native hypervisor
boundary actually introduces terrible latency. Really? Yeah, the hypervisor has
to painstakingly translate the Mac file event into a Linux file event.
(18:04):
Because of this overhead, the older Type two virtual Box
method actually had a noticeable edge in raw speed when
mounting heavy directories. VirtualBox had basically spent a decade
perfecting its shared folder system.
Speaker 1 (18:16):
That is wild. The heavy emulator actually beat the sleek
native integration at file sharing.
Speaker 2 (18:21):
It did initially, but I want to be clear that
Docker realized this was a major bottleneck. Over the last few
years they've engineered entirely new file sharing mechanisms specifically for Desktop
to close this gap. Okay. Today the performance on Desktop
is incredibly robust and continues to get better.
Speaker 1 (18:35):
So bringing it all together, what is the final verdict here?
If someone is sitting at their desk ready to start,
what is the game plan?
Speaker 2 (18:42):
My definitive recommendation is to always attempt Docker Desktop.
Speaker 1 (18:46):
First, always desktop first.
Speaker 2 (18:48):
Yes, yes, if your machine meets the requirements, meaning Windows
Pro or a modern Mac, and you aren't tied to
VirtualBox for another project, Desktop is the clear winner.
The localhost convenience and the seamless integration simply provide
a vastly superior developer experience.
Speaker 1 (19:05):
But if you hit a wall, like if you have
Windows Home, or if you encounter bizarre performance issues on
an older machine, do not get discouraged exactly, just uninstall it,
pivot and download the Docker toolbox. It is an incredibly
reliable plan B that has literally powered millions of development workflows.
Speaker 2 (19:24):
Right. The overall goal is just to get the daemon running,
whichever path you have to take.
Speaker 1 (19:27):
Okay, let's assume you've made your choice, You've run the installer,
the progress bar has finished. How do you actually prove
to yourself that it worked? We need to verify the installation.
Speaker 2 (19:36):
Verification is straightforward, but it does require using the command line. First,
open your terminal. If you chose desktop, just open your
native Mac terminal or Windows PowerShell. If you chose toolbox,
you absolutely must open that specific Docker Quickstart Terminal.
Speaker 1 (19:53):
Right, you have to pick up the special red phone
to talk to the VirtualBox daemon.
Speaker 2 (19:57):
Once the terminal is open, you're going to type two
words: docker, space, info. Docker info. When you execute that,
your terminal should instantly flood with text. It will return
a massive wall of system details, the architecture type, the
exact Linux kernel version running the daemon, the total number
of containers on the system.
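If you'd rather script that check than eyeball a wall of text, `docker info` exits non-zero when the CLI can't reach a daemon, so you can branch on its exit code. A minimal sketch; the helper name is ours:

```shell
# daemon_status: reports whether the CLI can reach a running Docker daemon.
# `docker info` exits 0 only when the bridge to the daemon is working;
# a missing binary or a broken bridge both land in the else branch.
daemon_status() {
  if docker info >/dev/null 2>&1; then
    echo "daemon reachable"
  else
    echo "daemon NOT reachable, revisit the install steps"
  fi
}

daemon_status
```

This is handy in CI or setup scripts, where a human isn't around to read the output.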
Speaker 1 (20:17):
And if you see all that?
Speaker 2 (20:19):
If you see all that data, congratulations, you have successfully
bridged the gap. Your native operating system is actively communicating
with a Docker daemon.
Speaker 1 (20:26):
Now what if you get a giant angry red permission
denied error or a message saying cannot connect to the
Docker daemon.
Speaker 2 (20:34):
Then you know the bridge is broken and you need
to revisit the installation steps.
Speaker 1 (20:38):
But let's say Docker info works perfectly. Is there a
second verification step we should do?
Speaker 2 (20:43):
Yes. Next, type docker dash compose, space, dash dash version.
Speaker 1 (20:50):
Okay, docker-compose dash dash version.
Speaker 2 (20:52):
Docker Compose is this incredibly powerful orchestration tool. It's
used for running applications that require multiple containers to work together,
like a web server container talking to a separate database container. Oh,
I see, Even if you aren't building complex apps on
day one, checking that Compose installed correctly basically guarantees that
(21:12):
your entire toolkit is complete and healthy. It should just return
a single clean line stating the exact version installed.
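One wrinkle worth knowing: newer Docker installs ship Compose as a CLI plugin invoked as `docker compose`, while older setups have the standalone `docker-compose` binary the episode describes. A hedged sketch that checks for either (the helper name is ours):

```shell
# compose_flavor: newer Docker installs ship Compose as a CLI plugin
# (`docker compose`), older ones as a standalone `docker-compose` binary.
# Reports which one, if either, is available on this machine.
compose_flavor() {
  if docker compose version >/dev/null 2>&1; then
    echo "v2 plugin"
  elif command -v docker-compose >/dev/null 2>&1; then
    echo "standalone binary"
  else
    echo "not installed"
  fi
}

echo "compose: $(compose_flavor)"
```

Either answer other than "not installed" means your toolkit is complete.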
Speaker 1 (21:20):
Which brings us to a quick review exercise for everyone listening,
I want you to go run those commands right now.
Open your terminal, type Docker info and look really closely
at the output.
Speaker 2 (21:29):
Yeah, find the version number exactly.
Speaker 1 (21:31):
Find that number based on the YY.MM format we decoded earlier.
Exactly what year and month was your software released? How
old is the engine running on your machine? Right?
Speaker 2 (21:39):
So if you see twenty three point zero five, you
know immediately it's from May of twenty twenty three.
Speaker 1 (21:43):
It's a habit that completely changes how you look at
infrastructure health.
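If you want that math done for you, here's a small sketch, assuming the YY.MM scheme holds; the helper is our own illustration, not a Docker command:

```shell
# version_age_months: rough age of a YY.MM Docker version in months,
# measured against today's date. Illustrative helper only.
version_age_months() {
  yy=$(echo "$1" | cut -d. -f1)
  mm=$(echo "$1" | cut -d. -f2)
  mm=${mm#0}                        # strip leading zero so "03" isn't octal
  now_y=$(date +%Y)
  now_m=$(date +%m); now_m=${now_m#0}
  echo $(( (now_y - 2000 - yy) * 12 + (now_m - mm) ))
}

version_age_months 17.03   # months elapsed since March 2017
```

Anything in the triple digits is a loud signal that the engine is overdue for an upgrade.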
Speaker 2 (21:47):
It forces you to treat your tools as active aging
components rather than just static files.
Speaker 1 (21:52):
And as we wrap up, I want to leave you
with a slightly provocative thought about where all of this
is actually heading. We just spent this entire time dissecting
the intense architectural gymnastics required to make Linux run on
Windows and Mac.
Speaker 2 (22:06):
Yeah, we talked about emulators, hypervisors, translating kernels.
Speaker 1 (22:10):
It is a massive amount of engineering just to run
a daemon, it really is. But the tech landscape is shifting.
Operating systems are becoming incredibly integrated. Just look at Microsoft.
They have heavily pushed the Windows subsystem for Linux or WSL.
They are effectively weaving a native Linux kernel directly into
the underlying fabric of Windows itself.
Speaker 2 (22:29):
They're erasing the boundary entirely, so.
Speaker 1 (22:31):
As that technology matures and the lines between Windows, Mac
and Linux completely blur, what happens to the whole concept
of virtualization.
Speaker 2 (22:40):
That's a great question.
Speaker 1 (22:41):
Will we even need type one or Type two hypervisors
a decade from now? If Windows natively speaks Linux right
down at the metal, Does the concept of Docker desktop
just become a relic of the past. Does the container
just become a native, built in citizen of every operating
system on Earth?
Speaker 2 (22:58):
That is the multi billion dollar question. The friction we
just spent twenty minutes navigating might eventually just disappear.
Speaker 1 (23:05):
It's an incredible horizon to mull over as you start
spinning up your very first containers today. Well, you've survived
the deep dive. You now know your Community Edition from
your Enterprise, your Edge from your Stable, and exactly how
to tame the hypervisors lurking inside your computer.
Speaker 2 (23:21):
You definitely have the foundational setup to start building something amazing.
Speaker 1 (23:25):
Thanks for joining us. Remember the next time you install
a piece of software, ask yourself, is this just an
application or is it an ecosystem? Keep exploring