
April 28, 2026 · 22 mins
In this lesson, you’ll learn about: Docker Compose, multi-container apps, and service orchestration.

1. What is Docker Compose?
  • Docker Compose is a tool used to define, run, and manage multi-container applications using a single command
👉 Instead of long docker run commands, you describe everything in one file

2. The docker-compose.yml File
  • Core configuration file written in YAML
  • Uses version 3 syntax
Example structure:

```yaml
version: "3"
services:
  web:
    build: .
  redis:
    image: redis
```
  • Defines:
    • Services (containers)
    • Networks
    • Volumes
3. Defining Services
  • Each service represents a container
Example:
  • Web app (custom build)
  • Redis (prebuilt image)
🔹 Build vs Image
  • build: → build from local Dockerfile
  • image: → pull from registry (e.g., Docker Hub)
```yaml
web:
  build: .
redis:
  image: redis
```

4. Port Mapping

```yaml
ports:
  - "5000:5000"
```
  • Format: host_port:container_port
👉 Allows access from your browser (localhost)

5. Volumes (Data Management)

🔹 Host-Mounted Volume

```yaml
volumes:
  - .:/app
```
  • Syncs local files with container
  • Ideal for development
🔹 Named Volume

```yaml
volumes:
  - redis-data:/data
```

Declared once more as a top-level block so Docker provisions it:

```yaml
volumes:
  redis-data:
```
  • Persistent storage
  • Managed by Docker
6. Managing Service Dependencies

```yaml
depends_on:
  - redis
```
  • Ensures:
    • Redis starts before the web app (start order only; depends_on does not wait for Redis to be ready)
👉 Important for backend-dependent services

7. Environment Variables with .env
  • Store sensitive or dynamic values:
```
COMPOSE_PROJECT_NAME=myapp
```

Benefits:
  • Cleaner config
  • Avoid hardcoding
  • Easy to manage across environments
🔹 COMPOSE_PROJECT_NAME
  • Defines a custom project name
  • Prevents conflicts between projects
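Beyond configuring Compose itself, values in .env can also be substituted into the YAML with ${...} syntax. A small sketch (the REDIS_TAG variable name is illustrative):

```
# .env
COMPOSE_PROJECT_NAME=myapp
REDIS_TAG=alpine
```

```yaml
services:
  redis:
    image: "redis:${REDIS_TAG}"  # resolves to redis:alpine at startup
```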
👉 Useful when running multiple apps on the same machine

8. Running Everything with One Command

```
docker-compose up
```
  • Builds images
  • Creates containers
  • Starts all services
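Putting the sections above together, the complete docker-compose.yml from this lesson might look like this sketch (paths and ports follow the earlier examples):

```yaml
version: "3"
services:
  web:
    build: .            # build from the local Dockerfile
    ports:
      - "5000:5000"     # host_port:container_port
    volumes:
      - .:/app          # host-mounted volume for live code sync
    depends_on:
      - redis           # start Redis before the web app
  redis:
    image: redis        # prebuilt image from Docker Hub
    volumes:
      - redis-data:/data

volumes:
  redis-data:           # named volume, provisioned and managed by Docker
```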
9. Why Docker Compose Matters
  • Simplifies complex setups
  • Reduces human error
  • Makes projects:
    • Reproducible
    • Shareable
    • Scalable
Key Takeaways
  • Docker Compose = multi-container management made easy
  • docker-compose.yml = your infrastructure blueprint
  • Supports:
    • Services
    • Volumes
    • Networks
    • Environment variables
  • One command replaces dozens of manual steps


You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cybercode_academy

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You know, there is this incredibly satisfying moment in development
when you finally get a single container running perfectly.

Speaker 2 (00:07):
Oh. Absolutely, it's a great feeling.

Speaker 1 (00:09):
Right, You've got your app, it's totally isolated, all its
dependencies are locked in, and it's just beautiful. But then, well,
reality hits.

Speaker 2 (00:16):
Your application actually grows.

Speaker 1 (00:18):
Exactly Suddenly you need a database or maybe a caching layer,
and what was once this single elegant run command turns
into an absolute nightmare of manual, repetitive tasks.

Speaker 2 (00:32):
Yeah, I mean you're opening terminal tab after terminal.

Speaker 1 (00:34):
Tab, trying to remember like five or maybe ten really
long complicated run commands with a dozen flags each, just
to get your local environment standing up.

Speaker 2 (00:43):
It genuinely becomes a second job. You transition from being
a developer focused on writing code to basically being a sysadmin,
desperately trying to orchestrate this fragile house of
cards on your local laptop.

Speaker 1 (00:55):
Which is stressful enough just for you, right.

Speaker 2 (00:57):
And then imagine trying to onboard a new team member
and handing them a document with twenty different commands they
have to execute flawlessly in a highly specific order every
single morning.

Speaker 1 (01:07):
It's totally error prone. Honestly, it's rage worthy.

Speaker 2 (01:10):
Oh for sure.

Speaker 1 (01:11):
It reminds me of the dark ages of programming before
we had proper package managers.

Speaker 2 (01:16):
That is a really good comparison, actually.

Speaker 1 (01:18):
Right, because imagine if instead of using a package dot
json for Node or requirements dot txt for Python, you
had to manually download and install every single dependency one
by one from the command line every time you set
up a project.

Speaker 2 (01:35):
Nobody wants to run a million commands.

Speaker 1 (01:36):
Nobody. So let's unpack this. We've gathered a stack of
DevOps documentation, system architecture deep dives, and some battle tested
developer blogs, and our mission today is to take all
that operational pain and completely replace it with Docker Compose.

Speaker 2 (01:50):
The core philosophy we're exploring here is the shift from
an imperative paradigm to a declarative one.

Speaker 1 (01:56):
Okay, break that down for us.

Speaker 2 (01:58):
Sure. So with those manual commands, you're imperative. You're telling
the Docker daemon exactly how to do every little thing,
step by step. Go here, pull this, attach this network.
But Docker Compose allows you to move to a declarative approach.
You define the final desired state of your entire architecture
in a single version controlled YAML file.

Speaker 1 (02:18):
You just compose what you want to exist.

Speaker 2 (02:21):
Exactly, you commit it to your repository, and from that point
on you run exactly one command. The engine figures out
how to pull the images, build the code, attach the networks,
and just boot up your entire stack.

Speaker 1 (02:32):
So, if this is the ultimate tool, the obvious question
for you listening is why didn't we just start here?

Speaker 2 (02:38):
Yeah, that's a fair question.

Speaker 1 (02:39):
Why do tutorials and courses force you through the struggle
of learning all those manual network and volume commands first,
if this declarative magic existed the whole time? Well, that.

Speaker 2 (02:51):
Goes back to a fundamental philosophy of learning any complex system.
You have to understand the underlying mechanics before you rely
on the abstraction.

Speaker 1 (02:58):
Like learning a programming language.

Speaker 2 (03:00):
Exactly, if you want to master the Ruby on Rails framework,
you really need to understand the base Ruby language first.
Docker has a lot of complex moving parts at the
kernel level.

Speaker 1 (03:09):
Like how the daemon manages resource limits and stuff.

Speaker 2 (03:12):
Right, how it uses cgroups, how it isolates network
namespaces, or how volumes bypass the union file system entirely.
If you just jump straight to the abstraction of Compose,
you won't actually understand the infrastructure you're deploying.

Speaker 1 (03:26):
You'd just be memorizing syntax without actually understanding the architecture.

Speaker 2 (03:31):
And when something breaks, and it will break, you wouldn't
know if it's a network issue, a storage issue, or
just an application crash.

Speaker 1 (03:38):
The debugging process would be impossible.

Speaker 2 (03:41):
Completely impossible. But because you already understand those underlying concepts networks, volumes,
port binding, moving to compose is going to feel like
taking off the training weights.

Speaker 1 (03:51):
All that architectural knowledge still applies exactly.

Speaker 2 (03:55):
We're just changing the interface we use to orchestrate it all.

Speaker 1 (03:58):
All right, let's conceptualize this blueprint. Imagine we are drafting
the architecture layout. We start with a blank file, our
docker-compose.yml, the blank canvas. Yep. And conventionally, the
very first thing established at the top of this blueprint
is the API version. Usually you just see version: "3",
in quotes. But I have to push back on this. Yeah,

(04:19):
if the Docker Compose specification has evolved so much over
the years, why do we just slap a three on
it and move on? Are we just picking the most
stable legacy version to be safe or what?

Speaker 2 (04:29):
That's actually a fascinating piece of tech history. So version
one was very basic and version two introduced proper isolated networking.
But version three was a massive paradigm shift. How so?
Because it was designed to bridge the gap between local
development and Docker Swarm, which is Docker's built in orchestration
tool for multi node clusters. Oh I see, yeah. Version

(04:50):
three unified the specification so the exact same file could
theoretically be deployed locally on your laptop or to a
massive cluster.

Speaker 1 (04:57):
Wow.

Speaker 2 (04:58):
Okay. While there are minor point releases, establishing the major
version as three ensures the engine parses your structural
namespaces using the modern networking first rule set.

Speaker 1 (05:08):
Okay, that makes total sense. That that's the foundation. From there,
we move into the actual compute layer, the services block.

Speaker 2 (05:15):
Right, This is where the magic happens.

Speaker 1 (05:17):
This is where we define individual components of our application.
For this setup, let's design a classic two tier architecture:
a back end database using Redis, and our custom web application.

Speaker 2 (05:29):
And the names you give these services, let's just call
them redis and web, are critical. They aren't just arbitrary
labels for human readability.

Speaker 1 (05:37):
They actually do something functional.

Speaker 2 (05:39):
Oh absolutely. Docker Compose actually registers these exact names into
its internal DNS resolver. Oh wow. So later on, when
your web application needs to send data to Redis, it
doesn't need to know the database's internal IP address, which
changes every time the container restarts.

Speaker 1 (05:56):
Anyway, it just sends traffic to the hostname redis exactly.

Speaker 2 (05:59):
It's built in service discovery.

Speaker 1 (06:01):
That alone saves so much configuration headache. Now these two
services are going to be instantiated in completely different ways. Right.
For the Redis service, we don't need to compile its
source code from scratch. We just want a reliable, precompiled version.
So we give the redis service an image property pointing
to an official tag like redis:3.2

(06:21):
-alpine.

Speaker 2 (06:22):
Right. The engine sees that, reaches out to the registry
and pulls the image down automatically.

Speaker 1 (06:27):
But the web service represents your proprietary code. You can't
just pull that from a public registry.

Speaker 2 (06:33):
No, you have to build it dynamically.

Speaker 1 (06:35):
Right, So instead of an image property, we use a
build property, and we set its context to the current directory,
usually represented by just a single.

Speaker 2 (06:42):
Dot, which is super clean.

Speaker 1 (06:44):
Yeah. The Compose engine reads that, looks in your local folder,
finds your Dockerfile and executes the build process right
there on the spot. But let me bring up a
practical workflow trick here, because this is something that saves
a lot of time.

Speaker 2 (06:57):
I love a good workflow trick.

Speaker 1 (06:58):
Let's say you've built this web service locally, you've tested it,
and now you want to push your custom image to
your company's registry. Normally, you'd have to run a separate
manual command to tag that newly built image with your
registry repository and version number before you could push it.

Speaker 2 (07:15):
And that manual tagging process is incredibly tedious, it really is.

Speaker 1 (07:20):
Let incompose, if you define both a build property and
an image property under the same service, it handles the
tagging for you.

Speaker 2 (07:29):
Oh that's a great feature.

Speaker 1 (07:30):
You set build to the current directory and you set
image to something like yourcompany/webapp:1.0.
When you run the startup command, Compose
builds the application from source, and the moment the build succeeds,
it immediately applies that custom tag to the final image
hash in your local.

Speaker 2 (07:47):
Cash bypassing the manual tagging step entirely exactly. To understand
the mechanics of why that works, you kind of have
to look at how Compose interacts with the docer Demon API.

Speaker 1 (07:58):
Because Compose isn't actually building the image itself, right.

Speaker 2 (08:01):
Right, it's streaming your local file context to the Docker daemon,
asking the daemon to execute the build, and then injecting
an additional API call to tag the resulting artifact. It
aggregates multiple imperative daemon commands into one declarative statement.
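The build-plus-tag pattern described here can be written as follows in the compose file (the registry name and version tag are illustrative):

```yaml
services:
  web:
    build: .                       # build from the local Dockerfile
    image: yourcompany/webapp:1.0  # tag applied once the build succeeds
```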

Speaker 1 (08:17):
Okay, so our compute layer is defined. The services are built,
but right now they are completely isolated.

Speaker 2 (08:23):
They're essentially in solitary confinement. Right.

Speaker 1 (08:26):
They are running inside the daemon, but they are sealed
off from the outside world. We need to route traffic
into them, which means mapping our.

Speaker 2 (08:33):
Ports the port's property.

Speaker 1 (08:35):
Yeah, under our web service we map host port five
thousand to container port five thousand. Under Redis we might
map sixty three seventy nine.

Speaker 2 (08:44):
Port mapping is where the abstraction hides a tremendous amount
of complexity. When you declare that port mapping, you are
instructing the Docker daemon to interact directly with your host
machine's kernel.

Speaker 1 (08:54):
I always picture it like a corporate switchboard, Okay, I
like that the host machine is the main company phone number,
external traffic calls the mainline on port five thousand, but
the host machine itself doesn't actually have a web server
running on it, So the switchboard intercepts that call, looks
at its routing table, and forwards the traffic down a
dedicated internal wire to extension five thousand, which is the container.

Speaker 2 (09:17):
That's a perfect analogy. The switchboard in this scenario is
actually your host's firewall rules.

Speaker 1 (09:23):
Oh interesting, Yeah.

Speaker 2 (09:24):
On a Linux system, the Docker daemon automatically writes iptables
rules the moment you start the Compose project.

Speaker 1 (09:30):
Wait, it writes actual firewall rules dynamically it does.

Speaker 2 (09:34):
It creates a network address translation or NAT rule that says,
any TCP packet hitting the host's physical network interface on
port five thousand must be seamlessly forwarded into the virtual
bridge network attached to that specific container.

Speaker 1 (09:48):
And here is a crucial connection back to the application architecture.
That internal extension of the container port must precisely match
the bind port defined inside your application code.

Speaker 2 (09:59):
Yes, that is a common trap.

Speaker 1 (10:01):
If your Python flask application is hard coded to listen
on port eighty eighty inside the container, but your compose
file forwards traffic to port five thousand, well, the switchboard
routes the call, but nobody is picking up the phone.

Speaker 2 (10:15):
The infrastructure and the application code must be perfectly synchronized, exactly.
This is also why we don't always expose every service
to the host in our architecture. The web service needs
a host port mapping so you, the developer, can open
a browser and see the app. But for the Redis database,
we usually don't map its port to the.

Speaker 1 (10:34):
Host machine because the web app doesn't go through the
host to reach the database.

Speaker 2 (10:38):
Correct. Compose automatically creates a secure internal virtual network for
all the services in your.

Speaker 1 (10:44):
YAML file using that built in DNS exactly.

Speaker 2 (10:47):
The web container talks to the Redis container directly over
that internal network using the redis hostname we discussed earlier.
Exposing the database port to the host machine is not
only unnecessary for internal communication, it actively creates a security
vulnerability by opening the database to external host traffic.

Speaker 1 (11:05):
Wow. Okay, so we have our compute and we have
our secure network topology. The services are talking, but the
moment you update your application and the database container restarts,
all your stored data vanishes. Oh God, which introduces the
problem of ephemerality and takes us to the next architectural
layer volumes right.

Speaker 2 (11:24):
Because a container's file system is inherently temporary, it uses
a union file system where any changes written during runtime
are stored in the thin volatile layer, and if.

Speaker 1 (11:34):
The container is destroyed, that layer goes with it.

Speaker 2 (11:36):
Destroyed with it. To achieve persistent state, we have to
bypass that volatile layer and store data directly on the
host machine.

Speaker 1 (11:44):
And there are two main ways we handle this and compose.
The first is for our live application code. We want
to map our current working directory on our laptop straight
into the web container. That way, if we edit a
Python file, the change is instantly reflected inside the running
app without having to rebuild the.

Speaker 2 (12:00):
Whole image, which is a huge time saver.

Speaker 1 (12:02):
Huge. We map the current directory using a simple dot
representing the current path to the application folder inside the container,
and I really have to pause and highlight how massive
this is for cross platform development, especially for Windows users.

Speaker 2 (12:17):
Oh. The path format normalization is an absolute life saver.

Speaker 1 (12:21):
In the old manual command days, if a Mac user
shared a run command with a Windows user, it would
break immediately because the absolute file paths are formatted completely differently.

Speaker 2 (12:31):
You'd be fighting with drive letters and backslashes.

Speaker 1 (12:33):
All day exactly. By using the relative dot syntax in Compose,
the engine dynamically translates the current directory path into whatever
format the underlying host operating system requires. The YAML file
remains pristine and universal across every operating system on your team.

Speaker 2 (12:51):
What we've just described is a bind mount. You're taking a
specific known directory on the host and binding it to
the container. But for our database, Redis, we don't want
to bind that.

Speaker 1 (13:00):
Why not.

Speaker 2 (13:01):
We don't want database files littering our project directory. We
want Docker to handle the storage securely behind the scenes.

Speaker 1 (13:07):
Ah okay. So for Redis we use a named volume.
We map a volume named redis_data to the
/data directory inside the container. But, and this catches
so many people out, you cannot just invent a volume
name in the service block and expect the engine to
automatically provision it.

Speaker 2 (13:26):
It creates a validation error right away. Yeah. If you
declare a named volume in the compute layer, you must
explicitly define its creation in the storage layer.

Speaker 1 (13:35):
Which means scrolling to the very bottom of the Compose file,
completely stepping out of the services section and creating a
top level volumes declaration block. You list the name
redis_data there and leave it with an empty configuration. And.

Speaker 2 (13:48):
That empty configuration block is actually an instruction to the
Docker daemon.

Speaker 1 (13:52):
What does it tell it?

Speaker 2 (13:53):
It says, create a volume managed by the local driver.
Store it deep in the daemon's internal directory structure, usually
/var/lib/docker/volumes, and manage
its life cycle independently of any container.

Speaker 1 (14:05):
Oh so it's totally decoupled exactly.

Speaker 2 (14:08):
Even if you tear down the entire Compose project, that
named volume, and all the database records inside it, safely
persists on the host until you explicitly command Docker to
prune it.

Speaker 1 (14:19):
That is so powerful. Okay, so compute is done, networking
is done, Storage is solved. But we still have an
orchestration problem regarding.

Speaker 2 (14:28):
Time the startup sequence.

Speaker 1 (14:30):
Right. These containers boot up simultaneously by default. If our
web application starts in ten milliseconds and immediately tries to
open a socket connection to the database, but the Redis
container takes a full second to initialize its memory structures,
our web app throws a connection refused error and crashes.

Speaker 2 (14:47):
This is a classic race condition. The architecture requires sequence.

Speaker 1 (14:50):
To fix this, we introduce the depends_on property
under our web service. We declare that it depends on redis.
This forces Compose to manipulate the startup sequence, guaranteeing
that it initiates the Redis container before it
even attempts to start the web container. Problem solved, the
database starts first. Well, let's.

Speaker 2 (15:07):
Look at the mechanics of what the demon is actually
monitoring there. Because it depends underscore on solves the boot order,
but it creates a full sense of security regarding application readiness. Really,
how so, when Composed starts the red is container, it
monitors the container's primary process PID one. Okay, the moment
the Linux kernel reports that keid one is running, Composed

(15:29):
checks the depends_on condition off the list, and
immediately fires up the web container. But Compose has absolutely
no idea what is happening inside that database process.

Speaker 1 (15:40):
Oh, I see where this is going.

Speaker 2 (15:41):
If Redis takes another five seconds to run internal setup
scripts before it actually binds to its network port, the
web app is still going.

Speaker 1 (15:48):
To crash because Compose is managing the state of the infrastructure,
not the state of the application. It knows the server
is powered on, but it doesn't know if the software
is ready to take requests.

Speaker 2 (15:58):
Exactly. The distinction: depends_on is an infrastructure
level orchestrator. To achieve true resilience, the application code itself
must account for network latency, So.

Speaker 1 (16:09):
Your web app needs logic to catch a connection failure,
apply an exponential back off, and maybe retry a few
times before throwing a fatal error.

Speaker 2 (16:17):
Precisely. Infrastructure as code handles the topology, but your software
still has to handle the reality of distributed systems.
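The retry-with-backoff logic described here can be sketched in Python. This is a minimal illustration, not a prescribed pattern: a real app would call its Redis or database client's connect method where the stand-in `flaky_connect` appears, and the function names are invented for the example.

```python
import time

def connect_with_retry(connect, retries=5, base_delay=0.1):
    """Call `connect` until it succeeds, retrying with exponential backoff."""
    for attempt in range(retries):
        try:
            return connect()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of attempts: let the error surface
            time.sleep(base_delay * 2 ** attempt)  # 0.1s, 0.2s, 0.4s, ...

# Simulate a database that only accepts connections on the third try,
# the way Redis might still be initializing when the web app boots.
state = {"calls": 0}
def flaky_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("connection refused")
    return "connected"

print(connect_with_retry(flaky_connect, base_delay=0.01))  # → connected
```

The key point from the conversation survives in the sketch: the infrastructure (depends_on) only orders the boot, while this application-level loop absorbs the gap between "process started" and "socket accepting connections".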

Speaker 1 (16:26):
That is a phenomenal architectural insight. The blueprint handles the structure,
the code handles the timing. Okay, we have our final
layer to define. The architecture is solid, but it needs
dynamic configuration.

Speaker 2 (16:40):
Things like API keys, debug toggles, or database passwords.

Speaker 1 (16:44):
Exactly, things that should never be hard coded into the
YAML blueprint itself, which introduces environment variables.

Speaker 2 (16:51):
Hard coding configurations into your infrastructure blueprint violates the principle
of reproducible builds. You want the exact same Compose file
to work for local testing, staging, and production.

Speaker 1 (17:00):
And the way we achieve that is by decoupling the
configuration from the architecture, using the env_file property.
Instead of listing variables directly in the YAML, we tell
the service to look at an external file, usually just
called .env. And the beauty of the env_file
property is that it processes lists sequentially.

Speaker 2 (17:20):
That's a huge feature.

Speaker 1 (17:21):
You can load a base .env file with default
local settings and then immediately load a .env.production
file right below it. Because the engine evaluates them
from top to bottom, the production variables smoothly overwrite the
base variables in memory.
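In the compose file, that layered loading looks like the following sketch (file names are illustrative):

```yaml
services:
  web:
    build: .
    env_file:
      - .env             # base defaults, loaded first
      - .env.production  # loaded second, so its values win
```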

Speaker 2 (17:36):
It allows for seamless environment switching. And there are specific
environment variables that control how Compose itself behaves, independently of
your application.

Speaker 1 (17:45):
Let's look at one of those right now. If we
open our .env file, we might see
COMPOSE_PROJECT_NAME=my_app.

Speaker 2 (17:53):
That variable prevents a silent infrastructure collision.

Speaker 1 (17:56):
How does it do that?

Speaker 2 (17:57):
When Compose builds your networks and volumes, it needs a
way to track which resources belong to which project. By default,
it uses the name of the folder your YAML file
is sitting in as a prefix label.

Speaker 1 (18:08):
So if my code is in a folder called website,
Compose names the network website_default. That sounds perfectly reasonable.

Speaker 2 (18:17):
It is reasonable until you are an agency developer working
on three different client projects and you cloned all three
repositories into folders simply named backend.

Speaker 1 (18:26):
Oh no, a name space collision.

Speaker 2 (18:29):
A catastrophic one. If you run Compose in the first
backend folder, it creates a network called
backend_default. Right. When you switch to the second client's
project and run Compose, the engine sees the same folder name,
assumes it's the same project, and attempts to attach the
second client's containers to the first client's isolated network.

Speaker 1 (18:48):
Or worse, it tries to overwrite the persistent database volume exactly.

Speaker 2 (18:52):
It's a disaster.

Speaker 1 (18:53):
But by explicitly declaring COMPOSE_PROJECT_NAME in
the .env file, we override the folder name default.
We force the engine to tag every resource with a
unique deterministic prefix, completely isolating the namespace regardless of
where the code lives on your hard drive.
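A sketch of the fix: in each clone, a one-line .env overrides the folder-derived prefix (the project name client_a is illustrative):

```
# .env inside a repo folder named "backend"
COMPOSE_PROJECT_NAME=client_a
# resources become client_a_default, client_a_redis_data, etc.,
# instead of colliding on backend_default
```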

Speaker 2 (19:11):
It gives you absolute control over the daemon's resource labeling.
There's another environment variable trick often used in these files,
specifically for Python applications, regarding standard output.

Speaker 1 (19:22):
Right, PYTHONUNBUFFERED=true. This is a great example
of application quirks interacting with container infrastructure.

Speaker 2 (19:30):
It really is.

Speaker 1 (19:31):
By default, standard POSIX behavior for languages like Python
is to buffer terminal output to save computational resources. Instead
of printing every log line immediately, it waits for the
buffer to fill up and then flushes a chunk of
logs all at.

Speaker 2 (19:47):
Once, which makes total sense on a raw operating system.

Speaker 1 (19:50):
But Docker Compose acts as a centralized log aggregator. It
attaches to the standard output streams of all your containers
and multiplexes them into your terminal.

Speaker 2 (19:59):
So if Python is buffering those logs.

Speaker 1 (20:01):
Internally, doctor has nothing to capture. You can fire up
your whole project and your terminal will just sit there
completely blank. You'd think the process was frozen.

Speaker 2 (20:08):
It's terrifying the first time you see it.

Speaker 1 (20:10):
But by setting PYTHONUNBUFFERED=true, you
force the application to flush every single line to standard
output instantaneously, allowing Compose to actually stream your application logs
in real time.
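In the compose file, that setting typically rides along as an environment entry (a sketch; Python treats any non-empty value as enabling unbuffered output):

```yaml
services:
  web:
    build: .
    environment:
      - PYTHONUNBUFFERED=true  # flush each log line to stdout immediately
```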

Speaker 2 (20:23):
If you are relying on Compose for debugging, understanding how
process streams interact with the daemon's log capture is absolutely essential.

Speaker 1 (20:30):
All right, We have covered a massive amount of architectural
ground today to make sure the synthesis really sticks. Let's
do a quick structural.

Speaker 2 (20:38):
Review, a little mental exercise.

Speaker 1 (20:39):
Exactly. Imagine you are looking at the YAML blueprint we
just finalized. Your application traffic is spiking, and you realize
you need to add a robust Postgres database alongside Redis.
Mentally map out the execution for me.

Speaker 2 (20:52):
Okay, let's walk through it.

Speaker 1 (20:53):
First, to add the compute, where do you go? You
add a new component under the top level services namespace. Second,
to allow the host to inspect the database, you expose
its port under the ports property. And finally, to ensure
user data survives a container restart, you declare a named
volume in the service. And then, what section at the
very bottom of the file do you update to instruct
Docker to actually provision that storage?

Speaker 2 (21:15):
Well, you'd need the volumes namespace.

Speaker 1 (21:17):
Yes, exactly, the volumes namespace. You declare the new
Postgres volume name with an empty configuration hash. If you
can trace that flow, compute, network, storage, you've fully grasped
the declarative power of this tool.

Speaker 2 (21:31):
You've transitioned from manually typing imperative commands to engineering a
reproducible infrastructure architecture, and that is.

Speaker 1 (21:39):
The true value unlocked today. We took a brittle, manual
setup that required specialized knowledge just to turn on, and
reduced it to a single file, an infrastructure blueprint that
lives right alongside your code, universally executable by anyone on
your team with a single command.

Speaker 2 (21:55):
It's a game changer for local development. But you know,
here is something worth sitting with tonight: Docker Compose is
a masterpiece for orchestration on a single machine. But what
happens when your application goes viral and one physical server
simply doesn't have the compute power to handle the traffic?
How do you take this exact declarative philosophy defining your
compute, network, and storage, and spread it dynamically across a

(22:18):
cluster of ten thousand servers.

Speaker 1 (22:20):
That is where enterprise orchestrators like Kubernetes enter.

Speaker 2 (22:23):
The chat exactly, But we'll leave that rabbit hole for
you to explore on your own.

Speaker 1 (22:27):
Definitely, keep building, keep experimenting, and remember don't let your
infrastructure become a second job. Use the tools. We'll catch
you next time on the deep dive.