
April 30, 2026 18 mins
In this lesson, you’ll learn about applying Docker to real-world apps and scalable architecture principles.

1. Framework-Based Starter Projects
  • The episode provides 7 ready-to-use starter projects for popular frameworks:
    • Flask
    • Express (Node.js)
    • .NET
    • Django
    • Ruby on Rails
    • Golang
    • Laravel
  • Each project includes:
    • Dockerfile
    • docker-compose.yml
👉 Goal: get you running fast with real applications in Docker.

2. Logging to Standard Output (stdout)

❌ Problem:
  • Writing logs to files inside containers
  • Logs are lost when the container stops or restarts
✅ Best Practice:
  • Log everything to stdout
print("App started")
  • Benefits:
    • Managed by Docker daemon
    • Easy to:
      • View → docker logs
      • Rotate logs
      • Send to monitoring systems
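For example, in Python you can point the standard logging module at stdout, a minimal sketch in which the logger name and format are illustrative:

```python
import logging
import sys

# Send all log records to stdout so the Docker daemon captures them;
# force=True replaces any handlers configured earlier.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
    force=True,
)

logger = logging.getLogger("app")  # "app" is an illustrative name
logger.info("App started")
```

Once logs go to stdout, `docker logs <container>` shows them without any file handling in the app.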
3. Environment-Based Configuration
  • Use environment variables instead of hardcoding values
Example: DB_HOST=redis APP_ENV=production

Benefits:
  • Switch between environments easily:
    • Development
    • Testing
    • Production
  • No need to change source code
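In code, the app simply asks its environment for these values. A Python sketch, where the defaults and the helper function are illustrative:

```python
import os

# Read configuration from the environment instead of hardcoding it.
DB_HOST = os.environ.get("DB_HOST", "localhost")
APP_ENV = os.environ.get("APP_ENV", "development")

def db_url() -> str:
    # Illustrative helper: build a connection string from env-driven config.
    return f"postgresql://{DB_HOST}:5432/app"
```

Swapping environments now means swapping the variables fed to the container, not editing source.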
4. Stateless Application Design ("Stupid Apps")

❌ Bad Practice:
  • Storing data inside the app container
  • Example:
    • Sessions in memory
✅ Best Practice:
  • Keep apps stateless
  • Store data in external services like:
    • Redis (sessions, cache)
    • Databases
Why this matters:
  • Containers can:
    • Restart anytime
    • Scale horizontally
👉 No data should be lost.

5. The 12-Factor App Philosophy
  • These practices are based on:
    • 12 Factor App
Core Ideas:
  • Config via environment variables
  • Logs treated as event streams
  • Stateless processes
  • Portable across environments
6. Real-World Impact

Following these principles allows you to:
  • Scale applications easily
  • Avoid downtime/data loss
  • Deploy consistently across:
    • Local
    • Cloud
    • CI/CD pipelines
Key Takeaways
  • Starter projects help you skip setup and start building
  • Always log to stdout
  • Use .env for configuration
  • Keep apps stateless
  • Follow 12-Factor App for production-ready systems
Big Picture

You’re no longer just learning Docker; you’re applying it like a professional:
  • Building real apps
  • Designing scalable systems
  • Following industry best practices


You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cybercode_academy

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Imagine moving into this incredibly futuristic, highly automated smart home,
like the kind of place with climate control zones, built-in
ambient lighting, automated security.

Speaker 2 (00:12):
Oh that sounds amazing, right.

Speaker 1 (00:14):
It practically runs itself. But when you move in, you
insist on bringing all of your clunky, outdated furniture, like.

Speaker 2 (00:21):
A massive vintage sofa.

Speaker 1 (00:23):
Exactly. You drag in this massive vintage sofa that completely
blocks the motion sensors, and you plug in an antique
lamp that constantly trips the smart grid's breakers.

Speaker 2 (00:33):
So you're basically fighting the architecture rather than leveraging it.

Speaker 1 (00:36):
Yeah, you have this brilliant new environment, but you're sabotaging
it because you're trying to live in it exactly the
same way you lived in your old apartment.

Speaker 2 (00:43):
The environment completely changed around you, but the behavior of
the things inside it, well, they didn't adapt.

Speaker 1 (00:48):
Welcome to the deep dive today, we are exploring a
transition that feels exactly like that smart home scenario.

Speaker 2 (00:55):
It really does.

Speaker 1 (00:56):
We are crossing the bridge from just understanding basic Docker commands,
you know, like spinning up a pre-made image, to
actually architecting and Dockerizing your own real-world web applications.

Speaker 2 (01:08):
And that transition is historically where so many developers just
hit a wall.

Speaker 1 (01:13):
Yeah they really do.

Speaker 2 (01:14):
Applying Docker to production level applications isn't just about throwing
your existing code into a container and hoping it runs.
It requires a fundamental paradigm shift in how you design
the applications behavior from the ground up.

Speaker 1 (01:26):
So today's mission is to map out that paradigm shift
for you. We're going to explore the absolute essential blueprint
every project needs, break down some critical architectural rules for scalability,
and unpack a massive logging trap that catches almost everyone
off guard.

Speaker 2 (01:42):
That logging trap is a big one.

Speaker 1 (01:43):
Oh yeah, And finally, we'll explore why making your application
incredibly unbelievably stupid is actually the smartest thing you can
do for its survival.

Speaker 2 (01:51):
Making an application stupid to make the infrastructure smart. I mean,
in the context of modern development, it is the absolute
core of system resilience.

Speaker 1 (01:59):
Let's start with the foundation. Before we can even
touch application design, we need the materials to get a
project off the ground, right. The materials we're reviewing today
lay out a very specific blueprint, like four fundamental files
that should be sitting at the root of every single
Dockerized project you build.

Speaker 2 (02:16):
Yeah, we are looking at a Dockerfile, a .dockerignore
file, a docker-compose.yml file, and a .env file.

Speaker 1 (02:25):
Four files. Right.

Speaker 2 (02:26):
Strictly speaking, you could hack a container together without all
of them, but for a robust, professional setup, skipping any
of these is a mistake.

Speaker 1 (02:35):
The Dockerfile is your recipe, the docker-compose file
is the orchestrator getting your containers to talk to each other,
and the .env handles your secrets. But let's pause on
that .dockerignore file.

Speaker 2 (02:48):
Ah.

Speaker 1 (02:48):
Yes, I think a lot of developers treat it as an afterthought,
sort of like a standard .gitignore. But the stakes are
much higher in the containerized

Speaker 2 (02:55):
Environment, much higher. It's not just about keeping the final
image size down, though you know that is important. It
fundamentally dictates your build times through Docker's layer caching mechanism.

Speaker 1 (03:06):
Right the caching.

Speaker 2 (03:07):
If you don't aggressively ignore local directories like a massive
node_modules folder, or local log files, or temp data,
Docker will evaluate those files during the build process.

Speaker 1 (03:18):
It just scans everything.

Speaker 2 (03:19):
Exactly, and if a single local log file changes, Docker
busts the cache for that entire layer. A build that
should take ten seconds suddenly takes ten minutes because you
forgot to ignore a constantly changing local directory.
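A minimal .dockerignore along those lines might look like this; the entries assume a Node-style project layout, so adjust to your own stack:

```
# Keep bulky, frequently changing, or secret files out of the build context
node_modules
*.log
tmp/
.env
.git
```

Anything listed here never reaches the Docker daemon, so changes to these files can't bust the build cache.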

Speaker 1 (03:33):
Ten minutes instead of ten seconds. That adds up fast.
And there's the security aspect too. If you don't ignore
your local d file, you might accidentally bake your development
databased password straight into the static.

Speaker 2 (03:45):
And push that to a public registry.

Speaker 1 (03:47):
Yeah, total nightmare.

Speaker 2 (03:48):
Precisely, that four file blueprint is critical, but setting those
files up from scratch, perfectly tuned for your specific web
framework is notoriously tedious.

Speaker 1 (03:57):
A lot of trial and error, endless.

Speaker 2 (03:59):
Trial and error with specific dependency managers and volume mappings,
which brings us to a highly recommended approach: leveraging starter projects.

Speaker 1 (04:07):
Right, the sources strongly advocate for these pre-coded templates.
They actually highlight readily available royalty-free templates for seven
major frameworks out of the box.

Speaker 2 (04:18):
Flask, Express, .NET, Django, Ruby on Rails, Golang, and Laravel.

Speaker 1 (04:24):
Yeah, you can grab these templates and immediately use them
as wrappers for your own applications.

Speaker 2 (04:28):
It gives you a massive headstart. I mean, you completely
eliminate the friction of reinventing the infrastructure wheel every time
you start a new build.

Speaker 1 (04:34):
Okay, let's unpack this. The materials specify that these starter
templates are deliberately kept incredibly basic, like bare-bones, Hello
World level simple, very simple. If someone is listening to
this and they are already a senior Django developer, or
they've been building massive enterprise architectures in .NET for years,
why on earth would they want a basic Hello World

(04:58):
starter project? Doesn't that feel like going backward?

Speaker 2 (05:01):
What's fascinating here is that you have to completely separate
your application logic from the infrastructure logic. The Hello World
app isn't the product, It's just a placeholder. The true
gold in these starter projects is the heavily optimized, pre
configured Docker file and Docker Compose environment surrounding it.

Speaker 1 (05:17):
Ah, so it's not teaching you how to write Django.
It's a guaranteed, stable ecosystem that already knows how to
host Django inside a container, exactly.

Speaker 2 (05:27):
The hardest part of porting an existing framework into Docker
is dealing with its specific quirks, like how does it
bind to a network port? How does it handle static
assets behind a proxy? Right. By starting with a Hello
World app that already has a perfectly tuned Docker environment
wrapped around it, you isolate the variables. If you copy

(05:48):
your complex application code into that container and it breaks,
you know immediately the issue is in your application code,
not a misconfigured container port the infrastructure works.

Speaker 1 (05:58):
Going back to our smart home analogy, you're buying a
house that perfectly calibrates the heating, cooling, and security. You
don't need the house to come fully furnished, exactly. Yeah,
you just swap out the placeholder Hello World furniture with
your custom-built pieces, knowing the grid won't short out.

Speaker 2 (06:13):
That is the ideal workflow. But once you bring your
custom code into this new environment, you have to break
some old habits. Designing an app to live inside a container,
even on a single machine, means designing it as if
it's going to scale across one hundred servers.

Speaker 1 (06:28):
And the first habit developers need to break is how
they handle logging.

Speaker 2 (06:32):
Oh absolutely, this is.

Speaker 1 (06:34):
Where people bring that anti clamp into the smart home.
The old habit is writing logs to a disc right.

Speaker 2 (06:40):
Traditionally, frameworks like Ruby on Rails or Laravel, they're configured by
default to write log files directly to a temporary directory
inside the project

Speaker 1 (06:50):
Folder, which makes sense on a normal server.

Speaker 2 (06:52):
On a permanent, bare metal server. That's fine, you just
ssh into the server and read the text file. But
containers are ephemeral by design. They are meant to be
destroyed and recreated at a moment's notice.

Speaker 1 (07:03):
So if your app is writing a log file to
the disk inside the container, and that container crashes and restarts,
that log file is vaporized. Poof, gone, right when you
desperately needed to debug the crash. The evidence is just gone.

Speaker 2 (07:16):
It's, well, it's a disaster in production. And in development,
where you are likely using a volume mount to sync
your code, it creates a completely different problem: clutter. Exactly.
The container writes logs to its internal temp folder, which
is mounted to your host machine, and suddenly your local
hard drive is cluttered with endless generated log files.

Speaker 1 (07:35):
The metaphor I keep thinking of is writing in a
private diary versus broadcasting in a news ticker. The old
way is writing everything down in a physical diary and
hiding it under the bed. If the bed burns down,
the diary is gone. The container way is stopping the physical
writing entirely.

Speaker 2 (07:52):
That is a perfect way to look at it. The solution
is to configure your web application to stop writing files
entirely and instead push all of its log output directly
to standard out, or stdout.

Speaker 1 (08:03):
Most developers know to log to stdout. But the
crucial piece here is how Docker actually wrangles those streams
without choking the daemon. What happens to that broadcasted news ticker?

Speaker 2 (08:13):
By default, Docker uses logging drivers. When your application pushes
a message to standard out, the Docker daemon intercepts that stream.
It completely decouples the logging mechanism from your application, so
the app doesn't have to manage it exactly. The default driver,
usually the json-file driver, formats that stream and stores

(08:33):
it at the host level, completely safely outside the ephemeral container.

Speaker 1 (08:38):
Your app is just shouting into the void, completely unaware
of what happens next, and Docker is catching every word.

Speaker 2 (08:44):
And that separation of concerns is incredibly powerful. At the
system level, you can configure the Docker daemon to automatically
rotate those logs when they hit a certain size. Oh nice.
Or you can change the logging driver entirely to ship
those streams directly to a centralized server like ELK or
Datadog without changing a single line of your application code.
Your app doesn't need to know how to connect to

(09:06):
Datadog. It just prints to stdout and the infrastructure handles the rest.

Speaker 1 (09:10):
But wait, if we are shipping logs to an external
service like Datadog, the environment needs an API key to authenticate,
and if we are keeping our application clean and decoupled,
we certainly aren't hardcoding that API key into the source code.

Speaker 2 (09:25):
Definitely not.

Speaker 1 (09:26):
So where does that configuration live?

Speaker 2 (09:28):
That brings us directly to configuration management. The materials strongly
mandate extracting all configuration out of the application's internal files
and moving them strictly into environment variables.

Speaker 1 (09:38):
Even if the framework has its own sophisticated config files.

Speaker 2 (09:42):
Especially then, hard coded configuration files tie your application to
a specific environment. If your database URL is in a
config file, moving from a local development laptop to a
staging server means you either have to rewrite the code
or build complex logic to detect where the app is.

Speaker 1 (09:57):
Running, which introduces massive risk. You never want a scenario
where a production database password accidentally gets committed to
a Git repository because a developer just forgot to swap
a config file back.

Speaker 2 (10:09):
By relying exclusively on environment variables, the application just boots
up and queries its surrounding operating system for the current variables.
It asks the OS what is the database URL?

Speaker 1 (10:21):
Right?

Speaker 2 (10:21):
This means you can use the exact same immutable Docker
image in development, staging, and production. The only thing that
changes is the .env file you feed it at runtime.
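In docker-compose terms, that per-environment swap is a one-line env_file mapping; the image name here is illustrative:

```yaml
services:
  web:
    image: myapp:latest   # the same immutable image in every environment
    env_file:
      - .env              # only this file differs between dev, staging, prod
```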

Speaker 1 (10:31):
I want to dig into the mechanics of that in
the context of Docker. There is a big distinction between
ARG and ENV. How does Docker compose actually inject these
variables so the application can read them.

Speaker 2 (10:43):
It's a really critical distinction. ARG variables are only available
during the actual image build process. Say, if you need
a token to download a private code package while running
the Dockerfile.

Speaker 1 (10:53):
Got it.

Speaker 2 (10:54):
They do not persist into the final running container. ENV variables,
on the other hand, are strictly for runtime. Okay, so
that's the live app exactly. When you use a Docker
composed dot mlo file, you can map your local end
file to the service. When Docker spins up the container,
it takes those values and injects them directly into the
isolated process tree of the container before it even executes

(11:16):
your application's entry point.
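A tiny Dockerfile showing that split, where BUILD_TOKEN is a hypothetical build-time secret:

```
FROM python:3.12-slim

# ARG: exists only while `docker build` runs; gone in the final container
ARG BUILD_TOKEN

# ENV: persists into the running container and is visible to the app
ENV APP_ENV=production

COPY app.py .
CMD ["python", "app.py"]
```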

Speaker 1 (11:18):
So the framework boots up, checks the system environment just
like it normally would, and the variables are magically there,
provided by the Compose orchestrator.

Speaker 2 (11:26):
Exactly, the app is entirely shaped by the environment it's
placed in, which leads us to the final and perhaps
most counterintuitive architectural rule, which is you must design your
web applications to be as stupid as possible.

Speaker 1 (11:39):
Here's where it gets really interesting. When the sources say stupid,
they are talking specifically about the application's memory. They state
that the application itself should have the memory of a goldfish.

Speaker 2 (11:50):
Yes, the technical term is statelessness. A truly scalable web
application holds no state, no local cache, and no memory
of past events from one HTTP request to the next.

Speaker 1 (12:01):
I understand the theory because of the ephemeral nature of containers.
If the app restarts, local memory is wiped. But how
does that work in reality? For a user?

Speaker 2 (12:11):
It can be tricky to conceptualize, right, because if I
log into an e commerce platform, add an item to my.

Speaker 1 (12:16):
Cart, and click to the checkout page, the application has
to remember who I am. If the app is a goldfish,
how does it know I'm still logged in?

Speaker 2 (12:25):
Well, traditionally, older frameworks solve this by keeping a session
file in the server's local memory or disk. But in
a containerized world, if you scale your web service to
run across five identical containers behind a load balancer, your
first click might hit container A, which saves your log in.
Your second click hits container B, which has no idea

(12:46):
who you are, and you are immediately logged out.

Speaker 1 (12:48):
So we can't use the app's internal memory. How do
we solve the session problem while keeping the app stateless?

Speaker 2 (12:54):
The sources provide two highly robust solutions. The first is
pushing the memory entirely to the client side using cryptographically
signed cookies.

Speaker 1 (13:02):
The user's web browser does the heavy lifting.

Speaker 2 (13:05):
Exactly when you log in. The server validates your credentials,
creates a data payload, cryptographically signs it so it can't
be tampered with, and hands it.

Speaker 1 (13:14):
Back to your browser like an ID card.

Speaker 2 (13:16):
Perfect analogy. On your next click, your browser hands that
signed cookie back to the server. The server verifies the signature,
reads the data, and processes the request. The server itself
didn't remember you, It just read the ID card you
handed it.

Speaker 1 (13:30):
That is incredibly elegant, but sometimes for security compliance or
architecture reasons, you have to store session data on the
server side. You can't trust the client with the payload.
What do we do then?

Speaker 2 (13:42):
If you must save sessions on the server side, you
decouple that memory into a dedicated external tool. The materials
specifically highlight Redis as the industry standard session back end.

Speaker 1 (13:53):
Reddis is an in memory data store. But wait, if
we offload all our sessions to a Reddus container, haven't
we move the single point of failure?

Speaker 2 (14:01):
It sounds like it, doesn't it.

Speaker 1 (14:03):
Yeah, if that Redis container crashes, doesn't everyone get logged out anyway?
We're back to square

Speaker 2 (14:07):
One. If we connect this to the bigger picture, the
key difference is that Redis is purpose-built to handle
that exact risk, whereas a standard web application container isn't.
Redis isn't just a fragile box of memory.

Speaker 1 (14:19):
Oh, it's more durable, much more.

Speaker 2 (14:20):
You can configure it with persistence strategies like writing an
append-only file, or taking point-in-time snapshots
mapped to a persistent host volume. Or for true high availability,
you deploy a Redis cluster. If one node fails, another
takes over instantly.

Speaker 1 (14:36):
Ah. So we delegate the responsibility of memory to a
tool that specializes entirely in resilient memory. The web application
container remains a goldfish, exactly. Every time a user makes
a request, the web app reaches over to the Redis
cluster over the internal network, grabs the session data, processes
the request, and forgets it again.

Speaker 2 (14:56):
Spot on. It makes your application tier completely disposable
and infinitely scalable. If traffic spikes, you can spin
up fifty clones of your web container. They instantly tap
into the centralized Datadog logs, read the same environment variables,
and hit the same Redis cluster.

Speaker 1 (15:11):
Wow, let's zoom out and recap the journey we just took.
To truly Dockerize an application, you start with a foundation,
those four essential files, heavily utilizing a .dockerignore
to keep builds fast.

Speaker 2 (15:23):
Yep.

Speaker 1 (15:23):
You lean on pre built starter projects to handle the
infrastructure quirks. Then you rewire the app's behavior, pushing logs
strictly to standard out, pulling all configuration into runtime environment variables,
and relying on signed cookies or Redis to keep the
application entirely stateless.

Speaker 2 (15:40):
Following those constraints forces you to build highly maintainable systems.
And this methodology isn't just abstract theory. It maps directly
to a formal philosophy. That we want to challenge you
with today.

Speaker 1 (15:51):
The sources point to the twelve-factor app methodology. For
those who aren't familiar, it's a famous set of principles
for building software-as-a-service apps. It isn't tied
to Docker specifically, but Docker perfectly enforces its rules.

Speaker 2 (16:02):
Exactly the concepts we discussed today, stateless processes, environment variables,
treating logs as event streams. They are pulled straight from
the twelve factor manifesto, but there are others that are
just as crucial, like port binding.

Speaker 1 (16:16):
Port binding is a great example. A twelve factor web
app doesn't rely on a hidden web server like Apache
injected into its environment. The web app exports HTTP as
a service by binding directly to a port specified by
the environment. It doesn't care if it's running on port eighty
or port five thousand. The infrastructure tells it where to listen.
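In code, that is nothing more than reading the port from the environment; a Python sketch, where the 8000 fallback is an arbitrary local default:

```python
import os

def resolve_port(env=os.environ) -> int:
    # The infrastructure decides the port; the app just binds to whatever it gets.
    return int(env.get("PORT", "8000"))
```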

Speaker 2 (16:36):
Another factor is disposability. Your application needs to maximize robustness
with fast startup times and graceful shutdowns. When Docker sends
a termination signal to your container, your app shouldn't just instantly
die and drop user connections.

Speaker 1 (16:51):
It needs to wrap things up right.

Speaker 2 (16:52):
It needs to finish processing current requests and exit cleanly.

Speaker 1 (16:55):
This brings us to your actionable challenge for the day.
Whether you are a student exploring architecture for the
first time, or a seasoned tech professional dealing with legacy code,
take fifteen minutes today and search for the twelve-factor app
guide online.

Speaker 2 (17:08):
It's a fantastic resource.

Speaker 1 (17:10):
Read through the twelve principles and audit your current projects.
Where are you storing state and memory? Where are you
hard coding a configuration file? It is a brilliant diagnostic
tool to elevate your engineering.

Speaker 2 (17:21):
It shifts your mindset from just writing code that successfully
compiles to designing resilient ecosystems that can survive hardware failures
and massive scaling.

Speaker 1 (17:32):
Speaking of shifting mindsets, I want to leave you with
one final thought. We started by talking about the smart
home adapting to the new environment. But if the ultimate
goal of modern cloud native development is to make our
applications completely stupid, intentionally right, intentionally stripping away their memory,
their internal configuration logic, their log management, and relying entirely

(17:55):
on the surrounding infrastructure to do all the heavy.

Speaker 2 (17:58):
Lifting, it creates a fascinating philosophical question about our
priorities as developers, exactly.

Speaker 1 (18:04):
It makes you wonder, are we rapidly moving toward a
future where the actual application code we write matters far
less than the ecosystem we build to surround it.

Speaker 2 (18:12):
That's a great point.

Speaker 1 (18:13):
Next time you sit down to write a brilliant piece
of business logic, ask yourself, is the code the masterpiece
or is the masterpiece the invisible automated smart home you're
building to keep that code alive. Think about it.

Speaker 2 (18:25):
That is the defining question of modern architecture.

Speaker 1 (18:28):
Until next time, keep diving deep.
© 2026 iHeartMedia, Inc.
