
October 13, 2025 • 24 mins
A practical resource for developing, deploying, and scaling containerized applications. It covers fundamental concepts like containerization versus virtual machines, Kubernetes architecture, and various Azure container services, alongside advanced topics such as serverless Kubernetes, Windows containers, and integrating with Azure DevOps for continuous integration and deployment. The text also includes hands-on instructions for managing, monitoring, and troubleshooting AKS clusters, making it an essential reference for developers and architects.

You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cyber_security_summary

Get the Book now from Amazon:
https://www.amazon.com/Mastering-Azure-Kubernetes-Service-Containerized-ebook/dp/B095YVK18V?&linkCode=ll1&tag=cvthunderx-20&linkId=c932246431a8880480a86d18ee4df6b9&language=en_US&ref_=as_li_ss_tl

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Ever feel like you're trying to manage, I don't know, a huge, complex project, maybe like building something intricate, and you've got hundreds of moving parts? It's easy to get overwhelmed, right? Well, today we're diving into a really powerful tool that helps manage exactly that kind of complexity, but specifically in the cloud: Azure Kubernetes Service. You'll often hear it called AKS. Our

(00:22):
mission, really, for this deep dive is to pull out the most crucial insights from the book Mastering Azure Kubernetes Service (AKS) by Abhishek Mishra. We're going to try and demystify things a bit. We'll look at how software components get packaged up neatly, and then how they're intelligently managed and scaled by this super smart system in the Azure cloud. Think of this as your fast track to understanding what makes

(00:44):
AKS tick and why it's so important.

Speaker 2 (00:46):
Yeah, that's a great way to put it. We'll explore the foundations, you know, the basic ideas, understand the core architecture, and really uncover how AKS tackles those big real-world challenges, things like scalability, keeping things reliable, deploying quickly, especially when you've got these complex applications. We want to get to the heart of why this shift to containers and

(01:07):
orchestration is happening, and crucially, what it actually enables for
you today. This deep dive, it's basically your roadmap to
getting up to speed quickly on a topic that's pretty
critical right now.

Speaker 1 (01:19):
Okay, let's unpack this journey then, starting right at the beginning, before we even get to Kubernetes, this orchestrator. Yeah, we need to understand the basic building blocks: containers. What exactly was the problem they were designed to solve? Why are they everywhere now?

Speaker 2 (01:35):
Right. Historically, you had this constant headache: developers would build something, test it, and swear, it works on my machine.

Speaker 1 (01:40):
Oh, I've heard that one, and lived it, probably.

Speaker 2 (01:44):
Exactly. Then it moves to a test server, or worse, production, and suddenly it breaks. The environment's just slightly different, maybe a library version is off, or a config file isn't quite right. It was a constant source of friction.

Speaker 1 (01:58):
Yeah, that sounds painfully familiar.

Speaker 2 (02:00):
So initially, applications ran on physical servers, and often those servers weren't even fully used. Then came virtual machines, VMs. Big improvement. VMs virtualized the hardware, letting you run multiple isolated operating systems on one physical box. But, and this is a key thing, each VM still needed its own complete guest OS.

Speaker 1 (02:20):
Okay, so lots of duplication, like extra baggage for every app.

Speaker 2 (02:23):
Precisely. Think of it like having, I don't know, a separate boiler room for every single apartment in a building. It works, but it's inefficient. Containers emerged as a much lighter alternative. Instead of virtualizing the hardware, they use operating-system-level virtualization. So a single host OS runs many isolated containers, and each container packages just the application itself and all its dependencies, the code, the runtime, tools, libraries,

(02:45):
everything it needs, all bundled together, neat and tidy.

Speaker 1 (02:48):
Okay. So it's more like those apartments sharing the building's main boiler, but each apartment is still totally self-contained and isolated.

Speaker 2 (02:55):
That's a perfect analogy. And this leads to some really significant benefits.

Speaker 1 (02:59):
If I'm, say, a developer or leading a tech team, what's the real impact here beyond just saving server space? What's the deeper shift containers brought?

Speaker 2 (03:09):
Well, the big insight isn't just about packing things tighter, it's about killing that works on my machine problem. That consistency, the promise of build once, run anywhere, fundamentally changes the dynamic. It smooths out the whole process between developers writing code and operations running it. You get faster releases because everyone trusts that what worked in development will work

(03:30):
in production. That confidence is huge, right?

Speaker 1 (03:33):
Less finger pointing, more shipping code.

Speaker 2 (03:35):
Exactly. So you get consistency first and foremost, predictable behavior everywhere. They're lightweight and efficient, much smaller than VMs, and they start up way faster. You can run more apps on the same hardware, which saves money, real money. Isolation is key too: apps don't interfere with each other or the host OS, so better security, fewer conflicts. Portability: they're easy to move around, your laptop,

(03:58):
on-prem servers, any cloud. And rapid creation and scaling: they start and stop in seconds. That's crucial for scaling quickly to meet demand, keeping things highly available.

Speaker 1 (04:08):
That consistency point really resonates, having that single source of truth for the environment. Yeah, wow, it sounds revolutionary for individual apps. Like you said, each plant in its perfect pot. But okay, what happens when your neat little garden turns into a massive commercial farm, when you've got hundreds of these containerized apps needing to work together? Don't the benefits for one become a management nightmare for many? This is

(04:29):
where things get really, really interesting, I think. Once you start using containers seriously, especially for modern apps like microservices, you get a whole new set of problems, right? Doing it manually just doesn't scale.

Speaker 2 (04:40):
Absolutely. Imagine that huge farm, hundreds of greenhouses. Now you need to ensure every greenhouse stays at the right temperature. If a plant dies, replace it immediately. If demand for tomatoes spikes, you need more tomato plants, like, now. Some plants need to connect, maybe share nutrients; others must be kept totally separate.
And you need a way to monitor the health of

(05:01):
every single plant all the time.

Speaker 1 (05:03):
That's, yeah, that's a lot. You can't just wander around checking each one.

Speaker 2 (05:06):
No way, it grows exponentially. You'd need an army or, like you said earlier, a super smart automated system.

Speaker 1 (05:13):
So what is that automated system for containers?

Speaker 2 (05:16):
That's Kubernetes. It's the container orchestrator, originally developed by Google, open sourced back in twenty fifteen, and now managed by the CNCF, the Cloud Native Computing Foundation. Basically, Kubernetes automates the deployment, the ongoing management, and the scaling of all those containerized applications. It's built precisely to handle that complexity we just talked about; it takes away the manual drudgery.

Speaker 1 (05:36):
Okay. So Kubernetes is the brain, the master control system for the container farm. How does it actually work, in simple terms?

Speaker 2 (05:44):
It groups your containers into logical units called pods. A
pod is the smallest deployable thing. Think of it as
that tiny greenhouse holding one or more closely related containers,
maybe an app and its logging sidecar. These pods then
run on nodes, which are your worker machines, the actual
servers vms, your plots of land, and the whole show

(06:05):
is run by the control plane. That's the brain. It
manages the nodes and pods, handles things like self healing,
restarting failed containers, automatically balancing incoming traffic, allocating resources efficiently
across the whole cluster.
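
To make the pod idea concrete, here's a minimal sketch of a pod manifest, the kind of YAML you'd apply to any Kubernetes cluster, AKS included; the names and images are illustrative, not from the book:

```yaml
# A minimal pod: one app container plus a logging sidecar (illustrative names/images)
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
  labels:
    app: web
spec:
  containers:
  - name: web                 # the main application container
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: log-sidecar         # a closely related helper sharing the pod
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]   # stand-in for a real log shipper
```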

Speaker 1 (06:18):
Right. Setting up and managing that whole Kubernetes system, especially that control plane, sounds like a massive job in itself. Powerful, yes, but maybe too complex if you just want to run your apps. Is there an easier way, a shortcut?

Speaker 2 (06:31):
Absolutely, and that's exactly where managed Kubernetes services come in. The prime example we're talking about today is Azure Kubernetes Service, AKS. With AKS, Microsoft Azure handles the creation, the operation, the scaling, the patching, all the really complex bits of the control plane infrastructure for you. They abstract it away.

Speaker 1 (06:47):
Okay. So Azure manages the brain, the hard part.

Speaker 2 (06:50):
Essentially, yes. You get the power of Kubernetes orchestration without needing a dedicated team just to keep the orchestrator itself running smoothly.

Speaker 1 (06:58):
So if I choose AKS, what do I need to
focus on? What's my job versus Azure's job, and what's
the real strategic payoff?

Speaker 2 (07:05):
You focus on your applications: building them, containerizing them, deploying them. You focus on your plants, your unique value. AKS takes care of provisioning the nodes, the land, and ensures that control plane is always available and reliable, handling scaling, handling failures. It's a true platform as a service, a PaaS, but specifically for Kubernetes. It lets you move faster.

Speaker 1 (07:28):
Move faster? Yeah, that's always good. What are the specific benefits?

Speaker 2 (07:30):
Well, first, reduced infrastructure overhead. You can spin up an AKS cluster in minutes, not days or weeks. Your engineers aren't wrestling with Kubernetes setup; they're building features. It frees up your best people.

Speaker 1 (07:41):
That's huge. Less time on plumbing, more on innovation.

Speaker 2 (07:46):
Exactly. Then there's enhanced operational efficiency. Monitoring is easier, you get optimization tips through Azure Advisor, node management is simpler, upgrades are smoother. Your farm just runs better with less effort. Flexibility is another big one: run almost anything, .NET, Java, Python, Node.js, in Linux containers or Windows containers. That opens up

(08:06):
modernization paths for lots of different apps. Rapid development and deployment: it plugs right into things like Azure DevOps for CI/CD pipelines and infrastructure-as-code tools, which makes automation much easier. And crucially, enterprise-grade security: integration with Azure Active Directory for access control, threat detection with Azure Security Center, network isolation options like

(08:26):
Azure Private Link, applying security policies. It's built with security in mind.

Speaker 1 (08:30):
Okay, that reduced overhead and faster innovation cycle sounds incredibly compelling. But let me play devil's advocate for a second. By letting Azure manage the control plane, do I lose critical control? Is there a downside or trade-off compared to running my own Kubernetes cluster from scratch?

Speaker 2 (08:44):
That's a really important question, and yes, there's a trade-off. With a fully self-managed Kubernetes setup, you control everything: the OS on the master nodes, the exact component versions, every single knob. But that control comes with immense responsibility. You have to patch it, upgrade it, secure it, and ensure it's highly available yourself. That needs dedicated expertise, a

(09:06):
specialized team. It's a significant operational burden.

Speaker 1 (09:09):
Right. You own the whole stack, warts and all.

Speaker 2 (09:11):
Exactly. With AKS, you trade some of that deep infrastructure-level control for operational simplicity and reliability. Azure handles the control plane's health, patching, and availability. For most organizations, especially those whose core business isn't running Kubernetes infrastructure, that trade-off is incredibly worthwhile. The speed, the reduced burden, leveraging Azure's

(09:33):
built-in security and monitoring, it usually far outweighs the slight loss of granular control. You focus on your apps, not the orchestrator's plumbing.

Speaker 1 (09:40):
That makes total sense. It's like choosing whether to build your own house from the foundation up or buy a well-managed condo where the building maintenance is handled. Both work, but one lets you focus on decorating your living room faster. So, okay, we've chosen AKS, our managed farm. Now we're inside. What are the essential bits we need to grasp to

(10:00):
actually get our applications running well? How do things like networking work within AKS? Let's get into those core concepts.

Speaker 2 (10:08):
All right, let's break down the key components inside your
AKS cluster. First up, networking your apps. Your pods need
to talk for internal communications securely within the cluster. You
use cluster ip. Think of it as an internal phone line.
Pods can reach each other, but nothing outside can directly
call in.

Speaker 1 (10:24):
Okay, private communication. What about reaching the outside world, or letting users reach my app?

Speaker 2 (10:29):
Right. For external access, you've got a few options. NodePort is a simple way, exposing your service on a specific port on each node. But more commonly you'll use a LoadBalancer. This distributes incoming traffic across multiple pods running your service, usually on standard ports like 80 or 443. It's like having a receptionist directing visitors to the right office,

(10:50):
making sure no single office gets overloaded.
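
As a sketch, a single Kubernetes Service manifest covers both cases; the names are hypothetical, and on AKS `type: LoadBalancer` provisions an Azure load balancer, while the default `type: ClusterIP` keeps the service internal-only:

```yaml
# Expose pods labeled app: web (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer   # change to ClusterIP for internal-only access
  selector:
    app: web           # routes traffic to pods carrying this label
  ports:
  - port: 80           # port the service listens on
    targetPort: 80     # port the container serves on
```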

Speaker 1 (10:52):
And you mentioned something more sophisticated, like directing traffic based
on the URL.

Speaker 2 (10:56):
Yeah, for that more intelligent routing, maybe sending API requests to one set of pods and web requests to another, you use an ingress controller. Think of it as a smart mail sorter, looking at the address, the URL path or host name, to route traffic precisely where it needs to go. Common ones are NGINX Ingress or Azure's own Application Gateway Ingress Controller.
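
A rough sketch of that smart mail sorter as a manifest; the hosts, paths, and service names are made up, and `ingressClassName: nginx` assumes an NGINX ingress controller is installed in the cluster:

```yaml
# Route traffic by URL path (hypothetical hosts and services)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /api                # API traffic -> the API pods
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /                   # everything else -> the web front end
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```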

Speaker 1 (11:17):
That mail sorter analogy helps. Okay, so that's traffic in and around the cluster. What about how the cluster itself plugs into the bigger Azure network? Does that choice matter much?

Speaker 2 (11:25):
It matters quite a bit, actually. When you set up AKS, you pick a networking model. The simpler one is basic networking, kubenet. Here AKS manages a virtual network largely behind the scenes, and pods get IPs from a separate address space. It's easier to start with.

Speaker 1 (11:42):
Okay, simple start. What's the alternative?

Speaker 2 (11:44):
The alternative is advanced networking, Azure CNI. CNI stands for Container Network Interface. With this, your pods get IP addresses directly from your own Azure virtual network, the same VNet your other Azure resources might be in. This gives you much more control. You can apply network policies directly to pods, integrate tightly with firewalls, and allow direct communication

(12:06):
between pods and other VNet resources. It's usually the way to go for enterprise setups or anything needing complex network rules.
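
One concrete payoff of Azure CNI, paired with a network policy engine such as Azure Network Policy or Calico, is pod-level firewalling. A minimal sketch with made-up labels:

```yaml
# Only pods labeled app: web may reach the database pods on 5432 (illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web-only
spec:
  podSelector:
    matchLabels:
      app: database        # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web         # only these pods may connect
    ports:
    - protocol: TCP
      port: 5432
```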

Speaker 1 (12:12):
Got it: basic for ease, advanced for control and integration. Okay, next challenge: data. Containers disappear, pods can be replaced. How do we handle storage for apps that need to keep their data, like a database running in a container?

Speaker 2 (12:26):
Yeah, that's critical. Containers are ephemeral, like you said. AKS provides volumes for temporary data within a pod's life, things like scratch space (emptyDir) or mounted configuration (ConfigMaps and Secrets). But if the pod dies, that data is gone.

Speaker 1 (12:42):
So we need something durable, like an external hard drive.

Speaker 2 (12:45):
Exactly. For data that needs to survive pod restarts or moves, you use persistent volumes, PVs. These represent actual storage resources outside the pod. Typically they map to Azure managed disks (Azure Disks) for storage needed by a single pod, or shared file storage (Azure Files) if multiple pods need to access the same data concurrently.

Speaker 1 (13:05):
Okay, so PVs are the durable storage. How do my applications actually request the storage they need?

Speaker 2 (13:10):
They do that using persistent volume claims PVCs. Think of
a PVC as a request the POD says I need
fifty gigabytes of standard disk storage. Kubernetes then tries to
match that request to an available PV and to make
this dynamic and defined types of storage available, we use
storage classes. A storage class defines what kind of Azure

(13:31):
storage backs the PV like standard HDD, Premium SSD, or
Azure files, and its reclamation policy. So the flow is
admin defined storage classes types of storage. POD request storage
via PVC. Kubernetes dynamically provisions a PV based on a
matching storage class and binds it to the PVC for
the POD to use. It's a neat, automated way to
manage persistent data right a.
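
Sketched as manifests, with hypothetical names, and using the Azure Disk CSI provisioner commonly seen on AKS:

```yaml
# The catalog entry: a storage class backed by premium SSD managed disks
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: disk.csi.azure.com    # Azure Disk CSI driver
parameters:
  skuName: Premium_LRS             # premium SSD-backed managed disk
reclaimPolicy: Delete              # delete the disk when the claim is removed
---
# The order form: a claim the pod references by name
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]   # single-pod access, typical for Azure Disks
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 50Gi                # the "fifty gigabytes" request from above
```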

Speaker 1 (13:52):
Right, a structured request system. Storage classes define the catalog; PVCs are the order forms. Makes sense. Okay, shifting to another crucial topic: scaling. We know AKS helps us scale, but how exactly? What are the main mechanisms for handling more load? And what about those sudden, unpredictable bursts?

Speaker 2 (14:09):
Great question. AKS offers several ways to handle scaling. You can always do manual scaling: just go in and change the number of pods or nodes yourself. Simple, but reactive.

Speaker 1 (14:17):
Right, you have to guess or wait until things break.
Not ideal.

Speaker 2 (14:20):
Not ideal for dynamic loads, no. So that's where automatic scaling comes in. There are two main types working together. First, the cluster autoscaler. This watches for pods that can't be scheduled because there aren't enough resources, CPU or memory, on the existing nodes. If it sees that, it automatically adds more land, more nodes, to your cluster. It scales the infrastructure.

Speaker 1 (14:41):
Okay, it adds more servers if needed. What about scaling the apps themselves?

Speaker 2 (14:45):
That's the job of the horizontal pod autoscaler, the HPA. The HPA monitors metrics like CPU utilization or memory usage for your pods. If the average CPU goes above, say, seventy percent, the HPA automatically increases the number of pod replicas for that application. When load drops, it scales back down.

Speaker 1 (15:04):
So the cluster autoscaler adds nodes, and the HPA adds pods. They work together.

Speaker 2 (15:08):
Precisely. The HPA needs resources; the cluster autoscaler provides them if necessary.
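
The seventy percent example translates into an HPA manifest roughly like this; the deployment name is illustrative:

```yaml
# Scale the web deployment between 2 and 10 replicas on average CPU
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # the deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add replicas above ~70% average CPU
```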

Speaker 1 (15:12):
But adding a whole new node, a VM, can take minutes. What if I get a massive, instant spike in traffic, like a viral-marketing-campaign-level spike? Is there anything faster?

Speaker 2 (15:22):
Yes, and this is a really powerful feature for that kind of burst scenario. It involves using serverless nodes, which leverage Azure Container Instances, ACI, through something called the virtual kubelet. The virtual kubelet lets AKS schedule pods onto ACI. ACI provisions containers in seconds, way faster than spinning up a full VM node. So for sudden bursts, AKS can instantly schedule extra pods

(15:46):
onto ACI via the virtual kubelet.
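
Bursting onto the virtual node is opt-in per workload. Following the common AKS virtual-nodes pattern, the pod spec carries a node selector and tolerations roughly like this; the exact labels and taints can vary by setup:

```yaml
# Fragment of a pod/deployment spec: allow scheduling onto the ACI-backed virtual node
spec:
  nodeSelector:
    kubernetes.io/os: linux
    type: virtual-kubelet            # target the virtual node
  tolerations:
  - key: virtual-kubelet.io/provider # tolerate the virtual node's taint
    operator: Exists
  - key: azure.com/aci
    effect: NoSchedule
```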

Speaker 1 (15:48):
Wow, seconds versus minutes. That's a huge difference when you're under heavy load.

Speaker 2 (15:51):
It really is. We actually used this during a big launch. We had an unexpected flood of traffic from social media. The virtual kubelet just kicked in, scaled up pods onto ACI almost instantly, and handled the burst without a hitch. If we'd relied only on traditional node scaling, we might have had issues.

Speaker 1 (16:06):
That's a brilliant real-world save. So you only pay for ACI when it's actually running those burst pods?

Speaker 2 (16:13):
Exactly. You pay for the execution time. It provides that rapid elasticity without paying for idle capacity. And this serverless, event-driven scaling concept is also how you run things like containerized Azure Functions on AKS, using KEDA, Kubernetes Event-Driven Autoscaling. KEDA can scale your function pods down to zero when idle, and then scale them out rapidly based

(16:35):
on event triggers like queue messages or Kafka topics. Super cost effective for certain workloads.
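
A sketch of a KEDA ScaledObject tying worker replicas to Azure Storage queue depth; it assumes KEDA is installed in the cluster, and the queue, deployment, and connection names are made up:

```yaml
# Scale a worker deployment on queue length, down to zero when idle
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler
spec:
  scaleTargetRef:
    name: orders-worker              # hypothetical deployment doing the work
  minReplicaCount: 0                 # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
  - type: azure-queue
    metadata:
      queueName: orders                        # hypothetical queue
      queueLength: "5"                         # target ~5 messages per replica
      connectionFromEnv: AzureWebJobsStorage   # env var holding the connection string
```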

Speaker 1 (16:40):
Okay, that's incredibly flexible scaling. So we've got networking, storage, scaling. How do we actually tell Kubernetes how we want our applications managed? What are the different ways we can instruct it, the deployment patterns?

Speaker 2 (16:52):
We communicate our intentions to Kubernetes using declarative configuration files, usually written in YAML. These manifests describe the desired state: which containers, how many replicas, network settings, et cetera. Kubernetes then works constantly to make the actual state match that desired state.

Speaker 1 (17:07):
So YAML files are the blueprints. What are the common types of blueprints, or patterns?

Speaker 2 (17:11):
Well, for typical stateless applications like web front ends or APIs, where each request is independent, the most common pattern is using deployments, which manage replica sets underneath. A replica set simply ensures that a specified number of identical pod replicas are always running. If a pod dies, the replica set controller notices and creates a new one. It provides basic high availability and scaling for stateless apps.
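
As a minimal blueprint, with illustrative names:

```yaml
# Three interchangeable replicas of a stateless web app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # the replica set keeps three pods running
  selector:
    matchLabels:
      app: web
  template:                    # the pod template, stamped out per replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myregistry.azurecr.io/web:1.0   # hypothetical image in ACR
        ports:
        - containerPort: 80
```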

Speaker 1 (17:34):
Think interchangeable workers. Okay, standard pattern for stateless stuff. What about apps that do have state, or need special handling, like databases?

Speaker 2 (17:43):
Right. For applications that need stable, unique identities or persistent storage tied to each instance, like databases, message queues, or anything with a leader-follower setup, you use stateful sets. Stateful sets provide guarantees about the ordering and uniqueness of pods. Each pod gets a stable network identifier, like db-0, db-1, and its own persistent storage volume that

(18:04):
follows it, even if the pod gets rescheduled to a different node. It's crucial for apps where identity and data persistence per instance matter.
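
Sketched as a manifest (names hypothetical), the parts that provide stable identity and per-pod storage look like this:

```yaml
# Stable pod names (db-0, db-1) and a dedicated volume per pod
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless       # headless service that gives pods stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16       # any stateful workload works here
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PVC per pod, following it across reschedules
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi
```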

Speaker 1 (18:10):
So predictable names and storage for stateful apps. Got it. Any other key patterns?

Speaker 2 (18:15):
The other main one is the daemon set. A daemon set ensures that a copy of a specific pod runs on all nodes in the cluster, or on a specific subset of nodes you define. This is typically used for cluster-level agents or services that need to be present everywhere. Think logging collectors, monitoring agents, node problem detectors. They're like the essential maintenance crew that needs to be on every

(18:37):
part of the farm.
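
In manifest form, a daemon set looks like a deployment without a replica count, since one copy lands on every matching node; the log-collector image here is illustrative:

```yaml
# One log-collector pod per node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: collector
        image: fluent/fluent-bit:2.2   # a common log-shipping agent
        volumeMounts:
        - name: varlog
          mountPath: /var/log          # read the node's logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```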

Speaker 1 (18:38):
Okay: replica sets for stateless, stateful sets for stateful, daemon sets for everywhere. That covers the main ways to deploy. So we've got our apps running, scaled, and managed using these patterns. We're feeling pretty good. But what about day-to-day reality? How do we operationalize this, integrate it into our development workflows, keep it healthy? What else do we need to truly

(18:59):
master AKS?

Speaker 2 (19:00):
That's the crucial next step, operationalizing it. First, DevOps integration. Automation is absolutely key. You really want to leverage Azure DevOps Pipelines or similar tools for CI/CD, continuous integration and continuous delivery. These pipelines automate the whole flow: building your application code, creating the container image, pushing that image to a secure private registry like Azure Container Registry, ACR.

Speaker 1 (19:22):
ACR is like our private library for container images.

Speaker 2 (19:25):
Right, exactly, your secure private Docker registry in Azure. Then the pipeline continues, deploying your application to AKS, usually by applying those YAML manifests or using Helm charts, which are like packages of Kubernetes resources. Automating this saves massive amounts of manual effort, reduces errors, speeds up releases dramatically, and lets you embed quality gates like automated testing right into

(19:48):
the process. It's the automated assembly line.
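
A pared-down sketch of such a pipeline in Azure Pipelines YAML; the service connections, repository, and manifest paths are placeholders, and task versions and inputs vary by setup:

```yaml
# azure-pipelines.yml: build the image, push it to ACR, deploy to AKS
trigger:
- main
pool:
  vmImage: ubuntu-latest
steps:
- task: Docker@2                              # build and push the container image
  inputs:
    containerRegistry: my-acr-connection      # hypothetical ACR service connection
    repository: web
    command: buildAndPush
    tags: $(Build.BuildId)
- task: KubernetesManifest@1                  # apply manifests to the AKS cluster
  inputs:
    action: deploy
    kubernetesServiceConnection: my-aks-connection   # hypothetical AKS connection
    namespace: default
    manifests: manifests/deployment.yaml
    containers: myregistry.azurecr.io/web:$(Build.BuildId)
```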

Speaker 1 (19:51):
Makes sense. Automation is critical. Now, once it's deployed and running, how do we keep tabs on it, monitor performance, and troubleshoot problems when they inevitably happen?

Speaker 2 (20:00):
For monitoring and troubleshooting, the primary tool in Azure is Azure Monitor for containers. It gives you really deep visibility into your cluster. It collects performance metrics, CPU and memory usage for nodes, pods, and individual containers. It also collects logs from your containers. All this data gets sent to a Log Analytics workspace where you can query it, visualize it,

(20:21):
create dashboards. You can view container logs in real time and set up alerts based on metrics or log entries, like, alert me if CPU usage stays above ninety percent for five minutes, or alert if we see error messages in the logs.

Speaker 1 (20:36):
Can you also see what the Kubernetes system itself is doing, like the control plane logs?

Speaker 2 (20:40):
Yes, you can configure AKS to send control plane logs, things like the kube-apiserver logs, which show API requests, or the scheduler logs, to Log Analytics as well. Same for node-level logs like the kubelet's. This is invaluable for diagnosing deeper cluster issues. It's your central dashboard and investigation toolkit. And one more operational aspect worth mentioning specifically is Windows containers on AKS.

Speaker 1 (21:01):
You mentioned that flexibility earlier.

Speaker 2 (21:02):
Yes, it's a really important capability now. While Linux containers are maybe more common, AKS fully supports running Windows Server containers. This means you can take existing ASP.NET applications or other Windows-based workloads, containerize them using Windows-based images, and run them directly on AKS. You can even have node pools, groups of nodes running Windows

(21:24):
Server, side by side with Linux node pools in the same AKS cluster. This opens up AKS for modernizing a much wider range of legacy applications without needing to completely rewrite them for Linux.
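
Pinning a workload to the right pool is just a selector in the pod spec; a sketch with an illustrative image name:

```yaml
# Fragment of a deployment's pod template: run only on Windows Server nodes
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: legacy-web
    image: myregistry.azurecr.io/aspnet-app:1.0   # hypothetical Windows-based image
```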

Speaker 1 (21:35):
That's a huge deal for organizations with a lot of Windows history. Okay, so AKS is clearly very capable, but Azure has other container services too, right? How does AKS fit into that bigger picture? When would I choose AKS versus, say, ACI or Web App for Containers?

Speaker 2 (21:48):
That's a great question for wrapping your head around the ecosystem. AKS is definitely powerful, but it's not always the only answer. It sits within a broader set of Azure container services. Azure Container Registry, ACR: we talked about this. It's fundamental, your private registry to store and manage container images. You pretty much always need this. Azure Container Instances, ACI: as

(22:11):
we discussed with the virtual kubelet, ACI is perfect for running single containers quickly without managing any underlying VMs or orchestration. Great for simple tasks, batch jobs, or that burst scaling with AKS. It's the simplest, fastest way to run a container.

Speaker 1 (22:25):
Like a quick, disposable container run.

Speaker 2 (22:29):
Exactly. Then there's Azure Web App for Containers. This is a PaaS service specifically for hosting web applications packaged as containers. If you have a standard web app, maybe built with ASP.NET Core, Node.js, or Python, and you just want to deploy it easily without worrying about Kubernetes complexity, this is often a great fit. It handles scaling, load balancing, and patching for you in a very web-app-centric way.

Speaker 1 (22:50):
Simpler for standard web apps, then?

Speaker 2 (22:52):
Generally, yes. And finally, there's Azure Service Fabric. This is another orchestrator, actually used under the hood by many core Azure services. It's very powerful, especially for complex stateful microservices, but it's also more opinionated than Kubernetes. So where does AKS fit? You typically choose AKS when you have complex, multi-container applications, often based on microservices,

(23:15):
where you need the power and flexibility of Kubernetes orchestration, things like complex deployment strategies, fine-grained network control, service discovery, and auto scaling based on various metrics. You want that rich orchestration capability, but you also want Azure to manage the underlying Kubernetes control plane complexity for you. If you're building sophisticated distributed systems with containers, AKS is often the sweet

(23:36):
spot in the Azure ecosystem.

Speaker 1 (23:38):
Okay, that really clarifies the landscape. We've gone from the single container, that basic building block, all the way through Kubernetes orchestration, and landed squarely on how AKS provides that power as a managed service in Azure. It really feels like we've gone from tending one plant to managing a whole high-tech, automated greenhouse operation.

Speaker 2 (23:55):
That's a perfect summary. By using AKS, you really do get those huge advantages: the automated deployments, the smart scaling, solid networking, persistent storage. But crucially, you offload the hardest part, managing the Kubernetes cluster itself, to Azure. The real insight, I think, is that it lets your teams shift their valuable time and energy away from just keeping the

(24:17):
lights on with infrastructure and towards actually building better applications that deliver value. That's the core idea.

Speaker 1 (24:23):
Is incredibly powerful. Isn't it taking away that operational complexity
so innovation can actually happen faster. So as you are listeners,
think about your own projects, your own applications and challenges,
Maybe consider this, how could a managed Kubernetes service like
AKS help you stop wrestling with infrastructure and start truly
mastering your applications. What part of your world could really

(24:44):
take off with this kind of automated, intelligent orchestration. It's
definitely something worth thinking about.