
September 30, 2025 • 19 mins
Explores the evolution of DevOps and its contemporary challenges, particularly the issue of fragmented toolchains and the necessity of standardization. It details how cloud-native and data-centric architectures, including microservices, containers, and Infrastructure as Code (IaC), establish a robust foundation for modern software delivery. A significant portion of the material then focuses on the transformative role of Generative AI, illustrating its application in coding, unit testing, functional testing, IaC, data provisioning, and CI/CD pipeline optimization. Finally, the text envisions a future of Multiagent AI systems and the NoOps paradigm, emphasizing human-AI collaboration and the ongoing strategic role of humans in an increasingly autonomous software development landscape.

You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cyber_security_summary

Get the Book now from Amazon:
https://www.amazon.com/NoOps-Agents-Reinventing-DevOps-Software/dp/B0F9KGYY2Z?&linkCode=ll1&tag=cvthunderx-20&linkId=eab6e1d21a472f3f18e64e711f375458&language=en_US&ref_=as_li_ss_tl

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You ever feel like software development is just this relentless treadmill? You know, new releases, features, patches. It just never stops. Yeah, absolutely. And for so many teams it feels less like build, ship, learn and more like build, ship, burnout.

Speaker 2 (00:15):
That burnout is real.

Speaker 1 (00:16):
But what if there was like a shortcut, a way
to make things smoother, faster, more reliable.

Speaker 2 (00:23):
That's the million dollar question, isn't it.

Speaker 1 (00:25):
Well, today we're diving deep into Roman Verrel's book NoOps: How AI Agents Are Reinventing DevOps and Software. It's pretty insightful stuff. Our mission here is to sort of unpack that journey. How do we get from these, you know, fragmented systems we have today to a future where software almost runs.

Speaker 2 (00:43):
Itself, freeing up people for the really interesting.

Speaker 1 (00:46):
Work, exactly, freeing you up to focus on what only humans can do, which is innovate. So we'll explore why traditional DevOps, well, sometimes it just isn't enough, how standardization kind of lays the groundwork, and then, crucially, how generative AI steps in to revolutionize pretty much everything: coding, infrastructure, the works. It's a big shift, it really is. So okay,

(01:08):
let's unpack this. How can we actually get there. How
does the drudgery melt away?

Speaker 2 (01:13):
The foundation: from DevOps evolution to standardization.

Speaker 1 (01:16):
Okay, before we jump into like reinventing the future, let's
glance back for a second. Remember the old days, the
rigid walls between devs and ops.

Speaker 2 (01:25):
Oh yeah, that throw-it-over-the-wall mentality.

Speaker 1 (01:27):
Real nightmare, right? Yeah, long release cycles, constant finger pointing.

Speaker 2 (01:31):
It really was. Now Agile started breaking down those silos
within development, which was a good start.

Speaker 1 (01:36):
But OPS was still often the bottleneck, definitely.

Speaker 2 (01:39):
And then, you know, DevOps as a term started gaining traction. Patrick Debois, the DevOpsDays conferences.

Speaker 1 (01:45):
And that Flickr talk, ten-plus deploys per day, back in two thousand and nine, right?

Speaker 2 (01:51):
Allspaw and Hammond. That was like a real light bulb moment for a lot of people.

Speaker 1 (01:55):
And then The Phoenix Project came along and boom. It really brought CI/CD, shared responsibility, all that stuff into the mainstream.

Speaker 2 (02:01):
Yeah, collaboration, automation, measurement, learning, those became the core ideas.

Speaker 1 (02:06):
And the results? I mean, look at the DORA reports, elite teams deploying hundreds of times a day.

Speaker 2 (02:11):
It's incredible. Think Amazon, Netflix, It's.

Speaker 1 (02:14):
Not just faster. It's a whole different way of operating,
adapting almost in real time exactly.

Speaker 2 (02:19):
But here's the catch. Despite all that promise, so many
organizations are still struggling to get there.

Speaker 1 (02:25):
Yeah, why is that? If the path is clear, why
are people stuck?

Speaker 2 (02:29):
Well, the book points to a few things, new pressures and emerging challenges, like cultural resistance. That's a big one. Up to forty-five percent of initiatives stall just on culture.

Speaker 1 (02:39):
Wow, nearly half.

Speaker 2 (02:40):
Yeah, and then skill gaps and kind of ironically, this
thing called tool sprawl.

Speaker 1 (02:46):
Okay, tool sprawl, let's dig into that. So if you've
adopted DevOps but you still feel stuck, this sounds like
a big part of the.

Speaker 2 (02:54):
Why. It often is. It's this sort of ironic outcome of DevOps. You want the best tool for each job, makes sense. So you get Git for version control, Jenkins for CI, maybe Terraform for your infrastructure, Splunk for monitoring. All good tools on their own, all great tools. But suddenly you've got twenty, thirty, maybe even more different tools you're trying to juggle.

Speaker 1 (03:14):
Okay, I can see how that gets messy.

Speaker 2 (03:16):
Fast. And the book cites this figure: over fifty percent of enterprises use more than twenty DevOps tools. That creates what Verrel calls.

Speaker 1 (03:24):
A tool tax. Tool tax, meaning?

Speaker 2 (03:27):
Meaning significant license costs. Yeah, but also constant integration headaches
and developers just burning time context switching between all these
different interfaces.

Speaker 1 (03:37):
Right, jumping from one UI to another, trying to remember
how this.

Speaker 2 (03:39):
One works exactly. And then there are the data.

Speaker 1 (03:42):
Silos, so each tool keeps its own data separately pretty much.

Speaker 2 (03:46):
Build logs are over here, test results are there, deployment records somewhere else.

Speaker 1 (03:50):
Entirely, which leads to that visibility gap you mentioned precisely.

Speaker 2 (03:55):
The book says seventy four percent of teams lack that
end to end visibility. So when something breaks, good luck
figuring out what happened quickly.

Speaker 1 (04:03):
Ouch, So incident response times just go through the roof they.

Speaker 2 (04:06):
Do, and it creates these DevOps ironies. Like you tried
to break down silos between devnops, but now you've got
new silos based on who knows which specific tool.

Speaker 1 (04:15):
So more friction, duplicate effort.

Speaker 2 (04:18):
Yeah, that whole choose your own tool culture sounds great
for innovation initially, but it often just spirals into fragmentation
and hidden costs slower time to market, more errors.

Speaker 1 (04:29):
The book had that example, right, the financial services firm.

Speaker 2 (04:32):
Oh yeah, that was a painful one to read about.

Speaker 1 (04:34):
Yeah.

Speaker 2 (04:34):
Eight different CI pipelines, three.

Speaker 1 (04:37):
Code repositories, different logging systems.

Speaker 2 (04:39):
Imagine being an engineer there just trying to track down
which version of a micro service was deployed when something
went wrong. Hours wasted, and.

Speaker 1 (04:47):
It wasn't just the pipelines, right, It went down to
the developers machine.

Speaker 2 (04:50):
Absolutely, multiple IDEs: VS Code, IntelliJ, Eclipse. It fragments the whole developer experience, harder to enforce standards, harder to collaborate.

Speaker 1 (05:00):
And ultimately, you said, it hurts AI.

Speaker 2 (05:02):
Readiness. Critically, that's the connection to the future. Why does all this fragmentation matter so much for AI? Okay, why? Because AI, especially generative AI, thrives on data: consistent, complete, high quality.

Speaker 1 (05:16):
Data right, garbage in garbage.

Speaker 2 (05:17):
Out. Exactly. If your logs, metrics, test results are scattered everywhere in different formats, any AI trying to find anomalies or generate code is basically working with blind.

Speaker 1 (05:28):
Spots, so it just can't see the whole picture.

Speaker 2 (05:30):
It can't. Fragmentation is like the number one enemy of advanced AI in DevOps.

Speaker 1 (05:35):
Okay, so the path forward becomes, well, obvious, I guess: standardization. Standardization.

Speaker 2 (05:42):
But it's not about stifling innovation, right, It's about creating
a unified, repeatable, and data friendly framework like.

Speaker 1 (05:51):
A golden pipeline, a curated set of tools that work together.

Speaker 2 (05:54):
Exactly. That streamlines collaboration, less operational overhead. The book calls it the anti-tool tax. Stronger security, better compliance.

Speaker 1 (06:01):
And crucially getting ready for AI.

Speaker 2 (06:03):
That's the big payoff. You provide that uniform, structured data that these autonomous AI agents need to function effectively. Verrel puts it bluntly: without standardization, AI-driven automation will never reach its full potential.

Speaker 1 (06:16):
So standardization is the blueprint. What about the foundation? The
book talks a lot about Cloud Native. Why is that
so important here?

Speaker 2 (06:22):
Because cloud-native architecture is basically designed for this kind of dynamic, automated world from the get-go. Ah, well, you're using things like microservices, containers (think Docker), dynamic orchestration like Kubernetes, infrastructure as code with tools like Terraform.

Speaker 1 (06:38):
Okay.

Speaker 2 (06:39):
All that enables systems to scale automatically, to self heal
when things go wrong, to spin up temporary environments easily.
It's the technical underpinning you need for AI agents to
really manage things effectively.

Speaker 1 (06:51):
Got it. So it provides the flexibility and resilience AI.

Speaker 2 (06:55):
Needs exactly, and that connects directly to needing data-centric architectures and observability.

Speaker 1 (07:00):
Meaning getting all that data in one place, logs, metrics, traces.

Speaker 2 (07:05):
Yes, and tagging it consistently so you actually have deep visibility,
real time feedback loops. That's the fuel for the AI,
and tools.

Speaker 1 (07:12):
Like Opsera come in here, helping aggregate all that, right?

Speaker 2 (07:15):
The book mentions Opsera specifically because it provides platform-agnostic analytics. It integrates with like over eighty different DevSecOps tools. Wow, eighty plus? Yeah, pulling all that data into unified dashboards so leaders can actually see things like deployment frequency, MTTR, lead time, regardless of the specific tools.

Speaker 1 (07:35):
Underneath. Seeing the whole forest, not just the trees, like you said. Precisely. That retail giant case study really showed the impact, didn't it? Monthly to daily deploys, outages way down.

Speaker 2 (07:46):
And their observability shot up. That immediately paved the way for them to start piloting AI for anomaly detection. Lay the foundation, see the benefits quickly.

Speaker 1 (07:56):
So if we're aiming for this autonomous future, what does good actually look like? Is there a reference architecture?

Speaker 2 (08:02):
Yeah, the book outlines what good looks like. It's a unified architecture covering everything from requirements all the way to monitoring in production.

Speaker 1 (08:09):
And key features are.

Speaker 2 (08:11):
Self service for developers, self healing capabilities, security embedded right
from the start, not bolted on later, and.

Speaker 1 (08:18):
The practical way to get there. That paved road concept exactly.

Speaker 2 (08:21):
Part I offers this practical paved road. It's about deliberately choosing a lean stack, often centered around something like GitHub Enterprise Cloud, maybe with Actions for the pipelines, GitHub Advanced Security (GHAS) for shifting security left, okay, Opsera for the analytics and visibility layer we just talked about, and then standardizing the developer workspace, maybe on something like VS.

Speaker 1 (08:44):
Code, so collapsing maybe dozens of tools down to a core, cohesive.

Speaker 2 (08:49):
Set right, creating that clean telemetry foundation that you need
for autonomous NoOps. It's about intentional simplicity driving complex automation.

Speaker 1 (08:58):
Okay, sounds great in theory, but how do you actually do it? People listening might be thinking, my setup is chaos. Where do I even start?

Speaker 2 (09:06):
Yeah, it can feel overwhelming. The book provides an implementation
guidance playbook which is pretty helpful.

Speaker 1 (09:12):
What are the steps.

Speaker 2 (09:12):
It starts with forming an AI guild tiger team, a dedicated group to lead this. Then baseline your current tool sprawl. You need to know how bad it

Speaker 1 (09:21):
Is first, Okay, measure the mess.

Speaker 2 (09:22):
Measure the mess. Then, importantly, freeze new tool purchases for a while. Stop adding to the problem, makes sense. Publish clear standards for code repositories, tagging, things like that. Then you systematically migrate teams and projects onto the paved road and track progress. Absolutely, track key KPIs: lead time, MTTR, tool count reduction, even license cost savings, and surface all of

(09:46):
that in a tool like Opsera so everyone can see the progress. It's methodical.
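As a concrete illustration of tracking a couple of those KPIs, here's a minimal sketch (not from the book; the record format and field names are made up) that computes average lead time and MTTR from plain deployment and incident records:

```python
# Minimal sketch: computing two of the KPIs mentioned above (lead time, MTTR)
# from simple deployment and incident records. Field names are illustrative.
from datetime import datetime
from statistics import mean

deployments = [
    {"commit_at": "2025-09-01T09:00", "deployed_at": "2025-09-01T13:30"},
    {"commit_at": "2025-09-02T10:00", "deployed_at": "2025-09-02T11:15"},
]
incidents = [
    {"opened_at": "2025-09-03T08:00", "resolved_at": "2025-09-03T09:40"},
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

lead_time_h = mean(hours_between(d["commit_at"], d["deployed_at"]) for d in deployments)
mttr_h = mean(hours_between(i["opened_at"], i["resolved_at"]) for i in incidents)

print(f"Average lead time: {lead_time_h:.1f} h")
print(f"MTTR: {mttr_h:.1f} h")
```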

Speaker 1 (09:50):
Generative AI transforms the software development life cycle.

Speaker 2 (09:53):
Okay, so we've built this solid, standardized runway. The foundation is there. Now the book really takes off into the revolutionary part: generative AI, not just helping operations but actually changing how we write the code itself.

Speaker 1 (10:07):
Right. This is where it gets really exciting. The rise of AI coding assistants like GitHub Copilot is just a massive game changer.

Speaker 2 (10:13):
More than just autocomplete right, oh, way more.

Speaker 1 (10:15):
The book calls it going from autocomplete to intelligent pair programming. It works directly inside VS Code, deeply integrated with GitHub.

Speaker 2 (10:23):
And the productivity gains are real.

Speaker 1 (10:25):
They seem to be. We're talking ten to thirty percent boosts generally, but some studies show developers finishing tasks thirty, even up to forty-seven percent faster. Wow, that's significant. It really is. Think about the example of adding that OAuth to middleware.

Speaker 2 (10:39):
Yeah, thirty-three percent faster with Copilot. Yeah, because it suggested code, helped with unit.

Speaker 1 (10:44):
Tests exactly, instant code suggestions, scaffolding for tests. It saves
a ton of time, especially on repetitive or boilerplate tasks.
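To make that "scaffolding for tests" idea concrete, here's the kind of unit-test scaffold an assistant might propose for an OAuth-style check; the `check_auth` function and its behavior are purely illustrative, not taken from the book:

```python
# Illustrative pytest scaffold of the kind a coding assistant might suggest
# for OAuth middleware; `check_auth` is a toy stand-in, not a real library call.
import pytest

def check_auth(headers: dict) -> bool:
    """Toy stand-in for the middleware's bearer-token check."""
    token = headers.get("Authorization", "")
    return token.startswith("Bearer ") and len(token) > len("Bearer ")

def test_valid_bearer_token_is_accepted():
    assert check_auth({"Authorization": "Bearer abc123"})

def test_missing_header_is_rejected():
    assert not check_auth({})

def test_malformed_token_is_rejected():
    assert not check_auth({"Authorization": "Token abc123"})
```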

Speaker 2 (10:53):
But it's not perfect, right, There must be pitfalls.

Speaker 1 (10:56):
Oh definitely. The book is clear about the challenges: AI hallucinations, where it just makes stuff up, uh, security risks if it suggests vulnerable code, and just the danger of developers becoming overreliant, not thinking critically.

Speaker 2 (11:10):
So how do you manage that? You can't just let
the AI run wild?

Speaker 1 (11:13):
No way. Yeah, best practices are crucial: always human-in-the-loop reviews. People need to check the AI's work. Okay, you need to write clear prompts to guide the AI, and you absolutely integrate the AI-generated code and tests into your existing CI pipelines. Security tools still need to.

Speaker 2 (11:28):
Scan everything, so it's augmentation not replacement.

Speaker 1 (11:31):
Augmentation, not replacement. That's the key.

Speaker 2 (11:33):
Okay, so unit tests. But let's talk about the really thorny stuff, functional and integration testing, especially with microservices. That's always been a huge bottleneck.

Speaker 1 (11:43):
Huge, and this is where AI gets really interesting again. Tools like Functionize are mentioned.

Speaker 2 (11:48):
Functionize? What do they do differently?

Speaker 1 (11:51):
They use AI to automatically generate functional and integration tests
just by watching how users actually interact with the application.

Speaker 2 (11:58):
Okay, observing user flows.

Speaker 1 (12:00):
But here's the really critical part. They can also self.

Speaker 2 (12:03):
Heal the tests self heal, meaning.

Speaker 1 (12:05):
Meaning if a minor UI element changes or a button
moves slightly, things that would normally break a traditional test script,
the AI adapts the test script automatically.
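A rough sketch of that self-healing idea, assuming a ranked list of candidate selectors and a generic `find` lookup; this is illustrative, not Functionize's actual mechanism:

```python
# Minimal sketch of self-healing locators: instead of one brittle selector, keep
# ranked candidates and promote whichever still matches. Framework-agnostic;
# `find` stands in for your UI driver's lookup (e.g. Selenium or Playwright).
from typing import Callable, Optional

class HealingLocator:
    def __init__(self, candidates: list[str]):
        self.candidates = candidates  # ordered: preferred selector first

    def locate(self, find: Callable[[str], Optional[object]]):
        for i, selector in enumerate(self.candidates):
            element = find(selector)
            if element is not None:
                if i != 0:
                    # "Heal": remember the selector that worked for next time.
                    self.candidates.insert(0, self.candidates.pop(i))
                return element
        raise LookupError("No candidate selector matched; human review needed.")

# Usage with a fake page: the old id is gone, the text-based fallback still works.
page = {"button[text()='Checkout']": "<button>"}
checkout = HealingLocator(["#checkout-btn", "button[text()='Checkout']"])
print(checkout.locate(page.get))   # heals to the fallback selector
print(checkout.candidates[0])      # fallback promoted to preferred
```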

Speaker 2 (12:15):
Whoa. Okay, that tackles the massive test maintenance burden. Exactly.

Speaker 1 (12:20):
That's often where teams spend most of their testing effort
just fixing broken tests.

Speaker 2 (12:25):
The e-commerce case study nailed this, right? Sixty percent reduction in test maintenance.

Speaker 1 (12:29):
Yeah, a sixty percent drop, and they got better coverage
on critical paths like checkout. That directly speeds up feature
delivery because testing isn't such a drag anymore.

Speaker 2 (12:38):
And looking even further ahead, the book mentions OpenAI Operator.

Speaker 1 (12:43):
Yeah, that's more experimental, but fascinating. It's an AI agent that interacts with an app like a human would, literally using a built-in browser. So it looks at the page, it interprets the page visually, understands high-level concepts. It doesn't rely on fragile things like HTML locators or specific API endpoints, so it can adapt to changes much

(13:05):
more like a person would.

Speaker 2 (13:06):
That's kind of mind blowing. It really feels like a
step towards true intelligent automation.

Speaker 1 (13:11):
It does. It's early days, but the potential is huge.

Speaker 2 (13:14):
Okay, so AI is helping write code, helping test code.
What about the infrastructure it all runs on? Can AI
manage that too?

Speaker 1 (13:22):
Yes, absolutely. Generative AI is moving into infrastructure as code, IaC, and also data.

Speaker 2 (13:28):
Provisioning. So it can write my Terraform for me?

Speaker 1 (13:30):
Pretty much. You give it a high-level description, I need a standard web server setup with these specs, and it can generate the Terraform or CloudFormation scripts. The book suggests this can cut infra provisioning time by over sixty percent.
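A minimal sketch of that describe-it-and-get-IaC flow; `llm_complete` is a placeholder for whichever code-generation model client you actually use, and the draft is written out for human review rather than applied automatically:

```python
# Sketch of "describe it, get IaC": a high-level request goes to a model and the
# draft Terraform lands in a file for review (never auto-applied).
from pathlib import Path

def llm_complete(prompt: str) -> str:
    """Placeholder: call your code-generation model here."""
    raise NotImplementedError

def draft_terraform(request: str, out_file: str = "main.generated.tf") -> Path:
    prompt = (
        "Generate Terraform (HCL) only, no prose.\n"
        f"Request: {request}\n"
        "Follow our standard module layout and tag every resource with team/env."
    )
    hcl = llm_complete(prompt)
    path = Path(out_file)
    path.write_text(hcl)
    # The draft goes into a branch / pull request for review, not straight to apply.
    return path

# Example request, matching the kind of description mentioned above:
# draft_terraform("a standard web server setup: 2 small instances behind a load balancer")
```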

Speaker 2 (13:42):
Sixty percent? That's huge for standing up new environments.

Speaker 1 (13:45):
It is, but it's not just about the initial setup, right.

Speaker 2 (13:48):
What about keeping things running smoothly, preventing those annoying configuration drifts or scaling issues?

Speaker 1 (13:54):
That's the next level. AI enables predictive scaling. It analyzes past usage patterns and scales resources before the spikes, not after.

Speaker 2 (14:03):
Proactive not reactive.

Speaker 1 (14:05):
Nice. And drift remediation. The AI constantly watches the live environment, compares it to the IaC definition.

Speaker 2 (14:14):
And if something's changed, someone tweaked a setting manually?

Speaker 1 (14:18):
It can either flag it for review or even automatically correct it to bring it back in line with the code definition. Verrel highlights how this ensures compliance without constant manual checks. It's like having an automated compliance officer.
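One way to implement the detection half of that, sketched with Terraform's real `-detailed-exitcode` flag; what happens on drift (open a ticket, auto-apply) is policy and is deliberately left out:

```python
# Minimal drift check: `terraform plan -detailed-exitcode` exits 0 when live state
# matches the code, 2 when it has drifted, 1 on error. We only flag drift here.
import subprocess
import sys

def check_drift(workdir: str) -> bool:
    """Return True if the live environment has drifted from the IaC definition."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color"],
        cwd=workdir, capture_output=True, text=True,
    )
    if result.returncode == 0:
        return False
    if result.returncode == 2:
        return True
    raise RuntimeError(f"terraform plan failed:\n{result.stderr}")

if __name__ == "__main__":
    drifted = check_drift(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("Drift detected, review needed." if drifted else "In sync with IaC.")
```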

Speaker 2 (14:30):
That fintech startup example sounded great. Faster environment spin-up and better PCI compliance through automated data masking.

Speaker 1 (14:37):
Right. It frees up the ops team from that reactive firefighting to think more strategically.

Speaker 2 (14:42):
Which brings us to this stay in the flow idea.
What's that about.

Speaker 1 (14:45):
It's about bringing these capabilities directly into the developer's or operator's main workspace, usually their IDE, like VS Code.

Speaker 2 (14:52):
So instead of switching tools, you just talk to the
AI essentially.

Speaker 1 (14:55):
Yeah. Using natural language processing, NLP, you could type something like "create a masked copy of production data for this staging environment," okay, or "generate Terraform for a new microservice based on our standard template." The AI figures it out, proposes the scripts or actions, and.

Speaker 2 (15:12):
You review it right there, maybe in a pull request.

Speaker 1 (15:15):
Exactly, all without leaving your development context. It removes that friction,
keeps you focused. Okay.

Speaker 2 (15:20):
That sounds incredibly powerful. So finally, the CI/CD pipeline itself, the heart of DevOps. How does AI make the pipeline smarter?

Speaker 1 (15:30):
Several ways. One big one is optimizing the pipeline run itself. AI can do intelligent test.

Speaker 2 (15:37):
Selection, meaning it doesn't just run all the tests every
single time, right.

Speaker 1 (15:41):
It analyzes the specific code changes in a commit and
figures out which subset of tests are actually relevant to run.
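A toy version of that change-based selection, assuming a file-to-tests coverage map and a git diff against main; the map here is hard-coded for illustration, where a real setup would derive it from coverage data:

```python
# Sketch of change-based test selection: map source files to the tests that cover
# them, then run only the tests touched by a commit's diff.
import subprocess

COVERAGE_MAP = {
    "app/checkout.py": ["tests/test_checkout.py", "tests/test_cart.py"],
    "app/search.py":   ["tests/test_search.py"],
}

def changed_files(base: str = "origin/main") -> list[str]:
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if line]

def select_tests(files: list[str]) -> list[str]:
    selected: set[str] = set()
    for f in files:
        selected.update(COVERAGE_MAP.get(f, []))
    # Unknown files fall back to the full suite to stay safe.
    return sorted(selected) if selected else ["tests/"]

if __name__ == "__main__":
    tests = select_tests(changed_files())
    subprocess.run(["pytest", *tests])
```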

Speaker 2 (15:47):
That could save a lot of time, especially on big
test suites.

Speaker 1 (15:50):
Huge amounts. That e-commerce company example, they cut their forty-minute pipeline runs down to twenty minutes just with this.

Speaker 2 (15:56):
Wow, half the time. That massively speeds up the feedback loop for developers.

Speaker 1 (16:00):
Instantly they know much faster if their change broke something.
And when feedback is that fast, developers feel more confident
making smaller, more frequent changes.

Speaker 2 (16:10):
It changes the whole dynamic.

Speaker 1 (16:11):
It really does. And it goes further with predictive failure
analysis and remediation.

Speaker 2 (16:16):
Okay, what's that.

Speaker 1 (16:17):
The AI is watching the pipeline run in real time, logs, metrics. It can spot anomalies that suggest a test might be flaky or a build environment is misconfigured, and then it might automatically retry a step or even apply a known fix. It can also get smarter about deployments. How so? Based on the risk profile of the code change, how big it is, what areas it touches, the AI

(16:40):
could dynamically choose the best deployment strategy, maybe a slow
canary release for a risky change, or a faster rolling
update for something simple.

Speaker 2 (16:47):
And it watches the deployment, watches.

Speaker 1 (16:49):
The real-time telemetry. If it sees error rates spike or performance tank, it can trigger an automatic rollback.
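A bare-bones sketch of that canary guardrail; `get_error_rate`, `rollback`, and `promote` are placeholders for your own observability and deployment APIs, and the thresholds are made up:

```python
# Sketch of a canary guardrail: poll an error-rate metric while the canary takes
# traffic, roll back if it breaches a threshold, otherwise promote.
import time

ERROR_RATE_THRESHOLD = 0.05   # 5% errors triggers rollback
CHECK_INTERVAL_S = 30
CHECKS = 10                   # watch the canary for ~5 minutes

def get_error_rate(deployment: str) -> float:
    """Placeholder: query your observability backend for the canary's error rate."""
    raise NotImplementedError

def rollback(deployment: str) -> None:
    print(f"Rolling back {deployment}")

def promote(deployment: str) -> None:
    print(f"Promoting {deployment} to full traffic")

def watch_canary(deployment: str) -> None:
    for _ in range(CHECKS):
        if get_error_rate(deployment) > ERROR_RATE_THRESHOLD:
            rollback(deployment)
            return
        time.sleep(CHECK_INTERVAL_S)
    promote(deployment)
```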

Speaker 2 (16:56):
So the pipeline becomes truly adaptive, not just a dumb script runner.

Speaker 1 (17:00):
Exactly. Intelligent, adaptive, self-healing.

Speaker 2 (17:03):
And this also ties into stay in the flow.

Speaker 1 (17:06):
Yes, a developer could be in their IDE and type "deploy this branch to canary with ten percent traffic, watch for anomalies."

Speaker 2 (17:13):
And AI just handles it, triggers the right pipeline, monitors it, gives feedback.

Speaker 1 (17:19):
That's the vision. The AI interprets the intent, orchestrates the actions, and reports back, maybe even suggesting a rollback if needed. It's this frictionless pipeline guided by human intent but executed largely by AI. That's the core of NoOps, according to the book.

Speaker 2 (17:35):
Wow, that final section in part two, Catalyst to Autonomy,
really paints a picture of it all coming together.

Speaker 1 (17:41):
It does. An AI-first paved road, Copilot in VS Code helping write the code, Functionize AI creating and healing the tests, AI agents managing the infrastructure via.

Speaker 2 (17:51):
IaC, and all the data feeding back.

Speaker 1 (17:53):
Every suggestion, every healed test, every drift correction, tagged and streamed into that unified analytics platform like Opsera. Executives get real-time visibility into velocity, quality.

Speaker 2 (18:03):
Cost savings, and security gets pushed even further left.

Speaker 1 (18:06):
Right to the keyboard, essentially. Issues blocked by Copilot or caught by automated security in the pipeline ideally never even make it close to production. That's a massive security posture improvement. Outro.

Speaker 2 (18:17):
What an incredible journey, really, from that fragmented, often frustrating world of traditional DevOps to this potential future driven by standardization, cloud-native thinking, and, wow, generative AI creating real autonomy.

Speaker 1 (18:34):
It really feels like we're on the cusp of transforming all that repetitive toil in software delivery into something much more intelligent, almost self-managing.

Speaker 2 (18:43):
Yeah. But the key takeaway, it seems, isn't that humans become obsolete.

Speaker 1 (18:47):
Right, not at all. The book really emphasizes this: as AI takes on the drudgery, the human role shifts. It elevates to, what, to higher-level design, to strategy, to focusing on the next innovation, not just keeping the current lights on. It's about freeing up human potential, and of course there's always more to learn as this tech.

Speaker 2 (19:05):
evolves so fast. Absolutely. So for everyone listening, what does this all mean for your role? How can you start thinking about embedding AI, making it like muscle memory in your team or org?

Speaker 1 (19:18):
How do you get from that burnout cycle to maybe build, ship, and wonder, wonder what's next? Exactly.

Speaker 2 (19:24):
We really encourage you to explore these ideas, maybe pick up Verrel's book, and just identify one or two places where AI could start elevating your daily work.

Speaker 1 (19:34):
Start small, build momentum.

Speaker 2 (19:35):
Yeah, well, thank you so much for joining us on
this deep dive into the future of software development.

Speaker 1 (19:40):
My pleasure.

Speaker 2 (19:40):
We hope this exploration empowers you, gives you some ideas,
and helps you pave your own road towards a more autonomous, efficient,
and ultimately more innovative future.