
August 22, 2025 31 mins

What if your AI agents could not only collaborate but also autonomously safeguard your enterprise? 

In this episode, host Andreas Welsch sits down with Steve Wilson, Chief AI and Product Officer at Exabeam, to explore the nuances of securing agentic AI and the emerging landscape of multi-agent communication. 

Together, they discuss how businesses can enhance their cybersecurity measures while adapting to the rapid evolution of AI capabilities, from agency levels to ethical governance. 

Learn about practical strategies for evaluating AI agent frameworks, mitigating insider threats, and implementing security-first approaches in a world increasingly dominated by AI:

  • What are the internal and external threats and challenges of AI agents and Agent-to-Agent (A2A) / multi-agent systems?
  • How can organizations defend against those threats?
  • What can also go right with Agentic AI security?
  • What do AI and IT leaders need to do to ensure enterprise security despite all the Agent sprawl?

Ready to navigate the complexities of AI security? Don't miss this insightful conversation and tune in now to tap into the potential of secure AI agents for your business!


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Hey, welcome to What's the Buzz, where leaders

(00:02):
share how they have turned AI hype into business outcomes.
Today we'll talk about how to strengthen your enterprise security for agentic AI and multi-agent-to-agent communication, and who better to talk about it than someone who is actively working on that:
Steve Wilson.
Hey Steve.
Thank you so much for joining.

Steve Wilson (00:20):
Hey Andreas, thanks for having me back.
I'm excited for the conversation today.

Andreas Welsch (00:24):
Wonderful.
Hey, you're my go-to security expert for anything around AI.
You've been on the show, I think, now five times or something like that.
And we always have so much to talk about, and I love bringing you back, because you see that the topic keeps evolving.
Every quarter, every six months, there's something new to talk about, something that's really important.

(00:46):
But before I get too excited, maybe you can tell our audience a little bit about yourself, who you are and what you do.

Steve Wilson (00:52):
Yeah, so I think I wear three or four different hats that are relevant to today's conversation.
The main hat that I wear is I'm the Chief AI and Product Officer at Exabeam, which is a cybersecurity company that's been using AI and machine learning for 10 years to improve the cybersecurity stance of our customers.

(01:14):
And we've been shipping LLM-based copilots and agents as part of our product for 18 months now.
So I have a lot of experience building those hands-on.
The other thing is, I founded something called the OWASP GenAI Project.
If you are unfamiliar with OWASP, it's an

(01:35):
Open Source Foundation dedicated to building secure software.
And about two years ago we started developing guidance on how to build secure software with AI.
Super hot topic for the last couple years.
And then also, I think, Andreas, your book and my book came out the same week.
I think it was right around the same time.

(01:58):
I did write a book for O'Reilly called The Developer's Playbook for Large Language Model Security.

Andreas Welsch (02:04):
That's awesome.
And again, like I said, I think it was about two years ago that I first reached out to you, when I saw the OWASP Top 10 for Large Language Models.
And it's just amazing to see how things have progressed, and how quickly they've evolved from that initial list.
Yeah, so again, super excited about the conversation we'll have today.
Why don't we play a little game to kick things off?

(02:26):
What do you say?

Steve Wilson (02:27):
Let's do it.

Andreas Welsch (02:28):
Alright.
So, in good old fashion, right?
I'll hit the buzzer, the wheels will start spinning.
When they stop, I'd love for you to answer with the first thing that comes to mind, and why, in your own words.
And to make it a little more interesting, you only have 60 seconds for your answer.
For those of you who are watching us live, drop your answer in the chat, and why, as well.
Steve, are you ready for What's the BUZZ?

(02:49):
Let's do it.
Okay.
So here we go.
If AI were a band, what would it be?
60 seconds on the clock. Go.

Steve Wilson (03:00):
All right.
I'm gonna go with my dad's favorite band, which is The Beatles.
And I'm not old enough to remember this moment, but I remember my dad talking about it: there was a very sudden change from before anybody heard of The Beatles.
And then one night they were on a big TV show in the United

(03:20):
States, and everybody knew about them.
And that was like the ChatGPT moment for The Beatles.
After that they were ubiquitous for a few years, but they always kept reinventing themselves.
It's not like the Beatles music from 1969 sounded like the Beatles music from 1964.
It sounded totally different.

(03:41):
They kept reinventing themselves, but it was always the same thing, and at some point they disappeared.
But they didn't really; at some point, you can just listen to all of the popular music today and you can hear the Beatles in all of it.
And I expect the place where we're going is we're gonna stop talking about AI companies, like we stopped talking about web companies.

(04:03):
There will just be companies, and just like we all use the web today, everybody will use AI tomorrow.

Andreas Welsch (04:10):
I love it.
That's such a great analogy, right?
It is really this evergreen thing, to some extent, but also something that deeply influences everything else that comes after it.
Great analogy.
Speaking of these companies: at some point we won't talk about AI companies anymore; at the moment we do, and we are

(04:30):
seeing them bring more and more AI agents, on one hand, into the market, and then those that adopt them bring them into their organizations.
We see players like Google: they've announced protocols to enable agent-to-agent communication and multi-agent scenarios.
Many others in the industry are getting behind it.
It's great to see that there's some collaboration.

(04:50):
Everybody realizes we want to have these agents communicate with each other.
And, like I said, we've talked about the OWASP Top 10 for LLMs previously; we talked about how that's evolved for GenAI.
And now here we are, talking about how organizations that are letting LLMs on steroids loose on their systems and data are facing the next set of challenges.
So I'm curious, what are you seeing when you talk to

(05:11):
organizations around this topic of agents?
Do they even have that on their radar yet?
And what are the internal and external threats and challenges?

Steve Wilson (05:21):
So obviously everybody's talking about agents.
You can turn on CNBC, which is the financial news network in the US, and all they talk about is AI and agents and things like that.
But I think often people don't have any idea what they mean when they say AI agents.

(05:41):
It's just the thing they say now: instead of AI, they say agents.
But I think you need to dissect the word a little bit to really get to the heart of it, because it's not like the concept is completely new.
If you get under the word AI agent, or even the more complicated, hip word agentic:

(06:02):
It's all about how much agency something has, and we've had concepts about granting agency for a very long time.
I grant my lawyer agency to file papers on my behalf.
That's where the basic concepts come from: you're giving something the rights to do things on your behalf.

(06:23):
The way I look at agency is, it's not a binary switch.
It's not like it has agency or it doesn't; it's like a volume knob.
It's like, I'm giving this very little agency, and that's your classic chatbot use case.
It might have the agency to represent you as a company if it's your customer service chatbot.
But at the end of the day, the worst it could do to someone is call them a bad name.

(06:46):
But if we start to give it tools, and give it access to do actions that are not undoable, all of a sudden there's a lot more agency in these things.
Do we give them the capability to act on their own?
Chatbots never act on their own.
You prompt them and they respond, and it's always that very simple

(07:08):
interaction.
These agents tend to have longer-running processes, and those could be seconds to minutes, or they could be hours to days, although those are very rare at this point.
But I think that's where people think it's going.
And from a security perspective, the way I like to think about it

(07:29):
is, there's a grid: how much agency do I give something, and how smart actually is it?
How capable is it?
And you know where you get into these real danger zones: it's what, in the first version of the OWASP guidance, we called excessive agency, which means I have a thing that's not that

(07:51):
smart, but I'm giving it a lot of capability.
I'm asking for trouble; I'm asking for security problems to happen.
And that might be because it can access things inside my company or outside my company that are important, and it opens you up to risk and exposure.
On the other hand,

(08:11):
these agents, the models underneath them, are so much smarter than they were even nine months ago, much less two years ago, that I can give them more interesting jobs.
And so at Exabeam now: it used to be our copilot was an assistant, and if somebody had a question about something, they could ask the assistant.

(08:33):
Now, when our low-level AI algorithms detect a problem, the agent runs off and does a complete security investigation, and comes back with a report and presents it to the user, where it's done a tremendous amount of work before it was asked.
And I think those are the kind of use cases where it can be

(08:53):
really powerful, as you do give it some initiative.
But you have to be careful about how much actual agency you give it.
And then you get into a lot of questions about where humans come into the loop.
How much do you let it do supervised versus unsupervised?

Andreas Welsch (09:09):
I have a follow-up question there.
A couple years ago I was doing some work with clients in aerospace, and the biggest threat to our organization is not so much the external threats.
They're big and significant too, but it's actually the insider threats: the people that we already know are on the network, that are in the company.
They may be within the four walls of the company, with access to

(09:30):
information, and they might funnel it somewhere else for malicious purposes.
Now, there have been tools to detect insider threats.
There are things like data loss prevention, these kinds of things.
Now, all of a sudden, in addition to people, you have agents on your network that can do this, not just by clicking

(09:51):
manually, but automating these things, not as just one agent, but as dozens or hundreds or thousands, if you will; maybe they spawn new ones, and what have you.
How are organizations able to defend against those threats?
How does security need to change?
We've previously talked about zero trust architecture and these kinds of things.
How do all of these factors play into it?

Steve Wilson (10:16):
By the way, that's a great question.
The first place to start is: insider threats and compromised credentials, which are two sides of the same coin, are the hardest-to-detect, most insidious security threats that every CISO worries about.
And classically, the vectors for those were:

(10:39):
maybe you had a disgruntled employee, but more likely what you had is somebody who'd been the victim of a phishing attack, who lost their credentials, and then somebody who's on your network who doesn't belong there.
What's interesting about the agent ones is, as you do give, say, a set of agents the rights to do work on your behalf, on

(11:01):
your network, that does become its own kind of insider risk.
What I can tell you, though, is what I see in practice: the bar for what people call an agent right now, for what people are really deploying, is far lower.
And in a lot of cases, what they're saying is, and this

(11:24):
doesn't mean that they're lesser; it just means they're, let's call them, less risky use cases.
But even at that, we have to manage them.
Somebody will say, hey, wouldn't it be great if we had an agent that was part of the HR department that could answer HR-related questions for our employees?
And you know what, it's really a classic, call it a copilot, use

(11:48):
case, but you give it all the policies and all the stuff nobody knows how to find and nobody knows how to read, and you hook it up with RAG and other things we've come to understand over the past couple years, and you put it on the network where people can come to it and interact with it.
It doesn't have a huge amount of agency, but what it does have

(12:10):
is access to potentially a lot of data.
And that's where the first line of defense really comes in.
And this is what I talk about with any of these LLM-based systems: the first thing you need to think about is how much data are you giving it access to?
Because the first thing you have to understand is,

(12:32):
even the more advanced models are not great at decision making; they're easy to trick.
And if you give it access to data, and your security defense is something in the system prompt that says, please don't give this information out, that is never going to work.
It is never gonna be sufficient.

(12:54):
So the first line of defense is information management.
You say: for my agent, for what it needs to do its job, what's the minimal version of that job that it can do, and what's the minimum amount of data that it needs?
And then you can do a classic risk assessment around that.
It then becomes, when you do get to these agents and you

(13:18):
want them to do work.
Take that AI HR agent, for example: it could be it now gets a set of actions that it can do.
You're like, I would like to update my tax withholding.
There's a version of that where you just give the agent access to

(13:41):
the API set for Dayforce, or whatever your HR system is, and you say, I'll just let it figure out the code to write when somebody asks to do something.
And it could do it.
And it could do anything that they could do, or worse yet, it could do anything the administrator could do.

(14:02):
That's probably not a good idea.
But what are the low-risk actions that they could do, that are undoable later, or confirmable?
Or does your request get queued up and prepared for somebody in HR to maybe just briefly review and approve, so they don't have to do the action?

(14:22):
You don't have to get ahold of them, but it becomes a broker with a human in the middle.
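The broker-with-a-human-in-the-middle pattern Steve describes, where reversible actions run directly and irreversible ones wait in a queue for review, can be sketched in a few lines. This is an illustrative Python sketch, not Exabeam's or any HR system's implementation; the action names and risk tiers are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = "low"    # reversible or read-only: execute immediately
    HIGH = "high"  # irreversible: queue for human approval


@dataclass
class ProposedAction:
    description: str
    risk: Risk


class HumanInTheLoopBroker:
    """Sits between the agent and the real APIs: low-risk actions run
    directly, high-risk ones wait in a queue for a human reviewer."""

    def __init__(self) -> None:
        self.pending: list[ProposedAction] = []
        self.executed: list[str] = []

    def submit(self, action: ProposedAction) -> str:
        if action.risk is Risk.LOW:
            self.executed.append(action.description)
            return "executed"
        self.pending.append(action)
        return "queued for human review"

    def approve(self, index: int = 0) -> None:
        # A human reviewer signs off; only then does the action run.
        action = self.pending.pop(index)
        self.executed.append(action.description)


broker = HumanInTheLoopBroker()
print(broker.submit(ProposedAction("look up PTO balance", Risk.LOW)))
print(broker.submit(ProposedAction("update tax withholding", Risk.HIGH)))
broker.approve()  # the HR reviewer approves the queued change
```

The design point is that the agent never holds the privileges to perform the high-risk action itself; it can only propose.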

Andreas Welsch (14:29):
Now, as I'm listening to what you're sharing, I'm thinking, especially with things like agent-to-agent communication, multiple agents or multi-agent systems: what are the risks of one agent instructing another to reveal more than it's supposed to, right?
A2A is a communication protocol.
Yeah.
Like we have a protocol when humans speak.

(14:50):
Usually when one person speaks, the other one listens and responds, and vice versa.
But the information that is transported, what am I trying to get you to do, is a totally different thing.
So I hear information management is one critical piece, and locking down the amount of information in the systems and so on, not to expose it.
What are you seeing when agents communicate with each other?

Steve Wilson (15:13):
So it's fascinating, because I think there are two kinds of protocols, and it's a big source of confusion.
So it's worth outlining what the two big protocol categories are right now.
You've probably talked about one of the other ones on the show, but when you say agent-to-agent communication, there are some of these emerging standards, like A2A from Google.

(15:38):
Those are honestly seeing very limited adoption right now.
There's a lot of experimentation going on; I would say there's very little actual production usage of these things.
The communication protocol, though, that has stormed the beaches and is being used everywhere, whether wisely or not, is tool-usage protocols, and in particular MCP, which is

(16:02):
the Model Context Protocol.
And so when people think about building agents, one of the ways that they now give their agents agency is they give them tools, right?
Classically, our LLM has just been a brain and a mouth, 'cause it was a chatbot and that's all it could do.
Now it has fingers.
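Tool use along the lines Steve describes can be illustrated with a small allowlist dispatcher. This is a hedged sketch of the idea only, not the actual MCP wire format or SDK; the tool name, schema, and registry shape are made up for illustration.

```python
# Hypothetical tool registry in the spirit of MCP-style tool use: the
# agent only ever sees tools an operator explicitly registered, each
# with a typed input schema. This is NOT the real MCP wire format.
ALLOWED_TOOLS = {
    "get_policy_document": {
        "description": "Fetch an HR policy document by title (read-only).",
        "input_schema": {
            "type": "object",
            "properties": {"title": {"type": "string"}},
            "required": ["title"],
        },
    },
}


def call_tool(name: str, arguments: dict) -> str:
    """Dispatch a tool call, rejecting anything outside the allowlist."""
    if name not in ALLOWED_TOOLS:
        # Unregistered tools are the supply-chain risk: who shipped them?
        raise PermissionError(f"tool {name!r} is not registered")
    schema = ALLOWED_TOOLS[name]["input_schema"]
    for required in schema["required"]:
        if required not in arguments:
            raise ValueError(f"missing argument {required!r}")
    # A real dispatcher would now call the backing API; we fake it here.
    return f"(contents of {arguments['title']})"


print(call_tool("get_policy_document", {"title": "Parental Leave Policy"}))
```

The allowlist is where the agency "volume knob" lives: the agent can only grow fingers that an operator deliberately attached.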

Andreas Welsch (16:23):
Yes.

Steve Wilson (16:24):
And that's the first thing to really look at: if you're going to start to use MCP to give your agents fingers and hands, there are a lot of considerations there.
There are a lot of security considerations.
There's a lot that's been written on the topic of how do

(16:46):
you think about what agency you give them, but also where do you get these tools from?
And it adds a whole supply-chain set of conditions.
The thing I will say that I think is coming, though: when we look at agent-to-agent, the first hurdle to getting there is having more than one agent.

(17:09):
I'd say the use cases where we see some agent-to-agent interactions are often completely internal, with bespoke agents.
So inside a company, I might have several agents that I might construct to build a swarm.
It might be multiple instances of one agent.
So one of the first places that we're seeing this is

(17:30):
writing code.
And you see a lot of talk these days about spinning up basically multiple instances of an agent like Claude Code, which work together on a code base, like a team of engineers, each taking different jobs out of a queue and processing those in parallel.

(17:50):
I'd say that's one of the early places that we're starting to see this: multiple instances of the same agent working together.
The next one is, within a company, bespoke agents that are in roles.
And within the product at Exabeam, we now have

(18:11):
three or four different agent types in the product.
They actually don't really interact with each other yet; they're more like custom agents for vertical use cases.
I think the thing that we're gonna see develop, though, and this is where I think people should start to look, is: the first thing that we're gonna need is

(18:31):
reliable, basically, discovery services for agents.
The thing that the internet needed before the web was really viable was DNS, and we're starting to see DNS-like proposals for, hey, here's how I publish that

(18:52):
I have an agent that can carry out certain kinds of services.
What is promising about these is they're often much better thought out than DNS from a security perspective.
Sure, with these early internet services, nobody took trust into account.
And it's the reason that the internet is the train wreck that

(19:12):
it is: it was just designed to share information between universities.
Why would you wanna limit it?
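The "DNS for agents, but with trust designed in" idea can be sketched as a registry lookup that only ever returns entries from verified publishers. Everything below, the publisher names, endpoints, and capability strings, is hypothetical; none of the actual discovery proposals are quoted here.

```python
# Toy agent-discovery registry: the "DNS for agents" idea with trust
# baked in from day one, instead of bolted on later like the early
# internet services Steve mentions.
VERIFIED_PUBLISHERS = {"exabeam.example", "partner.example"}

REGISTRY = [
    {"capability": "security-investigation",
     "endpoint": "https://agents.exabeam.example/investigate",
     "publisher": "exabeam.example"},
    {"capability": "security-investigation",
     "endpoint": "https://agents.attacker.example/investigate",
     "publisher": "attacker.example"},  # unverified: must be filtered out
]


def discover(capability: str) -> list[str]:
    """Return endpoints for a capability, dropping unverified publishers."""
    return [
        entry["endpoint"]
        for entry in REGISTRY
        if entry["capability"] == capability
        and entry["publisher"] in VERIFIED_PUBLISHERS
    ]


print(discover("security-investigation"))
```

A real registry would verify publishers cryptographically rather than with a static set, but the contrast with trust-free DNS is the point.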

Andreas Welsch (19:19):
But see, I love that.
On one hand, these concepts are seeing a revival.
I've also been talking about a central registry or discovery for agents for a bit, and seeing that this is materializing and maturing, I think, is great.
So at least this time around, with everything that we know over the last 50, 60, 70 years, we know how to make it better.

(19:40):
And yeah, how to design for security-first principles; at least in theory, we should know.

Steve Wilson (19:45):
Yeah.
But I think when you think about these multi-agent cases, where your agent might be interacting with an untrusted third-party agent, a lot of the practices are gonna be similar to interacting with untrusted humans.
And today, when we put our chatbot on the web, or our agent on

(20:08):
the web to interact with humans, you have to take this very hard zero trust approach, where you're looking at guardrailing these things multiple ways.
One of the things that I put out in the last couple months: I created a new open source project that you can go find on GitHub, called Steve's Chat Playground.

(20:28):
And it's the embodiment of the top 10 list in the book: there's a set of chatbots, and you can pick different chatbots, and some of them have built-in vulnerabilities.
There are checkboxes where you can install guardrails: guardrails to look for prompt injection, or to look for unintended code generation on the backend.
And if I'm building an agent that I want to interact with

(20:50):
untrusted things, whether that's reading email that I'm getting from an untrusted source or taking API requests from an untrusted source, I need to be screening everything that I can, using very traditional methods, on the way in and the way out.
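The "screen everything on the way in and the way out" approach can be sketched with simple pattern-based filters. This is a toy illustration of the guardrail idea, not a production prompt-injection defense and not code from Steve's Chat Playground; the patterns are examples only.

```python
import re

# Toy pattern-based guardrails: screen untrusted text on the way in
# (prompt-injection shapes) and on the way out (secret shapes).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]
SECRET_PATTERNS = [
    r"\bAKIA[0-9A-Z]{16}\b",  # shape of an AWS access key ID
]


def screen_input(text: str) -> str:
    """Reject input that looks like a prompt-injection attempt."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            raise ValueError("possible prompt injection blocked")
    return text


def screen_output(text: str) -> str:
    """Redact anything secret-shaped before it leaves the agent."""
    for pattern in SECRET_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text)
    return text


print(screen_output("Key: AKIAABCDEFGHIJKLMNOP"))  # secret redacted
```

Real guardrails layer classifiers and policy checks on top of patterns like these, but the structure, a filter on both sides of the model, is the same.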

Andreas Welsch (21:09):
I like that.
That's really practical, and I didn't know you had put up the playground, so I need to check that out myself.
It sounds like a great and helpful tool to see how additional guardrails strengthen your security there.
Yeah.
Now, we've talked quite a bit about the things that can go wrong and where all the risks are, but I also want to make

(21:31):
sure that we talk about the things that can actually go right.
When you bring agents into your business, when you look at security, what are you seeing there?
How are companies doing this really well that are embarking on their agentic AI journey, that are bringing some agents, or maybe some more agents, in?

Steve Wilson (21:48):
I think there are two big categories to think about when I talk to people about their approach to this.
There's, I'll put it: I work at a big company and I'm trying to figure out how to make my company better.
And then all the way at the other extreme, it's like, how could I make my

(22:10):
company act like a big company even though it's not.
And there are two very different approaches that may involve a lot of the technology stack.
But let's talk about what I'll call the hard case first, which is: I have a battleship and I'm trying to steer it.
I work at a Global 2000 company, and I see

(22:34):
my smaller competitors moving faster, doing things that are AI-native and deploying some of these agentic technologies in efficient ways.
And I think what you do see are these very vertical agent stacks

(22:54):
that are starting to become productive.
The coding one is by far the most mature, even though it's by far the scariest, and the one with the most agency, so to speak.
But we've had cases, cases I've been involved with at our own company, where one product has a 20-year-old code

(23:18):
base.
It is the gnarliest, scariest thing.
And there are parts of that code base no human wanted to touch, but it was problematic.
There were things the customers didn't like, and we had one brave engineer wade in with Claude Code and a positive attitude, and come back four weeks later with this thing rewritten in ways

(23:40):
people never thought.
They thought, man, if I dedicated a team of 10 for three months to it, I don't know that I could do it.
And they went in and they did something, and it meant something to the business.
And we're doing more and more of that now.
And I know people across companies are doing that with the software development piece.
All the way over on the

(24:01):
marketing side, we've seen people generating marketing collateral forever.
What we're just on the early part of is things like agent-based sales prospecting.
Your sales, your main sales manager persona, that's one of

(24:22):
the most expensive single units that you have at the company.
These people may make a million dollars a year.
You don't want them cold calling on the phone.
So traditionally, you had given them one or two low-level assistants who would place their calls and coordinate meetings.
We're starting to see the ability to replace those or

(24:44):
accelerate those.
And that kind of leads me to that other case, which is: I wanna be a one-person startup.
And we hear multiple people talk about it.
And we will see a billion-dollar-valuation company coming out of one human soon, surrounded by agents.
And we're gonna see somebody who has an idea for a product, and

(25:09):
they're gonna use agents to build the first version of the product.
They're gonna use agents to sell the first version of the thing that they build.
And they're gonna use software to do the basic compliance needs that they have, for filing their taxes and

(25:30):
those basic things.
And those are all within reach, and we see people really doing that.
And I think we know we're in the place where we've crossed a line, where we see what I'll call 20-person startups who are well over a billion-dollar valuation.
And that would never have been possible two years ago.

(25:51):
We're on our way to a one-person startup.

Andreas Welsch (25:54):
That's pretty exciting, right?
Having all these capabilities at your fingertips, whether it's something that you built yourself or it's a collection of different tools that you assemble.
And certainly if you're a one-person startup, right, it's pretty straightforward; there are pretty short communication pathways between your head and your hands, probably.
Yeah.

(26:14):
So just seeing where that goes.
Nonetheless, at some point they also need to think about security, and how to secure the data that they generate, that users give them, and so on.

Steve Wilson (26:25):
You do, and I know that we see that.
Look, vibe coding is part of this, and there has been so much controversy out there about the security of vibe-coded things.
The one piece of hope I will give everybody, having worked in what they call AppSec for years, which is just the industry that helps people secure their code, is: the code that gets written

(26:48):
today by teams of humans is not secure, by and large.
We try, and we should try, and we fight that battle.
But that battle is hard, and humans are bad at writing secure code.
So what agents let us do is often what humans do, much faster.
The piece of hope that I have going forward is there's so much

(27:10):
investment in these AI-aided coding things.
What I could never get the human coders to do was really care about security.
They really, they wanna build new stuff.
They wanna build exciting stuff.
They wanna build new features.
The bots don't really want to do anything.
And if, while we're training them, we incent them to write

(27:31):
secure code, they'll write secure code.
And I think in the next year or two, we will rapidly cross the point where we say: I don't want the humans doing the securing of the code.
I want the next generation of AppSec tools, which are artificially intelligent, to be the ones fixing those bugs, not

(27:52):
the humans.

Andreas Welsch (27:54):
Talk about rapid development in that space.
Speaking of rapid: seems like the last half hour has gone by in no time.
Steve, I was wondering if you could summarize the three key takeaways for our audience today before we wrap up.

Steve Wilson (28:08):
Sure.
So I think, first things first: understand the level of agency that you are giving to your agents.
Once you've moved past it just being a mouth, and the thing has fingers, think about what you're allowing it to do on your behalf, and what mechanism

(28:28):
it's gonna use to do that.
Think about the intelligence level that your bot has.
It's just as likely for you to get yourself into a high-risk situation by the bot making a bad decision as by being attacked by a malicious third party.

(28:49):
But the malicious case makes it even harder.
So you really need to think about: what data does it have access to?
What privileges am I giving it to execute on those decisions?
And then the third one, though, is: be aware of the opportunities.
Because the opportunities are massive.
You can't look at the first two and say, we don't know how to

(29:11):
fully secure these systems, so I'm just gonna ignore it.
You do that, and your name's gonna be Blockbuster or Sears Roebuck in a few years.
You need to start to learn how to do this.
Just be really conscious about those things in terms of data access and agency levels, find appropriate use cases, and start

(29:33):
to move forward, and I think you can change the trajectory of your business.

Andreas Welsch (29:38):
I love that.
It's very tangible advice and sums up our conversation really well.
Steve, thank you so much for joining us and for sharing your experience with us.
Again, I'm always amazed how quickly this topic of security evolves, and what all the different aspects are that now come into focus.
So hopefully, those of you in the audience now also have a better understanding of it.
So, Steve, again, thank you so much.

Steve Wilson (30:00):
Thanks Andreas.
It's always a pleasure.