
February 9, 2025 • 35 mins

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Harry (00:01):
The data doesn't make the decision.
The data's there to create an informational context for you to use your good judgment. And your good judgment is there because we, as beings, have evolved to collect information through all sorts of senses,

(00:23):
conscious and unconscious.

Narrator (00:30):
You're listening to Traction Heroes.
Digging In to Get Results with Harry Max and Jorge Arango.

Jorge (00:40):
Harry, how have you been, my friend?

Harry (00:43):
I've been great, Jorge.
It's really nice to see you again.

Jorge (00:46):
It's always great to catch up with you.

Harry (00:49):
How are you?

Jorge (00:51):
I'm doing well.
I was taking a walk earlier this morning. And I was thinking, I'm meeting later today with Harry and I'm wondering what text to bring to our conversation. And I had checked out a book from the library through Libby, and one of the things that happens when you check out a

(01:11):
book through this means is that the books come with an expiration date. And it's usually hard to renew them, especially if it's a book that is in high demand. And I was taking a bit longer than normal finishing the previous book, which was also from the library, so it was due. So I had to finish that before I got to the other one.

(01:32):
And I finally finished the other book yesterday and I started reading this book again this morning. And it's a book that I happen to own as an audio book. And as I was listening to the audio book, I thought, "Oh my gosh, this would be perfect for a conversation with Harry." And I felt kinda bad because I don't wanna bring a book that I

(01:54):
haven't fully read. But then I had an interesting experience in that I opened the Kindle app on my phone to highlight something that the author had said in the audiobook. And I realized that, even though I had got this book from the library, I already had annotations in the tail end of the book. And I was like, "Wait a second, I've read this before!" And not

(02:18):
only have I read it before, I had already bought it as a Kindle ebook. So I'm at that stage towards either senility or having too many books in that I am checking out from the library books that I already own and feeling the pressure to read them quickly

(02:39):
even though I own them.

Harry (02:40):
That's so funny.
I know that feeling of "I've gotta buy a book!" And I go buy the book and then I realize this: I have to put it on the shelf right next to the other copy of it I already have.

Jorge (02:51):
It's a long-winded story, but anyway, it's a way of introducing this text and, like I said, even though I had already read the book, it had been several years since I read it. And it's a great book. I enjoyed it a lot the first time, and now that I'm

(03:12):
revisiting it, I'm like, there's a lot of really valuable stuff here. And it's one of those things that, this probably happens to you as well, but there are certain books that I've reread and when I revisit the book, the book is the same but I'm different and I take different things out of it. Maybe the context is different. And I'm finding that to be the case with this book.

(03:34):
So I'm gonna read a fragment from this thing, and then I'll tell you what the book is and who it's by.

Harry (03:41):
Okay.

Jorge (03:41):
So, here goes: "In theory, you can't be too logical, but in practice you can. Yet we never seem to believe that it is possible for logical solutions to fail. After all, if it makes sense, how can it possibly be wrong?

(04:03):
To solve logic-proof problems requires intelligent, logical people to admit the possibility that they might be wrong about something. But these people's minds are often most resistant to change, perhaps because their status is deeply entwined with their capacity for reason.

(04:24):
Highly educated people don't merely use logic, it is part of their identity. When I told one economist that you can often increase the sales of a product by increasing its price, the reaction was not one of curiosity, but of anger. It was as though I had insulted his dog or his favorite football

(04:45):
team." And I'm going to skip ahead now a little bit. "If this book provides you with nothing else, I hope it gives you permission to suggest slightly silly things from time to time. To fail a little more often. To think unlike an economist. There are many problems which are logic-proof, and which will

(05:07):
never be solved by the kind of people who aspire to go to the World Economic Forum at Davos."

Harry (05:14):
Wow.
I have not read that book.

Jorge (05:19):
All right, so this is a book called Alchemy: The Dark Art and Curious Science of Creating Magic in Brands, Business, and Life by Rory Sutherland.

Harry (05:29):
Oh, Sutherland's a genius.

Jorge (05:32):
Yeah, he's an interesting fellow.
And, for folks listening along, you might have caught the reference to a football team. So he's a Brit and, I think he's retired now, but he worked at Ogilvy for a long time. So, advertising. And again, I'm revisiting the book. I read it many years ago. And what I remember from this book is that it is, as suggested

(05:57):
by this passage, a call or a reminder that there are decisions that you can make through the use of data, but there are decisions that kind of defy the use of data, or I'll say it more strongly, maybe basing the decision on data might lead

(06:20):
you astray.

Harry (06:22):
Yeah, it is funny, it reminds me... a friend of mine often talks about, this is, you might know Mark, my friend Mark Interrante. He talks about things in terms of churches. Like he'll talk about the religion of work. He'll talk about what is your religion of work?

(06:43):
Or he'll talk about like the Church of Reason or something like that. I can't actually think of the one that I'm trying to reference right now in response to the Sutherland reading and the concept here, but the idea that we tend to think that rational

(07:04):
logic...
The question is who's the "we" in the statement I'm about to make: that we tend to think that rational logic will lead us to rational decisions and rational action is true for some people and not true for many others. And in fact, the idea itself that the rationality behind

(07:31):
these decisions and these actions is actually what is ultimately informing those decisions and actions has been proven to be fundamentally flawed, in that it is actually the emotional part of the brain which is the decider.

(07:52):
And in the neurobiology of the physical brain, when the sections of the brain that allow emotions and logic to communicate, and I can't bring those forward into this with appropriate references right now, are separated,

(08:13):
somebody who still has access to their rational mind but not access to their emotional mind can no longer make decisions. And yet, there is this question of "we" that Sutherland brings up here because this association into the thinking person or the

(08:34):
rational mind is in effect its own blind spot. And the people that are associated into that or people that have identified with that absolutely fail to realize that many, if not most, people are not driven to interact with the world that way, but even they themselves are not making

(08:56):
decisions that way.

Jorge (08:59):
That's right, and we operate in relationship with other human beings who are also making decisions from emotions. In our last conversation, we talked about learning to trust your gut in some ways. This notion that you tune in to what your body's telling you,

(09:22):
right?
The mind is not a computer that is making an analysis based on data points in the same way that a computer algorithm is. Our bodies are part of our decision-making apparatus, and there's stuff like pheromones in the air that will affect how you

(09:42):
understand what is going on, right? And there's so much more there. I remember hearing at one point, my YouTube algorithm served up an interview with Ayn Rand. I don't think I had ever heard her voice, and I played the first few minutes of it.

(10:02):
And the reason I tuned out, and this was a while back, so I'm gonna probably do her an injustice by paraphrasing, but the reason I tuned out is that one of the first questions the interviewer asked was something like, "What is your vision of an ideal man?" And her answer was something along the lines of, "An

(10:23):
ideal man does not let emotions interfere with their decision-making." And again, I'm probably paraphrasing that, but that explains so much to me about... I haven't read a lot of Ayn Rand books. I read The Fountainhead when I was in school.

(10:44):
But it brought into contrast the worldview which I imagine her espousing with what I think of as a more kind of pragmatic approach that involves making use of the entirety of your capacities, including your emotions, right?

(11:06):
We do not make decisions based just on the data. We would be doing ourselves a disservice if we make decisions based just on the data.

Harry (11:15):
Part of that is because the data we have is not all the data there is or are. I never remember if it's "data there is" or "data there are." I think it's "data there are." At any rate, the point is that data are helpful and possibly informative and they certainly put us in a stronger position to

(11:41):
help people understand how we've arrived at some informed place. However, the data doesn't make the decision. The data's there to create an informational context for you to use your good judgment.

(12:02):
And your good judgment is there because we as beings have evolved to collect information through all sorts of senses, conscious and unconscious. And so much of our decisioning is unconscious that for us to

(12:22):
hallucinate that we can look at data and make a conscious decision that's rational and end up in a place that's extremely predictable is foolish, right? Because a) as we said, much of the data aren't even available. Just because we have a set of data doesn't mean that's all

(12:42):
that's there.
Moreover, that data may have come through a very limited channel, and it's really just a set of points that help us rationalize a decision that we've already made or inform a decision that we're going to make and then act on. But in between all of that is the judgment that we bring here,

(13:06):
the ethics that we bring here, the contextual awareness of both whatever has happened, what is happening in the moment, and the possibilities of what could be happening going forward. I touch on this, it turns out, in the very last chapter of my book, Managing Priorities, where I talk about the role of AI in

(13:30):
prioritization.
And if we assume that AI can understand enough to be ethical, if we assume that AI can understand all of what's necessary to understand a highly complex and dynamically changing

(13:52):
environment then, sure, maybe AIs will put us in a strong position to not have to use our own judgment. Otherwise, we're gonna have to come up with accommodations to insert our humanity. And our humanity, to bring it all the way around, is all of us. It's not just our rational minds.

Jorge (14:15):
It's funny you mention AI.
I talked about a book that I had to return before getting to Sutherland's book. And that book was Yuval Noah Harari's new book, Nexus, which I would say is about the degree to which the shape of human society is defined by the structure of our information

(14:36):
networks.
And his argument is that for all of human history up until now, these information networks have been characterized by means for capturing, sharing, and disseminating information, but

(15:00):
the means for doing that have not been actors in producing information themselves. And that is changing now, right? So AI does put us in the territory that we discussed in our last conversation. It's kind of unprecedented, right? The situation that we find ourselves in with AI is unprecedented.

(15:21):
And we are trying to do the best we can, given what we know, where we come from, our long history with information networks with other humans. But we now face the prospect of being in information networks with non-human actors.

(15:43):
You said data don't make the decisions, but we do seem to be heading into a world where more and more decision-making is relegated to the data. And I've had the experience of working on projects with stakeholders who are very data-driven. I have one particularly in mind who would not accept design

(16:05):
directions that were not informed by statistically significant data. And I didn't do it at the time, which was probably a mistake, but my gut feeling and my feeling about it in retrospect was that I should have pushed back more because in some ways the designerly approach to decision-making

(16:29):
taps into other means of knowing the world besides merely processing quantitative data.

Harry (16:39):
Yeah, that's a hundred percent right.
I think perhaps in one of our previous conversations we touched on the iPhone. And having worked at Apple and knowing the stories of the iPhone's development, I can't imagine for the life of me that the iPhone would exist today had Apple's leadership demanded

(17:01):
that the decisions around the iPhone be informed by data. We would have no iPhone today. And the idea that somebody would require statistically relevant data without really understanding all of the

(17:23):
probabilistic scenarios that could be informed by other data to enhance the decision options that they have makes very little sense to me. I think about Douglas Hubbard's book, How to Measure Anything, and his approach to...

(17:44):
brilliant man, and his applied information economics, an approach to looking at speculative futures and different scenarios and dealing with decisions in situations of uncertainty. I think you bring up a super interesting point, which is something I've never seriously considered, which is how do we

(18:07):
get people off that hook. That decision maker, what do we say to them? What are our talking points and how do we back those up? Because these are claims that we make as designers or information architects or product managers or leaders or whatnot. These are claims that we make about, "you can't use the data to make the decision solely, you have to take other things into

(18:29):
consideration." But I don't know that we have the talking points laid out in a way that a leader in the situation that you're describing would accept. And until we have those talking points, we are gonna be at the mercy of their demand for that data.

Jorge (18:50):
That's a great prompt.
I think that a good use for the remainder of our time here would be to brainstorm a little bit about what those talking points might be. And I'll start by saying that one approach that might be helpful, and again, I can't say that I used this, but in retrospect I wish I had, which is to acknowledge the fact that

(19:14):
there are people who do want this sort of security. And I pause there because I'm imagining like air quotes around the word "security" here, but the security that is provided by quantitative data. Acknowledge that there are people who do want that, for

(19:37):
whom that is important.
And making clear to those folks that data are important, that there is a role for data in the decision-making process, but that role is not evenly distributed throughout the entirety of the process, right? There are certain decisions for which data is more appropriate than others.

(19:58):
And I would venture that the appropriateness is highly dependent on the degree to which the ideas are developed. So you talked about the iPhone: I could imagine A/B testing aspects of the final design, but I can't imagine doing

(20:18):
quantitative research on the exploration of the solution space, the initial part of the process, where you're trying to figure out what the heck it is that we're making here, right? That feels to me like a part of the process that demands a much more intuitive, visionary, opportunistic approach to

(20:40):
decision-making, which might make those folks nervous, right? And you wanna call it out and you wanna say, "Hey, the phase of the process we're in is not quantitative; we can't quantitatively figure out our way through this forest. So it's gonna be uncomfortable for a little while. Hang tight; we're gonna get there."

Harry (21:00):
I think that's a really good point. And I thought of so many things as you were talking. One of them is, in working with Doug Hubbard, who wrote How to Measure Anything, I brought him in as a consultant to help with a project at AllClear ID, where I was the head of product for a number of years. And Doug was working to help us establish a very complicated

(21:26):
decisioning model for looking at a set of potential events that could happen. And funny, Doug's the kind of guy that the insurance industry calls when they can't figure something out, right? He's just an amazing guy. And he once said to me, he goes, "Look, I didn't write a book on

(21:49):
how to measure most things, I wrote a book on how to measure anything." And I'm like, "Yeah, but Doug, what you're really talking about in this case is clearly identifying the subjective and then building some kind of model to establish some relevant quantitative framework to support that

(22:13):
subjectivity." And he goes, "Exactly!" Which is what you would wanna do in a situation where you can't know something. He goes, "You can always know something. The question is, to what extent?" And he talks a lot about the value of information. And about how sometimes the value of information is very high.

(22:35):
And sometimes the value of information is very low. And it falls on a spectrum. And I think about how a highly data-driven decision-maker can be the kind of decision-maker that ends up driving the kind of changes you see at a company like Boeing, where the decision framework around the design and development, execution,

(22:57):
engineering, and operational decisions leads to doors falling out at 38,000 feet because they weren't thinking with a broad enough frame. And that broad enough frame had nothing to do with numbers and data; it had to do with ethics and humanity. Sure, if you open up the aperture, you can establish a

(23:19):
quantitative model around it and look at brand equity and impact, and you can look at stock price and all sorts of stuff like that. But most of those arguments are used in reverse to support the kind of changes that over time led to the kind of catastrophe that we're now seeing unfolding at Boeing, where you bring in a Jack Welch cost-cutting acolyte, an executive who's really there

(23:44):
to wring out the innovation and really drive up the stock price over time by cost cutting, and you end up in a Sidney Dekker model. There's a book that Sidney Dekker wrote called Drift into Failure, where the decisions that are being made are very data-driven, but they're data-driven in a frame that's

(24:05):
too small.
And by looking at that frame, it appears that those decisions are being made well. But in fact, over time, as they play out, they have horrible consequences because nobody's stepped back and looked at the broader subjective context in which those decisions are being made.

(24:25):
And that leads me to this course I took just a couple weeks ago. I flew out to Boston to take a course with Ed Catmull and Hal Gregersen. Ed Catmull was the co-founder of Pixar Studios, of Toy Story fame, and he ended up being the president of Disney Animation.

(24:48):
And they were teaching together a wonderful course called Embracing Uncertainty. And much of it's embodied in the expanded edition of Creativity, Inc., Ed's book, which is now out; highly recommended, by the way. A dramatically good book on leadership, which I think speaks

(25:08):
to many of the issues we're talking about. But it doesn't necessarily give us a script to maybe fill in the Mad Libs, story points, or talking points about how do you respond to an executive who says, "Look, I gotta have the data. Can't make a decision without the data." Because knowing what those talking points are gonna be, and having them laid out in

(25:31):
a way that allows somebody in that sort of fixed frame, that smaller-aperture fixed frame, to say, "Huh, okay, in this particular case, I'll set aside the data because we can't necessarily know and we're gonna have to allow human subjectivity

(25:52):
and judgment to drive some of this." That brings us all the way back around to the question that you raised, which is maybe we should be brainstorming on the talking points.

Jorge (26:04):
In hearing you talk about it, I could think of another talking point, which is that if the purpose that you are putting the data to is optimizing some kind of solution or some kind of response to a problem, it behooves you to be clear on what

(26:24):
it is that you're optimizing toward. In the case of Boeing, and I don't know in detail the situation there, but as far as I've read, they were optimizing towards cost savings, right? And in Nexus, Noah Harari talks about the fact that social media

(26:44):
optimized their algorithms for engagement and that had the second-order effect of making contentious, irritating, hate-filled content rise to the top, because that's what drives engagement. So maybe something like employing the five whys tool,

(27:06):
right?
Ask why.
"Why are we doing this?
What is it in service to?" And then maybe that can help you in these conversations with data-driven stakeholders to... again, I wanna be very clear, I'm not arguing that we should not use data at all. I think that data are super useful. I just wanna make sure that we're using them adequately for

(27:28):
the right purposes.
And maybe this is a way to help steer that conversation.

Harry (27:33):
Yeah, I'm thinking of a couple things.
Number one is, I realize I've stopped using, I think it was the TPS, the Toyota Production System, the lean world that brought up the Five Whys. But I've stopped using that. I've replaced it with the Five Hows, and I've found that I get to a much better causal understanding of things with a how than a why.

(27:53):
And so, instead of saying, "Why did that happen? Why did that happen? And then why did this happen?" recursively, I'll say, "How did that happen and how did this happen?" And it tends to lead more directly to either a people, a process, a technology, or a decision. The whys tend to like knock people back on their heels a little bit and maybe put people in a defensive psychological

(28:16):
position.
So I found those a lot less useful; therefore, I've stopped using them. The other thing this makes me... This makes me think of two other things. Number one was, I think a question that could be asked, as a talking point as it were, is, "What are the unintended negative consequences of relying on this kind of data?

(28:39):
What blind spots could it be covering up or preventing us from seeing?" And that brings me right back to the Hubbard question of, "Okay, so we may not have all the data that we want, but if we had data to support the subjective elements of this
(29:02):
that we think are important, what data would we need? A) What are those subjective elements? And B) What data would we need to support those? And then, what is the value of that data? Is it high, medium, or low, or something in between?" If we can pinpoint some subjective element or characteristic and then identify the data, putting a Bayesian model behind that might

(29:25):
take a lot of work, or might not take much work at all. But if we can assign a high value to that information, maybe it allows us to say, "Alright, fine. We have all of this other data that's informing this potential decision. What other subjective information would be valuable? Where is it high value?

(29:46):
And do we have the time and resources to go do a little bit more work to establish some clarity there?"
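
For readers who want to see the value-of-information idea Harry alludes to made a little more concrete, here is a minimal sketch. It is not from the episode: the scenario, payoffs, and probabilities are invented for illustration, and it computes the textbook expected value of perfect information rather than Hubbard's full applied information economics method.

```python
# Illustrative sketch only (not from the episode). The decision, payoffs,
# and probabilities below are made up. It computes the expected value of
# perfect information (EVPI): an upper bound on what resolving an
# uncertainty is worth before investing in more data-gathering.

def expected_value_of_perfect_information(payoffs, probs):
    """payoffs[action][state]: payoff of `action` if `state` turns out true.
    probs[state]: current (subjective) probability of each state."""
    states = list(probs)
    # Best expected payoff if we must decide now, with only our priors.
    best_now = max(
        sum(probs[s] * payoffs[a][s] for s in states) for a in payoffs
    )
    # Best expected payoff if we could learn the true state before deciding.
    best_with_info = sum(
        probs[s] * max(payoffs[a][s] for a in payoffs) for s in states
    )
    return best_with_info - best_now

# Hypothetical decision: ship a redesign now vs. wait a quarter,
# uncertain whether demand will be high or low.
payoffs = {
    "ship_now": {"high_demand": 10.0, "low_demand": -4.0},
    "wait":     {"high_demand":  3.0, "low_demand":  1.0},
}
probs = {"high_demand": 0.4, "low_demand": 0.6}

print(expected_value_of_perfect_information(payoffs, probs))  # 2.8
```

In this invented example, resolving the demand question is worth at most 2.8 payoff units, which is the kind of number that helps answer Harry's closing question about whether the extra clarifying work is worth the time and resources.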

Jorge (29:53):
It's funny.
In our last conversation, we talked about our responses to unprecedented situations. And part of that entailed knowing enough to realize that you are in the domain of the unprecedented. And that seems to be rearing its head here in the sense that we

(30:15):
might be operating based on the data we have and not on the data we need. And we might be in the thrall of the fact that we do have data and think ourselves in possession of greater expertise than we actually have because we don't have the data that we need

(30:37):
and we don't even know that we need it. And this is where I think that a more designerly approach might be useful. And I'm gonna tie this into a learning from the field of information architecture, Marcia Bates's berrypicking model, this notion that when people search for information, it's not

(30:58):
very common that they know exactly what it is they're looking for. The more common scenario is that you clumsily ask a first question, and based on the answer you get back, you learn enough to ask the second question, right? And you progressively become a more skillful researcher of the

(31:22):
situation.
By getting feedback in the form of search results, getting feedback in the form of conversations with other people. Little by little, you acquire the necessary conceptual models, the necessary language, to undertake the next step, right? And it might be that a potential answer for these data-driven

(31:47):
folks is to say, "Look, we are going to be undertaking a design process that is going to teach us what kind of data we need to be capturing and how to measure that data and what to do with it." So, perhaps we can give them some degree of confidence that we will build the necessary data muscles.

(32:07):
But we have to work our way toward it because, at the beginning, we just don't know what we don't know. And we might be trying to force decisions that we are unable to act on based on what we have.

Harry (32:24):
Yeah, a hundred percent.
And that also brings us right back to another point we discussed in our previous conversation, which is I like to tell people, "Look, certainty is the enemy of truth." And when I'm dealing with an executive that says, "Look, I need the data to make the decision," and then they get to the point where they feel like they can make the decision and they assert that

(32:48):
decision with a high degree of self-assurance and confidence, the job there, my job there as a designer, information architect, executive coach, consigliere, whatever role I'm in, is to, without saying, "Hey, look man, certainty is the enemy of truth," get them to accept the possibility that they will

(33:09):
not know if they're wrong until after the fact, to get them to open up to the possibility that in this sense of self-assurance that they have lies the seeds of error.

Jorge (33:24):
Hubris.

Harry (33:26):
Well, that's the most extreme version of this.
Yeah.

Jorge (33:28):
Well, you talked about Boeing, right?
Again, I don't have insights into it, but from the outside it seems to be the tail end of a disastrous series of decisions, probably made without engaging in the sort of humble introspection that we're discussing here.

Harry (33:49):
Yeah, that's my sense.
Fundamentally, it came down to a bad hiring decision, I believe. And that's a board-level thing, right? That's a governance problem. But it was all enacted by a CEO with a philosophy. And that philosophy may have worked well in one context, AKA,

(34:09):
General Electric, but it was not appropriate in the context of Boeing. They're working their way through how to fix that now; they've got a lot of work to do.

Jorge (34:22):
We're obviously not going to fix Boeing's problems, but hopefully we can help folks navigate these sorts of situations better. And again, I wanna emphasize, I didn't bring this reading with the idea of pooh-poohing data. Data are hugely important. They play a hugely important part in our future with AI.

(34:44):
But I think that they must be used thoughtfully and we will be working with folks who are very data-driven and we need to find ways of being good contributors and helping them be good contributors. And that requires understanding the role of data and the fact

(35:04):
that there are other ways of making decisions.

Harry (35:06):
Yeah.
Yeah, no doubt.
It's been fabulous.
Once again, Jorge, I just love the conversation, and I so appreciate you making the time for these.

Jorge (35:16):
Oh, same here.
I always learn something new when I talk with you. I'm gonna take the Five Hows and implement those. I've retired the Five Whys, Harry. Thank you!

Harry (35:27):
You're welcome.
I'll be very curious to hear what your experience is. I've heard some good things from other people.

Jorge (35:32):
Awesome.
Thank you for sharing.

Narrator (35:38):
Thank you for listening to Traction Heroes with Harry Max and Jorge Arango. Check out the show notes at tractionheroes.com. And if you enjoyed the show, please leave us a rating in Apple's Podcasts app. Thanks.