
March 2, 2025 · 28 mins

In this episode of Peaceful Life Radio, hosts David Lowry and Don Drew engage in a profound discussion with futurist Michael Hanegan about artificial intelligence (AI) and its transformative impact on society. The conversation explores the rapid advancements in AI, including generative AI and machine learning, and their implications for various fields such as medicine, education, and intellectual property. Hanegan highlights the exponential growth in computational power and information, showcasing examples like Google's AlphaFold in protein modeling. The hosts and Hanegan discuss the potential benefits and risks of AI, the future integration of learning and work, and the democratization of education through modular and decentralized programs. The episode provides valuable insights into how professionals can adapt to AI advancements and leverage these technologies to solve complex problems and improve human well-being.

00:00 Introduction and Welcome
00:15 Introducing Michael Hanegan
00:51 The Evolution of AI and Moore's Law
02:15 Understanding Generative AI
06:10 Machine Learning Explained
07:48 AI in Education and Work
15:42 AI's Impact on Medicine
19:49 Future of Education
24:14 Navigating the AI Revolution
27:00 Conclusion and Final Thoughts

Visit the Peaceful Life Radio website for more information. Peaceful Life Productions LLP produces this podcast, which helps nonprofits and small businesses share their stories and expertise through accessible and cost-effective podcasts and websites. For more information, please contact us at info@peacefullifeproductions.com.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
David Lowry (00:00):
And hello, everyone.

(00:01):
Welcome to Peaceful Life Radio.
This is David Lowry, and with me today is my good friend, Don Drew.

Don Drew (00:06):
Yes, I am here, and it is quite a cold and snowy day in
Oklahoma City.

David Lowry (00:11):
That's reason enough to stay indoors for a couple of days.

Don Drew (00:14):
There you go.

David Lowry (00:15):
Don, I'm really excited about our guest today: Michael Hanegan.
And I'm telling you, Michael is a person you want to know.
I describe him as a futurist.
He's an intellectual property architect.
He's a founder of a movement for moral care.
He's a TEDx speaker.
He has an organization called Working for a World Worth Living For.

(00:36):
But I think of Michael as a wonderful person who's studying AI, the ins and outs of it, as he does his work helping people write and do intellectual property work.
So we're gonna be talking to Michael today.

Don Drew (00:51):
Okay.
So in 1965, a fellow by the name of Gordon Moore, one of the co-founders of the Intel Corporation, made a prediction that the number of transistors that could fit on a microchip would double about every 18 to 24 months.
Today we refer to Moore's law as saying technology doubles about

(01:12):
every two years.
Interestingly, up through around 2020, that was fairly true, but along comes this new thing we're all hearing about called artificial intelligence, or AI.
Just last week there was an AI summit in Europe where 61 countries signed a statement talking about how they would

(01:34):
support open, inclusive, transparent, ethical, safe, and so on, AI activities in their countries.
And today we're hearing all kinds of things about what AI is, everything from allowing students to cheat with impunity to killer robots.
Michael, tell us a little bit about yourself, and what is AI?

Michael Hanegan (01:55):
Yeah, thanks for having me.
I run a company called Intersections, which is a learning and human formation company.
And I teach AI and the future of work at Rose State College and the University of Central Oklahoma.
And then I advise K-12 districts and universities trying to figure out that new line between the future of learning and the future of work.

(02:15):
AI is not new.
We've been working on AI for a long time.
What's new in the general sense is what we call generative AI, a particular subset of artificial intelligence.
What we're seeing here is that we are looking at a technology that outpaces the speed at which we're used to things

(02:36):
advancing.
You talked about Moore's law just a few minutes ago, about how computing doubles every 18 to 24 months.
We're now essentially at Moore's law squared, which means every four to nine months we're doubling.
One way I like to talk about this is to say the computational power of Alan Turing's first machine that broke the Nazi

(02:59):
Enigma code could do 26 calculations a second.
And one of the more advanced chips from NVIDIA, one of the more valuable companies in the world because of AI, can do 420 quadrillion calculations a second.
So, to give a spatial metaphor for that, Alan Turing's machine

(03:19):
in one second could go the length of my arm.
The current Blackwell B200 from NVIDIA can go from here to Pluto and back three and a half times in one second.
So, we've reached that scale of capacity and computational power.
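
To make the doubling arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The figures are the episode's approximations (a 24-month classic doubling, a six-month "Moore's law squared" doubling, and the 26 versus 420 quadrillion calculations per second comparison), not measured benchmarks:

```python
# Back-of-the-envelope growth arithmetic using the episode's rough figures.

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """How many times capacity multiplies over a given stretch of time."""
    return 2 ** (months_elapsed / doubling_period_months)

# Classic Moore's law: doubling roughly every 24 months.
print(f"3 years at 24-month doubling: {growth_factor(36, 24):.1f}x")  # ~2.8x

# "Moore's law squared": doubling roughly every 6 months.
print(f"3 years at 6-month doubling: {growth_factor(36, 6):.1f}x")    # 64.0x

# Turing's codebreaking machine (~26 calculations/sec) vs. the
# Blackwell B200 figure quoted in the episode (~420 quadrillion/sec).
print(f"speedup: {420e15 / 26:.2e}x")  # roughly 1.6e16 times faster
```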

David Lowry (03:34):
And along with that, Michael, is the doubling and quadrupling of information that we have available to us as we conduct our scientific research and continue to write, as more and more people graduate with graduate degrees from around the world, and in places we had overlooked in the past.
So, we have an information explosion along with this

(03:57):
hardware explosion.

Michael Hanegan (03:59):
Yeah.
And so what we really have is this birth of an age of intelligence, right?
Where we now have technology that is capable of helping us use, at scale and at speed, the full wisdom, insight, and technical prowess of the whole of humanity.
Our medical literature doubles every 72 days.

(04:22):
So, the amount of learning that humans produce is impossible to keep up with as a human being, but these technologies enable us to work and learn at a speed and scale previously impossible.
One example of this I love is from Google DeepMind, their award-winning tool AlphaFold, which is for protein modeling and drug discovery.

(04:42):
We've been working on protein modeling since the 1970s.
You used to be able to leave your computer on the internet overnight and donate cycles to do protein modeling.
Right?
AlphaFold has now modeled all known proteins, which is approximately 1 billion years of PhD-time work, in about a year.

David Lowry (05:02):
Oh my goodness.

Michael Hanegan (05:03):
And now that tool is open source and available to the entire research community.
More than a million scientists use it every day.
The people who created it won the Nobel Prize in Chemistry this year.
So, we're living in this space where change is at an unprecedented scale.
It is unlocking things at a speed and in capability that we

(05:26):
had not anticipated.
And there's a couple of differences between generative AI and other technologies.
Our experience with software has always been, for our entire lives, that we made software to do something, and then that's what it did.
This is the first kind of technology where we discover its capabilities.
We say, oh, we didn't know that could happen.

(05:49):
And this opens up a different kind of way of engaging with technology.
I'm optimistic; I'm not doom and gloom, thinking it's all over for us.
I think that comes more from our pop culture movies.
We've watched maybe a little too much Terminator and I, Robot.
But I do think there are real risks, and I think there are tremendous upsides to explore.

Don Drew (06:10):
Michael, there's a term in AI called machine learning, which is some of what you've been talking about.
But when somebody uses the phrase machine learning, what do they mean?

Michael Hanegan (06:20):
Yeah.
I mean, part of what they mean by machine learning is this ability for algorithms and other forms of computation to do work where getting a correct answer or an incorrect answer enables them to improve.
This is some of the more basic machine learning, like, for example, when we created tools that could beat human beings at

(06:42):
chess.
We would learn from wins or losses about the better way to move.
So machine learning is this technology which enables us not to write something static that works or doesn't, but something that continues to be developed.
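
As a toy illustration of learning from wins and losses, in the spirit of the chess example and far simpler than any real system, here is a sketch in which a program is never told the best move; it infers one by updating its estimates after each outcome. The moves and win rates are invented for the demo:

```python
import random

moves = ["a", "b", "c"]
true_win_rate = {"a": 0.2, "b": 0.5, "c": 0.8}  # hidden from the learner
value = {m: 0.0 for m in moves}                 # learned win-rate estimates
plays = {m: 0 for m in moves}

for _ in range(5000):
    # Usually play the best-looking move; occasionally explore the others.
    if random.random() < 0.1:
        move = random.choice(moves)
    else:
        move = max(moves, key=value.get)
    won = random.random() < true_win_rate[move]  # play and observe the outcome
    plays[move] += 1
    # Nudge the estimate toward what actually happened (a running average).
    value[move] += (won - value[move]) / plays[move]

print(value)  # estimates drift toward the hidden rates; "c" emerges as best
```

The point is the mechanism: nothing in the code is edited after it ships, yet its behavior improves with experience, which is exactly the internal self-improvement ordinary software lacks.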

Don Drew (06:57):
A human side of that might be that I learn a lesson.
I know something about an action that I want to take or not take.
I want to avoid making that mistake again, and yet I manage to do it and fall in the same trap, make the same mistake twice.
The machine would not do that.
It would learn the first time and not make the same mistake the second time.

(07:17):
It might make another error, but it's going to learn from that error as well.
So with each iteration, it would get smarter.
Is that fair?

Michael Hanegan (07:25):
It can; it doesn't always get it the first time, but similar to humans, sometimes it takes more than once to make the pivot.
But the difference between machine learning and other forms of this kind of software-based technology is that it can improve.
If you want to improve other kinds of software, you have to change it.
There's no mechanism internal to itself to get better.

David Lowry (07:48):
Michael, how are you using AI in your work at Intersections and as an intellectual property architect?

Michael Hanegan (07:56):
I use this all the time in the coursework that I teach.
I'm teaching students not only about the societal and ethical questions that are raised by generative AI, but also how to use tools for research, for ideation and planning, for data analysis.
When I was in grad school, I was preparing to apply for a National Endowment for the Humanities grant, where I wanted

(08:18):
to do a digital humanities project that was going to take me about a year and a half and cost me about a quarter of a million dollars.
And I think I can do that entire project now for probably a couple hundred bucks in a couple of weeks.
So, the capacity for us to leverage learning is remarkable.
One practical example.
One of my favorite tools right now is a tool called NotebookLM.

(08:40):
It's from Google.
You can upload documents.
You can create podcasts.
You can ask questions.
I have a notebook that has probably 3,500 pages of text in it from stuff that I work with.
And every day I go in and essentially say, show me something I haven't seen before, and explore at a scale that I

(09:00):
couldn't.
I could spend the next six months reading all of that if I wanted to, but then I couldn't actually hold it in my mind.
But with this technology, it is accessible at my fingertips at any time.
And I think that's where it gets really, really interesting.
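
For the technically curious, the "ask questions over thousands of pages" pattern can be sketched in a few lines. This is a deliberately naive stand-in for the retrieval idea behind tools like NotebookLM, not how NotebookLM actually works; real systems use learned embeddings and a language model on top, and the corpus and question here are hypothetical:

```python
# Toy retrieval: split documents into chunks, score chunks against a
# question by crude word overlap, and surface the best matches.

def chunks(text: str, size: int = 500) -> list[str]:
    """Split one long document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def overlap(question: str, passage: str) -> int:
    """Shared-word count, a crude stand-in for semantic similarity."""
    q_words = set(question.lower().split())
    return sum(1 for w in passage.lower().split() if w in q_words)

def ask(question: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the k passages most relevant to the question."""
    all_chunks = [c for doc in corpus for c in chunks(doc)]
    return sorted(all_chunks, key=lambda c: overlap(question, c), reverse=True)[:k]

# Hypothetical usage over a stand-in corpus of notes:
notes = ["... thousands of pages of research notes ...", "... more notes ..."]
for passage in ask("what do my notes say about protein modeling?", notes):
    print(passage[:80])
```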

David Lowry (09:13):
One of the things that seems concerning at the same time is whether or not it can do creative work.
I believe we're beginning to see the first stages of actual creativity.
Not only does it learn from how we've written things in the past, but it can improvise.
You can say, I'd like to write in this kind of romantic style, but

(09:35):
fuse it with this postmodern look at something.
And it seems like it does a pretty good job of creating a new, hybrid product.

Michael Hanegan (09:43):
Yeah, I think one of the challenges that we have here is that this is so unlike any other technology we've encountered that we inevitably use language that makes sense for how humans do things.
And it's not necessarily because that's technically what's taking place, but it's the only kind of metaphorical language we have.
I am regularly surprised, or taken aback sometimes, by the

(10:06):
progress or the insight or the connections that are made.
Or recently, as a joke for a demo, I used NotebookLM to create a podcast, and I said, I need you to communicate this very serious technical information with as many dad jokes as possible.
And it was some literature about creativity.
In the podcast, these two people who don't exist, right, they're AI

(10:29):
generated, were talking about how humor is uniquely human, and that AI would never be able to make a dad joke.
Immediately after having made what I thought, as a dad, was a pretty good dad joke.
So, we have these moments where what we used to feel were the lines of this as a human domain get fuzzy sometimes.

(10:50):
And I think that's exciting.
And I think what we're also finding out is that these tools are more like us than we expected.
One of the examples that I love about this is, a year ago there was a lot of concern about this idea of using what's called synthetic data, which is where the machine creates data, and then you train the machine on the data.
They were worried that it would be like a snake eating its own

(11:13):
tail, right?
That it would fall apart if you did this.
And what we learned in the middle of all that, completely unrelated, from completely different scientific research, is that human beings run on synthetic data.
And what we learned was that when humans walk and run and play and fight in their dreams, they are training their actual

(11:36):
motor system for moving in the world.
So they never actually moved, but they did.
And we're finding in so many ways that the way that we think, the way that we plan, the way that we create is in some ways mirrored in some of these technologies, which I think is interesting.
I don't know how to feel about it, but it is curious for sure.

Don Drew (11:56):
That's amazing, David.
I'm afraid we may be out of business here.
Sounds like new hosts that do dad jokes.

David Lowry (12:02):
And they'll have really wonderful vocal qualities as well.
I don't know.
What can we do?
Michael, all three of us teach university students, and year before last, our university was absolute: we are not going to allow AI to be used.
That's cheating.
The line was drawn.
A year later, it was, we must learn how to work with AI, but

(12:26):
there was still this holdout that you would not use it for certain things.
I opened up my latest version of Word, and the first thing that comes up is an AI prompt: What are you writing about, and how can I help you?
And if we think that people are not going to use that in our classes, I mean, let's all get real here.
This tool is embedded in everything.

(12:46):
We were using it to correct our grammar for a while.
Now we're using it to help us find resources.
And then it's like, how do we organize these resources?
Then how do we summarize these resources?
Where do you see this going, from these very simple tools that we're using now to where it may go?

Michael Hanegan (13:04):
So, I think there are two possibilities, and this will depend largely on how people think about their learning.
There are some people who will absolutely outsource their learning to these tools.
They will no longer put in the effort to acquire the skills.
They will ask for what they need, and they'll copy, paste, and ship.
And I think those people will be at a profound disadvantage very,

(13:26):
very quickly.
But the other option is to say, what does it look like to use these tools as an integral and enhancing capacity in our work?
To do that, you have to better understand, at a deep level, the work you're trying to accomplish.
That's going to be the key differentiator.
So, I don't necessarily need to have a PhD in whatever to use

(13:49):
these tools in a meaningful way, but I do need to understand very clearly the problem I'm trying to solve, or the result that I'm trying to produce, or the question that I have.
This is similar to saying it doesn't do a lot of good to bring something that's more powerful than the scenario enables.
Right?
It doesn't make sense to bring Michael Jordan to the

(14:11):
recreational basketball league.
It doesn't make sense to bring a Ferrari to a go-kart track.
It doesn't make sense to bring something that is infinitely more capable than what you're actually able to use it with.
It doesn't help you.
You don't benefit from it.
So, I think the future of those who really can take advantage of this are those that can think deeply about their respective

(14:32):
area, and know how this can help them get to what they want.
In fact, one of the things that I love here: there was a mathematician named Richard Hamming who was a mentor to an obscene number of Nobel Prize winners.
And there was always this pivotal moment in these Nobel Prize winners' lives when they would go to lunch with Richard Hamming.

(14:52):
He would ask them two questions after hearing about their work.
He would say, what is the most important question in your field, and why are you not working on it?
And that would ultimately change the trajectory of so many of these people's careers.
And I think in a lot of our work, we're about to experience tools that are capable of dealing with the boring, the mundane, and the

(15:13):
routine.
And all that's going to be left for us are the more difficult and the more interesting questions in our work.
I think that's true for academics.
I think that's true for a lot of fields.
So, if I'm trying to think about the second half of my career, I'm saying, what are the problems we've always wanted to solve, but either didn't know how or didn't have the resources?

(15:36):
And my hunch is, now or very soon, we do have what we need to tackle some of those questions.

Don Drew (15:42):
Michael, let's zero in on one of those kinds of questions, or classifications of questions, that a lot of our listeners will be interested in.
You were mentioning proteins earlier.
Let's talk about medicine.
What are some of the practical applications of AI to medicine that are changing the way medicine is done now?

Michael Hanegan (16:00):
There's a wonderful essay from Dario Amodei.
He's the CEO of Anthropic.
His essay is called Machines of Loving Grace.
And he talks about what he calls the compressed 21st century.
He suggests that in the next 10 years, we will do a hundred years of medicine.
And that has two implications.
Obviously, we'll be tackling more problems and coming up with more

(16:23):
solutions.
But one concrete version of this is that we believe that eventually, and in my mind sooner than later, we will be able to do the early stages of clinical trials essentially virtually.
We'll already know how it's going to go before we actually get to the parts where we involve people, which can take

(16:43):
the development and release of drugs from decades to years.
And then if we change the way that we do this because of these technologies, maybe even less, right?
The other piece I think you're going to see is an increasing ability for the rapid personalization of medicine.
You see this in experimental places already.
Especially in some experimental cancer treatments.

(17:05):
You don't have treatment for this kind of cancer.
You have treatment for this particular gene sequencing of this person's cancer.
Like, this medicine is made for this person and this person only.
And this technology has the capacity to increase the scale of that work and to decrease the time

(17:26):
and cost of that work.
And so part of what we're seeing is huge advancements in diagnostics.
We already have AI tools that are better at diagnostics than doctors without tools.
And so we're hoping that the synergy of physicians and technology can really level up.
I saw a study recently where they were predicting breast

(17:48):
cancer up to 18 months earlier than a human could detect by themselves, when they paired doctor and technology.
And so they were having obviously much better outcomes in treatment, in prevention, in life expectancy.
So I'm very optimistic about where this is going to go in medicine.
I think it's one of the places that is most exciting.

(18:10):
I think that'll be most obvious to everybody in the next five to 10 years.

David Lowry (18:14):
It seems to me like there'll be a sorting out of what's important to know, vital to know, and what's something we can always look up and depend on.
But I'm hopeful that we'll work harder at teaching people how to ask good questions.
I have another concern, Michael, and I'm wondering what you might

(18:34):
think about this.
Our machinery will only be as good as the information that we train it with.
I'm very concerned about governments saying, we're not going to train it with certain kinds of information, or certain kinds of information would be forbidden.
So, there wouldn't be this free and open intellectual discussion anymore, but it would be highly formatted

(18:55):
in a way that's palatable for people in power.

Michael Hanegan (18:59):
I think it's always a concern how technology gets used in society.
I don't think that's a new problem.
What I do think is different with AI is that the gap between open source and closed source is not as big as it is in other forms of technology.
Your open source tools are oftentimes nearly approaching the full capacity of these closed source tools,

(19:21):
which essentially means that even if you're in a country where some of these models are unavailable, you have the capacity through open source tools to have your own.
You can actually have them for yourself.
While the closed models and companies are still at the frontier, that gap is not that big in this space.

(19:42):
And so I think there will always be some capacity to access things that are maybe less restrictive in that way.

Don Drew (19:49):
Michael, all three of us are educators.
We have grown up through an age where there was a body of knowledge that we were all expected to know.
We all took history classes, we all took English classes, we all took math classes.
I'm asking you to play futurist for just a second.
How is education going to evolve?
What is education likely to look like

(20:11):
10 years from now?
Give us your best thoughts on that.

Michael Hanegan (20:14):
Yeah, I think in a decade, education looks much more decentralized.
It's not that you have to go to this place to get that, or this place to get that, but that you'll be able to get this particular piece of learning in a whole bunch of places, as opposed to it being buried in a degree or additional kinds of hoops that you have to jump through to get what you need.

(20:35):
I think you'll see an acceleration of things like micro-credentials or modular programs, where you stack together what you need as you go.
I also think you're going to see a re-blending for the first time.
We didn't do this in the industrial revolution.
In the industrial revolution, we separated learning and work.
You went to school for however long you were going to go.

(20:56):
My grandfathers went until sixth grade.
Some people go until high school.
Some people go to college.
Some people go to graduate school.
But you go to school, and when you're done with that, then you go to work.
That's how the industrial revolution has worked for us.
The future of learning and work is integrated.
There's not going to be, I did my learning and now I do my

(21:16):
work.
It's going to be, learning is an essential part of my work.
And how we navigate that in the future is going to be really, really interesting.
Because industry does not have the infrastructure for the amount of learning that we're going to have to do in the future as this technology changes the way that we work.
So, I think it'll decentralize.
I think it'll become much more modular and small.

(21:40):
And then I also think there will be less of an emphasis on expertise and more of an emphasis on proficiency.
Expertise has been the gold standard of higher ed forever.
You have to be an absolute beast or monster at whatever you're learning to have any kind of credibility.
And most of our work in the real world does not use expertise.

(22:01):
It actually uses this kind of competency and proficiency.
I like to talk about basketball in this way: in Game 6 of the NBA Finals, when Michael Jordan is shooting free throws, he's not using his expertise.
He's using this proficiency that he's built since he was a young man.
Most of our work doesn't require our expertise.
We don't have to turn that part of ourselves on often.

David Lowry (22:23):
Some people are concerned that with AI and the knowledge explosion, why not create a robot who will do all these things for us?
We probably are looking for some kind of machinery to do the work that we need done.

Michael Hanegan (22:37):
Yeah, as robotics increasingly accelerates, we'll have a gap between what we can do, what we must do, and what we don't want to do.
And that will have a direct impact on what things cost and how accessible they are.
In the U.S., we often have a massive shortage of affordable housing.
And part of that is that it's expensive to build housing.
With technology, both robotics and other kinds of automation, the

(23:01):
cost of creating housing can drop significantly.
But the challenge is that any of these innovations introduce changes that not everybody's happy about.
If we have robotics that can build a home in three or four days, because they work 24/7 with perfect efficiency and safety, the cost of a home becomes 10 grand.

(23:22):
The people who can't afford a home are going to be really excited.
And people who bought a house in the 1970s for a nickel and a smile, and it's exponentially more valuable than when they bought it, are going to be really upset when their property value declines, because a house of equal quality now can be built for a fraction of the price.

(23:44):
How we navigate those tensions is going to be really difficult.
So, yeah, it opens up lots of options for us.
And part of the challenge is that we're advancing faster than we can plan for what this might mean.
We are used to technology that advances incrementally.
Right?
A lot of changes happened in our lifetimes technologically, but

(24:04):
it didn't feel fast in the moment.
Right now, it feels exhaustingly fast.
And the reality is that if you could look behind the curtain, it's faster than it feels.

Don Drew (24:14):
What kind of advice might you give our listeners as to how they should process what they're hearing, what they're reading?
They're inundated now with information about AI, and you've given a lot of knowledge, a lot of material to think about.
But say, your grandfather, what would you advise him to do or to think about AI, so as not to freak out about it, because it seems like

(24:37):
it's pretty crazy.

Michael Hanegan (24:38):
If you're still in your career, I would encourage you to get very clear on where your expertise lies, then try and understand, as quickly as you can, where these technologies will impact your particular field.
So, if you've been an accountant forever, a whole lot of that is going to disappear in the next decade, because it can be

(24:59):
automated.
We can describe well enough how this works that we can eventually teach a tool to do what you do.
What we can't do right now is move everybody over to that new world.
So, if you already have expertise in accounting, and if you understand how these technologies work, maybe the second leg of your career is helping people navigate whatever

(25:21):
that transition is going to look like for the next 10 years.
So, AI and your area is the place where people can really build some momentum and stability in whatever's coming, so that your expertise is not useless but will be applied differently as these technologies emerge.
My colleagues who work in medicine, their expertise is infinitely valuable.

(25:42):
But the timetable on which they work is going to change dramatically.
The kinds of questions they can explore are going to alter over time.
If you're not in a place where you're going to work in the near future, I would encourage you to find some ways that you can use these technologies to serve your fellow human beings.
How can you make sure, with what power and influence you have, or with the

(26:06):
way that you vote, that these tools are used to take care of those in your family and those who you have influence over, who are still working or who are still in school, to really plug in and make a way for themselves?
This can be a really remarkable time in human history, where we solve a lot of problems that we haven't had the time, the

(26:27):
energy, or the intelligence to solve.
I love this quote from David Graeber, and maybe this is a great place for us to think about this.
He says, the ultimate hidden truth of the world is that it is something that we make, and that we could just as easily make differently.
I think that's where we find ourselves.
There are some things in our world that are as they should

(26:49):
be.
And there are a whole lot of things that could be better.
And I hope we'll take advantage of what's here and what's coming to think meaningfully about the kind of world that we want to cultivate and sustain going forward.

Don Drew (27:00):
Michael Hanegan, thank you very much for being with us today.
That was a wonderful insight into artificial intelligence.
And I want to thank you for being with us.