
October 8, 2024 34 mins

In this episode of Smart Talks with IBM, Jacob Goldstein speaks with Rebecca Finlay, CEO of Partnership on AI, about the importance of advancing AI innovation with openness and ethics at the forefront. Rebecca discusses how guardrails — such as risk management — can advance efficiency in AI development. They explore the AI Alliance’s focus on open data and technology, and the importance of collaboration. Rebecca also underscores how diverse perspectives and open-mindedness can drive AI progress responsibly.

This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions.

Visit us at https://ibm.com/smarttalks

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Hello, Hello, Welcome to Smart Talks with IBM, a podcast
from Pushkin Industries, iHeartRadio and IBM. I'm Malcolm Gladwell. This season,
we're diving back into the world of artificial intelligence, but
with a focus on the powerful concept of open: its possibilities, implications,
and misconceptions. We'll look at openness from a variety of

(00:26):
angles and explore how the concept is already reshaping industries,
ways of doing business and our very notion of what's possible.
In today's episode, Jacob Goldstein sits down with Rebecca Finlay,
the CEO of the Partnership on AI, a nonprofit group
grappling with important questions around the future of AI. Their

(00:48):
conversation focuses on Rebecca's work bringing together a community of
diverse stakeholders to help shape the conversation around accountable AI governance.
Rebecca explains why transparency is so crucial for scaling the
technology responsibly, and she highlights how working with groups like
the AI Alliance can provide valuable insights in order to

(01:10):
build the resources, infrastructure, and community around releasing open source models. So,
without further ado, let's get to that conversation.

Speaker 2 (01:28):
Can you just say your name? And your job.

Speaker 3 (01:30):
My name is Rebecca Finlay. I am the CEO of
the Partnership on AI to Benefit People and Society, often
referred to as PAI.

Speaker 2 (01:40):
How did you get here? What was your job before
you have the job that you have now?

Speaker 3 (01:45):
I came to PAI about three years ago, having had
the opportunity to work for the Canadian Institute for Advanced Research,
developing and deploying all of their programs related to the
intersection of technology and society. And one of the areas

(02:06):
that the Canadian Institute had been funding since nineteen eighty
two was research into artificial intelligence.

Speaker 2 (02:15):
Wow, early. They were early.

Speaker 3 (02:18):
It was a very early commitment and an ongoing commitment
at the Institute to fund long term fundamental questions of
scientific importance in interdisciplinary research programs that were often committed
to and funded for well over a decade. The AI

(02:40):
Robotics and Society program that kicked off the work at
the Institute eventually became a program very much focused on
deep learning and reinforcement learning, neural networks. All of the
current iteration of AI, or certainly the pre-generative-AI
iteration of AI that led to this transformation that we've

(03:05):
seen in terms of online search and all sorts of
ways in which predictive AI has been deployed. So I
had the opportunity to see the very early days of
that research coming together. And when, in the sort of early two
thousand and tens, compute capability came
together with data capability through some of the Internet companies

(03:29):
and otherwise, we really saw this technology start to
take off. I had the opportunity to start up a
program specifically focused on the impacts of AI in society.
There were, as you know, at that time, some concerns
both about the potential for the technology, but also in

(03:50):
terms of what we were seeing around data sets and
bias and discrimination and potential impact on future jobs. And
so bringing a whole group of experts, whether they were
ethicists or lawyers or economists or sociologists into the discussion about
AI was core to that new program and continues to

(04:10):
be core to my commitment to bringing diverse perspectives together
to solve the challenges and opportunities that AI offers today.

Speaker 2 (04:19):
So specifically, what is your job now? What is the
work you do? What is the work that PAI does?

Speaker 3 (04:25):
I like to answer that question by asking two questions,
First and foremost, do you believe that the world is
more divided today than it ever has been in recent history?
And do you believe that if we don't create spaces
for very different perspectives to come together, we won't be

(04:45):
able to solve the challenges that are in front of
the world today. My answer to both of those questions is: one, yes,
we're more divided; and two, we need to seek out
those spaces where those very different perspectives can come together
to solve those great challenges. And that's what I get
to do as CEO of the Partnership on AI. We

(05:08):
were founded in twenty sixteen with a fundamental commitment to
bringing together experts, whether they were in industry, academia, civil society,
or philanthropy, coming together to identify what are the most
important questions when we think about developing AI centered on
people and communities, and then how do we begin to

(05:31):
develop the solutions to make sure we benefit appropriately.

Speaker 2 (05:35):
So that's a very big picture set of ideas. I'm
curious on a sort of more day to day level.
I mean, you talk about collaborating with all these different
kinds of people, all these different groups, what does that
actually look like, what are some specific examples of how
you do this work?

Speaker 3 (05:52):
So right now we have about one hundred and twenty
partners in sixteen countries. They come together through working groups
that we look at through a variety of different perspectives.
It could be AI, labor and the economy. It could
be how do you build a healthy information ecosystem. It

(06:13):
could be how do you bring more diverse perspectives into
the inclusive and equitable development of AI. It could be
what are the emerging opportunities with these very very large
foundation model applications and how do you deploy those safely?
And these groups come together most importantly to say what

(06:33):
are the questions we need to answer collectively. So they
come together in working groups. I have an amazing staff
team who hold the pen on synthesizing research and data
and evidence, developing frameworks, best practices, resources, all sorts of
things that we can offer up to the community, be
they in industry or in policy, to say this is

(06:55):
how we can, well, this is what good looks like,
and this is how we can do it on a
day to day basis. So that's what we do, and
then we publish our materials. It's all open. We make
sure that we get them into the hands of those
communities that can use them, and then we drive and
work with those communities to put them into practice.

Speaker 2 (07:13):
You used the word open there in describing your publications.
I know, in the world of AI, on the sort
of technical side, there's a debate, say, or discussion about
kind of open versus closed AI, And I'm curious how
you kind of encounter that particular discussion. What is your

(07:33):
view on open versus closed AI.

Speaker 3 (07:36):
So the current discussion between open and closed release of
AI models came once we saw ChatGPT and other
very large generative AI systems being deployed out into the
hands of consumers around the world, and there emerged some

(07:58):
fear about the potential of these models to act in
all sorts of catastrophic ways. So there were concerns that
the models could be deployed with regard to different development
of viruses or biological weapons or even nuclear weapons, or
through manipulation or otherwise. So there emerged, over

(08:21):
the last eighteen months, this real concern that these models,
if deployed openly, could lead to some level of truly
catastrophic risk. And what emerged is actually that we discovered
that through a whole bunch of work that's been done
over the last little while, that releasing them openly has

(08:43):
not led and doesn't appear to be leading in any
way to catastrophic risk. In fact, releasing them openly allows
for much greater scrutiny and understanding of the safety
measures that have been put into place. And so what
happened was the pendulum sort of swung very much towards
concern about truly catastrophic risk and safety over the last year,

(09:05):
and over the last year we've seen it swing back
as we learn more and more about how these models
are being used and how they are being deployed into
the world. My feeling is we must approach this work openly,
and it's not just open release of models or what
we think of as traditional open source forms of model

(09:27):
development or otherwise, but we really need to think about
how do we build an open innovation ecosystem that fundamentally
allows both for the innovation to be shared with many people,
but also for safety and security to be rigorously upheld.

Speaker 2 (09:43):
So when you talk about this kind of broader idea
of open innovation beyond open source or you know, transparency
in models like what do you mean sort of specifically,
how does that look in the world.

Speaker 3 (09:57):
So I have three particular points of view when it comes
to open innovation, because I think we need to think
both upstream around the research that is driving these models,
and downstream in terms of the benefits of these models
to others. So first and foremost, what we have known
in terms of how AI has been developed, and yes,
I had an opportunity to see it when I was

(10:18):
at the Canadian Institute for Advanced Research is a very
open form of scientific publication and rigorous peer review. And
what happens when we release openly is you have an
opportunity for the research to be interrogated to determine the
quality and significance of that, but then also for it

(10:38):
to be picked up by many others. And then secondly,
openness for me is about transparency. We released a set
of very strong recommendations last year around the way in
which these very large foundation models could be deployed safely.
They're all about disclosure. They're all about disclosure and documentation

(10:58):
right from the early days pre R and D development
of these systems, right in terms of thinking about what's
in the training data and how is it being used
all the way through to post deployment monitoring and disclosure.
So I really think that this is important transparency through it.
And then the third piece is openness in terms of
who is around the table to benefit from this technology.

(11:22):
We know that if we're really going to see these
new models being successfully deployed into education or healthcare
or climate and sustainability, we need to have those experts
in those communities at the table charting this and making
sure that the technology is working for them. So those
are the three ways I think about openness.

Speaker 2 (11:41):
Is there like a particular project that you've worked on
that you feel like you know reflects your approach to
responsible AI.

Speaker 3 (11:51):
So there's a really interesting project that we have underway
at PAI that is looking at responsible practices squarely when
it comes to the use of synthetic media. And what
we heard from our community was that they were looking
for a clear code of conduct about what does it
mean to be responsible in this space. And so what

(12:12):
happened is we pulled together a number of working groups
to come together. They included industry representatives. They also included
civil society organizations like WITNESS, a number of academic institutions
and otherwise, And what we heard was that there were
clear steps that creators could take, that developers of the

(12:35):
technology could take, and then also distributors. So when we
think about those generative AI systems being deployed across platforms
and otherwise, and we came up with a framework for
what responsibility looks like. What does it mean to have consent,
what does it mean to disclose responsibly, what does it
mean to embed technology into it? So, for example, we've

(12:57):
heard many people talk about the importance of watermarking
systems, right, and making sure that we have a way
to watermark them. But what we know from the
technology is that is a very very complex and complicated problem,
and what might work on a technical level certainly hits
a whole new set of complications when we start labeling
and disclosing out to the public about what that technology

(13:19):
actually means. All of these, I believe are solvable problems,
but they all needed to have a clear code underneath
them that was saying this is what we will commit to.
And we now have a number of organizations, many many
of the large technology companies, but also many of the
small startups who are operating in this space, civil society
and media organizations like the BBC and the CBC, who

(13:43):
have signed on. And one of the really exciting pieces
of that is that we're now seeing how it's changing practice.
So a year in we asked each of our partners
to come up with a clear case study about how
that work has changed the way they are making decisions,
deploying technology and ensuring that they're being responsible in their use.

(14:05):
And that is creating now a whole resource online that
we're able to share with others about what does it
mean to be responsible in this space. There's so much
more work to be done, and the exciting thing is
once you have a foundation like this in place, we
can continue to build on it. So much interest now
in the policy space, for example, about this work as well.
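
To make the consent, disclosure, and documentation ideas above concrete, here is a minimal, hypothetical sketch in Python of a provenance record that a creator or distributor of synthetic media might attach to a generated asset. The class, field names, and example values are illustrative assumptions only; they do not reproduce PAI's synthetic media framework or any specific labeling standard.

# Illustrative sketch only: a hypothetical disclosure record for a piece of
# synthetic media. Field names and values are assumptions, not a standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class SyntheticMediaDisclosure:
    """Minimal provenance record carried alongside a generated or altered asset."""
    asset_id: str            # identifier for the image, audio, or video asset
    generator: str           # tool or model that produced or altered the asset
    subject_consent: bool    # whether the people depicted consented to the alteration
    disclosure_label: str    # user-facing label shown wherever the asset appears
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize the record so distributors can pass it along with the asset."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    # Hypothetical example: an interview clip whose faces were altered
    # to protect the speakers' identities.
    record = SyntheticMediaDisclosure(
        asset_id="interview-clip-042",
        generator="face-replacement-tool",
        subject_consent=True,
        disclosure_label="Faces altered with AI to protect identity",
    )
    print(record.to_json())

A record like this is only one ingredient; as Rebecca notes, watermarking and public labeling raise further technical and communication challenges of their own.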

Speaker 2 (14:25):
Are there any specific examples of those sorts of case
studies, or the real-world experiences that, say, media organizations
had, that are interesting, that are illuminating? Yes.

Speaker 3 (14:37):
So, for example, what we saw with the BBC is
that they're developing a lot of content as a public broadcaster,
both in terms of their news coverage but also in
terms of some of the resources that they are developing
for the British public as well. And what they talked
about was the way in which they had used synthetic

(14:59):
media in a very, very sensitive environment where they were
hearing from individuals talk about personal experiences, but wanted to
have some way to change the face entirely in terms
of the individuals who were speaking. So that's a very
complicated ethical question, right, how do you do that responsibly

(15:20):
and what is the way in which you use that technology,
and most importantly, how do you disclose it? So their
case study looked at that in some real detail about
the process they went through to make the decision responsibly
to do what they chose, how they intended to use
the technology in that space.

Speaker 2 (15:39):
As you describe your work in some of these studies,
the idea of transparency seems to be a theme. Talk
about the importance of transparency in this kind of work.

Speaker 3 (15:50):
Yeah, transparency is fundamental to responsibility. I always like to
say it's not accountability in a complete sense, but it
is a first step to driving accountability more fully, so,
when we think about how these systems are developed, they're
often developed behind closed doors inside companies who are making

(16:12):
decisions about what and how these products will work from
a business perspective, and what disclosure and transparency can provide
is some sense of the decisions that were made leading
up to the way in which those models were deployed.
So this could be ensuring that individuals' private information was

(16:33):
protected through the process and won't be inadvertently disclosed, or otherwise,
it could be providing some sense of how well the
system performs against a whole range of quality measures. So
we have all of these different types of evaluations and
measures that are emerging about the quality of these
systems as they're deployed. Being transparent about how they perform

(16:55):
against these measures is really crucial to that as well.
We have a whole ecosystem that's starting to emerge around
auditing of these systems. So what does that look like?
We think about auditors in all sorts of other sectors
of the economy. What does it look like to be
auditing these systems to ensure that they're meeting all of
those legal but also additional ethical requirements that we want

(17:17):
to make sure that are in place.

Speaker 2 (17:19):
What are some of the hardest ethical dilemmas you've come
up against in AI policy.

Speaker 3 (17:27):
Well, the interesting thing about AI policy, right, is what
works very simply in one setting can be highly
complicated in another setting. And so, for example, I have
an app that I adore. It's an app on my
phone that allows me to take a photo of a
bird and it will help me to better understand what

(17:47):
that bird is and give me all sorts of information
about that bird. Now, it's probably right most of the time,
and it's certainly right enough of the time to give
me great pleasure and delight when I'm out walking. You
could think about that exact same technology applied. So for example,
now you're a security guard and you're working in a

(18:07):
shopping plaza, and you're able to take photos of individuals
who you may think are acting suspiciously in some way
and match that photo up with some sort of a
database of individuals that may have been found, you know,
to have some sort of connection to other criminal behavior
in the past. Right, So what goes from being a
delightful Oh, isn't this an interesting bird? To a very

(18:31):
very creepy What does this say about surveillance and privacy
and access to public spaces? And that is the nature
of AI. So much of the concern about the ethical
use and deployment of AI is how an organization is
making the choices within the social and systemic structure they sit in.

(18:54):
So, so much about the ethics of AI is understanding
what is the use case, how is it being used,
how is it being constrained? How does it start to
infringe upon what we think of as the human rights
of an individual to privacy? And so you have to
constantly be thinking about ethics. What could work very well

(19:15):
in one situation absolutely doesn't work in another. We often
talk about these as socio technical questions. Right, just because
the technology works doesn't actually mean that it should be
used and deployed.

Speaker 2 (19:28):
What's an example of where the Partnership on AI influenced
changes, either in policy or in industry practice?

Speaker 3 (19:38):
We talked a little bit about the Framework for Synthetic
Media and how that has allowed companies and media organizations
and civil society organizations to really think deeply about the
way in which they're using this. Another area that we
focused on has been around responsible deployment of foundation and
large-scale models. As I said, we issued a set of

(20:01):
recommendations last year that really laid out for these very
large developers and deployers of foundation and frontier models
what good looks like, right from R and D
through to deployment monitoring. And it has been very encouraging
to see that that work has been picked up by

(20:22):
companies and really articulated as part of the fabric of
the deployment of their foundation models and systems moving forward.
So much of this work is around creating clear definitions
of what we're meaning as the technology evolves and clear
sets of responsibilities. So it's great to see that work
getting picked up. The NTIA in the United States just

(20:44):
released a report on open models and the release of
open models. Great to see our work cited there as
contributing to that analysis. Great to see some of our
definitions around synthetic media getting picked up by legislators in
different countries. It's important, I think, for us to build capacity,
knowledge, and understanding in our policymakers in this moment

(21:06):
as the technology is evolving and accelerating in its development.

Speaker 2 (21:12):
What's the AI Alliance and why did Partnership on AI
decide to join?

Speaker 3 (21:17):
So you had asked about the debate between open versus
closed models and how that has evolved over the last year,
and the AI Alliance was a community of organizations that
came together to really think about, okay, if we support
open release of models what does that look like and

(21:38):
what does the community need? And so that's about one
hundred organizations. IBM, one of our founding partners, is also
one of the founding partners of the AI Alliance. It's
a community that brings together a number of academic institutions
in many countries around the world, and they're really focused on
how do you build the resources and infrastructure and

(22:01):
community around what open source in these large scale models
really means. So that could be open data sets, that
could be open technology development. Really building on that understanding
that we need an infrastructure in place and a community
engaged in thinking about safety and innovation through the open lens.

Speaker 1 (22:23):
This approach brings together organizations and experts from around the
globe with different backgrounds, experiences, and perspectives to transparently and
openly address the challenges and opportunities AI poses today. The
collaborative nature of the AI Alliance encourages discussion, debate, and innovation.

(22:44):
Through these efforts, IBM is helping to build a community
around transparent open technology.

Speaker 2 (22:52):
So I want to talk about the future for a minute.
I'm curious what you see as the biggest obstacles
to widespread adoption of responsible AI practices.

Speaker 3 (23:03):
One of the biggest obstacles today is an inability and
really a lack of understanding about how to use these
models and how they can most effectively drive forward a
company's commitment to whatever products and services it might be deploying.
So I always recommend a couple of things for companies

(23:26):
really to think about this and to get started. One
is think about how you are already using AI across
all of your business products and services, because already AI
is integrated into our workforces and into our workstreams, and
into the way in which companies are communicating with their
clients every day. So understand how you are already using

(23:49):
it and understand how you are integrating oversight and monitoring
into those One of the best and clearest ways in
which a company can really understand how to use this
responsibly is through documentation. It's one of the areas where
there's a clear consensus in the community. So how do
you document the models that you are using, making sure
that you've got a registry in place. How do you

(24:11):
document the data that you are using and where that
data comes from. This is sort of the first step, the
first line of defense in terms of understanding both what
is in place and what you need to do in
order to monitor it moving forward. And then secondly, once
you've got an understanding of how you're already using the system,
look at ways in which you could begin to pilot
or iterate in a low risk way using these systems

(24:34):
to really begin to see how and what structures you
need to have in place to use it moving forward.
And then thirdly, make sure that you structure a team
in place internally that's able to do some of this
cross-departmental monitoring, knowledge sharing, and learning. Boards are very,
very interested in this technology, so thinking about how you

(24:55):
can have a system or a team in place internally
that's reporting to your board, giving them a sense of
both the opportunities that it identifies for you and the
additional risk mitigation and management you might be putting into place.
And then, you know, once you have those things in place,
you're really going to need to understand how you work
with the most valuable asset you have, which is your people.

(25:19):
How do you make sure that AI systems are working
for the workers as they're going into place?
The most important and impressive implementations we see are those
where you have the workers who are going to be
engaged in this process central to figuring out how to
develop and deploy it in order to really enhance their work.

(25:39):
It's a core part of a set of Shared Prosperity
guidelines that we issued last year.
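
Rebecca's first recommendation, taking an inventory of the models a company already uses and where their data comes from, can be pictured as a simple registry. The sketch below is a hypothetical, minimal Python illustration; the field names and the example entry are assumptions for illustration, not a schema prescribed by PAI or IBM.

# Illustrative sketch only: a minimal in-memory model registry capturing the
# kind of documentation described above. Field names are assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class ModelRecord:
    """One entry in an internal inventory of AI systems already in use."""
    name: str                 # internal name of the model or AI-powered service
    vendor: str               # who built it: an in-house team or a third-party provider
    data_sources: List[str]   # where training and input data come from
    business_use: str         # the product or workflow the model supports
    risk_owner: str           # team accountable for oversight and monitoring


class ModelRegistry:
    """First line of defense: know what you are already using and who oversees it."""

    def __init__(self) -> None:
        self._records: List[ModelRecord] = []

    def register(self, record: ModelRecord) -> None:
        self._records.append(record)

    def report(self) -> str:
        """Plain-text summary of the inventory, e.g. for a board or oversight team."""
        return "\n".join(
            f"{r.name} ({r.vendor}) -> {r.business_use}; "
            f"data: {', '.join(r.data_sources)}; owner: {r.risk_owner}"
            for r in self._records
        )


if __name__ == "__main__":
    registry = ModelRegistry()
    registry.register(ModelRecord(
        name="support-chat-assistant",          # hypothetical internal service
        vendor="third-party API",
        data_sources=["public web text", "customer support transcripts"],
        business_use="drafting replies to customer inquiries",
        risk_owner="Customer Experience + Legal",
    ))
    print(registry.report())

From an inventory like this, the low-risk pilots and board reporting Rebecca describes have a concrete starting point.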

Speaker 2 (25:45):
And then, from the side of policy makers, how should
policy makers think about the balance between innovation and regulation.

Speaker 3 (25:57):
Yeah, it's so interesting, isn't it that we always think of,
you know, innovation and regulation as being two sides of
a coin, when in fact, so much innovation comes from
having a clear set of guardrails and regulation in place.
We think about all of the innovation that's happened in
the automotive industry, right we can drive faster because we

(26:22):
have brakes, we can drive faster because we have seat
belts in place. So I think it's often interesting to
me that we think about the two as being on
either side of the coin, but in actual fact, you
can't be innovative without being responsible as well. And so
I think from a policy maker perspective, what we have

(26:43):
been really encouraging them to do is to understand that
you've got foundational regulation in place that works for you. Nationally,
this could be ensuring that you have strong privacy protections
in place. It could be ensuring that you are understanding
potential online harms, particularly to vulnerable communities, and then look

(27:04):
at what you need to be doing internationally to be
both competitive and sustainable. There's all sorts of mechanisms that
are in place right now at the international level to
think about how do we build an interoperable space for
these technologies moving forward.

Speaker 2 (27:19):
We've been talking in various ways about what it means
to responsibly develop AI, and if you're going to boil
that down to, you know, the essential concerns that people should
be thinking about, like what are the key things to
think about in responsible AI?

Speaker 3 (27:39):
So if you are a company, if we're talking specifically
through the company lens, when we're thinking about responsible use
of AI, the most important difference between this form of
AI technologies and other forms of technologies that we have
used previously is the integration of data and the training

(28:02):
models that go on top of that data. So when
we think about responsibility, first and foremost, you need to
think about your data. Where did it come from, What
consent and disclosure requirements do you have on it? Are
you privacy protecting? You can't be thinking about AI within
your company without thinking about data, and that's both your

(28:22):
training data. But then once you're using your systems and
integrating and interacting with your consumers, how are you protecting
the data that's coming out of those systems as well?
And then secondly is when you're thinking about how to
deploy that AI system, the most important thing you want
to think about is are we being transparent about how

(28:46):
it's being used with our clients and our partners. So
you know the idea that if I'm a customer, I
should know when I'm interacting with an AI system, I
should know when I'm interacting with a human. So I
think those two pieces are the fundamentals. And then of
course you want to be thinking carefully about making sure

(29:07):
that whatever jurisdiction you're operating in, you're meeting all of
the legal requirements with regard to the services and products
that you're offering.

Speaker 2 (29:16):
Let's finish with the speed round. Complete the sentence: in
five years, AI will ...

Speaker 3 (29:25):
Drive equity, justice, and shared prosperity if we choose to
set that future trajectory for this technology.

Speaker 2 (29:34):
What is the number one thing that people misunderstand about AI.

Speaker 3 (29:39):
AI is not good, and AI is not bad, but
AI is also not neutral. It is a product of
the choices we make as humans about how we deploy
it in the world.

Speaker 2 (29:55):
What advice would you give yourself ten years ago to
better prepare yourself for today?

Speaker 3 (30:04):
Ten years ago, I wish that I had known just
how fundamental the enduring questions of ethics and responsibility would
be as we developed this technology moving forward, So many
of the questions that we ask about AI are questions

(30:26):
about ourselves and the way in which we use technology,
and the way in which technology can advance the work
we're doing.

Speaker 2 (30:36):
How do you use AI in your day to day
life today?

Speaker 3 (30:40):
I use AI all day every day. So whether it's
my bird app when I go out for my morning walk,
helping me to better identify birds that I see, or
whether it is my mapping app that's helping me to
get more speedily through traffic to whatever meeting I need
to go to, I use AI all the time. I

(31:01):
really enjoy using some of the generative AI chatbots, more
for fun than for anything else, as a creative partner
in thinking through ideas. And integrating it into all aspects
of our lives is just so much about the way
in which we live today.

Speaker 2 (31:18):
So people use the word open to mean different things,
even just in the context of technology. How do you
define open in the context of your work.

Speaker 3 (31:29):
So there is the question of open as it is
deployed to technology, which we've talked a lot about. But
I do think a big piece of PAI is being open-minded.
We need to be open-minded, truly, to listen to,
for example, what a civil society advocate might say about
what they're seeing in terms of the way in which

(31:51):
AI is interacting in a particular community. Or we need
to be open minded to hear from a technologist about
their hopes and dreams of where this technology might
go moving forward. And we need to have those conversations
listening to each other to really identify how we're going
to meet the challenge and opportunity of AI today. So

(32:12):
open is just fundamental to the Partnership on AI. I
often call it an experiment in open innovation.

Speaker 2 (32:23):
Rebecca, thank you so much for your time.

Speaker 3 (32:26):
It is my pleasure. Thank you for having me.

Speaker 1 (32:30):
Thank you to Rebecca and Jacob for that engaging discussion
about some of the most pressing issues facing the future
of AI. As Rebecca emphasized, whether you're thinking about data
privacy or disclosure, transparency and openness are key to solving
challenges and capitalizing on new opportunities by developing best practices

(32:52):
and resources. Partnership on AI is building out the guardrails
to support the release of open source models and the practice
of post-deployment monitoring. By sharing their work with the
broader community, Rebecca and PAI are demonstrating how working responsibly,
ethically and openly can help drive innovation. Smart Talks with

(33:17):
IBM is produced by Matt Ramano, Joey Fishground, Amy Gaines McQuaid,
and Jacob Goldstein. We're edited by Lydia Jean Kott. Our
engineers are Sarah Brugaer and Ben Tolliday. Theme song by Gramoscope.
Special thanks to the eight Bar and IBM teams, as
well as the Pushkin marketing team. Smart Talks with IBM

(33:38):
is a production of Pushkin Industries and Ruby Studio at iHeartMedia.
To find more Pushkin podcasts, listen on the iHeartRadio app,
Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell.

(33:58):
This is a paid advertisement from IBM. The conversations
on this podcast don't necessarily represent IBM's positions, strategies or opinions.