
May 23, 2024 · 25 mins

AI is poised to impact the political process in profound ways. How do we navigate this uncharted territory? Hosts Beth Coleman and Rahul Krishnan are joined by experts Peter Loewen and Harper Reed to unravel the potential influence of AI on democracy and the spread of misinformation.  

About the hosts:

Beth Coleman is an associate professor at U of T Mississauga’s Institute of Communication, Culture, Information and Technology and the Faculty of Information. She is also a research lead on AI policy and praxis at the Schwartz Reisman Institute for Technology and Society. Coleman authored Reality Was Whatever Happened: Octavia Butler AI and Other Possible Worlds using art and generative AI.

Rahul Krishnan is an assistant professor in U of T’s department of computer science in the Faculty of Arts & Science and the department of laboratory medicine and pathobiology in the Temerty Faculty of Medicine. He is a Canada CIFAR Chair at the Vector Institute, a faculty affiliate at the Schwartz Reisman Institute for Technology and Society and a faculty member at the Temerty Centre for AI Research and Education in Medicine (T-CAIREM).

About the guests:

Peter Loewen is the director of U of T’s Munk School of Global Affairs & Public Policy and a professor in the department of political science in the Faculty of Arts & Science. He is also the associate director of the Schwartz Reisman Institute for Technology and Society. His research focuses on how politicians can make better decisions, how citizens can make better choices and how governments can address the disruption of technology and harness its opportunities.

Harper Reed is a technologist who served as a chief technology officer for Barack Obama’s 2012 re-election campaign. Reed has pioneered crowdsourcing at Threadless.com, founded Modest Inc. and guided the software team at PayPal. His most recent venture was General Galactic Corporation.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
- From the University of Toronto, I'm Beth Coleman.
- I'm Rahul Krishnan.
- This is What Now? AI.
- From robocalls to deepfakes, artificial intelligence is already playing a role in the 2024 election.
- "This is a bunch of malarkey."
- This is not the real Joe Biden.

(00:22):
- Misinformation affects so many aspects of our lives. In this episode, we'll focus on AI and the role that it plays in misinformation, with a particular focus on how it might affect democracy and the elections that we'll have over the next few years.
- Harper Reed is a technologist who predicts the future for a living. He served as Chief Technology Officer

(00:42):
for the Barack Obama 2012 re-election campaign. Reed pioneered crowdsourcing at Threadless.com, founded Modest Inc., and guided the software team at PayPal. His most recent venture was General Galactic Corporation.
- And we also spoke with Peter Loewen. He's the director of the Munk School of Global Affairs and Public Policy at the University of Toronto.

(01:03):
He's also a professor in the Department of Political Science and the associate director of the Schwartz Reisman Institute for Technology and Society. His research focuses on how politicians can make better decisions, how citizens can make better choices, and how governments can address the disruption of technology as well as harness its opportunities.
- The conversation with Harper was filmed live at the

(01:26):
Schwartz Reisman Institute's Absolutely Interdisciplinary conference. So I met with him, he was beaming in from Chicago, and we had a real-time conversation in front of an audience.
Harper, what did you do, making the Obama data machine? What was the strategy? What kind of data did you use?

(01:48):
Did you use persuasive design? What kind of automation? So tell us, what did you guys do?
- So back in 2011, when I joined the Obama campaign, I was largely just a tech person. And in many ways I say that because there's this important nuance that I think we need

(02:12):
to kind of dig into a bit, which is that tech had not really participated in campaigns in the United States before. It had been very superficial, very ad hoc. But in 2011, we did what amounted to a very elaborate and deliberate tech program.

(02:33):
And so I say that to answer your question, because one of the key parts of campaigns circa 2011, 2012 was that, by kind of default, based on the modeling, which we can get into, we don't do persuasion. And so our goal is entirely based on turnout.

(02:53):
And so fundamentally, the technology we built was not about convincing someone at that time that Mitt Romney was a bad person or a good person or whatever. The tech was more about making sure that you got to vote. And to give you an idea how this worked: we had a lot of tech that was surveillance of one kind

(03:16):
or another, looking at voter files, calling people on the phone, and then reporting back to our databases, et cetera, to just determine whether someone had voted. So an example is my brother lives in Colorado. That's where I'm originally from. On election day in 2012, he received three knocks on the door, about an hour apart. Then he went and voted,

(03:36):
and no one knocked on his door from then on. And so from a tech standpoint, in kind of simple terms, all we did was make tech a force multiplier for the volunteer. And so that meant people were calling, so we automated the calling; people were sending messages, so we automated that message sending. We did that through really elaborate machine-learning-driven tools

(04:00):
for, like, Facebook. We did it through very brute-force tools for things like SMS. We also had very simple tools that would just pop up a record and say, "Hey, call this person." And that was as simple as it was, with a script or what have you. It's important to underscore that we did everything: from ML to not ML, to manual printed on paper,

(04:22):
to automated, really intense machine-learning-based flows. Like, we did everything. Everything we did, from how we picked database software to what interface we were building, really came from Barack Obama's perspective on grassroots organizing. And that was a hard bit for me to kinda wrap my head around,

(04:44):
because I remember when they said to me, "Harper, you may make a decision that is not the right tech decision, but will be the right political decision." And that was something that came up a lot. And there were also things where we were like, okay, we can make the right political decision here, but is it the right decision for the users? An example of this is we created a model (the analytics team did, and then my team helped make it so

(05:06):
the public could use it) that would analyze your Facebook interactions with the Facebook graph, because we had unlimited access to the Facebook graph, and we were able to basically determine who your best friends were. And so we basically had this script that we could run, and it would be like, here's your best friend.
(05:27):
And it was very accurate. Scary accurate. And we learned a lot from our early tests. And we were just like, we have to not be creepy. And one of the examples was the interface, the user experience for this. Thank God we had such amazing user experience people. I think that's actually one of the things that's missing from a lot of the AI work right now: thinking about the human interaction aspect.

(05:49):
But we had these great human interaction people, and one of the things they said is, rather than just saying, here's your best friend, let's actually put it in a list of people and have you choose, but we'll just order it by who we think your best friend is. And so we're not saying this is your best friend. We're not showing off, we're not bragging about our model. We're not saying how cool our ML is. We're just saying, "Hey, we just happened

(06:10):
"and make it easy for youto find that person" because
before it was real creepy and real bad.
You don't, you don't necessarilywant me to surface your ex
and be like, "Hey, check this out."
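
The pattern Reed describes, scoring friends by interaction frequency but presenting a ranked list instead of announcing a single "best friend", can be sketched in a few lines. This is a minimal illustration of the idea, not the campaign's actual code; the friend names and interaction data here are hypothetical stand-ins for what the Facebook graph returned at the time.

    from collections import Counter

    def rank_friends(interactions):
        # interactions: one friend ID per observed like, comment, tag,
        # or message (hypothetical stand-in data, not real graph output).
        counts = Counter(interactions)
        # Most-interacted-with friends first.
        return [friend for friend, _ in counts.most_common()]

    # Show an ordered list and let the user choose, rather than
    # declaring "this is your best friend" -- the UX choice Reed describes.
    observed = ["alice", "bob", "alice", "carol", "alice", "bob"]
    print(rank_friends(observed))  # ['alice', 'bob', 'carol']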
- So let's talk about the dark side of data, social graph, Facebook. Was that even legal, what you did? And is this not the source of a kind

(06:31):
of both a surveillance, creepy culture (you were talking about user interface) and also just the source of how you find your targets for misinformation?
- It was very legal. It was also within the terms of service of Facebook. And I think that's the really important nuance: Facebook then changed the terms of service in reaction

(06:52):
to what we had done.
And also to what Romney had done; it wasn't just us. But the real key there is that we had permission that our users gave us by interacting with us and logging in, with a specific permission that was friends of friends. And so you could see the user's data, you could see the user's friends' data,

(07:12):
and then you could see the user's friends-of-friends' data. That meant that we basically could replicate the social graph with, like, you know, 10 million people logging in, in a very nice way.
And I think the important thing to keep in mind is we only needed a graph of 110 million people to cover the entire voting population in the US. It's

(07:33):
110 million people, roughly. And so that was pretty good. We did a pretty good job of that.
And Facebook was not happy about that, because we were basically able to just copy their graph and interact with their graph. They changed that very quickly. And I think that the big difference between what we did and what Cambridge Analytica did is we didn't buy the data. We had the users' consent to every interaction we had.
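
The friends-of-friends permission Reed is describing amounts to a two-hop expansion over the social graph: each consenting login exposes the user, their friends, and their friends' friends, which is why a relatively small number of logins could cover most of the electorate. Here is a toy sketch of that coverage arithmetic, using a made-up dictionary graph rather than the real Facebook API of the era:

    def visible_users(graph, seeds):
        # graph: user -> set of friends (a toy stand-in for the Facebook
        # social graph). seeds: users who logged in and granted the
        # friends-of-friends permission.
        visible = set(seeds)
        for user in seeds:
            for friend in graph.get(user, set()):
                visible.add(friend)                       # one hop out
                visible.update(graph.get(friend, set()))  # two hops out
        return visible

    # One consenting user already exposes a neighborhood two hops out;
    # at scale, ~10 million logins could cover ~110 million voters.
    graph = {
        "a": {"b", "c"}, "b": {"a", "d"},
        "c": {"a", "e"}, "d": {"b", "f"},
    }
    print(sorted(visible_users(graph, {"a"})))  # ['a', 'b', 'c', 'd', 'e']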

(07:55):
We just say, you know, connect your Facebook account, we're gonna ask for all the permissions, and we'll send you a bumper sticker that costs us a little bit of money, and then we can use that to do targeting, et cetera. And then we did the targeting via emails, texts, what have you. And the targeting was very specific, where we would say, oh, you have this best friend in this battleground state,

(08:16):
why don't you give 'em a call and remind 'em to vote?
Once again, going back to turnout. We really did unleash a little bit of what was Pandora's box here, in that the technology we used worked very well. Obama was such a public figure that everyone looked at it. I had Karl Rove tell me he read a dossier on me

(08:37):
and my team that was, you know, a thousand pages long. Like, it was intense, the amount of scrutiny that we had. So people looked at what we did, and then they copied it, and then they used that all over the world, primarily to do things that I personally don't agree with. You know, we saw it in Ukraine in 2014, we saw it in 2016 obviously, and so on and so forth.

(08:57):
And there are very few, I think, entrepreneurs, startup people, et cetera, who can easily sit back and point to things and say, "Hey, that's not what I intended." We talk about unintended consequences a lot, but it's hard to actually show an example. And I think this is a good example of us inventing something because of hopes and dreams

(09:19):
and, you know, Barack Obama as an amazing person, and we were like, yes, let's do this. And then seeing that used against populations that are at risk, or what have you, is hard.
- What would you say are the biggest threats to AI and democracy now?

(09:39):
- I think the biggest threat today is not the models or AI. I think the biggest threat today is actually the people who own these private models. I'm not worried about AI as a technology. I'm worried about who has access to it, how they are using it, and how do we make sure that it's being used in a way

(10:03):
that we as a society agree is good, instead of having corporations making those decisions for us.
- What do you recommend we do? How do we defeat cynicism and hopelessness while creating a future we want? And I'm quoting you.
- Yeah, I don't know.

(10:23):
I really have a hard time with this, because I think it's this complicated notion: the technology that's happening right now is the coolest technology of my lifetime. Like, it's incredible, just incredible with a capital I. I can't overstate how cool it is, but I also think we're totally [expletive].
Like, I think that we're really in trouble, both

(10:46):
because of how incredible it is, but also because this is a very large bell that we can't unring. I don't know what warning to give people, because there isn't a real easy answer. Do we choose a path towards Star Trek, or do we choose a path towards Mad Max?

(11:08):
That's what I think about a lot. And I personally like the fashion of Mad Max more, but I don't want to die, and I'll probably be first to die, and I'm a ginger, I'll get sunburned a lot. But I do like the energy independence, the capitalism independence, et cetera, of Star Trek. And I think we just have to figure this out. Part of it is making sure people can afford the things

(11:29):
that they need, like food, for example, you know, and a lot of these things that the US is struggling with. But part of it also is having rational regulation about how models are governed. And I don't mean that all models should be open. I just mean that we need to have some framework that allows for a company like OpenAI to build models

(11:50):
that are changing the world, but also to do it in a way where, you know, something weird doesn't happen. And that's 'cause I don't know what the future is. I know it's just gonna be weird at this moment.
- Rahul, I wanna get your thoughts on this. I was reading an article in Scientific American, and there was a hypothetical scenario where an AI-driven political campaign used an advanced

(12:13):
language model to target individuals, to customize messages to them, to persuade them to vote in a particular way. What do you think of the possibility of a scenario like that, and what do you think its implications are?
- So first off, that's terrifying.

(12:35):
And I guess if I were to think about it a little bit more, the reason why it's terrifying is, if you have a democracy where only one party has access to that technology, it gives them such an advantage over the other one that it almost seems unfair.
And so one of two things can happen.

(12:57):
Either the election commission in the country in question steps up and says, nobody uses this, and the use of this technology is cause for a candidate being withdrawn from the ballot. That's one option. Or they say that if this technology is being used, it should be used on a level playing field, which is

(13:18):
to say everyone gets access to it. And I don't know that I am qualified to be able to say which one will end up happening, or will end up working better for democracy as a whole. Because I don't think these things are being done in any controlled way.
We're effectively sort of using these tools

(13:40):
and rolling the dice and seeing what happens. Assuming that it will all work out okay is maybe the worst of the decisions we can make.
- Hi, Peter. - Hi.
- Thank you for being here. - My pleasure.
- So is there a particular new risk that AI introduces in terms of thinking about democracy?

(14:02):
- Yeah, so there are lots. So let's take the persuasion one, right? And this will come up again and again as we talk about this, but a lot of the way we think about what's right and wrong in a campaign, in a democracy, like in the actual practice of political campaigning and electioneering, has to do with how people interact with each other, right? So, you know, you want, for example,

(14:24):
when a person's interacting with a political candidate, they wanna know that that's really the candidate they're talking to, right? We wouldn't send out doppelgangers to be a candidate, right? So there's something about the authenticity of the person. We wanna know that what the person is saying hopefully is rooted in fact, or, you know, has been researched, and we wanna know that what they're suggesting are good ideas, right? So there are all these things that are kind of like scale-ups

(14:46):
of just, like, normal conversational norms when we're talking to anyone. And those are sort of implicitly embedded in elections. Now, you can work through each of those, right? But imagine that people get invited to, you know, an ask-me-anything with a political candidate, and they're online and people are sending in questions, and they're seeing a candidate talk about what they believe or what they're going to do.

(15:07):
Then you find out afterwards that it was all an avatar. Well, there's something wrong there, right? And, you know, we're both uneasy with it. We're not quite sure why we are, but it kind of violates that norm, that what we're hoping is happening is that people are talking to people, right? So that kind of raises the issue. Now, you say, well, what about if people

(15:27):
knew they were talking to a chatbot in some sense, right? And it's probably a bit easier that way. We've had chatbots for a long time; people are sort of comfortable with them, right? But they may have assumptions about what's causing the chatbot to give the answers that it is: that they've been intelligently determined by political parties. So, you know, the chatbot is just an interface. You're asking it, what is your party's position on abortion,

(15:49):
and it's giving you the answer that the candidate wants to give you. But if in fact the interface is actually making its own inferences, then that's further removed, right? From the idea that a human has put the answers in there. So I think a lot of the unease with these things really comes from how many steps away they are from the idea that it's people talking to people within the norms

(16:10):
of political campaigns, right? So even just this: you think about a television commercial, right? I mean, a television commercial is a candidate recording what they're saying, what their views are, sending it out into the world. And American commercials even have these taglines, you know: my name is, you know, Joe Biden, and I approve this message, right? With the notion of saying, this is my message. Right?

(16:31):
- Right.
- Well, what if instead of recording Joe Biden for a message, they tell him, "Joe, we're gonna do a whole bunch of President Biden, we're gonna do a whole bunch of testing, and we're gonna do a bunch of AI data work, and we're gonna build some econometric models. Think of all of our polling. And then we're gonna have this AI basically record the most persuasive message of you talking, you know, and we'll do the avatar, don't worry about it.

(16:53):
And by the way, we'll just have the voice at the end say, I'm Joe Biden and I approve this message." Are you okay with that? And that's like, that's just production. It's not all that different than what's going on, but there's a human missing in it. I think that's a lot of the source of the unease about what AI can do in democracy: just that it takes humans out of it in ways that we're not really comfortable with.
- So in terms of trust,

(17:15):
let's say I'm Joe Biden and I approve this message. That's an authentication, but very easily, put that online and somebody can just remix it. - Yes, yes.
- What the content is. - Yes.
- So you have layers of problems. - Yes.
- In terms of trust. - Yes.
- What a campaign is producing. - Yes.

(17:36):
- And whether they can just say, okay, don't worry, whoever...
- Yes. - ...candidate, we'll take care of it. We have enough of your voice and your image that we can just manipulate this.
- You know, let's assume everything out there is produced by the campaigns and in some way approved by them. We still don't like the fact that it might be a machine that we're talking to, or it might be a machine that's doing the thinking.

(17:56):
But then you layer on this dimension of not knowing, right? And deception, and not knowing if this actually is the campaign that's doing it. I mean, that's, I think, probably orders of magnitude worse. Because what it does is it takes us from the realm of kind of feeling uneasy about something we've gotta kind of work our way through, into feeling

(18:16):
like this thing is corrupted.
We don't know who's saying what. I don't know who to trust here. Is anything I'm seeing real? And that seems to me qualitatively to be a much worse state of affairs for democracy. And that... think about threats of AI, right? The kind of stuff that makes us worried and bumps up against our norms. Okay, that's the first set. The second part is stuff that just terrifies us

(18:38):
because we don't know what's real anymore, right? And we don't know who to trust.
You know, if you wanted to characterize what misinformation looks like on the internet, it's not that it's being produced by a million moles and we've gotta whack them all. It's that, for the most part, like most things, most information in the world, it's being produced and propagated by a very small number of people

(19:00):
and then consumed by all of us.
My concern now, and this may be from a position of naivete, but I think it's a real concern, is that we're getting into a space online where the production costs of good-quality, good-looking content, even persuasive content that's been tested and proven to be persuasive, are nearing zero.

(19:21):
- What does a solution look like? Because we're already at our kind of limit, in terms of trying as human beings to judge: is this real? Is this phishing? What is this? So to be on alert all the time, to have to ask, to second-guess over and over again, it's just exhausting.
- Yeah. This hasn't been a linear degradation of our

(19:44):
democracies. You know, I think there was actually a point in time, which wasn't that long ago, in which the internet kind of got us to an almost golden age of accountability in democracies.
It used to be the case that, you know, if you were running for office in Canada or in other countries, you could go door to door and say different things to different people, right? And candidates were notorious

(20:04):
for speaking out of both sides of their mouths. Right? Which is why we know the idiom, right? You'd go to one house, you'd get a sense of what they want, you'd say one thing.
- Yeah. There's no accountability. - Yeah.
But there was a period of time, not long ago, when you would get caught out if you were doing that, right? Because you had very pervasive media, the internet was obviously universal, and it wasn't personalized. Right? So I don't know what it's going

(20:26):
to look like, but I could imagine that if people get the sense of how acute the problem is,
then one kind of golden outcome would be that we say, okay, we wanna prioritize unmediated access to what candidates are saying as much as possible. And what is that? That's candidates getting up and giving a speech each day, maybe at a rally,

(20:48):
but giving a speech or doing a debate, which is then broadcast by all the platforms live. You know, the candidates talk at 6:00 PM and they talk till 7:00 PM, and you can see what they're doing live, right? And enough people are watching the same thing at the same time that it then becomes hard to create fakes out of it and spread them around. That would be a great outcome, actually, I think.

(21:09):
And that would actually be a bit of a renewal of the public square. And it would be people talking to as many people at once, saying the same thing as publicly as possible, to hedge against the possibility that it would be manipulated and they'd be made to say something that they don't believe. That's probably a golden age, but there's no reason to think that we'll get there. There is every possibility that the next three

(21:29):
or four years are a massive degradation of democracy, where people don't believe anything, and they retreat even further into, you know, pockets of the world in which people confirm what they believe, and they'll have all those biases confirmed. That's a terrible recipe for democracy. But there's no reason to think we won't go there.

(21:49):
- We're gonna wrap it up, Peter. How are people right now, in these next few months with these elections, how are they gonna be able to discern truth from lies? Real information from misinformation, with the tools that we have at hand?
- Yeah, it's very, very, very difficult.

(22:10):
But here are three things that people can do practically, right? They should rely on mainstream media. And I know that sounds kind of old-fashioned in some sense, right? But mainstream media, whether it's television networks or newspapers, are guided by certain norms. And they do have a professional incentive and a commercial incentive to present things that are true.

(22:31):
So that's the first thing, right? The second one is that if something seems too good to be true, it might be, you know, and you should try to find other information about it. And confirmation bias makes that really hard, and it's so salacious, right? But if you read something that's interesting, that says something, you know, nasty about a candidate you don't like, it might be nice to believe that it's true, but you should look into whether it is true.

(22:52):
The third thing I'll just say is that, you know, elections aren't only a contest of parties dancing in front of one person at a time. They're all of us making a collective decision. So if you wanna know whether an idea is a good idea, let's leave aside true or not for a second, but is an idea a good idea? Is a candidate good for me or not? Is a party good for me or good for the country?

(23:13):
Good for my city? You should talk to other citizens and really listen to them, right? And say, what are you thinking? Why have you come to that position? And return to the art of asking other people questions about what they're thinking and why they feel the way they do. Right? And then you may find reasons. And a lot of what we're doing in elections is not making a decision. It's finding reasons for decisions that we already wanna make, reasons

(23:34):
that can then give us a chance to feel like we're making a good decision. One we're proud of. One that's based on reasonable things. Elections aren't only fights over facts; they're fights over what we wanna do together, who we want to help, you know, what side we want to be on in terms of where we are in society. So I think I'd encourage people not just to worry about what's true or not, but also to ask themselves questions like, what's good for me?

(23:56):
What's good for my neighbors? What's good for my country? And then, you know, have conversations with people about that.
- How will humans know what is real?
- So I recently saw a set of AI-generated images. Now, I only know they're AI-generated now, but when I saw them, they had an immediate visceral effect on me.

(24:17):
And it was basically Trump and Biden working together, in an article. And instantly, I became quite cynical about the political process, how much gaming we really know happens in the background. But this image kind of brought it all together for me instantly. And then, of course, my thinking was,

(24:37):
this is a deepfake, this is not real.
- If AI is a stimulus, a catalyst to thinking bigger, it's also a stimulus to thinking harder about some of the problems that we're living with and that we will produce with a kind of AI acceleration.

(24:59):
- Absolutely. And I think it is very important that people spend a lot of conscious effort in trying to make sure everyone is brought along for the ride, and it's not left to just a select few to accelerate.
- One of my favorite outcomes from our short but impactful season is seeing the rise of your fan club.

(25:24):
So, you know, you don't have to be shy about it. It's good.
- I did not know I had a fan club.
- Oh, I've seen your face, like, six feet tall, circulating around.
- Well, you've been right there with me.
- No, no, I'm not always in the pictures. It's just, like, Rahul and the other person.

(25:45):
- From the University of Toronto, this is What Now? AI.
- Listen to us wherever you get your podcasts, and watch us on YouTube.