All Episodes

December 21, 2023 58 mins

On this episode, we’re joined by Patrick Hall, Co-Founder of BNH.AI.

We will delve into critical aspects of AI, such as model risk management, generating adverse action notices, addressing algorithmic discrimination, ensuring data privacy, fortifying ML security, and implementing advanced model governance and explainability.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:06):
Welcome to Fiddler's AI Explained.
I'm with Patrick Hall.
Patrick Hall is the co-founderand principal scientist of BNH.AI.
He's also the visiting faculty atGeorge Washington School of Business.
Um, so he advises clients on AIrisk, um, and we're going to be
diving into a lot of stuff today.

(00:27):
I'm really excited to hearwhat Patrick has to say.
Um, my name's Mary Regan.
I've worn a couple of different hats in mylifetime, started out as a data scientist,
uh, moved into product management, andmost recently I've been helping with
our community efforts here at Fiddler.
Um, Patrick, anything that you want to sayjust to sort of kick us off and introduce

(00:48):
yourself, um, that I might have missed?
Uh, I think just briefly some ofthe things that we were talking
about before people jumped on.
Um, I've helped my firm support the NISTAI Risk Management Framework, which I
think is a really important developmentfor, um, people working in our space.
And also, as, as we were discussing,I sit on the board of what's known as

(01:09):
the AI Incident Database, which is a,um, a sort of open source intelligence
effort and an information sharingeffort around public failures of,
um, AI and machine learning systems.
Uh, but, yeah, so, so, it, I, I justbring those up because I, I hope
they sort of frame some of the, uh,points that we'll get into later.

(01:30):
Great.
Um, so, yeah, let's just get started.
So, your law firm, it helps businessesmanage, like, a variety of things,
privacy, fairness, security, transparencyof AI, and it, you know, you're
providing really highly technicaladvice to manage, um, AI models, and
this ends up, of course, affecting,you know, hundreds of thousands to

(01:52):
millions of people around the world.
Um, can you just give usan overview of your work?
Yes, and, um, another one of thethings that I should have said, uh,
when you asked me if I had anythingelse to add was, I'm not a lawyer.
I am not a lawyer.
Nothing I'm saying is legal advice.

(02:13):
Um, if, if, uh, anybody in the audiencewould like legal advice in these topics,
I can connect you to actual attorneys.
Um, so, so I run the, um, technicalside of the house at BNH.AI, and, um,
yeah, I mean, you, you gave a goodbasic description of, of what we do,

(02:33):
uh, and, and to kind of layer in somemore details, I would say we, we do
a lot of data privacy work, and I'm,I'm probably less involved in that.
Um, some of my attorney partners areprobably better qualified to work on
especially legal data privacy issues.
I do sometimes advise on sort ofthe privacy enhancing technologies

(02:55):
aspect of data privacy.
Um, we do a lot of work in nondiscrimination, as I'm sure
audience members are aware, um, formany reasons, not just bad data.
AI and machine learning systems have anasty tendency to perpetuate, sort of,
um, existing social biases, and, um,we, we do a lot of work around policies,

(03:22):
governance, testing, and remediationof, of those biases in machine learning
systems, um, we do a lot of model audits,and that, that, you know, that can mean
different things to different firms.
But that's essentially when we come inand kind of investigate, um, a model and
try to, uh, you know, hold it accountto some standard, whether, whether it's

(03:46):
an existing law, or whether that's likethe NIST AI Risk Management Frameworks,
or it depends on, on the, um, client'sneeds, uh, and then, yeah, so for
transparency, it's often about adverseaction notices, and, and finance, lots
of finance companies have questionsabout how to generate accurate adverse
action notices with machine learningmodels, um, um, And there's, there's even

(04:11):
sort of red teaming, you know, sort of,sort of looking into the security of,
of different machine learning systems.
And so that's, yeah, please.
Can you define, because we actuallyhad a conversation a few weeks ago
from someone from Lavender AI whowas mentioning red teaming, and
it was actually a new term for me.
So can you define what that means?
Yeah, it it it's an interesting term.

(04:34):
So I think like, so, all right,like I, you know, I want to take one
step back and say, one of the mostinteresting things we've done with NIST
is work on this glossary and work ona glossary of trustworthy AI terms.
And I think if I, if I have asecond during the conversation,
I'll try to go find the link andput it in the chat or something.

(04:55):
But, um.
I think there's a huge vocabularyproblem in AI and data science in
general, and I don't think trustworthyor responsible AI or ethic, you know,
we're seeing what the issues are justtrying to determine what these terms mean.
And I would say red teams,red teaming is the same way.
So I hear people using red teaming,um, in the more traditional information

(05:17):
security sense, which is where youhave a separate team or an external
team come in and really, um, testyour systems for vulnerabilities.
And, uh, a very adversarialand thorough manner.
Um, I also hear red teaming used, um,almost, uh, almost the same as a model

(05:39):
audit or a model validation exercise.
And I think just.
Especially in our work with, uh,generative AI, that does seem to
be the kind of preferred way totalk about, uh, validation efforts
around generative AI systems.
So, instead of saying we're validatingthis, you know, generative AI
system, I often times hear peoplesay we're red teaming system.

(06:01):
And I think that's totally fine.
Like, I, you know, at some, in some wayswe need clear terms just to communicate
and that kind of bugs me, but in the end,as long as people are sort of testing
systems in a thorough and adversarialand objective way, um, I, I also
don't care what you call it, I guess.
So, so red teaming can be a verysort of specific, um, sort of

(06:23):
bringing external or a separate teamto, um, Really adversarially test
the system for vulnerabilities inthe information security context.
But it seems to have just takenon this more broad notion of model
validation in generative AI world.
And I think both are fine.
That totally makes sense.
Um, so one, thank you for that answer.

(06:44):
And two, like, you know, you'resort of in the thick of it, right,
you're every company that is comingto you is is actively seeking out
ways to improve their responsible AI.
But that maybe isn't the whole landscaperight there isn't necessarily pressure,
you know, whether it be legally orfrom policy perspective you can get,

(07:04):
companies really to install any sort ofyou know responsible AI framework, I'm
curious how you view that and really,like, what you think the most significant
barriers to any kind of widespreadadoption of responsible AI practices are.
Well, I do think You know, themarket pressures in AI machine

(07:26):
learning are immense, right?
Like, people just want to developand deploy as quickly as possible.
And, um, that holds true, and I thinkit's sort of the main barrier of adoption.
For responsible AI practices,and in my opinion, except
in, um, regulated verticals.

(07:48):
And so, when you're working in employment,or you're working in consumer finance, or
you're working in housing, it's like, it'sreally like being in a different world.
It's just a different world than, thansay, uh, a big unregulated tech company.
Right.
But, You know, so, so if, if sortof market incentives are the main
kind of, um, blocker to the adoptionof responsible AI practices,

(08:13):
then I would say the main sort ofaccelerant of adoption is regulation.
And, um, you know, when we startedBNH.AI, our thesis was that
AI regulation would be coming.
And, you know, at least to acertain extent, that's true.
So there's, there's, um,the new New York law.
New York City law that mandates AI biasaudits for tools used in employment.

(08:40):
There's the EU AI Act, which for smallercompanies in the US may not be such a big
deal, but a lot of our clients are sortof large multinational organizations, and
for them the EU AI Act is Is a huge deal.
And, and for them, AI um, isreally expected to be regulated
by the end of this year.
You know, the, the development and useof AI is really supposed to be regulated

(09:02):
in the eu, um, by the end of this year.
And so, and, and there's constanttalk of sort of AI bills and AI
regulation and stuff in the us.
I, I'm, I'm not optimistic aboutthat, but I do expect the, um, EU
AI act to hit like a ton of bricks.
And if you're paying attention,which I don't know why.
You know, most people wouldn't be, butthere are more and more local laws,

(09:25):
um, around the use of AI in the UnitedStates, and that's sort of what kicked
off, um, the, the big, you know, that,that's sort of what led cybersecurity,
both products and services, to kindof take off was state level breach
reporting requirements about, you know,starting to hit about 20 years ago.
So, I, I see, you know, insane high marketpressures preventing the adoption of,

(09:50):
of, um, responsible AI, but I also seesort of a steady drumbeat of regulation,
uh, that, that should acceleratethe, the adoption of responsible AI.
I see, and so, like, your customer baseright now, would you say, primarily in
the verticals that are already have,you know, like, finance, for example,
that you mentioned, that has heavypolicy, or are you kind of seeing, yeah?

(10:13):
I just, just, I'd say abouthalf and half, really.
Interesting.
Um, I was curious too, so, because,you know, I was looking over your
website before we had this, um,and you wrote there, the biggest
barriers to the adoption of AI,um, are not technical, they're
legal, ethical, and policy related.
And I just, like, I thought that wassuch an interesting statement, and I

(10:37):
would love to hear you break that down.
I, I guess I still think that's true.
Um, I mean, we know, we know how todo machine learning, um, and I'm, you
know, we, we certainly don't alwaysget it right, but, you know, the, the
basics of sort of training machinelearning systems to make specific

(10:58):
decisions about, um, sort of um, Uh,task in a controlled environment,
um, I would say that's a, that's afairly well known, uh, uh, commodity.
Uh, turns out we can also teach machinelearning systems to write documents
and generate pictures pretty well, too.
Uh, so, so I think a lot of the basicsof the technology are worked out.

(11:21):
Um, you know, and, and, you'll haveto forgive me, like, I don't, I
don't consider the, the giant deeplearning hyperparameter Easter egg
hunt that happened between, youknow, 2006 and today to be some kind
of huge step forward in technology.
Like, like, we had a, we had bigbreakthroughs in, in sort of visual

(11:42):
pattern recognition, you know,starting around, you know, the,
the mid aughts into 2010, 2015.
Uh, now, now we seem to have bigbreakthroughs in generative AI.
So, I think You know, I know fortechnologists, there's this sort of
constant focus on, you know, oh, youknow, we, we figured out how to make this,

(12:02):
this part of the optimization processbetter, um, we figured out how to, you
know, make some attention mechanism worka little bit better, but I think for
consumers, you know, and not to be mean topractitioners, but I think for consumers,
those are fairly irrelevant developments,um, and from a consumer perspective or an
end user perspective, like, um, Machinelearning is somewhat well figured out.

(12:24):
It's, it's just, can wemanage the risk around it?
Can it be legal?
Can we make sure it'slegal and used legally?
Can we align it to humanvalues and human ethics?
Um, so I think, you know, my, mythesis continues to be the basics of
the technology are mostly worked out.
Now, um, it's about It's about realworld adoption, and, and in particular,

(12:48):
in, in high impact use cases.
And both, both decision making andcontent generation in many sort of high
impact use cases are already regulated.
Whether it's done by a personor a computer is, is again
a little bit irrelevant.
Um, it may be harder tounderstand how the laws apply.
It may be harder to regulate orenforce how the laws apply when

(13:08):
a computer does these things.
But for me, um, You know, it's,it's really these alignment, um,
risk management and policy questionsaround AI that I think are going
to be the biggest, um, sort ofroadblocks, not for the adoption of
responsible AI, but for the adoptionof AI and machine learning in general

(13:32):
over, say, the next 10 to 20 years.
So like in that vein, how have you seen,you know, I think all of us, I imagine
all of our audience members too, everyonecan have an example that comes to mind of
how AI risk was handled poorly, but I'mcurious, based on the position that you
sit, if you could walk us through someexamples that you've seen, of course, um,

(13:53):
you know, predicting and, your clients.
Yeah.
So, oftentimes, all right, so, so it'sprobably safer to talk about the AI
incident database in, in this case.
Um, and if you haven't looked atthe AI incident database, you should
Google it and go, and go check it out.
You, you know, you might be better offjust looking at AI incidents than, than
listening to me, uh, talk for the next 40minutes or whatever but the, there's just

(14:21):
all kinds of failures of these systems.
And so anything from like a chessrobot breaking somebody's finger
to a, um, a robot in an Amazon
Pause you, for a second.
Yeah, yeah.
Can you even explain what the IncidentDatabase is for people who don't know?
Like what, what is the purpose of that?
Let's start there.
Well, okay, alright, so there's,there's I think this is actually

(14:45):
a really important point.
I think, um, large organizations, inmy experience, and even individual
people really struggle with notions offairness and privacy and transparency.
Right?
We all have different expectationsaround what those mean and around
how they might be implemented.
And your definition offairness is probably as good
as my definition of fairness.

(15:06):
But when we go to implement themmathematically, we might find
out that, you know, we have somekind of mutually exclusive norms.
Um, and so these notions are really,these sort of ethical notions,
I think, are really hard forlarge organizations to deal with.
Incidents, on the other hand, Ithink, are a known quantity from
information security and transportation.

(15:27):
Um, incidents don't really have to involveanyone's sort of politics or ethics,
they're just Bad things that happenthat cost you money or hurt people,
and it's not really debatable whethermoney was spent or people were harmed.
Uh, and so I think incidents are agood thing to build around to motivate

(15:47):
sort of adoption of responsible machinelearning or AI because they can help you
sidestep some of the really challengingissues around fairness and privacy.
Um, and so the incident databaseis a database of incidents.
It's indexed, it's searchable, it'sinteractive, and it has about 500, it has
thousands of public reports of, of, aboutmore than 500 known public AI and machine

(16:12):
learning system failures at this point.
And, um, there's two goals of,of incidents, or, or sort of
incident reporting in a database.
The first one, and the most important one,is don't repeat past failed AI incidents.
So let's come back to that as probablythe best example of mishandling AI risk.
And then, um, you know, there's sortof a, um, reputational ding that may

(16:39):
disincentivize companies from sort oftaking their most from, from acting
on their, their sort of most high riskdeployments, if they have some, you
know, some, some inkling that theymight show up in this public database.
So I think in, you know, the incidentdatabase has two, two purposes.
One, information sharing toprevent repeated incidents.

(17:00):
And two, just sort of, um, makingorganizations and the public aware
of these failures and in thatway maybe disincentivizing some
of the highest risk uses of AI.
So, so getting back to what I wouldconsider like a really good slash
bad example of mishandling AI risk,um, There was a chatbot in 2021

(17:27):
released in, um, South Korea or SouthKorean social media on the Kakao app.
A very popular me, um, app, um,Korean language app, and, um.
It, all of it, it started makingdenigrating comments about, um,
various different types of people,uh, and, and had to be shut down.

(17:51):
Now, this was a near exact repeatof Microsoft Research's very
high profile Tay chatbot failure.
where Tay was sort of poisoned by Twitterusers and started making all kinds of
um, racist and obscene statements, um,and, and if you look at the marketing

(18:12):
of these chatbots, if you even, they're,they're both sort of, um, you know,
anthropomorphized, which is bad, um,they're both sort of anthropomorphized
cartoon images of young women, and theyhave, like, light around their faces.
It, it was like, it was like the,the ScatterLabs designers just fully
repeated the Tay failure, and, and Ithink that that points to the real level

(18:35):
of maturity in a lot of the design ofthese AI systems, you know, people are
repeating the most, people are, peopleare still repeating just the most
famous AI incidents, and, uh, had theyperhaps checked the good old AI incident
database before they got started, theymight have thought, oh, hey, there's a
risk that this chatbot, um, you know,might make some biased statements

(18:55):
towards different kinds of individuals.
And then I have to add on, onLee-Luda, this is all public.
Um, you know, it was also handing outpeople's personal information because
they failed to scrub, they apparently,you know, failed to scrub personal
information from the training data.
So it was both insulting peopleand, um, violating data privacy law.

(19:19):
And, and so, yeah, I, I'd saythat, that's, that's a There's,
there's a lot wrong there.
But to me, the most wrong thingis repeating a failed design.
And so that's what that, that's a majorreason why I'm into the AI incident
database is to get as much informationout there about failed design so that
people hopefully stop repeating them.

(19:39):
So you said one thing, you said,you know that the chatbot was
anthropomorphized and that is bad.
I'm curious.
Yeah.
In my opinion.
Yeah.
Well, yeah.
Can you talk about whyyou think that's bad?
Yeah, so I think it's bad becauseAI and machine learning systems are
nowhere near as smart as people, and,um, if we over rely on them the way

(20:00):
we might rely on human intelligence,people are going to get hurt, and,
um, I think self driving cars...
Go ahead.
Yeah, so people are gonna gethurt just because, yeah, maybe
you're gonna go right into one.
Yeah, so I, I think the most, Ithink the most obvious example of
this is, um, self driving cars.
I, I just don't think they're possiblyready yet to drive around, you know,

(20:24):
sort of crowded, congested streets.
Like, we can argue back andforth, like, can they, can they
operate in closed environments?
Can they operate on sortof predefined routes?
Yeah, maybe, maybe.
But what's absolutely clear to me is wedo not know, yet know how to make self
driving vehicles that can operate in sortof real dense urban, um, environments.

(20:44):
And so, you know, if we anthropomorphizeAI systems, people start to
think You know, the, the public,the, the, the consumers of the
system often start to think thatthey're as smart as people, right?
And I think that's been a bigpart of issues around ChatGPT is
that it, it's sort of presentedas this human like intelligence.

(21:04):
And so, you know, the, themain danger is that we, um, By
anthropomorphizing these systems, wesort of imbue them in the public's
mindset with human like intelligence.
And that's dangerous because these systemsdo not have human like intelligence.
And they, you know, go lookat the AI Incident Database.

(21:26):
They fail in myriad ways.
And, um You know, it, it's just, it'sjust not a good thing to overhype the
sort of capabilities of consumer products.
And, and especially when we know thatthey can lead to sort of harmful outcomes.
And I think anthropomorphic cessation,um, plays a large role in that

(21:47):
kind of overhyping of, of currentAI machine learning capabilities.
I do not think we're at the dawn ofartificial general intelligence, okay?
Sorry, I'm sorry.
Um, yeah, I think that's really clear.
I agree with you on that a lot,so thanks for spelling that out.
I, you mentioned, so right before wegot on, you were talking about how the
incident rate had changed in the database.

(22:08):
Yeah.
I don't know if you want to.
Yeah, yeah, sure, sure.
I'm happy to get into that.
So, um, the, yeah, I, at theend, I'm, I'm, I'm, At one point,
I was a leading contributor ofincident reports to the database.
That's how I got into it, and, um,and again, I got into it because
I want to hear that story too.
Yeah, yeah, no, no, it's, it's simple.

(22:29):
It's just the thing I was talking about.
I, like, you and I, or, or, you know,a lot of people that, Like, with
advanced degree, a lot of Americanswith advanced degrees that work in
tech might think that we have goodideas about fairness and privacy.
Um, but it turns out that thoseideas are often unworkable with

(22:50):
inside of large organizations, oftendon't align to, um, other valid
notions about fairness and privacy.
And so I just found that to be a reallydifficult sticking point to implementing
Responsible AI, and I found incidents tobe a much easier thing, uh, to, to sort
of motivate people around ResponsibleAI, because we might disagree on what

(23:10):
fairness or privacy or transparencymeans, but we don't want to look stupid,
and we don't want to hurt people, andwe don't want to cost our company money.
And so, so I found, I thinkincidents are just a really important
motivator for Responsible AI.
Um, but yeah,
I just want to say, I want to underlinethat, because I have been thinking
about, you know, obviously at Fiddlerwe think about responsible AI, and I

(23:33):
think that you're absolutely right.
That's a kind of a brilliant insightfor me, and you know, I'm so thankful
for you sharing that, but you're right,because that's date, like cold data,
that then we can just point to and say,hey, the incidents are going to, it's
a clear metric, Yeah, we don't haveto argue exactly like you're saying
about these sort of nitty gritty howare we going to measure fairness for
this group, who's defining fairness,is it individual fairness, is it group

(23:56):
fairness, what are we asking for, no,right, let's just look at the instance.
Yeah.
Okay.
And yeah, yeah, so I'm glad that makessense to you, um, so yeah, so, you know,
I don't know the exact numbers, maybe Ishould as a board member, but you know, as
a, as a, as a contributor and a user, um,What there was a commercial when I, that

(24:19):
will date me when I was growing up, it waslike, I'm not, I'm not just the president,
I'm a, I'm a client or whatever, um, soI don't just sit on the board, I'm an
active user of the AI Incident Databasetoo, and so, you know, around the end of
the year, there are about 300 separateincident reports in the Incident Database,
and now, You know, we're a few months into2023, there's like two or three hundred

(24:42):
more incidents and they're the majority ofthem from a, you know, quick qualitative
analysis standpoint are from generativeAI systems and from self driving cars.
There's others, there's others,but, but the bulk of the new
incidents are from generative AIsystems and self driving cars.
Yeah, and so the way you put this tome, for our audience here before, was

(25:07):
with the release of ChatGPT, so thatwas in November, the incident rate
has doubled basically since then.
Yeah.
It, I, I think that's roughly true.
I think that, that something alongthose lines is at least roughly true.
Yeah, that's wild.
I think, um, I mean, so this, Ihave this, I have this question laid
out and it kind of goes into it.
So what do you think the mostsignificant AI risks for businesses

(25:30):
that they should be aware of today?
Okay, it's a good question, and I thinkfor business people, um, I think there's,
there's really three main uses of AI,and they Two of them have similar risks,
and one of them has very different risks.
And so, you know, from my wholecareer, um, until generative AI

(25:54):
really started taking off, machinelearning was used for two things.
It was learned for pattern recognition,so like, facial recognition,
and it was used for decisionsupport or decision making, right?
So, Um, this is, this isessentially classifiers, right?
And, and whether it's a big fancyclassifier that picks your face
out of a million other faces, orwhether it's a, um, you know, more

(26:18):
kind of traditional classifierthat, that decides whether you're
likely to pay your loan back or not.
Um, you know, we were basically usingAI machine learning to make decisions
and or to detect patterns that we wouldsubsequently make decisions about.
And for those, um, Types of applicationsthat I think still make up the bulk

(26:39):
of, of real world applications.
Um, you know, your main risks are thethings that we've been talking about.
Um, bias and discrimination, uh, dataprivacy violations, poor performance,
you know, problems with robustness,problems with reliability, um,
issues with transparency, um, and,and so that's sort of where the, the

(27:01):
field of Responsible AI comes in.
sort of grew up around the riskof decision making AI systems.
What I really want business peopleto understand is that generative AI
is not for decision making, okay?
Like, you should not take a systemthat was trained to generate content

(27:21):
and not trained to make decisions Andthen use that system to make decisions.
So I think a major risk, so generative AI,to me, has a very different risk profile.
Um, a major risk with generative AI,um, and, and this, I would say this
is shared with, um, with traditionaldecision making or pattern recognition
applications, but, but I think it'smore acute with generative AI, is

(27:45):
automation complacency or over reliance.
And that's the, that's the exact issue.
That's the fancy way to, to say thatexact issue I was talking about.
You can ask ChatGPT for stock tips.
Um, and, you know, they mightbe as good as a, as a human
investors, but they're still silly.
They're still, it's just, it justpredicted the next word, right?

(28:06):
Like a lot of humans aren't going toI'm not good at stock picking either,
because it's a random walk, but, um, youknow, I'm just trying to pick something
off the top of my head to make the pointthat you can ask ChatGPT for a decision
making, you know, a decision makingquestion, and it will give you a good
sounding response, but it's really, reallycritical to understand the only thing
that has happened is it generated themost likely response conditioned on the

(28:31):
tokens you gave it, and that is not anadequate mechanism for decision making.
And so I think.
You know, that, the major risk tome with, um, generative AI is, is
automation complacency or over reliancebecause people don't understand
it's not for decision making.
Um, there's significant risk there, or, orat least significant unanswered questions

(28:53):
around intellectual property, right?
There, there's lawsuits flyingaround like crazy on intellectual
property with generative AI.
There's significant questionsaround data privacy.
Um, there, there's, um, You know, there'salso issues of, and I hate this term,
but I'm going to say it so people knowwhat I'm talking about, hallucination.

(29:15):
What, what in the good olddays we just called errors.
So these systems make errorsall the time because all they're
doing is predicting the next word.
And so, you know, sometimesthey get that prediction wrong.
And so I think, There's, you know,there's a lot of risk with these systems,
depending on how they're deployed.
They can be deployed in lowrisk or high risk settings.
Um, I think they're better off, allmachine learning is, today, is mostly

(29:38):
better off deployed in low risk settings,unless you really know what you're
doing, but then generative AI reallydoes have a different risk profile than
sort of more traditional decision makingor pattern recognition applications.
And this, I think, goes, so all of thatgoes back to your earlier point of the
dangers of anthropomorphizing a system.
Yeah, yeah, yeah, this is why wedon't want to anthropomorphize.

(29:58):
Yeah, because humans, humans,humans have the ability to detect
patterns and make decisions.
Yeah, um, I just, I just don'tthink we're there yet with, um,
2023 AI and machine learning.
And I don't, you know, I don'tunderstand why, why people sort of feel
pressured not saying things like that.
Um.

(30:19):
But, but yeah, I just think if, if we'rerealistic, we're just not, we're just
not up to human intelligence yet andmay never be, um, and so anyway, I'll,
I'll, I'll be quiet, but please go ahead.
Yeah, I mean, I guess it's also in, youknow, you would say any of the companies
who are releasing the large languagemodels, you know, GPT-4, so OpenAI,
whoever, it's in their interest to say,oh, it's near human intelligence, right?

(30:42):
Some buzz around the model itself.
Yeah.
So there's a publicityaspect I think to that.
That's those.
That's those market incentives drivingpeople away from responsible AI.
Right, right.
Um, so I'm gonna, I'm, I'm notsure if I should jump to some,
um, audience questions yet.
We are getting a few, but I think, um,maybe I'm curious before we, we get there.

(31:07):
I, I, one question I see is how doyou foresee the role of AI ethics
evolving in the coming years?
All right.
I, I'm bleak on the outlook of AI ethics,and, and, and not, not in academia, uh,
I think in academia, it's likely to, tosort of flourish slowly, like, like other

(31:27):
branches of ethics do, um, in, in sort ofthe research world, but in the commercial
world, I, I think I've seen the future ofAI ethics, and it's kind of sad, um, so,
Don't break my heart, Patrick.
Don't break my heart.
All right, all right.
Well, it doesn't have to be all, itdoesn't have to be all companies, but,
but, I mean, I think the trend that I'veseen has been that, um, you know, it, it

(31:53):
was, it's almost like company, companiesaccidentally hired people with ethics.
To do, uh, to do AI ethics,and that led to real sort of
commercial problems, right?
And, and while I might agree with alot of the sort of famous AI ethicists
that saw their big tech careers come toan end, you know, you can also see it
from the company's perspective, like,their job is to make money and sell

(32:16):
products and get products out the door.
Um, and so, So what I've seen thoughis, is the companies kind of fixed this,
that bug in their system, right, likenow I feel like AI ethics groups in large
research, large commercial research labsare focusing on sort of the, the fake
Terminator apocalypse, which is You know,I, I, I do think generative AI and other

(32:39):
as AI systems can cause catastrophic risk.
I, I think a deepfake, you know, a,a well timed deep fake, a well done
deep fake could, could cause a war.
I mean, I, I, I don't think thatthere are, I think catastrophic risk
are possible, but I think this, thissort of pretending that we're at
the dawn of a GI and the computersare gonna link themselves together.

(33:00):
And do bad things is, is pretty silly.
And I see that, you know,being a focus of AI ethics.
And then also I see just peopleinvestigating what I would also call
really silly topics around like thefeelings of large language models.
Um, I don't possibly see how largelanguage models have feelings.
And so I, you know, I could be wrong.

(33:20):
I, I could have, I could just betoo traditionalist here, but, but
unfortunately I think I've seen acommercial sort of Um, practice of AI
ethics head in, in pretty silly directionsthat, that have no chance of sort of
impeding a company's commercial goals.
And so, you know, it, it may havebeen a mistake to mix for profit
companies and anything involvingethics together in the first place.

(33:44):
Um, but, but what I see there The remnants there are, are
pretty silly and pretty useless.
And so I'm hoping that, that in academia,AI ethics will be more serious and focus
on more real and substantive topics.
Um, and, and I really look towardsthe future of risk management.
And so I think for companies, ethicswill continue to be very difficult.

(34:06):
Um, but, but hopefully risk managementis is a better lens for companies
to think through how to align theirsystems with sort of their own
corporate or other human values.
So, um, you know, that,that's my answer there.
I hope it's not too bleakor too upsetting, but, um,
No, I think, yeah, you're, it'sa really real answer, right?

(34:27):
I mean, I think everyone I imagineon this call knows, you know, that
at the end of the day, you know, it'syour own business's profit, right?
Like that's what's goingto make or break you.
So even I think that's why, you know,it's hard, it's difficult to have
regulation come, as you're saying,you know, from the inside, right?
Because ultimately that first piece, youknow, your profit is going to, is going

(34:48):
to win out over any sort of ethical.
Yeah, yeah, in an organization that's,that's basically mandated to make profits.
Yeah, it's, it's just right.
It's, uh, it's, it'ssomewhat straightforward.
It's somewhat straightforward.
Yes.
Um, so, okay, let's see.
We've had a few questions coming in.
I want to make sure that the audiencefeels heard and gets the chance to ask.

(35:10):
Please.
Yeah, yeah, please.
So, um, Praveen is asking howdo you handle data protection?
If AI can easily scrape some ofyour personal and professional
data, as was with the case withChatGPT and the incident at Samsung.
Oh, I think it was with KaKaoTalk.
Mm hmm.
Mm hmm.
Well Alright, so, so one, Imean, it's, it's a good question.

(35:33):
I, I'd like to sort of correct or pushback on the premise, one premise of
the question, which is that companiescan easily get your personal data.
Um, that's less and less true,at least in the legal sense.
Now, technically, can they go out andkind of scrape and buy or, or, you know,
by hook and by crook, get your data?
In the U.S.

(35:54):
and other countries, yes.
In other countries like theEU or China, not, not so much.
And so, so one, I, you know, I thinkthe public needs to be more, um,
needs to advocate better for theirown data privacy rights because
while it's It's absurd that we don'thave AI regulation in this country.
It's even more absurd that we don'thave data privacy regulations or

(36:16):
federal level data privacy regulations.
So I think, um, one, thereare some federal data privacy
laws in different verticals.
Two, there are more and more stateand local laws around data privacy.
So I think it, it should be harderand harder for companies to sort
of access your personal data in,um, predatory and deceptive ways.

(36:36):
But the reality is today,at least in the U.S.
Um, that, that it isstill mostly possible.
Um, so I think you just Sadly, youjust have to take responsibility
and, you know, be careful what yousign up for, be careful what you
post on social media, um, be careful,uh, how you store your information.

(36:58):
I mean, I'm, I'm a, I use password, youknow, I use password managers, VPNs, um.
You know, all, all these kinds ofthings in an attempt to, to keep my,
my data as private as possible, but,but really the issue is that we should
just have better data privacy regulationin the U.S., generally speaking.
So one, this makes me havea, a question about that.

(37:20):
I want to go a little bit deeper, butactually realizing that this, what this
person was asking about was the incidentwith Samsung with ChatGPT, Korea, where.
someone inputted their own.
They used CPT inputtedtheir own sensitive data.
And then now that is OpenAI has that data.
Mm-Hmm.
. So the slight,
I think there's some, I, I wannabe, I think there's some question

(37:45):
as to whether OpenAI has the data.
Um.
So I want to be clear about that, but,but, the, the very clear advice, I'm
aware of this incident, and, and, youknow, I can talk about that more, but to
me, the very clear advice, and I thinkthis really pushes back at, at the value
of these generative AI systems, they'regreat for writing Christmas cards, they're
great for writing, you know, poems tomy four year old, I don't see really

(38:10):
how they can be used today in high riskapplications within large organizations
because of the controls that are needed.
And so the controls that are neededthere are, you should not copy and
paste into or from the UI, okay?
So you have to type in information or getit in there another way that doesn't just
replicate your company's proprietary data.

(38:31):
Not, not necessarily because OpenAI hasaccess to it, but more because We're
not sure what OpenAI does with it.
We're not sure what hackers or attackerscould do with it if OpenAI does have it.
And we're really not sure aboutthe intersection of, um, local
and federal, depending on thecountry, and international data

(38:53):
privacy laws and the use of these.
These complex new AI systems, right?
So I, I don't think it's as clear cutas, as OpenAI stole their data, but I
do think there's significant risk there.
And that, um, we don't really knowwhat OpenAI does with the data.
Um, we, we don't have any good answersfor really how data privacy laws

(39:14):
and IP laws that exist are, and areenforced work with these systems.
So I think in all of that uncertainty.
The only thing you can do is notcopy and paste into it, which
makes it pretty hard to use.
And I think, you know, I, I'm awareof many, so companies don't like
to come out and say this, but I'maware of many companies that just ban
ChatGPT and other sort of generativeAI use case, use, user interfaces.

(39:38):
from their servers the same way they banFacebook and Snapchat and everything else.
Um, and, and so, you know, I'llleave it at that, but, but the, the,
the, a normal human being would havegiven the answer of, don't just copy
and paste into the user interface.
In fact, You really should not copy andpaste into and from the user interface.

(39:59):
Um, so I'm really curious, I actuallydon't know much about data privacy laws,
so can you just quickly break down how wedon't have data privacy law protection?
Like, how you see that?
Sure, so, and then I saw the littlesnip about, um, are we just going
to ignore OpenAI's privacy policy?
And that's why, that's why Isaid, no, we're not ignoring it.

(40:19):
Um, we, companies have a, have a longrun, companies really struggle to
follow their own privacy policies, okay?
And then moreover, there's a lotof open questions about how complex
LLMs interact with existing dataprivacy and IP laws, even if the
privacy policy is followed to a T.
But getting to your, you know, gettingback to your question, the U.S., the U.S.

(40:41):
tends to take, and again, I'm a, I'm aself taught policy person, and if there's
any real policy people on the line,they'll, They'll see that immediately.
But the U.S., the U.S.
tends to regulate at the, and withinverticals, within industry verticals.
And so there are strong dataprivacy laws in education.
There are strong dataprivacy laws in healthcare.

(41:02):
Um, but there's not, like there isin the EU, or in China, or in many
other countries, a broad sort ofoverarching, um, country wide law
that sets a, that sets out, um, Thatsets out sort of basic rights around
how organizations can use data.

(41:23):
And so in the EU, you, there's,there's like six legal bases
under which data can be used.
And none of those legal basesinclude Training your cool AI system.
Um, and so literally you have to, youhave to have a lead, you know, one
of the most basic things about dataprivacy law is, is it sets up legal
reasons for which you can use data.

(41:44):
And if you are using data outside ofthose reasons, then it might be illegal.
Um, Consent is problematic, but, buta big part of data privacy law is
where we consent to having our dataused, um, you know, things about how
long data can be stored, how long datahas to be kept, um, the conditions
under which it's stored, those are allkind of basics of data privacy law.

(42:07):
Okay, yeah, thank you, that's very clear.
Um, I'm gonna fold, so here's another,audience question about and i'm going to
fold it into an existing one that i hadas well um so their question is there a
document or a checklist or even a rubricwhich can be used to gauge the AI risk
or AI applications and i want to foldthat kind of into the larger question
that i was going to ask you which is howcan companies very broadly you know or

(42:30):
just ensure that they're developing anddeploying AI systems in an accountable way
So broadly, I would send people to thenew, you know, as of, as of January 2023,
the NIST AI Risk Management Framework.
Um, there, there's a million, youcan Google a million different
Responsible AI Checklists orTrustworthy AI Checklists and, and

(42:53):
some, some are better than others.
Um, but, Likely the best one, orlikely some of the best ones, come
from institutions like NIST and ISO,International Standards Organization.
Um, and so, if, if you're lookingfor the best advice, I would send
you to places like NIST and ISO.

(43:13):
They have different standards,but, but they do work on sort of
making sure the standards align.
And, um, you know, I, I, I, It dependswhat kind of guidance you're looking for.
ISO is more like a checklist.
You know, ISO basically does supplythese very long checklists for like, you
know, validating machine learning models,ensuring reliability in neural networks.

(43:35):
There's, there are now ISO standardsfor this, and they really are kind of
like deep, long, technical checkliststhat you can apply to your system.
NIST is a little bit different.
Um, NIST is, does more sort ofresearch, and then synthesizes
that research into guidance.
And so, so NIST has just put forward, um,really a mountain of AI risk management

(43:58):
guidance with their AI Risk ManagementFramework Version 1, and that, that
would be one of the best resources inthe world that I would send people to.
Now, accountability is one of my favoritetopics, and um, I'm gonna, I'm gonna
kind of tackle that a little separately.
Um, how, how do you ensureaccountability in AI systems?

(44:20):
Um, I think there's one direct way, andit's all about explainability, and the
kind of stuff that Fiddler does, whichis, uh, you enable actionable recourse.
Okay, so you tell people how a decisionwas made and provide them a process to
appeal it in a logical and timely manner.
Okay, that, that is, that is probablythe key rubber hits the road.

(44:41):
How do I make an AI systemaccountable is I give people the
opportunity to appeal wrong decisions.
That's like, that's the point ofthe sphere on AI accountability
in my, uh, in my opinion.
Now, you know, one other thing that Ialways like to point out is Uh, banking
regulations, where the use of machinelearning and predictive models, um,

(45:04):
has been regulated for a long time,for decades, uh, they, at, at the
largest banks, um, mandate or, or notmandate, but, but sort of, it's become
tradition, and regulators would have alot of questions for you if you didn't
do this, um, they put one single humanbeing in charge of AI risk, and that

(45:27):
one single human being gets You know,they're, they're well paid and they get
big bonuses when the AI machine learningsystems work well, and they get penalized
and potentially fired when they don't.
Moreover, that one single humanbeing, which, this stands in direct
contrast to most of the responsible AIdirectors I run into, um, has a very

(45:47):
large staff and a very large budget.
Um, and, very crucially,they are independent from
the technology organization.
You know, I run into all kinds ofresponsible AI directors who have
no budget and no staff and reportup through the technology chain.
That's not, that's not,that doesn't do anything.
And so In model risk management, theyinstate a single human being with all

(46:12):
of the accountability, and that persondoes not report through technology,
they report to the board risk committee,and they're hired and fired by the
board, not by the CEO and the CTO.
So there's both sort of governanceinternal structures that are needed
to make AI systems accountable, likechief model risk officers, and then
there's also this notion, I thinkthere's a lot of things you can do to

(46:34):
make AI systems accountable, but on thetechnical side I think the point of the
spear is allowing people to appeal wrongdecisions, which oftentimes entails
describing how a decision was made andallowing people to appeal that, you
know, the logic of the decision or thedata on which the decision was based.
Yes, that really makes sense.
Yeah, I think it's also really interestinghow you're seeing most people sort of

(46:55):
being folded into the technology wing.
Yeah.
I mean, yeah.
Great points.
So I'm going to fold in together sortof two questions that I'm seeing that
are sort of broadly on the topics ofethics and sort of who, you know, so
this person is saying, when we sayethics, who's setting the baseline?
And the second person is, is asking,you know, AI and data are deeply

(47:16):
political and how do we balance theinterests of countries at different
economic and social trajectories?
And I think thesequestions are kind of tied.
Right.
And, and I think,
Globally agreed upon statements.
Yeah.
Yeah.
Yeah.
Yeah.
I, I think.
So you can, if you want to see notglobally agreed upon standards, but

(47:37):
standards on which You know, bureaucratsand civil society in a large number
of countries have agreed on standards.
You can go look at the OECD AI work.
Um, and, and so OECD does have AIstandards just like NIST and ISO.
And OECD is sort of, um, alarger group of countries.

(47:58):
And so it is possible, it is possibleto come to some kind of understanding.
But, but I think, you know, thisgets at what we were saying earlier.
You might have good ideas about fairness,I might have good ideas about fairness.
And they might be really different.
Also, I mean, it's just obvious thatthere's too many, uh, white dudes and,

(48:19):
and just not enough diversity whenit comes to setting the baselines of
what's considered fair in technology.
So, so what I really try to do is Um,one, you know, work with many different
kinds of people, both in terms of, ofsort of demographic background, but,
but perhaps as importantly in termsof, um, professional backgrounds, like

(48:43):
I like to work with statisticians,and economists, and psychologists, and
social scientists, and I like to talk tothe people who use the systems, right?
I like to get a broad audience.
Spectrum of perspectives on systems,but, but, you know, irregardless of
what we can do to deal with theseissues around bias and privacy, bias
and privacy, and their sort of normativemeanings across different cultures.

(49:07):
That's why I look todifferent kinds of solutions.
Like I look to the AI incidentdatabase and incident response.
I look to risk management, whichis actually a pretty boring state
field that that is fairly mature.
Um, and does a good job handlingrisk, whether that risk is, um,
an airplane, you know, crashing,or an algorithmic system illegally

(49:32):
discriminating against millions of people.
So, um, I, I think that I'm not anethicist, I'm not a trained ethicist, um,
I almost, in my professional work, tryto sidestep this, this notion of ethics,
because it's a wicked problem, and that,that's a real term, wicked problem.
Um, it, it's You know, it's borderlineunsolvable, but what we can't, but

(49:56):
there are things that we can do,which are, you know, focus on incident
response and reducing incidents.
There are things that we can do,like just bring sane risk management
practices into the AI world, that Ithink are actually a lot more easy and
direct than trying to to tackle thesereally tough ethical and political, and

(50:16):
yeah, I mean, the question is right.
These are, data and models are verypolitical, and they will be used, they are
and will be used for political purposes.
So I think, you know, weneed to be aware of that.
We need to deal with those risks.
They are very serious, and I, forme, the way that I do it is by
focusing on more concrete things.
Incidents, incident response, preventingincidents, and risk management.

(50:39):
Excellent.
Maybe could you say, like, becausejust in that answer, you said, you
know, of course, focusing on incidentresponse, but then also doing,
you said, sane risk management.
Could you just give a couple of bulletpoints on some things that you think
would be universally applicable?
Yeah, so, so having one single person withactual responsibility and accountability

(51:04):
in charge of AI risk is important.
Um, not a responsible AI directorwith no staff and no budget.
There are reports to the CTOthat doesn't do anything.
Um, having executive support for riskmanagement around AI systems, so having
senior executives in the board understandwhat's going on in an organization around
AI, having them approve of those things,that, that really changes the tenor of

(51:28):
what organizations want to do with AI.
Um, having clear and transparentand written policies and procedures
that people get education on so thatthe rules of the road are, are well
known and available to everyone.
Um, and, and really this is,this is the most basic thing.
This is the most basic thing.

(51:49):
If you want people to do risk management,you have to pay them to do it.
Okay, it's just, it's really simple.
It's, it's really, that's reallypretty straightforward, right?
Risk management is a boring,tedious, difficult job that has
to be done by people that arejust as skilled as the developers.
So, having equal stature is anotherbasic risk management point between
validators and, and developersis another really basic point.

(52:12):
And, you know, the risk managementwork is never going to be as exciting
as the development work, so youjust have to pay people to do it.
So people have to be incentivized, peoplehave to be properly incentivized to
take, um, to take on risk management.
And then again, you know, like Iwas bringing up, you really have
to have the right organizationalstructures to do risk management well.

(52:33):
And so if, if people are wonderinghow I know all this, it's
actually all in one document.
called, uh, SR 117 by the Federal Reserve.
It was released in 2011.
Um, it's about 20 pages long,written in plain language.
You can and should go read that, andyou will have a very, you know, a
much more clear idea about how to dorisk management around these systems.

(52:54):
So, um, those were just, and all right,so one more basic risk management.
Nobody, not JPMorgan,not Meta, not the U.S.
government, has enough resourcesto manage all of their risk.
And so you have to measure and prioritizerisk and focus risk management resources
on the most, um, perceived mostdangerous risk or most threatening risk.

(53:14):
And that's another waythat we do risk management.
So super helpful.
Thank you.
Um, so a few more audiencequestions in our last minute.
So do you The clarity emerging in Clay'slaw that deals with the, quote, "right to
be forgotten" from ML training data, ifa single individual wants to be, quote,
"forgotten," does that automatically meanthe death penalty for all models derived

(53:37):
from datasets that included the individualwho wants to be forgotten entirely?
No, I, I don't, I, I'm not a lawyer,and I don't practice, and even more than
not being a lawyer, I'm not a lawyerthat practices in the EU where they
actually have a right to be forgotten.
We don't have that in the U.S., okay?
So, um, what, but, but manycountries do have this, and this

(53:58):
kind of gets back to the discussionwe were having about ChatGPT.
I assure, I work with some of thebest data privacy people in the world,
at least I think I do, and the, thehonest, The truth is, like I was saying,
people do not have a clear idea ofhow the right to be forgotten is going
to interact with complex AI systems.
People don't have, um, a clear ideaof whether there truly is a right to

(54:22):
explanation under the GDPR in the EU,and if that right does exist, how is
that going to work with AI systems?
Um, how do already existing dataretention limits and data storage
requirements interact with AI systems?
People don't, the, you know, the, thequestions, the answers to these questions
are sort of on a case by case basis,like, like the questioner said, sort of

(54:45):
coming up in, in various court cases or,or other sort of technical problems, um,
and, and my impression is there's just noreal clear answer, and that's why I would
say that technologies like ChatGPT arerisky for large organizations because,
you know, say that you use ChatGPT ina country or, or a jurisdiction where
there is a strong right to be free.

(55:06):
Forgotten?
Nope.
What's going to happen?
People don't know theanswer to that question.
And so it's not that OpenAI is stealingeverybody's data, it's just more that
there are these unknown risks aroundusing machine learning, any kind of
machine learning, um, and sort of theburgeoning, uh, field of data privacy law.

(55:27):
There's just a lot of open questions.
And if you're a large organization witha big operating bank account that can be
sued for a lot of money, or face reallyserious regulatory damages, um, those are
risks that you might not want to take,despite the technology seeming so cool.
We have, gosh, there's so many, stillso many good questions coming up.

(55:47):
I'm going to ask this last one beforewe sort of wrap up, which is, so what do
you think about the position of Elon Muskand others, you know, who think about
holding back on AI development in any way?
Oh, that's really simple.
Elon Musk means Needs people to thinkAI is better than it actually is.
That's why he signed that letter.
Okay, um, OpenAI needs its competitorsto slow down their development.

(56:10):
That's why they signed that letter.
Um, I don't think anyone of thesophistication of an Elon Musk or or
the CEO of a major AI company thinksthat we're at the dawn of AGI, right?
I think that they signed theseletters for sort of, um, uh,
you know, political reasons.
Like we were just saying, oh,this is all political, right?

(56:31):
Uh, Tesla needs people, needs,needs the government and consumers
to be confused about the currentstate of self driving AI.
Which isn't that good.
Go look at the AI Incident Database.
Um, OpenAI needs their competitorsto slow down, because their, many
of their competitors have a lot moremoney and a lot more people than them.
And so, you know, just to be asdirect as possible, that's why I think

(56:55):
those companies signed those letters.
And, and, you know, I know wemight be going over here a lot,
but this is actually one thingthat, that makes me the most mad.
Um, I am very, very frustrated by effortsthat divert AI risk management resources
to a perceived, you know, future or nearfuture catastrophic terminator type risk.

(57:16):
It's fake, and in the worst cases,it's, it's deceitful, right?
Literally, there are people involvedin, at the highest levels of these
efforts who are trying to movedollars away from efforts that
might actually work to manage risk.
Because that will slow down theirprogress and move money into sort
of these fake efforts around, youknow, this is the dawn of AGI and

(57:36):
we need to be worried about Skynet.
And so I get really animatedon these topics, but just to be
clear, you know, there's no Skynet.
We're not close to Skynet.
We're not close to the Terminator.
If people are advocating moving spendingtowards those kinds of efforts, then
I would question, you know, whetherthey're doing so in good faith or not.

(57:57):
Yeah, I'm so glad we asked that question.
What an excellent, um, point to end on.
Patrick, I've just enjoyedthis conversation so much.
I've learned so much from you today.
Um, I'm so glad that you could be here.
I want to say that we'll probably continuethis conversation on our community.
So if people want to sign up onour Slack channel, um, we can Uh,

(58:18):
populate with some of the questions wedidn't get to and answer them there.
Um, otherwise, I just want to saythanks to all the attendees for coming.
And again, I know I alreadysaid thank you, Patrick, but
man, really fascinating hour.
I've learned a lot.
Um.
Very kind of you.
Very kind.
Happy to be here.
And, you know, audience, pleasefeel free to connect on LinkedIn.
That's the easiest place to find me whereyou can tell me I'm smart or stupid.

(58:39):
People do it all the time.
So you can have your turn.
But, uh, great to be here.
And, and thank you so much.
Advertise With Us

Popular Podcasts

On Purpose with Jay Shetty

On Purpose with Jay Shetty

I’m Jay Shetty host of On Purpose the worlds #1 Mental Health podcast and I’m so grateful you found us. I started this podcast 5 years ago to invite you into conversations and workshops that are designed to help make you happier, healthier and more healed. I believe that when you (yes you) feel seen, heard and understood you’re able to deal with relationship struggles, work challenges and life’s ups and downs with more ease and grace. I interview experts, celebrities, thought leaders and athletes so that we can grow our mindset, build better habits and uncover a side of them we’ve never seen before. New episodes every Monday and Friday. Your support means the world to me and I don’t take it for granted — click the follow button and leave a review to help us spread the love with On Purpose. I can’t wait for you to listen to your first or 500th episode!

24/7 News: The Latest

24/7 News: The Latest

The latest news in 4 minutes updated every hour, every day.

Crime Junkie

Crime Junkie

Does hearing about a true crime case always leave you scouring the internet for the truth behind the story? Dive into your next mystery with Crime Junkie. Every Monday, join your host Ashley Flowers as she unravels all the details of infamous and underreported true crime cases with her best friend Brit Prawat. From cold cases to missing persons and heroes in our community who seek justice, Crime Junkie is your destination for theories and stories you won’t hear anywhere else. Whether you're a seasoned true crime enthusiast or new to the genre, you'll find yourself on the edge of your seat awaiting a new episode every Monday. If you can never get enough true crime... Congratulations, you’ve found your people. Follow to join a community of Crime Junkies! Crime Junkie is presented by audiochuck Media Company.

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.