
October 27, 2024 · 67 mins
Mis/Dis: Exploring Misinformation and Disinformation takes a look at how fake news, misinformation, disinformation, and Deepfakes are being handled by media companies and journalists. In Part 2, we speak with Dr. Siwei Lyu, a professor of computer science and engineering at the University at Buffalo in New York. Lyu says he has been studying Deepfakes for years and tells us not only the origin of the term, but how people can verify content through a new program he and his students created. You can learn more about the professor here. And here’s more on the professor’s DeepFake-O-Meter.

Up next is Shayan Sardarizadeh, a senior investigative reporter with BBC Verify, one of the newest units at the global news channel. Sardarizadeh walks us through how he and his team vet and verify content before it makes it on air and online. Learn more about BBC Verify here. And follow Sardarizadeh here. And finally, California State Assemblymember Marc Berman wrote a bill to clamp down on social media companies that allow Deepfakes to be posted, and to force the companies to give people who claim to be victimized by Deepfakes a chance to have their fake images removed. Learn about the new law here.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to KFI on Demand. I'm Steve Gregory. Thank you
for downloading the KFI News special Mis/Dis: Exploring Misinformation
and Disinformation. This is a composite of the four-hour
radio show heard on KFI AM six forty.

Speaker 2 (00:19):
As misinformation and so-called fake news continues to be
rapidly distributed on the Internet, our reality has become increasingly
shaped by falsehoods from the Internet.

Speaker 3 (00:27):
Was supposed to be the democratizing force in our elections,
in our dialogue, in our country and around
the world, and what we're seeing now is the opposite.

Speaker 2 (00:36):
Time after time after time that we have consumed or
been exposed to inaccurate information. You've got a massive company
like Facebook that is out there allowing misinformation to be
displayed on their platform.

Speaker 3 (00:51):
It's causing a lot of confusion. People don't know if
the videos that they're watching are real, if the voices
in audio that they're listening to have been doctored.
The Internet was supposed to make us, you know, more savvy, right?
How do we get to this point?

Speaker 4 (01:04):
And what see?

Speaker 2 (01:04):
People don't know the difference between something real and something
created.

Speaker 3 (01:08):
What do we do about it? Like, we can't just let
the world function this way.

Speaker 1 (01:15):
Officials in local, state, and federal governments say misinformation and
disinformation are serious concerns for democracy in our society. The
Pew Research Center says about sixty-four percent of Americans
believe fabricated news stories cause a great deal of confusion
about the basic facts of current events. A study by
MIT found that false information is seventy percent more likely

(01:37):
to be forwarded and shared than true information. A report
in twenty twenty estimated misinformation in financial markets could lead
to losses in the hundreds of billions of dollars due
to misguided investor decisions. A Gallup poll found that nearly
seventy percent of Americans express concern about the prevalence of
misinformation and disinformation. I'm Steve Gregory. For the next two hours,

(02:01):
we talk to media professionals, subject matter experts, journalists, and
scientists about the dangers of fake news and the weaponization
of digital media, and how people can spot it, vet it,
and make a more informed decision. This is part two
of the KFI News special Mis/Dis: Exploring Misinformation and Disinformation.

(02:23):
Thank you for joining us. Deepfake: a fairly new
word in the mis/dis lexicon, but according to experts, one
of the drivers of disinformation. Joining us now is Siwei Lyu,
a professor of computer science and engineering with the University
at Buffalo and an expert on deepfakes.

Speaker 4 (02:39):
I do research in artificial intelligence and machine learning with a
special focus on media forensics, and this is the field
studying algorithms that can expose any sort of digital alteration
or manipulation or synthesis of media, including images, audio, video, and text.

Speaker 1 (03:02):
So, professor, when did you realize that you needed to
research this? When did this become such an issue that
it warranted this kind of attention.

Speaker 4 (03:13):
Well, I started this research when I was in grad school.
That was twenty-three years ago, starting in two thousand and one.
I entered grad school, and one of the professors
in the introductory class gave a talk on the problem

(03:34):
of media forensics. And back then it was mostly Photoshop,
photoshopping images and splicing voices together using digital signal processing.
And it caught my attention because this is such an
important problem to me. Even though, twenty years ago, it
was mostly simple operations in comparison with today,

(03:55):
people could already make very realistic digital fakes. So
that's where I got interested in this problem. I'd been
working in this field for all those years until twenty
eighteen, when I started noticing, my colleagues actually

(04:19):
brought this up to my attention: there's this new trend,
a new way of making digital forgeries using artificial intelligence,
using algorithms. And the other side of my research is
in AI, in machine learning. And you know, previously, I'd

(04:40):
always been thinking, you know, maybe one day these
two worlds would collide with each other. And that's the
day it happened, and it happened so fast. Then I
started realizing this is a problem. So I began to
work on mitigation measures for deepfakes. Since early twenty eighteen,
I was among the first researchers to focus

(05:02):
on this area, and ever since then we have
developed several algorithms, methods, and systems in
this regard.

Speaker 1 (05:11):
We're talking with Professor Siwei Lyu. He's with the University at Buffalo,
a professor of computer science and engineering with a focus
on media forensics. I find it fascinating that, you know,
much like with other professors and experts in the field,
the misinformation and disinformation sphere has been

(05:32):
so prevalent that it's had to create people like you
to keep an eye on it, research it, figure it out.
Going back to twenty eighteen, do you remember the first
photo, video, or any kind of document that
was a deepfake? Do you remember what it was?

Speaker 4 (05:51):
Yes. It all started... actually, let me take
one step further back than that. I had already noticed it, because
I research artificial intelligence and I participate in technical conferences in AI,
so even earlier than that, like twenty fifteen, twenty sixteen,
I started to see research in this area where people

(06:15):
used AI models to recreate images or fix degraded
images to generate something that looks better. But it was
mostly innocuous, people trying to show what AI
can do in this regard. It was in, I
think, late twenty seventeen that it was first reported by

(06:40):
a reporter, where there was a Reddit
user whose account name was deepfakes. So the term
itself combines deep learning, which is like one of
the most well-developed modern machine learning technologies, and

(07:00):
fake media. So that user, up to this point, is
still anonymous; we don't know who the person behind the account is.
But they used that account to start spreading pornographic videos where
they replaced faces in the original video with the faces
of celebrities, actors and actresses, mostly actresses; almost

(07:24):
all of them are women, and they spread them online and
people got interested in this. The deepfake videos
spread very fast and caught the attention of that reporter. She
wrote a story about that, and then I read about this,
and I started to pay attention to this problem.

Speaker 1 (07:42):
So the technology is getting better and better and better
and allowing everyday people like me to be able
to do something like this. Isn't that kind of frightening?

Speaker 5 (07:54):
Absolutely?

Speaker 4 (07:55):
I think the most concerning part of this whole phenomenon
is the democratization, because I work in this area. There
are a lot of new developments, for sure. But the
core algorithms, the models we're using for creating deepfakes today,
they existed, you know, at least ten or twenty

(08:17):
years ago. It's just that back then we didn't have powerful GPUs,
graphics processing units, or powerful computers. We did not have
the Internet with an infinite amount of data and social media to
spread the fakes faster, and we did not have the
kind of intention of using a model for this kind

(08:39):
of purpose. But I think we're seeing a perfect
storm where technology, channels, data, sources, and storage, everything, has now
become available for someone who wants to make deepfakes and
spread them fast, and the tools are becoming easier and
easier to use.

Speaker 5 (09:00):
Back in twenty eighteen,

Speaker 4 (09:01):
you know, the first time we tried to
reproduce some of the deepfakes, it took me and my
grad student about a whole month just to figure out
how to set up the code and, you know, write
everything. You know, it was a hassle; you
had to have a computer science PhD degree to
somehow understand what you were doing there. But right now

(09:22):
it's a matter of, you know, someone with
internet access going to a website, you know, paying a small fee,
putting their idea in as text, and then images,

Speaker 5 (09:33):
all these videos,

Speaker 4 (09:34):
will be created for you in a matter of, you know,
a couple of seconds. So I think that's how easy,
you know, how quickly the situation, the whole
landscape, has changed.

Speaker 5 (09:47):
Yeah, absolutely. Six, seven years.

Speaker 1 (09:49):
More with Professor Lyu. But first, this is the KFI
News special Mis/Dis: Exploring Misinformation and Disinformation. This is the
KFI News special Mis/Dis: Exploring Misinformation and Disinformation.
Welcome back. We're talking with Professor Siwei Lyu. He is a
professor of computer science and engineering with a specialty in

(10:12):
media forensics and digital alteration. We're talking about deepfakes, their
impact on society, how they came to be, and before
the break, professor, you were kind of explaining the democratization
of deepfakes, how everyone now has access: for a
couple of bucks, they can go on and create their own
deepfake. As usual, technology is being developed

(10:36):
to help in the medical field and maybe in
all kinds of other ways that help humanity, but it
always takes a few bad actors to take that technology
and completely ruin it for everybody. As
someone who studies this intently, do you work on how
to curb its abuse, or do you work on how

(10:57):
to spot it and how to prove it, or how to
fight back against it?

Speaker 5 (11:03):
Yeah?

Speaker 4 (11:04):
I work on all kinds of countermeasures to deepfakes, and
first and foremost we work on detection.

Speaker 4 (11:13):
These are methods that can expose deepfakes, can automatically provide
information about the likelihood, the probability, of a piece of content being generated
or manipulated

Speaker 5 (11:26):
by AI models.

Speaker 4 (11:29):
We also work on protection, which means, you know, how
we can do something as ordinary users to protect our
data from being used by AI models to create deepfakes
of ourselves. And I also work on tracing deepfakes, like,
can we do something? Can we say something about how

(11:49):
this piece of deepfake was made, what model was behind it?
And knowing a little bit about the model may give
us information about who is behind it and with what kind
of intention.
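
To make the detection work described above concrete, here is a minimal Python sketch of what the inference step of such a tool generally looks like: a trained image classifier that returns a probability that a piece of content was AI-generated, rather than a yes/no verdict. This is not Lyu's actual detector; the architecture choice and the weights file "fake_detector.pt" are hypothetical stand-ins.

```python
# Hedged sketch: a generic "probability of being AI-generated" detector.
# Not Dr. Lyu's model -- the ResNet-50 head and the weights file below
# are hypothetical placeholders for a real, trained forensic classifier.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(image_path: str, weights_path: str = "fake_detector.pt") -> float:
    """Return an estimated probability that the image is AI-generated or manipulated."""
    model = models.resnet50()
    model.fc = torch.nn.Linear(model.fc.in_features, 1)   # single "fake" logit
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()

    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logit = model(x)
    return torch.sigmoid(logit).item()   # a statistical score, not a verdict

if __name__ == "__main__":
    print(f"P(AI-generated) ~ {fake_probability('suspect_photo.jpg'):.2f}")
```

The point of the sketch is the output: a likelihood the user can weigh, which matches how Lyu frames detection as providing information rather than a final answer.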

Speaker 1 (12:01):
Is it true that most of the deepfakes
that might be trying to manipulate an election or manipulate
public sentiment are coming from a foreign country? Are
those being created by what they call foreign bad actors?

Speaker 4 (12:16):
I do not have the full statistics, but certainly
they are a major player for
that purpose. But we have also seen, not in the US
but in other countries, especially countries with low media literacy,
like for instance India and Brazil, that they

(12:37):
come from within. So different political groups or parties,
they actually use generative AI or deepfakes as
a means of getting their message out. So it
can happen from anywhere, simply because it's easy
to use.

Speaker 5 (12:55):
So many people can use it, right.

Speaker 1 (12:56):
Yeah. So when we say deepfake, and when
you use the term, is that sort of
all-encompassing, or does it only mean video, or does
it only mean photo? What is actually your definition of
a deepfake?

Speaker 4 (13:10):
Well, from the very beginning, people used to use deepfake
only to refer to a special kind of AI-generated video.
That's the first kind I talked about, which is known
as face swaps. So you take a video and
you replace the face of the original subject with

(13:30):
some other person's face, making sure that the new face
of the other person mirrors the same facial expression as
the original person. We cannot do that by
hand or by any other traditional computer algorithms, but with

(13:52):
AI we can do that. So we started with a
very narrow definition of deepfake, specifically meaning that kind
of face-swap video. But slowly the term deepfake got
broadened, and now I think basically any type
of audio or visual content that was created or

(14:15):
modified either completely with AI or with the help of AI,
people loosely refer to as a deepfake.

Speaker 5 (14:22):
And I think, you know, that's that's justified.

Speaker 4 (14:25):
It's really the technology behind it and the intent behind,
you know, distributing them.

Speaker 1 (14:30):
Because I think when you observe a deepfake
video, I think it's easier to spot that, maybe,
than it is to figure out on the audio side.
In my world, when I listen to something, you know,
it's easy to assume that it's accurate, it's authentic,
it's genuine. But you know, lately, with all these AI-
generated voice features that you see online and in apps and

(14:53):
things like that, it's really hard to figure out when
something has been AI generated on the voice. Are you
finding the same thing?

Speaker 5 (15:02):
Absolutely?

Speaker 4 (15:03):
Not only is there the democratization problem I talked about,
but the quality of the AI-generated content has steadily, and
I will say rapidly, improved over the years. So the
deepfake content I'm seeing today compared
to something I saw for the first time in twenty eighteen

(15:26):
is like, you know, heaven and earth; it's so drastically different.

Speaker 5 (15:32):
And a lot of the artifacts we used to

Speaker 4 (15:35):
rely on, visual or, you know, perceptual artifacts, have
mostly been removed from the newest generation.

Speaker 1 (15:45):
Professor, explain artifacts, just so people understand what
you mean when you're saying identifying the artifacts.

Speaker 4 (15:53):
Yeah, artifacts just means the content has some, I'll say,
has some characteristics that contradict what we
expect to see in the real physical world. For instance,
if you see somebody standing outside on a sunny day
and you don't see a

Speaker 5 (16:13):
Shadow, that's very unlikely.

Speaker 4 (16:16):
And that's the kind of artifact created by early generations
of deepfake models. People can easily identify them, like the
twenty eighteen face-swap videos:
you can clearly see in many cases where the face
was spliced in, you can see the boundaries, and the colors,

(16:37):
the skin tone, do not match up with the original person.

Speaker 5 (16:41):
So those are, I would say,

Speaker 4 (16:46):
deceptive, but not, like, super difficult to tell apart.
But fast forward to today, and all these artifacts are basically gone,
so we cannot simply say there's something wrong here,

Speaker 5 (16:59):
it's clearly a deepfake.

Speaker 4 (17:01):
Now it takes some time to actually, you know,
look closely at the image or listen

Speaker 5 (17:07):
To the audio.

Speaker 4 (17:08):
There are still artifacts. There's never going to be
artifact-free generation.
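
As an illustration of the blending and skin-tone artifacts Lyu describes in early face-swap videos, here is a crude heuristic sketch in Python with OpenCV: it simply compares average color inside a detected face region with a thin band around it, on the assumption that a badly spliced face may not match its surroundings. It is not his lab's method, and a real forensic check is far more involved.

```python
# Hedged sketch: a naive boundary/skin-tone consistency check.
# A large color mismatch between a face region and the band around it can
# hint at a spliced-in face; it is only a rough illustrative heuristic.
import cv2
import numpy as np

def boundary_color_mismatch(image_path: str) -> float | None:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                        # no face found, nothing to check

    x, y, w, h = faces[0]
    inner = img[y:y + h, x:x + w]
    pad = max(10, w // 10)                 # thin band around the face box
    y0, y1 = max(0, y - pad), min(img.shape[0], y + h + pad)
    x0, x1 = max(0, x - pad), min(img.shape[1], x + w + pad)
    outer = img[y0:y1, x0:x1].copy()
    outer[(y - y0):(y - y0 + h), (x - x0):(x - x0 + w)] = 0   # mask out the face

    ring = outer.reshape(-1, 3)
    ring = ring[ring.sum(axis=1) > 0]      # keep only the surrounding band
    if ring.size == 0:
        return None
    inner_mean = inner.reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(inner_mean - ring.mean(axis=0)))  # bigger = more suspicious
```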

Speaker 1 (17:11):
Oh, I'm glad you said that, because I was
going to ask you: is it ever going to
get to the point where you're never going to
be able to detect a deepfake?

Speaker 4 (17:20):
I think it's just getting harder and harder. But I
don't think any deepfake could become, like, completely undetectable, because
at the end of the day, if it is created
from scratch, it does not have a correspondent in the
physical world. Just by old-fashioned fact checking, we
will be able to recover what actually happened
at that time and place and compare that with

(17:42):
what we're showing, what we're seeing.

Speaker 5 (17:45):
Or listening to.

Speaker 4 (17:47):
But I think the deepfakes are just increasing the
cost for us to do that fact checking, that verification.

Speaker 5 (17:54):
That is the main trouble they produce at
this point.

Speaker 1 (17:59):
Okay, when we come back, we'll wrap up more with
Professor Lyu. But first, this is the KFI News special
Mis/Dis: Exploring Misinformation and Disinformation. This is the KFI News
special Mis/Dis: Exploring Misinformation and Disinformation. Welcome back. We've been

(18:20):
speaking with Professor Siwei Lyu of the University at Buffalo. He's
a professor of computer science and engineering. We've been discussing
deepfakes. He's been studying this for many years, and
twenty eighteen is when he first got, kind of,
on the scent, on the hunt, of the deepfake,
and we were talking a lot about how, as the deepfake,

(18:40):
or as the technology gets better and better, the AI-
generated technology gets better and better, it's getting harder and
harder for you to become aware of it or
detect it. So when we look at the way
it's starting to be mainstream, you know, deepfakes were
starting out sort of underground, and people would post
them on subreddits and certain

(19:01):
private accounts, and now it's just out there. And I
recall a story a few weeks before the election
that in small towns, somebody on a campaign was using a
deepfake to malign the opponent. And unfortunately, in these
small towns you're not talking about billions of dollars or
millions of dollars in campaign money like you do in a
presidential election, so the opposition had no recourse. They didn't

(19:25):
have the money to fight back. How would you
counsel someone to fight back if they've been the victim
of a deepfake?

Speaker 4 (19:33):
I think, well, the current situation is not very
friendly for somebody who is a victim of deepfakes. And
I think, you know, we have mostly focused
on political campaigns up to this point, but I have
to say the majority of victims of deepfakes are
actually women and underage girls, and the pornographic materials created

(19:59):
with deepfakes have a dominating fraction of
the deepfakes we have online. So it

Speaker 5 (20:09):
Was in that part I think the attack is at
the personal level.

Speaker 4 (20:11):
You know, the victims usually get a lot of
emotional and sometimes financial damage from those kinds of attacks.
So generally speaking, it's still quite hard for a victim to
actually, I would say, mitigate and reduce the impacts of deepfakes.

(20:34):
Once it happens, it goes really fast, but there are
still some measures to take. I will say, first of all,
if you suspect you have become a victim of deepfakes... well,
let me take one step back.

Speaker 5 (20:47):
So sorry. The first step is to protect

Speaker 4 (20:49):
yourself. So, you know, be sure to share your information
with people you have at least a certain level
of trust in, because this is where, you know,
malicious deepfake makers, that's where they get their data. It's
like the fuel of their deepfake generation engines. Without data,

(21:09):
they cannot do much. So be sure to share our
data cautiously and, you know, with trusted parties. And secondly,
I think, also do some protection to our data: when
you upload images to social platforms, maybe consider adding

(21:30):
a little bit of a watermark, some sort of specific
feature there, so that you can later use to identify, you know,
this is the data I shared. But if
you suspect you have become a

Speaker 5 (21:46):
victim of deepfakes, I think the first thing is to

Speaker 4 (21:47):
document everything, because everything becomes transient online, and the
people who share this kind of content may retract it.
So keep records of everything you have encountered that could potentially
be a deepfake of yourself, put them in a document,
and then talk to the platform. That's where I see

(22:10):
the platforms can do a little bit better, because right
now it is a victim-initiated action to ask them to
remove deepfakes from the platform to stop further spreading. But the
problem is most of the burden is on the victim
to demonstrate: this is a deepfake of me, I'm the person
reporting this problem, I'm the victim; and also I have

(22:32):
to find all instances on the platform that are a
deepfake or some kind of derivative of the original content
of me, and then ask them to remove that. So
that's actually something that's not easy, but, you know,
expect that. And then I think, at a certain point,
if this is like a serious

(22:53):
attack with substantial damages, bring this up to legal action and
law enforcement; the FBI actually takes this
kind of case. Also report to places like the FTC,
because, if this is a financial fraud,
the FTC and the FCC will also take

(23:15):
those cases. So I think, you know, ring a bell;
it's like ringing a bell to alert other people
about the problem.

Speaker 5 (23:23):
There is a nice

Speaker 4 (23:27):
program by AARP, so when a senior becomes a victim of
a deepfake or any kind of online fraud, you can
also use your experience to alert other fellow citizens about this.

Speaker 5 (23:43):
Yeah, look excellent.
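
Here is a minimal sketch of the lightweight watermarking step Lyu suggests before uploading personal photos, assuming the Pillow imaging library; the filenames and label text are placeholders. A visible mark like this is a simple way to later point to "the copy I shared," not a robust defense against deepfake generation.

```python
# Hedged sketch: stamp a small visible mark on a photo before sharing it,
# so a copy circulating later can be matched back to what you posted.
from PIL import Image, ImageDraw, ImageFont

def add_corner_watermark(src: str, dst: str, text: str = "shared by me, 2024") -> None:
    img = Image.open(src).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    margin = 8
    # Measure the text so it can be anchored near the bottom-right corner.
    box = draw.textbbox((0, 0), text, font=font)
    w, h = box[2] - box[0], box[3] - box[1]
    draw.text((img.width - w - margin, img.height - h - margin),
              text, fill=(255, 255, 255), font=font)
    img.save(dst, quality=95)

if __name__ == "__main__":
    add_corner_watermark("portrait.jpg", "portrait_marked.jpg")
```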

Speaker 1 (23:44):
And so before we wrap up, because in the last
couple of minutes here, I do want to talk about
something that you and your team created at the
University at Buffalo, and that is the DeepFake-O-Meter,
another way to determine and sort of help vet that information.
If you're a consumer of news, I think unfortunately we
have to spend a little extra time to vet our

(24:04):
news now in order to trust what we're seeing and hearing.
But I wanted to guide everyone to the DeepFake-O-Meter.
It's a website. Talk a little bit about how
that came to be and what it does.

Speaker 4 (24:16):
Sure. Well, first of all, the DeepFake-O-Meter is a
research platform, and the very intention when we made that platform
was actually purely for research purposes, because I was trying
to test out and compare. I do research in media forensics,
so a major part of our work

(24:39):
is developing new detection algorithms for various kinds of deepfakes.
But I was frustrated that everybody's code is online. You know,
it's research code. I have to download it,
compile it, and make sure it runs on my computer.
And then I start to test out and compare different algorithms.
Then I'm thinking, you know, I'm okay doing that

(25:00):
as a researcher, because this is part of my daily work.
Think about an ordinary user. They just run across a
photo or some sound samples. They just want to know
if there is any chance, any possibility, this was created
by AI. They don't want to really go through this
hassle of having to use the code. So how about

(25:21):
we provide something that's convenient for the users, so they can
have quick access to cutting-edge research in deepfake
detection and then test out something they run across. So
that's the motivation behind building the DeepFake-O-Meter. It's a platform,
it's a basket, so it's not a single detection algorithm,
but it's a collection of detection algorithms coming from the most

(25:44):
recent research in this area. So we have, up to
this point, I think close to twenty-five, I
think twenty-three to twenty-five, methods on the DeepFake-O-Meter.
All these methods are from research. They're open source, one
hundred percent open source, with, you know, complete transparency.

Speaker 5 (26:05):
Even the platform itself is open source.

Speaker 4 (26:08):
But our purpose here is not to make decisions for
the users because these detection algorithms are not one hundred
percent accurate.

Speaker 5 (26:16):
They are statistical, and we just want to.

Speaker 4 (26:21):
make sure that the users have easier access to these
algorithms. They will provide additional information about the media
they're interested in.
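
To illustrate the "basket" idea Lyu describes, here is a hypothetical Python sketch: several independent detectors are run over the same file and every score is reported side by side, leaving the judgment to the user. The detector functions below are placeholders, not the DeepFake-O-Meter's actual open-source methods.

```python
# Hedged sketch of a detector "basket": report every method's score
# instead of collapsing them into a single yes/no decision.
from typing import Callable, Dict

Detector = Callable[[str], float]   # media path -> P(AI-generated)

def run_basket(media_path: str, detectors: Dict[str, Detector]) -> Dict[str, float]:
    """Run every detector and return all scores for the user to weigh."""
    return {name: fn(media_path) for name, fn in detectors.items()}

if __name__ == "__main__":
    # Placeholder detectors standing in for published research methods.
    detectors: Dict[str, Detector] = {
        "method_a_2023": lambda path: 0.82,
        "method_b_2024": lambda path: 0.41,
        "method_c_2024": lambda path: 0.77,
    }
    for name, score in sorted(run_basket("suspect_frame.jpg", detectors).items(),
                              key=lambda kv: -kv[1]):
        print(f"{name}: P(AI-generated) ~ {score:.2f}")
```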

Speaker 1 (26:32):
Yeah, I actually put a mugshot of a
criminal in there to see how it came up. And
you're right that all the different algorithms that you
have in there that check it all had different percentages of
accuracy. But it's a good way to cross-
reference, though. It's a good
starting point.

Speaker 4 (26:50):
Yeah, and it's completely free, so, you know,
you can just give it a try and get some additional
information about the media

Speaker 5 (26:58):
you're interested in.

Speaker 1 (26:59):
Yes, thank you so much for your time. This has
been wonderful, wonderful information. I wish you all the best
of success. And again, that's Professor Siwei Lyu, University at Buffalo,
professor of computer science and engineering.

Speaker 5 (27:11):
Thanks again, thank you so much for having me.

Speaker 1 (27:13):
Coming up, one of the largest broadcast news networks in
the world has a new verification unit to vet incoming
video and audio, and we get to speak with one
of their investigative journalists. But first, this is the KFI
News special Mis/Dis: Exploring Misinformation and Disinformation. The BBC is

(27:33):
one of the largest news organizations in the world and
among the most respected. It routinely covers conflicts across the
globe, and as more and more video pours in of
alleged attacks on children and civilians, even a drone attack
on the Kremlin in Moscow, much of which was artificially created,
the BBC decided it had to do something to vet
the inbound content, so it created BBC Verify. Shayan Sardarizadeh

(27:57):
is a senior journalist on the team and joins us now.

Speaker 6 (28:00):
BBC Verify is a department that brings together about sixty
investigative journalists and data analysts from different BBC departments that
used to exist before Verify and are now all
working under the same name, under the same umbrella, as
BBC Verify. It was set up in April of last year

(28:23):
and it was basically a reaction to what has become now,
in my opinion, a necessity in modern newsrooms, which is
a team of journalists who basically focus on covering breaking news, conflicts, emergencies,
you know, attacks, be it terrorist attacks, shootings, stabbings, any sort

(28:49):
of breaking news event of that nature in different parts
of the world where your first point of contact for
understanding what's going on is actually online videos and social media
content, rather than sort of sending people. Obviously, you know,
being on the ground and observing things and talking to eyewitnesses,
all of those things are still vital parts of reporting

(29:12):
and journalism, but in this day and age, in order
to cover the war in the Middle East or the
war in Ukraine, you don't necessarily always have to be on the ground. There's
a ton and a ton of content that is being posted
every day online sort of documenting the events that are happening.
It's just a case of, first of all, having journalists who
specialize in verifying and analyzing that content and determining that

(29:38):
it's definitely legit and it's real, and also being able
to sift fact from fiction. Because although there's a ton
of valid information and valid content being posted online every day,
which should basically help and inform journalists

(29:59):
around the world in reporting and understanding complex events, there's
also a ton of misleading, false, out-of-context material
that is being posted online that sometimes can have sort
of severe consequences in

Speaker 7 (30:12):
The real world.

Speaker 6 (30:14):
So we try to do all of those things together
and hopefully help the BBC's reporting of breaking news events,
major events, as I say, like the war in Ukraine,
like the conflict in the Middle East, like, say, the
current US election campaign and the sort of information war
around that campaign, the way the two campaigns are sort

(30:35):
of posting content online, their fans and supporters are posting
content online, sort of help the BBC's reporting and sort
of help audiences understand which bits of information that they
see online are accurate and reliable and which bits

Speaker 1 (30:50):
aren't. Was there one particular story or situation that was
sort of the catalyst to bring all of these reporters
together and say we need a unit, a dedicated unit
for this? Was there one issue or one incident?

Speaker 6 (31:04):
I wouldn't say it was one issue. I would say
it was an amalgamation of several major global events that
happened in succession, one after another. The first one was
definitely COVID, sort of an unprecedented story, sort of four
years ago, when all of us were sort of locked
inside our homes dealing with this new disease that we
knew pretty much nothing about, or very little about, and

(31:28):
we were all concerned about the consequences of it, and
a lot of people started sort of posting theories and
ideas online, and, you know, the lockdowns were sort of
a kind of a new thing for us, and the
scale of it and the size of it, a public
health emergency of that scale.

Speaker 7 (31:46):
And then after that.

Speaker 6 (31:47):
We had the sort of very, very controversial US presidential
election and the aftermath of it, when we remember, obviously,
what happened two months after the election at the
US Capitol, which was directly influenced by the sort of
theories and falsehoods that were shared online in the aftermath
of the election. Videos and claims went mega viral

(32:09):
online, and plenty of people basically just sort of assumed
that the election had been stolen from former President
Donald Trump, which led to people attacking the US Capitol.
And then after that, the conflict in Ukraine, the invasion
of Ukraine by Russia, and the volume of content that
we were seeing being posted online from the conflict, and

(32:31):
also the volume of claims that seemed to be baseless,
particularly coming from Russian officials about the campaign and some
of the attacks and some of the consequences in Ukrainian cities.
All of those things basically convinced us that, you know,
rather than having different people in different departments of the BBC,

(32:51):
and the BBC is a pretty big news organization, so
we thought, you know, bring all those people who specialize
in content verification, analysis of open-source material, data analysis, and
also coverage of mis- and disinformation, bring all those people together
under the same umbrella, called BBC Verify, and let's sort

(33:12):
of make sure that different departments in different parts of
the UK, in different parts of the BBC, are not
sort of duplicating each other. Everybody works under the same brand,
everybody works together, which I think was probably the
sensible decision.

Speaker 1 (33:29):
Is this a twenty-four-hour, seven-day-a-week
situation, or do you just work shifts? Because
I know the BBC, as you mentioned, is a very large news organization,
very well respected, and I know it's a twenty-four-hour operation.
Does that hold true for the Verify

Speaker 7 (33:42):
unit? Pretty much so, yes, definitely.

Speaker 6 (33:46):
We have coverage every day, including weekends, and depending on
sort of breaking news situations, the severity of the situation,
we also have nighttime coverage. But I myself, you know,
this weekend, for instance, I was supposed to be off,
but I was sort of asked to... I volunteered myself,

(34:07):
but, you know, they were basically asking around our colleagues
who was available to cover what's going on in Lebanon at
the moment and sort through the videos that were coming in,
and I did sort of two night shifts basically trying
to gather as many videos and as much
data as I could about what was
happening on the ground

Speaker 7 (34:26):
In Lebanon.

Speaker 1 (34:27):
More with the BBC's Shayan Sardarizadeh. But first, this
is the KFI News special Mis/Dis: Exploring Misinformation and Disinformation.
This is the KFI News special Mis/Dis: Exploring Misinformation
and Disinformation. Welcome back. We're speaking with Shayan Sardarizadeh, a

(34:50):
senior journalist with BBC Verify in London. So, Shayan, within
the unit itself, who decides what
it is you're going to dissect? How does
that come to be? Are there certain sources of
information that you're always vetting, or is it random?

Speaker 6 (35:10):
Well, basically all of us every day are sort of
online and on as many social media platforms as we
possibly could be, and we're basically looking for stories. We're
looking for things that could potentially turn into stories. You know, obviously,
when you have a conflict like the one that is
happening at the moment in the Middle East or the one in Ukraine,
the stories sort of directly come to you in a

(35:32):
sense because obviously there are sort of major global events
with impact for everybody, either directly or indirectly, So you know,
there's such a huge volume of content online. But also,
some other stories might go unreported; particularly, you know,
Western news organizations kind of usually tend to focus too

(35:53):
much on stories from the West, and sort of there
are stories in other parts of the world, you know,
in other continents, that go unnoticed, which are very
much valuable and very important. But the BBC has the
sort of blessing of being a large news organization with
plenty of language services. You know, there are tons and
tons of journalists who work here in our headquarters in

(36:15):
London and also in our bureaus scattered around the world
who are specialists in different parts of the world, in
different regions, speak multiple languages, and we have a relationship
with them as well, so we'll get in touch with them.

Speaker 7 (36:29):
We ask them, what are you seeing? Why? You know,
is there anything we can do?

Speaker 6 (36:32):
We ourselves, we go online and we look for stories,
and then every day we sort of pitch the things
that we're seeing every morning in our editorial meeting, and
then obviously our editors decide which stories could actually turn
into stories, whether it be an online story or video,
sometimes some form of a long-term investigation, some social

(36:55):
media video, whatever it may be.

Speaker 1 (36:57):
I suppose that any kind of war or conflict is probably
at the top of the list, since social media has
become, for lack of
a better term, a breeding ground for this misinformation and disinformation. And
I suppose that you're already kind of honed in;
your radar is probably already on these conflicts. So do

(37:17):
you have people that are dedicated to just looking at
Lebanon, looking at Israel and Hezbollah, and looking at these
international conflicts?

Speaker 7 (37:26):
Pretty much so. I mean, I would say all of us.

Speaker 6 (37:29):
It's sort of such an integral part of
the work of a team that calls itself, you know, an
open-source team covering conflicts, using, you know, all sorts
of publicly available data and information that these days, if
you're a researcher, if you're a journalist, are all available to you,
you know, mostly for free: you know, tracking ships,

(37:52):
tracking flights, tracking military vehicles, looking at videos on the ground,
or sort of movements of soldiers, fighter jets, explosions here
and there. You know, add all of those things together and
there's a wealth of publicly available data online
which can help you inform audiences about a

(38:16):
conflict that is thousands and thousands of miles away without
actually even necessarily being on the ground, and you can
actually tell a very good story, and also put
all those pieces of information together and turn it
into something that is sort of a much more informative
piece or report about what's been going on over a

(38:36):
certain period of time. You know, what's the movement of
Russian troops on the ground, what's the movement of Ukrainians over sort
of the course of six months, eleven months, Israeli soldiers,
what has Hamas been doing, what has Hezbollah been doing,
where have the rockets been landing, which sort of areas
have been their focus, all of those things. With the
sort of publicly available information, open-source information now on

(38:57):
the Internet, it sort of facilitates that type of reporting
for us. And also, obviously, we're not the only ones
doing it. Major news outlets like The Times, The Washington Post, CNN,
the Financial Times, The Guardian, Sky News, they
all have similar teams in their newsrooms.

Speaker 1 (39:14):
Now, the one video, I think, and
just so you know, I am a consumer of the
BBC and I've watched you for years, because,
quite frankly, you're probably the only one that's
about as objective as it gets in this world. And
when I was watching, I remember specifically the video where

(39:36):
the Russians claimed that the Kremlin was being attacked by drones.
That was the video that I recall being the one
where BBC Verify got on my radar, because
that was a big story, because
it looked like, you know, the Kremlin was being attacked.
Or was it... yeah, it was the Kremlin or the
Senate building, I can't remember, but there was an explosion

(39:58):
on the dome, and you guys went through and
really dissected it and proved that it
was not really a drone attack. Do you find that
you're also countering any of this disinformation or propaganda that's
coming from news organizations in other countries?

Speaker 6 (40:19):
Well, hopefully. I mean, obviously, as you say, we try,
we strive and try really, really hard
to be completely impartial. You know, take personal
opinion out of everything that we do. Just look at
the evidence and try to report events as accurately as
you possibly can. Now if that means you know, you

(40:41):
just let the evidence take you where it does. And
if that means a government, be it a Western government
or be it a government elsewhere, a government that is
hostile to Western governments or friendly. If they say something
or if they claim something that is either partially inaccurate
or completely inaccurate, or any major news organization, then you know,

(41:04):
so be it. You have to You have to report
things accurately to the audience. And that's the only way
that basically you get any kind of credibility and the
audiences could trust you.

Speaker 7 (41:15):
It's just the

Speaker 6 (41:17):
case of relying on the evidence and just reporting things
you can independently confirm. If you can't confirm it independently,
if you're not one hundred percent sure, then you don't
do it. But once you do, once you've got
the evidence that this thing that's been reported widely, or
this thing, this video that's been shared or claimed by

(41:37):
this particular government is false, then you have a duty
as a journalist to report it.

Speaker 1 (41:42):
More with the BBC's Shayan Sardarizadeh. But first, this is
the KFI News special Mis/Dis: Exploring Misinformation and Disinformation. Welcome back.
We've been speaking with Shayan Sardarizadeh, a senior journalist
with BBC Verify in London. So, Shayan, have you ever

(42:03):
run into any stories where you said, you know, kind
of like a coroner or medical examiner, it's undetermined, we've
hit an impasse, we just don't know?

Speaker 6 (42:13):
All the time. Every day there are loads and loads of content that
we sort of see and we think is really important
and we want to get something on it, and we
think it's a really valuable, important story, and we sort
of dedicate enormous resources to it. Look, you know, several
of us take days and days and days, sometimes a

(42:33):
couple of weeks just looking at whatever we can in
order to be able to authenticate and verify something
and report it to audiences, because we think it's in
the public interest, it's an important story to be known.
But sometimes it does happen that you can't confirm something.
You know, there's there's a video that you cannot verify.
You know in the middle of somewhere, say you know,

(42:54):
some sort of country where you know there's there's authoritarian rule,
and journalists fear for their lives by sort of reporting
anything that sort of deviates from the official narrative
of that government or the state, and therefore

Speaker 7 (43:09):
You don't want to put local reporters at risk, so
you try to do it yourself.

Speaker 6 (43:14):
But you know, it's a video in the middle of
nowhere in a desert where some sort of something that
you assume is basically illegal activity or something that needs
to be exposed happens, and you see a video of
it and you want to confirm it, and you can't.
You try really hard, but it's not possible. That's happened
to us, and it's a shame.

Speaker 7 (43:35):
But you have to. You have to be one hundred percent.

Speaker 6 (43:38):
If you cannot verify it independently one hundred percent, if
you can't get every aspect of it right, then.

Speaker 7 (43:44):
You shouldn't report it.

Speaker 6 (43:45):
Because if it's a story that I think is in
the public interest and valuable and people should know about it,
then you have to be able to when people ask you, Okay,
what where's the evidence for this? How can you tell
us this is one hundred percent true? Where's the evidence
for this bit? Then you have to you know, you
have your reporting has to stand up to scrutiny, and
you have to have all the pieces of evidence to

(44:05):
show people, particularly if it's going to make some impact.
If you think this is something that could lead to
even you know, something more serious.

Speaker 7 (44:12):
Then you have to have the pieces.

Speaker 6 (44:14):
Of evidence put together and you have to be able
to show it to people when they ask you questions.
And if you can't, no matter how much time you've
spent on it. And trust me, there have been cases
of, you know, stories that we've looked at for weeks
sometimes, but where sort of we haven't been
able to confirm all the details, and
we've had to sort of reluctantly go, okay, we can't

(44:35):
do this. We have to because you know, one thing
is you can sort of obsess over a story for
a long time, but then there are other stories, like
you know, things happen around us. Unfortunately, bad things happen
around us all the time in all parts of the world.
So you can't confirm one thing, you know, focus on
something else.

Speaker 7 (44:53):
But that's I guess the nature of the job.

Speaker 1 (44:56):
How has AI changed what you're doing now? Has it
made your job more complicated or difficult?

Speaker 7 (45:03):
Definitely. But I would say we still haven't

Speaker 6 (45:07):
hit the point where we should all get scared and frightened,
and, you know, we haven't hit the panic button yet.
And I think that's because, although the sort of advancement
and progress in AI, particularly gen AI, in the last
couple of years has been really impressive, and I've been
following it very closely.

Speaker 7 (45:27):
We've been following it very closely, and we've

Speaker 6 (45:29):
got to a stage where, particularly,
generative AI images, videos, audios, and also, you know, the
AI bots that sort of provide
text, like ChatGPT for instance, have really got

(45:52):
to a stage where they've sort of
become very reliably good in some areas. But
I think we're still some way from a world where,
you know, you and me, sat in our bedrooms on
our smartphone or on our gadget or on our laptop,

(46:13):
within five minutes, using publicly available free tools, we can
actually create a really, really believable image or video
using AI.

Speaker 7 (46:24):
We're still not there yet.

Speaker 6 (46:25):
I think if you want to create something really good
right now, you need to spend some time, and you
need to spend some money, and you need to go
to some specific people who specialize in creating really
good AI content. We're still not at the stage where,
you know, every random person anywhere in the world can

(46:47):
just generate really, really believable materials. So we've seen it
definitely this year and the year before that. We've seen
some in the context of the current US
presidential campaign, by the way. We've seen some influence operations,
one in particular that we reported
on recently, from Russia, where they've tried to use AI

(47:10):
in order to create some sort of really controversial stories
about the US election. But I would say the vast
majority of controversial, misleading and false information that we see
today as I'm speaking to you is still the sort
of old school stuff. You know, some doctored image using

(47:32):
you know, Photoshop, some video that has been
taken out of context or deceptively edited, some claim
that lacks any sort of reliable source, some rumor that
sort of circulates

Speaker 7 (47:43):
online and becomes really viral. That's still the vast majority
of what we're seeing.

Speaker 6 (47:48):
But AI is definitely creeping in, and I expect, a
few years from now... obviously I can't predict the future, that's
not my job as a journalist, but the direction of
travel seems to be that, soon enough, my job will become
much, much more difficult.

Speaker 1 (48:02):
Yeah, and deepfakes too. That's the other thing:
somebody is working behind the scenes
to make all of that more efficient and more attainable, I'm sure.
You know, Shayan, I wanted to find out
your definition of misinformation and disinformation.

Speaker 6 (48:19):
I would say, to me personally... I mean, I've seen all
sorts of definitions; you know, people have
their own ideas of what mis- and disinformation are. My
personal view, if you
want to ask me, I would say, and this is
what I say to everyone who asks me this question, I
would say disinformation is when somebody deliberately and knowingly

(48:43):
posts content that is false or misleading with malicious intent
in particular, be it to politically mislead, or to make
money or to financially benefit. I would put the emphasis
on malicious intent. Whereas misinformation, which is definitely more prevalent,
I would say, is when you see a piece of

(49:06):
content and be it the fact that you're driven by
your own personal bias, or your own views or or
your own opinion, or you just haven't bothered to check
the facts, or you've just sort of seen something that
is very viral and you think it must be true.
You share it, and you're not basically knowingly or maliciously

(49:28):
trying to mislead people. You're just sharing something or posting
something that you think is true while it isn't. So,
in my view, misinformation is when you're sharing something
you're seeing, sharing something or creating something, without actually
knowing definitely that it is false. So you don't
have any malicious intent. You're not going to personally benefit

(49:49):
from it politically or financially. But disinformation
is where malicious intent gets in and ulterior motives play
a role in the content that you're sharing and creating.

Speaker 1 (50:04):
More with the BBC's Shayan Sardarizadeh. But first,
this is the KFI News special Mis/Dis: Exploring Misinformation and Disinformation.
Welcome back. We've been talking with the BBC's Shayan
Sardarizadeh. He's a senior journalist with the network's Verify unit.

(50:24):
You know, one of the interesting things that you folks
do is you also walk the viewer, the listener, through
how you arrived at your conclusion, whether it's a
fake story or an accurate story. So it kind of
goes to this transparency in our business, transparency in journalism.
Why was that an important part of it?

Speaker 6 (50:45):
Yeah, definitely. I mean, let's not kid ourselves. We
live in an age where our trust in news has to
be gained. It's not given to you by audiences automatically,
regardless of how big you are and how well known
you are. People are sort of more skeptical now of
anything they see because of this flood of misinformation and
sort of misleading content online. It's become much more

(51:09):
difficult to determine what's true and what's not. So we decided
that if we're saying, you know, this piece of information,
this video, this article, this piece of news that you've
seen online, and it's got, you know, it's been viewed
by millions of people, and it's really viral, and you assume,
just because it's really viral, it's true,

Speaker 7 (51:29):
is actually not true,

Speaker 6 (51:31):
you have to then walk people through why it's not true,
show them how you've determined it's not true, and, this is
particularly the aspect of it I'm really interested in, show
them how they can do it themselves. In some cases,
like some of the work we do, you know, you
don't necessarily need to have some sophisticated training or sort
of information. You can train yourself up to do it,

(51:51):
and for some of it, if you're just familiar with
some of the publicly available tools that are now available
to everybody, you can do it in five minutes for yourself.
Like some of the videos, some of the sort of
really amateurish videos about conflicts or about elections that go
viral online, you can you can just very easily within
five minutes. You can just sort of check for yourself

(52:12):
whether this is this is real or not, whether this
is recent footage or not. And we want more people
to basically do it for themselves. We want more people
to because once people can do it themselves, once people
see how it's done, once people see why. We're not
just saying, you know, we're not saying basically to code
like a well known online I mean, trust me, bro.
We're not saying trust me, bro. We're saying, look, this

(52:32):
is it, this is how we did it. You can
go look at it for yourself, you can go do
it for yourself.

Speaker 7 (52:37):
You can just sort of walk through it for yourself.

Speaker 6 (52:40):
And in that sense, I think you kind of not
only are you sharing that information with other people, hopefully
to create more.

Speaker 7 (52:47):
Journalists out there.

Speaker 6 (52:48):
Journalists don't necessarily always have to be employed by major
news organizations. There's incredible stuff that people do,
sat in their bedrooms, you know, at their laptops, that
blows my mind all the time. So it's basically creating,
first of all, new journalists and researchers and investigators, helping
them to sort of get that information and hopefully do

(53:12):
it much better than I do. But also for people
who are more skeptical, for people who actually rightly want
to see the evidence of what you're saying and what
we're reporting.

Speaker 7 (53:21):
There's the evidence.

Speaker 1 (53:24):
You know, Shayan, as we wrap up, I want to ask if you
could walk us through, give us an example of a
story that you took on, you observed it, you went
through it, and let's say it's a video of some kind.
What recent story or something really illustrates a great
example of what you do? And walk us and the
listeners through how

Speaker 5 (53:43):
You did it.

Speaker 7 (53:44):
Definitely, there is this story that we covered.

Speaker 6 (53:47):
Actually, let me give you a story that sort
of hopefully hits much closer to home and is much more
relevant to your listeners.

Speaker 7 (53:55):
There's a story. There's a video that.

Speaker 6 (53:57):
went viral in early September, actually, where there was this
five-minute video that got millions and millions of views
online. It was sort of a bombshell story, and it claimed,

Speaker 7 (54:11):
Claimed to report that.

Speaker 6 (54:13):
Kamala Harris, who obviously is the Democratic nominee in the
presidential election, was involved in a hit and run incident
in San Francisco in twenty eleven that had been kept
hidden from the public. And now they had some sort
of a whistleblower who happened to be a thirteen year
old girl who was hit by Kamala Harris, reportedly or

(54:34):
allegedly, in twenty eleven in San Francisco, and had gone to this
local news organization to report, now twenty-six
years old, that this had happened to her and that
she and her mother saw Kamala Harris basically in
the car hitting them and then fleeing the scene. And
now she was sort of putting this out to the

(54:57):
American people ahead of the election. Now, obviously, when we
saw that, we were like, okay, we want to investigate
this properly, without any preconceived ideas that this is false or
this is not accurate, and particularly because it was very viral.
So we started by looking at the evidence that was
provided in this video, and there was a website. There

(55:18):
was a website that claimed to be a local news
website in San Francisco, and we went and looked at
that website, and we saw that website is pretty suspicious.
First of all, we couldn't find any evidence for this
outlet existing in San Francisco, for this news outlet. And second,

(55:40):
we sort of noticed very quickly that most of the
stories that appeared on this local news website,
allegedly in San Francisco, appeared to be basically AI-generated
stories, genuine stories reported by American
outlets that had been sort of rewritten by AI, basically
to give the idea that this was a genuine news website.

(56:02):
And then this one story happened to be the one
where it seemed like somebody had actually sat down and
typed up the story and put together the evidence. The
other thing that we noticed was the top image, used
both in the video report and also in the story
on the website, which sort of claimed
to show evidence of this hit and run incident provided

(56:25):
by the whistleblower. We actually found that specific picture
in a report published in twenty eighteen in Asia, and
obviously that meant that there was something clearly wrong with
the story. We also saw the name of this whistleblower
had been spelled differently in different parts of the story.
And also, because a name had been provided

(56:48):
for this individual, we started looking in online databases, and
also the alleged whistleblower basically was telling us exactly where
this incident happened. So we started looking for incidents in
San Francisco in twenty eleven, in that particular area
of San Francisco, and we couldn't find any record of it,
any public record of it. And we looked at basic
publicly available hospital records to see whether there was any sort of

(57:11):
eleven-year-old victim of a hit-and-run incident in
San Francisco back then.

Speaker 7 (57:14):
We couldn't.

Speaker 6 (57:15):
And then we also looked at other evidence that this
report had provided, which was basically medical scans of this
individual, who it said had been left paralyzed by that incident,
and we saw these sort of scans that
were supposed to prove that this young
woman had been left paralyzed by that hit-and-run incident.
All of those scans had been taken from publicly available

(57:38):
medical research articles on the Internet. One of them in
particular was taken from a research paper published by Chinese scientists,
another one by Dutch scientists. So putting all of those
together, and the fact that we couldn't find any record
of this anywhere, we were pretty sure

Speaker 7 (57:55):
That this report was fake.

Speaker 6 (57:57):
But then we also later found that this particular website was basically the work of, we believe, a man named John Mark Dougan, who we've been reporting on for some time, who creates these fake local news websites, be it a website in Boston, a website in San Francisco, a website in Washington, DC, and all of them basically

(58:20):
are supposed to report these sorts of bombshell stories, which turn out to be completely false, to mislead audiences. And most of them, through sort of IP records and email records, seem to link to this gentleman, John Mark Dougan, who used to be a cop in Florida and is now based in Russia.

Speaker 1 (58:41):
Shayan Sardarizadeh, this is fascinating stuff. Senior investigative journalist at BBC Verify, thank you so much for your time. We appreciate your insight.

Speaker 7 (58:50):
Thank you, really appreciate it, Steve.

Speaker 1 (58:52):
Coming up, a California lawmaker gets a law passed holding big social media platforms responsible for deep fakes. But first this. This is the KFI News special Mis/Dis: Exploring Misinformation and Disinformation. Joining us now is Assemblyman Marc Berman, and he was the author of AB twenty six fifty

(59:15):
five, Assembly Bill twenty six fifty five, specifically targeting deep fakes. So we want the assemblyman to explain it himself. Assemblyman Berman, thank you for joining us.

Speaker 5 (59:23):
Thanks for having us.

Speaker 1 (59:24):
So let's talk about the beginning. What was sort of
the impetus for you to put pen to paper on this?

Speaker 8 (59:31):
It was twenty eighteen, and I don't know if folks remember, but Jordan Peele, the actor and director, he did a deep fake of Barack Obama back in twenty eighteen, and it was one of the first deep fake videos, I think, that really went viral, that people saw. But back then it was very difficult to create deep fakes. You

(59:53):
needed a lot of technology, you needed a lot of technical expertise. But when I saw this video, I thought, huh, that's funny, he was doing a funny impression of President Obama. That's also a little bit terrifying, because this technology creates the ability for people to put words into

(01:00:14):
elected officials' mouths that those elected officials never said, and
it allows people to create images that look like an
elected official is doing something that the elected official never did.
And so I passed a bill, what I think was the first bill in the country introduced to regulate

(01:00:36):
the use of deep fakes in elections back in twenty nineteen.
But that bill went after the content creator, the person
creating the image or the audio or video, because very
few people could actually do that back then. Fast forward
five years and artificial intelligence has been democratized. It's now

(01:00:57):
something that, if you have an iPhone and an app,
you can create one of these deep fake videos or
deep fake audios. And so I thought, you know, it's
more important to put more responsibility on the social media
platforms themselves to regulate this kind of very deceptive content.
And the reason for that is so that we can

(01:01:19):
protect the integrity of our elections by removing the most
malicious deep fakes, the most realistic deep fakes that falsely
portray a candidate or an elected official doing or saying
something that they never did or said. And we want
to try to stop that from spreading online leading up

(01:01:41):
to an election.

Speaker 1 (01:01:42):
And did you know what kind of a, oh gosh, hornet's nest you were getting into here? Because now you're going to be going up against Big Tech, you're going to go up against advocates of the First Amendment and parody and whatnot. So did you realize that you were going to have pushback in the beginning?

Speaker 5 (01:02:00):
Absolutely.

Speaker 8 (01:02:01):
I definitely realized that this was going to be a really complicated effort, partially from the experience five years ago, which was also very complicated. And then, I represent Big Tech. I represent Silicon Valley in the California State Assembly. Some of these companies are constituents in my district. For some of the other companies, I could hit a driver from

(01:02:23):
my district and hit these companies' headquarters. And so I fully appreciate how complicated it is, whether you're dealing with First Amendment concerns or whether you're dealing with federal preemption under Section two thirty of the Communications Decency Act.
You know, what I've come to say when it comes

(01:02:43):
to the First Amendment, which I think is incredibly important,
is that the First Amendment gives you the right to
say what you want to say. It does not give
you the right to put your words in my mouth.
And that is what this technology, you know, allows. Deep fake technology gives the opportunity for you, you know, Steve Gregory,

(01:03:06):
to put your thoughts into my mouth and make it
look like I'm saying your thoughts. And I personally think
that that's inappropriate, and I don't think that the First
Amendment protects that type of speech.

Speaker 8 (01:03:21):
And so we drafted the bill to be very, very narrowly tailored to address a
compelling government concern. And that is the sort of strict
scrutiny that courts will apply when they're trying to determine
whether or not something violates the First Amendment. Now, I've
been very honest the entire time with my colleagues in

(01:03:45):
the Assembly and the Senate, and with the public. I don't know what a court will determine, and so I don't know, you know, how courts will interpret the law, and whether or not they'll deem it unconstitutional. That's up to them. But my job as a legislator is to say, hey, I think that this

(01:04:05):
is inappropriate, and I don't think that, you know, it should be permitted during these very narrow time frames leading up to and right after an election.

Speaker 1 (01:04:18):
Yeah, because I think that one of the biggest challenges is who's going to be the arbiter of what's fake and what's not fake, and how do you hold those folks responsible who are intentionally or maliciously putting out deep fakes?

Speaker 8 (01:04:31):
Yeah. And so, you know, this bill, instead of going after the content creator, it puts more responsibility on the social media platform, and it says, hey, large social media platforms, you have to have a complaint process where somebody can file a complaint with you saying that they believe that somebody has uploaded an elections related deep fake

(01:04:55):
that creates the impression of an elected official or a candidate or an elections official doing or saying something that they did not do, and that would falsely appear to a reasonable person to be an authentic record of the content that's depicted. And so it's a reasonable person standard. And then the social media platforms

(01:05:18):
have three days. We put in there a time frame because we want them to respond quickly. To be honest, I wanted it to be a day and a half, and then we compromised late in the legislative process and we doubled that time period to three days to give the social media companies the time to

(01:05:39):
resolve the complaint. And then if the candidate or elected official or elections official that's depicted in the media disagrees with the determination made by the social media platform, they can then seek an injunction in court, just to

(01:06:01):
have that content removed from the social media platform. So
there is no criminal penalty, there is no civil penalty
in terms of a fine or anything like that. The
only relief that somebody can seek is injunctive relief to
get that content taken down. We did that very specifically
to not violate Section two thirty of the Communications Decency Act.

(01:06:23):
And then it's up to the court to decide. Like courts do, you know, in many different contexts, the court will have to decide whether or not that deep fake does actually falsely appear, under the reasonable person standard, to be an authentic record of the content depicted of that individual.

Speaker 1 (01:06:44):
California Assemblyman Marc Berman, thank you for your time. Much appreciated.

Speaker 8 (01:06:48):
My pleasure. Thanks to you.

Speaker 1 (01:06:50):
Mis/Dis: Exploring Misinformation and Disinformation is a production of the KFI News Department for iHeartMedia, Los Angeles. The show is produced by Steve Gregory and Jacob Gonzalez. To hear both parts of this program, download Mis/Dis on the iHeartRadio app.