
March 2, 2020 60 mins

Who are the people who spread online disinformation? The so-called trolls you hear about in the news whose jobs are to distort facts and create chaos? Camille Francois knows them well. She’s the chief innovation officer at Graphika - a social media analytics firm hired by major companies to identify and fight online disinformation. Her team was a big part of uncovering the extent of Russian influence during the 2016 election. She spends her time in the darkest corners of the Internet taking on one of the most extraordinary digital threats of our time… But it might just be her humanity that gives her an edge.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
First Contact with Laurie Segall is a production of Dot
Dot Dot Media and iHeartRadio.

Speaker 2 (00:09):
I talked to trolls and hackers who made a living
doing disinformation and propaganda for hire. People who said, initially
I joined because I wanted to do campaign messaging for,
you know, for my candidate to win. And then
I woke up and I was just sending rape threats
to women journalists using fake accounts and wondered, what, you know,

(00:32):
what happened? What am I doing there?

Speaker 1 (00:39):
Who are the people who spread online disinformation, the so
called trolls that you hear about in the news, whose
jobs are to distort facts and create chaos. I want
to introduce you to someone who knows them well. Camille Francois.
She's a security researcher who spends a lot of her
time in the darkest corners of the Internet. Camille is

(01:01):
the Chief Innovation Officer at Graphika, which is a social
media analytics firm hired by major companies to identify and
fight online disinformation. To give you some context, her team
was a big part of helping us uncover the extent
of Russian influence during the twenty sixteen election, and she
knows a lot about trolls. Unlike most of the people

(01:22):
talking about them and the damage they're causing, she actually
spends a lot of her time talking directly to trolls
to understand who they are and why they
manipulate social media to distort the truth. I'm excited for
you to get to know her. She's unsuspecting and quick
to laugh, which I actually think is pretty powerful to

(01:45):
give you a sense. She just kind of roamed right
into our production studio, a New York City high rise
with lots of security. No one stopped her.

Speaker 2 (01:54):
Well, no one asked me what I was
doing there, so.

Speaker 1 (01:58):
So get ready to meet her. She's taking on
one of the most extraordinary threats of our time, and
it's her humanity that gives her an edge. I had
no idea when I sat across from her at a
New York dinner party that I would leave asking myself
a question that I want to pose during this episode,

(02:19):
what if the key to fighting disinformation online and some
of the most alarming cyber threats coming in the future
starts with empathy. I'm Laurie Segall, and this is First Contact.
So welcome to First Contact. I'm super excited to have

(02:40):
you here.

Speaker 2 (02:41):
I'm super excited to be here. I have to say,
I love the name of the podcast. I'm a huge
Star Trek fan, so first contact speaks to me.

Speaker 1 (02:48):
Oh, I love it. I love that. And you know, well,
first of all, I should do kind of a quick
intro to you, which will probably play into like
why you like Star Trek, right, And just like your
background is really really cool and interesting. You've spent your
whole career kind of at this intersection of cybersecurity, public policy,
and human rights. You've worked for the French government, advised

(03:11):
governments on all sorts of policy issues, and then you
were a principal researcher at Jigsaw. I've covered Jigsaw and
Google many times. Jigsaw is this think tank and
technology incubator within Google. And now you're the chief innovation
officer at Graphika.

Speaker 2 (03:28):
Right, that's right.

Speaker 1 (03:29):
So I mean you've essentially spent your whole career digging
into fascinating interesting issues around cybersecurity and human rights.

Speaker 2 (03:39):
Yeah, I really lucked out. You know, when I went
to school, I said, I think this is what I
want to do for a living. It's like the intersection
of human rights and tech and politics, and I think
back then people were laughing, saying that's not a job.
I think I really lucked out, being able to work
on these issues for so long.

Speaker 1 (03:56):
And so we start with our first contact. As
we talk about our first contact: our first contact happened
at a dinner party, right. I guess we should say
to our listeners, like we were at a dinner party
where no one could talk about what they did, which
is I guess this new concept of like we're not
defined by what we do, although what you do is

(04:18):
just the coolest thing in the world and it's so
badass. But we sat across from each other and
everyone guessed. We were sitting around a table and everyone
was like, guessing what you did? Do you remember what
people were saying?

Speaker 2 (04:29):
Yeah, it was really fun. So for the dinner, I
had brought a bottle of natural wine, biodynamic wine, and
that's kind of the only thing that you know, people
knew about me, and so they were like, well, you
obviously you work in the wine world, and I think
I get that because I'm French. And then some people
were like, no, she works in sustainability, which frankly is
kind of true given what I actually do. I work

(04:50):
on the sustainability of our conversations online. I don't think
this is where they were going with it, but yeah,
that's that's what they said. I don't know what you got.
What did people say about you?

Speaker 1 (04:58):
There's this weird guy next to me that kept saying
everyone was a dancer.

Speaker 2 (05:01):
Do you have that story?

Speaker 1 (05:02):
That was super weird?

Speaker 2 (05:04):
I thought you were doing something super specific. It'll come
back to me. Yeah, even I thought you were a
diplomat or something like that, Like I had something super
specific for you.

Speaker 1 (05:12):
I will totally take that. And I'm forgetting what I said,
but I think I said I managed chaos or something.

Speaker 2 (05:17):
But also true.

Speaker 1 (05:18):
Yeah, but I mean what people didn't really realize is like,
sitting around this small dinner table was like someone who
was at the forefront of fighting disinformation and nation states
and who works in the dark corners of the web
and sees the craziest stuff ever. And I don't know
if this is like the right thing to say, because
like I hate when dudes do this, but like you
do seem really unsuspecting, right, Like you know, like we

(05:41):
were just joking about how you kind of social engineered
your way into this building, like we didn't come get
you in time, and.

Speaker 2 (05:47):
Like you kind of like no one asked me what
I was doing there.

Speaker 1 (05:49):
So right, you're like this, you come in with this
like cute shirt that says facts, and like no one
questions you. You just have this ability
to understand. And I'm going to go somewhere
with this. You have this extraordinarily human quality about you.
And I think that's what a lot of people picked
up on at the dinner table. And I think that
might be if I could say anything, what is kind

(06:11):
of your competitive edge when it comes to what you
do for your day job?

Speaker 2 (06:15):
Yeah, you know, I think I think people don't realize
that empathy today is a critical component of cybersecurity. A
lot of people thought in order to do cybersecurity right,
you need to be really good at like managing tubes,
breaking tubes, you know, managing networks. I think today we
realized that if you want to secure systems, secure conversation,

(06:37):
secure the way we live online, you really have to
have an empathic heart and to understand the types of
threats and how they evolve.

Speaker 1 (06:46):
Like, so, can we get really quick into because I
want to go back to your background, but just so
people know, like what is your day to day, Like
I just envisioned you in an office like fighting the
troll farms online and making sure democracy isn't ruined, but
I don't know what that actually looks like. I know
big tech companies call on you to help with these

(07:06):
different campaigns, and you've had some pretty high profile reports
that have helped us understand the extent of Russian influence,
But like, what does your day to day look like?

Speaker 2 (07:15):
Oh, you know, first, I don't do this by myself.
I'm very lucky that I get to manage a really
fantastic team, with people coming from like different types of backgrounds,
and so together we analyze online conversations and we look
for markers that they are being manipulated. Now, what's really
fun in twenty twenty is, you know, we tend to

(07:36):
think when conversations are manipulated it's either trolls or bots
or Russians. But today there's quite a big diversity of
actors and ways in which people manipulate public conversations. And
so we investigate because we want to be able to
have more people do that and to have a public
that's better informed and better equipped to tackle these threats.

(07:58):
We also build tools to make that easier. So we
have an R and D lab. They just do scientific
research on how can we better detect patterns of manipulation
of online conversations.
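
[To make the kind of pattern detection Camille describes concrete, here is a minimal Python sketch. It is not Graphika's actual tooling, and the input shape is an assumption: a list of (account, timestamp, text) posts. It flags identical messages published by many accounts within a short window, one naive marker of coordination among many.]

    from collections import defaultdict

    def flag_copypasta(posts, window_seconds=60, min_accounts=10):
        """Flag identical texts posted by many distinct accounts in a short burst.

        posts: iterable of (account_id, unix_timestamp, text) tuples.
        Returns (text, accounts) pairs that look coordinated. A hit is a
        signal worth investigating, not proof of manipulation by itself.
        """
        by_text = defaultdict(list)
        for account, ts, text in posts:
            by_text[text.strip().lower()].append((ts, account))

        flagged = []
        for text, hits in by_text.items():
            hits.sort()  # order by timestamp
            for i in range(len(hits)):
                accounts = set()
                j = i
                # expand a sliding window of posts within window_seconds of hits[i]
                while j < len(hits) and hits[j][0] - hits[i][0] <= window_seconds:
                    accounts.add(hits[j][1])
                    j += 1
                if len(accounts) >= min_accounts:
                    flagged.append((text, sorted(accounts)))
                    break
        return flagged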

Speaker 1 (08:10):
So you can essentially see it coming, understand if people
are being manipulated, understand kind of who's behind it, and
try to stop it. So I'm going to put it
in this way of how I love to look at
the hacking community and security researchers, and I got really
fascinated in the security community years ago. I went to
Black Hat and Defcon for the first time, and for
our listeners, those are like hacker conferences in Las Vegas.

(08:33):
So if you could just imagine a bunch of like
security researchers like who are finding vulnerabilities online and like
in weird Vegas hotel rooms. I mean, it's just such
an interesting community. And I remember my first exposure to it,
I was like, WHOA, these people are like these modern
day superheroes, right, Like they have the ability to fight
the bad guys online and they have this skill like

(08:56):
they can as we talk about social engineering and you
being able to kind of walk in here and understand
how computer systems work, like you could use this skill
for bad or for good?

Speaker 2 (09:05):
Right, Yeah?

Speaker 1 (09:06):
And so I remember going to Black Hat and Defcon
for the first time and everyone had code names, right
do you? I mean right? Like is that a thing
in your community?

Speaker 2 (09:14):
It's funny you say that, because last year I was at Black
Hat with Bruce Schneier and Eva Galperin, and our panel
was, please use your skills for good.

Speaker 1 (09:22):
I mean like back like six or seven, eight years ago.
It was like, even when I started going to these
hacker conferences, it was like if you found a vulnerability
or if you started saying, hey, this isn't good, you
could get into trouble for saying it, right?
And so there was something, there was something very
anti-establishment about it and being that person who was

(09:44):
fighting the bad guys and who was calling attention to
this and who was kind of saying this. I mean,
do you have a code name?

Speaker 2 (09:51):
No? I mean, yes and no, it's
not the greatest code name. I go by Kemtronics.

Speaker 1 (09:56):
Okay, so, like, yes, easy. Well, I...

Speaker 2 (09:57):
Mean like it's it's not the most secret code name. Yeah,
that's kind of cool, though. What does it mean, Kemtronics?

Speaker 1 (10:03):
Yeah, like, what was behind it?

Speaker 2 (10:07):
You know, I can't even remember. It's been with me
for a little while. It followed me around.

Speaker 1 (10:12):
Yeah, that's interesting, right. So I guess
what I'm getting at is there was always this spirit
of the community that had a lot of conviction and
wanted to do something and raise awareness. And so now
we're in a certain moment, and so I guess I'm
curious about your background, like before we get into like
all the stuff you're doing to help fight kind of

(10:33):
this current moment in time, I'd be curious to
know just like, how did you get into all this?

Speaker 2 (10:38):
I'll get into that in a second. But I'm
really interested in what you're saying about how this idea
that initially was quite scary, right let's enable a bunch
of kids to poke at the systems from the outside
and find the holes in it kind of became our
best idea for how to do security in a complex world.

Speaker 1 (10:56):
Right.

Speaker 2 (10:57):
And I think what's really interesting is that model is
going to apply to more and more questions. So I
think a lot about biases in algorithms, right? So when
machine learning makes decisions that are deeply racist or deeply sexist, right,
and all those problems that we're seeing emerge, and frankly
we don't have a lot of solutions to tackle. Could

(11:22):
the model of a bias bounty also be applied there? Right?
How many more learnings can we draw from the hacker
mindset and from the security community about how we do security
in an age where we have more complex problems with technologies? Right?
I think there's a lot of innovation and promising areas
to explore there.

Speaker 1 (11:42):
That's interesting. So tell me about how you got into this.
Were you always kind of a free spirit?

Speaker 2 (11:46):
I mean, who answers no to that question? So,
you know, like, no, I
don't know how I really got into this, as a
real answer. I think I was always quite obsessed with technology.
I grew up in France and very quickly I realized
that the type of questions that I wanted to ask

(12:07):
I needed to, you know, go to the US
to ask and to work on.

Speaker 1 (12:12):
But like, taking you back there, were you tinkering?
Were you like playing on the internet? Were you in
weird chat rooms?

Speaker 2 (12:17):
Okay, the truth? Yeah, I was a very optimistic person
and I still am, you know, like my team like
often says I'm the most optimistic person looking at like
the darkest stuff. I was really excited by the promises
of the Internet. I really thought that the Internet was
going to bring democracy, was going to bring more diversity,

(12:39):
was going to connect us to one another. I was.
I was just really excited about the promise of the Internet.
Perhaps it's a generational thing.

Speaker 1 (12:49):
Was it just how you were raised, like, was it
just you loved the spirit of it?

Speaker 2 (12:53):
Yeah? I love the spirit of it. I loved it.
I remember, you know, when my father got me my
first computer, I just thought it was magic. And so
I think the first you know, projects that I worked
on were just so extraordinarily optimistic. So I remember with
a close friend of mine, we had this project called
citizen WiFi, and we were knocking on doors around

(13:14):
Paris and we were asking people, could you remove the
password on your Wi Fi so that everyone can connect
to your Wi Fi and we'll give you a little
sticker that says citizen WiFi. I mean, it's not remove
the password, it's like, you know, create a
guest network so that more people can access the Internet
and you're part of the guest WiFi community, which, you know,
again was a very extraordinarily optimistic idea of just like, let's

(13:36):
put more people online and things are going to be great.
So yeah, I think I really come from a background
that looks at the Internet with very rosy, optimistic, cheerful eyes.
And I still do, even after all the crazy stuff
I see. I still, you know, I still have hopes
for what, you know, digital technologies can bring us.

Speaker 1 (14:00):
Well, I also saw you wanted to be a space baker,
which, I don't even know what that means. What on earth
does that mean? Because we're going to get into Russian
influence and fighting the bad guys. But I feel like
we have to we have to start at some point
with space baker, you know.

Speaker 2 (14:13):
Like Neelaks and Voyager or you know, Quark on Deep
Space nine. Just just have like a little bakery or
bar and you're on a space station and you may
cross song. You know, it's kind of nice. You meet
people from everywhere. You're like, hey, which quadron are you from?
Do you want a cross song today? Like just like
a nice little space baker.

Speaker 1 (14:35):
By the way, no, no, no, it's not that I'm
speechless. I'm just like, wow, what an interesting dream.
Like I wanted to be a Broadway actress. Unfortunately,
you know, I didn't really live out that vision, but
I wish I were cool enough that that was my
dream to do that.

Speaker 2 (14:48):
I love that.

Speaker 1 (14:49):
And so you ended up going and getting a degree
in human rights and international security.

Speaker 2 (14:54):
Yeah, that was far from the space baking. Yeah,
like the space baking route was not straightforward.

Speaker 1 (14:59):
Yeah. Was there a certain moment that you
knew that dream was crushed? Was there any
indicator that wasn't gonna be it?

Speaker 2 (15:06):
No, I'm just really bad at cooking.

Speaker 1 (15:09):
That's totally fine. I totally get it. You're in a safe
space.

Speaker 2 (15:13):
Yeah, I'm just so bad at it.

Speaker 1 (15:15):
Well, it's good because the things you're good at we
really need right now, which is like fighting really bad
guys online. So you ended up getting a degree in
human rights and international security, and you ended up going
to DARPA eventually. Can you tell us a little bit
about your work there?

Speaker 2 (15:30):
Yeah, I mean, that was a long time ago. It's
interesting. It was a

(15:33):
project that I was working on at the end of
my grad studies and it was just looking at privacy
and security. So I had a fairly conservative cybersecurity professor
at Columbia who called me a hippie, and I
told him, you know, I'm really not a very radical hippie.

(15:55):
And what I'm telling you about digital rights, I really
don't think it's very radical. I think it's just really
something that needs to be heard and discussed. And I
was telling him, like, you know, I'm really concerned by
the growing gap between national security discussions on one hand
and what I consider to be important digital rights
and human rights security discussions on the other hand,
and I think more should be done to bring digital rights,

(16:18):
to bring human rights to the core of how we
discuss cybersecurity.

Speaker 1 (16:22):
What does that mean? Can you explain that a
little bit?

Speaker 2 (16:24):
You know, privacy. Privacy is a good example, right, so
you had a lot of conversations on what do we
need to secure the Internet from bad guys? And none
of these conversations back then, right? I think we're
in a much better place now, but few of
these conversations considered that privacy was a very important element
of that. Right? It was, I think, back at a
time where most people considered that there was a tension

(16:46):
between privacy and security. Honestly, I think we've come a
long way since then, and today, I think there's a
recognition that privacy and security go hand in hand. Right,
if you have a system, you know, that leaks out
private information on people, that also is a system that's
easier for hackers to exploit. And so I was, you know,
working on these topics, and my professor was like, okay, fine, well,

(17:09):
you know, I know a project that could use
a hippie, and you'll just get to work
on this project as a student. So yeah, that was fun.

Speaker 1 (17:18):
We're going to take a quick break to hear from
our sponsors, but when we come back, Camille talks about
her work on the front lines of terrorism online and
how ISIS was actually really good at micro targeting. Also,
if you like what you're hearing, make sure you hit
subscribe to First Contact in your podcast app so you
don't miss another episode. And you also worked at Google

(17:54):
and specifically Jigsaw, right, which, for folks who don't know,
is kind of, I think, a really interesting
part of the company, where a lot of
this technology-and-humanity work happens, this like think tank of sorts, where
a lot of these hard problems like AI and bias
and some of these efforts to counter extremism live. A lot
of the people who are working on these problems are

(18:15):
in that realm, and you were kind of at the
forefront of that. So what did you find there?

Speaker 2 (18:20):
Yeah, it was it was really fun working on these issues.

Speaker 1 (18:25):
By the way, I love that you say it's fun,
Like you work on, like, counter extremism, so, like, you're
like probably spending lots of time looking at ISIS recruiting videos.

Speaker 2 (18:32):
Yeah, but you work on those issues with people who
really care about solving them and who are keenly aware
of the different trade offs, and that I think is
a very fortunate position. Working inside Google on these issues
also meant working in an organization that really
wanted to get to the bottom of it, with
colleagues at Jigsaw, but also, you know, frankly, all across Google

(18:52):
in engineering and policy, who wanted to make a dent
in the problem, who were willing to experiment with creative,
innovative ideas, but who also had a very clear picture
of the trade-offs, of the constraints, of making sure
that we don't go on the other side of the line,
making sure that we protect freedom of expression as we

(19:12):
think through these problems. So yeah, it's a privileged position
to work on these issues.

Speaker 1 (19:17):
What were some of the... I mean, you worked on
a research program to counter online propaganda, so
can you tell me, like, what were the issues
that you saw? Like what did you kind of come
up with?

Speaker 2 (19:28):
Yeah, so it's a program that was called the redirect method,
and I think what I was really interested in when
we started, you know, thinking through this project is a
lot of people think about, Okay, terrorist propaganda. You have
to just remove the content. That's a great first step, right. Indeed,
there's content that's harmful, sometimes illegal, It shouldn't be online,

(19:50):
and so you work to detect it and you work
to take it down. But you still
have the problem, you still have users with questions, right?
Like there are users who are coming to your platforms and
they're saying, like, well, I'm here to consume, you know,
a piece of content that you've removed. And this is
where the redirect method operated, right Like, when you have

(20:11):
users who come and who are looking for content that
is no longer there, do you still have an opportunity
to reach out to them and to propose something else. Now,
you don't want to trick them into something else. Right,
So we really do want to redirect and propose, Hey,
here's a playlist of alternative content that we think might

(20:32):
be interesting. And so you want to find
the most transparent way to do that, and
you want to find the most sort of clever way
to do that, again avoiding all the potential
traps that are all around this question.
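
[A toy sketch of the redirect idea described above, assuming you have a curated list of flagged search terms and an alternative playlist; the terms, URL, and function names here are hypothetical, and the real Redirect Method was far more careful about keyword curation. The point it illustrates is the transparent hand-off: redirect and propose, don't trick.]

    # Hypothetical flagged terms and counter-content playlist (illustrative only).
    FLAGGED_TERMS = {"example recruiting slogan", "example propaganda series"}
    COUNTER_PLAYLIST_URL = "https://example.com/playlists/alternative-voices"

    def handle_search(query, search_backend):
        """Serve normal results, but when a query matches terms tied to
        removed propaganda, transparently offer curated counter-content."""
        normalized = query.strip().lower()
        if any(term in normalized for term in FLAGGED_TERMS):
            return {
                "results": [],
                # Labeled clearly as a suggestion: the user is not tricked.
                "suggestion": {
                    "label": "A playlist of alternative content you might find interesting",
                    "url": COUNTER_PLAYLIST_URL,
                },
            }
        return {"results": search_backend(normalized)}  # normal search path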

Speaker 1 (20:49):
I mean, I remember covering it and thinking,
this is such an interesting idea. Like, I didn't realize
that you were behind it.

Speaker 2 (20:56):
It really was, you know, teamwork from a lot of
researchers who, as you said, were really close to the question, right.
The idea was we had to sit with people and
really understand when you end up in a rabbit hole
of consuming terrorist propaganda, what led you there?

Speaker 1 (21:11):
Right. By the way, I don't think people understand. Maybe
this is me having done a documentary on like someone
from ISIS who was killed in a drone strike. He
was kind of a hacker type, and one of the
things, he was actually in charge of their social media.
His name was Junaid Hussain, aka TriCk. Maybe you've
heard of him, but like I remember he was in
charge of their propaganda stuff, and I remember like thinking like,

(21:33):
oh my god, like he's making, this is gonna sound
really weird to say, we might have to cut it,
like he makes ISIS look very human. He makes
ISIS out to be like punk rock, like, you know,
with these rap videos and like these videos
that are so compelling and, I'm sure you know,
very human. So when vulnerable people are going

(21:55):
to, you know, look at these videos, and
this is where I think the human thing comes in.
It's like we all sit here and we think like
these people are crazy they're going to join ISIS, or
these people are crazy they're Russian trolls or whatever it is.
But there's like a lot of humanity behind how people
end up getting into these spaces, right, and it's so.

Speaker 2 (22:14):
Much more subtle than we make it out to be. Yeah,
you know, you know who was really really good at
micro targeting?

Speaker 1 (22:20):
Though?

Speaker 2 (22:21):
Who? ISIS.

Speaker 1 (22:23):
In what sense?

Speaker 2 (22:24):
What was really interesting is we tend to think that
terrorist propaganda is just one big thing where it's like
one bucket of very clear graphic imagery. But what we were
observing actually is tailored narratives targeted at very specific communities,
very specific people. Right? You don't convince a
young British girl the same way you convince, uh,

(22:45):
you know, a Chinese Muslim. And online, it's just a
much more nuanced and subtle and micro-targeted picture than
we often imagine.

Speaker 1 (22:56):
And so you left and joined Graphika. And then your
first assignment seemed like a pretty significant one, right, You
were involved in like a super secret project for the
US Senate Select Committee on Intelligence. Can you just like
give us the, like, don't give us the company line,
like tell me, like, they came to you. What did
they say? Like, what was the mission?

Speaker 2 (23:18):
You know, at that stage, I had been working on
Russian interference for for a little while already, and so
I was I was already pretty you know, pretty obsessed
with it. And I remember actually when my boss called
me, the CEO of Graphika called me
and said, like, hey, Cam, what if like we had
all the data from you know, everything that the Russians

(23:39):
have been doing across platforms, and we could really
untangle and understand what's going on, like wouldn't that be great?
And it was kind of like, what's up with John? Yeah, John,
that would be great. Just give me a magic data box.
That would be just super great. And he's like, okay, well
I think we're gonna do that, and I was like

(24:00):
what wow? And yeah. That was basically the assignment. The
Senate Select Intelligence Committee really wanted to get to the
bottom of what had happened. And I think we don't
often recognize how little we knew then, and we still
have gaps in our understanding of how really this campaign

(24:20):
unfolded in twenty sixteen, but also before and after, and
so it was extraordinarily exciting to be able to help
the Senate who had gathered all this data and really
gave it to us with total free rein. Right, they said,
tell us what you see. I think my first instinct was,
you know again, as I said, like, at this stage,

(24:41):
I had all this already in my heart and in my head,
and so I was already looking for bits and pieces
and it was IRA data, and so I remember the
first thing we did is I was like, oh, well,
here are the things I expected to see in that
data and that are not there. And so it taught
us very quickly that the IRA, the Internet Research Agency,

(25:02):
the troll farm that's based in St. Petersburg was one
part of the problem, but it was not the full campaign,
and so I knew of other campaigns. Later we realized
there were the GRU campaigns, right, which were lacking in
the data set. And so it was like this for
seven months before the report went public, and then again
after, and honestly, it's, you know, it's continuing
to be an endeavor, a puzzle of having to figure

(25:24):
out what really happened. Who were the different entities involved
in this campaign, how did the targeting take place, what
is the exact relationship between the hacking and the trolling
and the targeting, how did the platforms respond, and even
more fun, how did the Russian trolls respond to the
platforms responding? And so we had all of that in

(25:46):
sort of millions and millions of data points.

Speaker 1 (25:48):
So what does that mean, like millions and millions of
data points? Like, how do you do a
whole analysis around it?

Speaker 2 (25:54):
Yeah. So we wrote this really long, you know, report,
and we tried to talk about the big trends and
everything we observed and the role of the different platforms
and how long this had been going on. I think
the few trends that we really tried to highlight was
this was not just a campaign against the US. It
was a campaign that had been waged against the Russian

(26:16):
domestic population first, right against other populations in Eastern Europe
and also a little bit in Canada and in Germany.
And similarly, I think people were very focused on twenty sixteen,
and we were able to demonstrate that it had been
happening before. Right. So Project Lakhta, the big US focused
project of the IRA, actually started in twenty fourteen, and

(26:39):
in those two years before the election, there's a lot
of fascinating detail of the you know, the Russians really
learning to play the Americans, right, like what are the
hot button issues? Like what are the triggers? What can
we try? And so we also looked at those two
years of experimentation in which there really are bizarre cases. Right,
there's, like, what, oh, I think in twenty fifteen

(27:01):
around Thanksgiving, the Russians are trying to freak out everyone
telling people that the turkeys they would buy at Walmart
will have salmonella. Why do that? And so this is
just like a bunch of weird, you know, bizarre,
like, food hoaxes. And of course,
there's a famous case called Columbian Chemicals.
That one is twenty fourteen. It targets a small

(27:24):
community in Louisiana. This one's interesting because it involves SMS,
and so they're telling people, releasing videos, texting officials, saying
a chemical plant has exploded, and they're trying to create
a panic. It works to some extent, as in, like
you know, it's reported a bit and then the message circulates,
but very quickly the local authorities say, actually, that's

(27:44):
a hoax and it's not true, and they kind of
move on, which, to be honest, you know, I understand.
I think in twenty fourteen in Louisiana, if you were
to have said it's a hoax and we think it's
a Russian troll farm, I think you would have sounded
insane, you know. But there were
things like this for at least two years before the election,
and of course they continued targeting the American public after

(28:09):
the election. Right, So twenty seventeen is a really interesting
year too because people are talking about Russian trolls in
twenty seventeen, right, it's the new reality, and so the
Russian trolls themselves are making jokes about it, right, So
you have fake profiles that start making messages saying, oh,
I'm reading all these stories about Russian trolls. It is ridiculous.

(28:30):
Next time I'll be accused of being a Russian troll,
haha.

Speaker 1 (28:33):
Right, so they kind of like adopted the narrative of the Russian troll.

Speaker 2 (28:36):
Absolutely, and then they start adapting to platforms responding
to this activity. I worked a lot on the part
of that activity that targeted Black American activists in the US,
and part of this effort was to create fake activist
organizations and to work with real activists on
the ground to do events together and to really sort

(28:58):
of, like, you know, embed themselves in that community. And there
was a specific group called Black Matters US and when
Facebook determined that Black Matters US was a fake group
and was a Russian entity, they removed it from the platform,
but they didn't coordinate with the rest of the industry,
and so what really happened is the group went to

(29:19):
Twitter and started complaining about having been kicked out of Facebook,
saying we're really upset that Facebook supports white supremacists, and
then they started going on Google and they bought a
lot of ads to redirect people to their new websites,
because they had to direct the traffic away from Facebook,
where they had been kicked out. And so twenty seventeen is

(29:39):
this like really sort of you know, surreal year for
the Russian trolls where they're playing cat and mouse with
the industry who still doesn't fully have you know, their
mechanisms well set and doesn't really have their policies well
set either. So it's kind of chaos and confusion
for everyone. And then the Russian trolls start talking about
Russian trolling, it gets a bit meta, and then of course

(30:02):
in twenty eighteen there are the midterms. In twenty nineteen, they're
also showing a different facet.
I think what's interesting from my perspective is people
often think that the Russian campaign is one year and
one thing. I've seen it evolve over so many years
and show so many different facets.

Speaker 1 (30:20):
Yeah. When you interact with these people, out
of curiosity, like, do you just sit and watch them
from afar? Do you have like an
undercover name or something where you're talking to Russian trolls
as someone else like, what's your deal?

Speaker 2 (30:34):
So I've talked to a few people who have worked
in troll factories, Russian and others. It's funny
that you mentioned undercover. That's not the type of work
I do. But one of the reasons we know so
much about specifically the Russian Internet Research Agency is because
a young Russian journalist went undercover and published everything she

(30:58):
could find, and she did that quite early. I think
what's interesting, it's an interesting reminder that honestly, the activist
communities and the investigative journalist community knew about this and
really went through great pains to document it before the
rest of the world in Silicon Valley really cared about it.

Speaker 1 (31:16):
You said something that I thought was really interesting.
You said this work is two parts technology and one
part sociology. Yeah, what did you mean by that?

Speaker 2 (31:23):
A lot of that is really about understanding socio-technical systems, right?
So when you think about information operations, it's not really
like hacking, right, It's not looking for a technical vulnerability.
It's looking for a social vulnerability. It's looking for what's
going to play well into a society's division, what's going

(31:46):
to fall in between two rules that a platform has
and that's going to make them not catch me, right?
A lot of this is really playing with social systems
as much as it is playing with technical systems.

Speaker 1 (31:59):
Speaking of the humanity of it, you talk about kind
of bringing a hacker mindset to the data security problem,
and like, what I think is so interesting about you
is, I mean, you went and like talked to trolls, right?
Like we have this whole mindset of, who are
these people who are doing this? Like in America, we're like, oh,
the Russians are trying to mess with democracy? And you actually,

(32:20):
maybe this is just me selfishly as like a journalist
who like loves to talk to the other side and
like loves to talk to the dark corners where people
aren't looking and hear the other side. You did that, right,
Like take me into that. So you actually found people
who were working in Russian troll farms and talked to them.

Speaker 2 (32:38):
Not just Russians. I think I was always very
interested and I think you know, brought that mindset to
my work. I was really interested in understanding more from
the other perspective, right, So yeah, I talked to trolls
and hackers who made a living doing disinformation and propaganda

(32:58):
for hire.

Speaker 1 (33:00):
Take me into the rabbit hole. How does one decide to
do that? Like, are you sitting at your desk, and,
like, how do you even get
in touch with these people?

Speaker 2 (33:06):
And again, it's so different. Yeah, and you know,
as a journalist you get that, right? Every story is
really different.

Speaker 1 (33:13):
Well, that's why, no, it's really probably challenging. So this
was why I'm kind of sitting here being
like, props. Like, give me some
specific examples. Are there any people that really
stick out to you, that you spoke to, that just
surprised you?

Speaker 2 (33:25):
Means they were all fascinating stories. Uh, I have to say,
I've heard so many different stories that I would be
I would really struggle to paint it with one brush.
Things that come to mind is I've talked to Hacker,
who did propaganda for hire all across Latin America. And
that was way before people were worried about Russian troll farms,

(33:48):
you know. The entire disinformation-for-hire scene,
trolls and bots and fake profiles in Latin American politics,
that was quite fascinating.

Speaker 1 (34:00):
In what sense? Like, why'd they
do it? Just for...

Speaker 2 (34:04):
It helps them win an election, right? Like, it's just like a...

Speaker 1 (34:08):
Patriotism? Was it just money?

Speaker 2 (34:09):
Like it's just like why do people work on political campaigns?

Speaker 1 (34:13):
Right?

Speaker 2 (34:13):
That was his shtick. It's like, you know, you're assembling
a political campaign, you're getting a communications specialist. Do you
want this guy who can bring you a little army
of fake profiles and bots and trolls? Right? He kind of
like made a niche for himself like that. He was pretty successful,
and it ended sort of badly, because he got caught and
ended up in jail. So yeah, that's

(34:35):
like one story. It's interesting because the campaigning angle
came up a few times, right. So, talking to people who
went into doing digital campaigning really out of patriotism
to support their candidates, right and slowly saw the campaign
apparatus evolve into like a state propaganda machine after their

(34:57):
candidate came into power. And so there are a few,
you know a few stories like this of people who said,
initially I joined because I wanted to do campaign messaging
for you know, for my candidate to win. And then
I woke up and I was just sending rape threats
to women journalists using fake accounts and wondered, what, you know,
what happened? What am I doing there?

Speaker 1 (35:18):
Someone said that to you? Yeah, wow. What did they say?
Just that, you know, like...

Speaker 2 (35:23):
That it slipped, and that they went
in for one thing, and that with the success of
the candidate and the evolution of the machinery, they ended
up just really doing something else.

Speaker 1 (35:35):
Can you give any details of the candidate, or
of this person?

Speaker 2 (35:39):
Was a story that happened in India.

Speaker 1 (35:41):
Wow.

Speaker 2 (35:42):
But again, like, I've heard that a few times,
and I think that the story of doing something for
political reasons that ends up sort of like putting you
in the middle of a machinery that's no longer what
you had joined is one that's more common
than what we think. There's also been other researchers who've

(36:02):
done great ethnographic field work talking to trolls. Someone you
know specifically who comes to mind is a friend
who wrote a report called the Architecture of Disinformation that
looks at what happens in the Philippines, and it's a
really fantastic report. And he's talking to people who self
identify as doing this activity. They don't say trolls, right,

(36:24):
they don't say I'm a troll, but they say, yeah,
you know, I make my living by having a lot
of fake profiles. And if you're a candidate and you
want to pay me for this activity, I will I
will do that. And I think in his work what
you see come through is a question on
when did that become an illegitimate activity? Right? Like, there is

(36:46):
indeed a real business of people who do this for
hire and who suddenly are told like, you're a troll
and you're going to be deactivated. And I
think part, you know, part of what you hear when
you talk to people on the other side is okay,
wait a minute, because I've been doing that for a
little while and I thought it was okay, right? Yeah,
it was news to me.

Speaker 1 (37:06):
Did you ever find yourself really liking these people?

Speaker 2 (37:10):
Yeah, you know, you've got empathy for, and again, like,
such different trajectories, right? Like you have empathy for someone
who works for a candidate and suddenly says, like, what
am I doing here? You know? Yeah.

Speaker 1 (37:23):
Did you ever learn about how they learned how to
pose as American? Like what's the secret sauce? Like, what
is the secret sauce to posing as an American these days, online?
I mean, I'm sure it's changed over the last couple
of years, and it might not be rocket science, but.

Speaker 2 (37:39):
No, actually it's fairly complicated. So we know a lot
about how the IRA learned how to pose as an American, right,
And as I said, like, this is where the early
days of the IRA are really fun, because this is
when they have to learn, right, this is why they're
playing around with like, oh, how much can we freak
people out by talking salmonella and turkeys around Thanksgiving? Right, Like,

(38:00):
this is them trying to figure out, like, where America's
hot buttons are. We know they were watching House of Cards,
which I still think is hilarious.

Speaker 1 (38:08):
They were watching House of Cards? Okay, how do you know that?

Speaker 2 (38:11):
It's in a defector's testimony. But really, also,
the legal indictments have a lot of sort
of like crazy details on everything that the IRA did
to sort of like learn to be American. Right,
so we know they took field trips. That's part of
how some of the employees ended up being indicted was

(38:33):
they entered the country with tourist visas, and I think
a few years after the government was like, I don't
think you were here for tourism.

Speaker 1 (38:41):
So troll farm field trip.

Speaker 2 (38:44):
It's a troll farm field trip, you know. You'll
observe people, you understand how they act,
what they talk about. They were also looking at their
social media metrics very closely, right. So whenever they put
out new posts in a group, they would take
notes on what's performing, what's not performing. They were talking

(39:06):
about how to target specific groups and other groups. And
of course I think the thing that we tend to
forget is they were also targeting Americans, right. They were
talking to Americans. They were using their fake personas to
have long dialogues with American activists on all sides of
the spectrum, saying, hey, what do you think about, what

(39:29):
does the community they think about? How are we going
to do an event together? And so honestly, they were
doing serious research.

Speaker 1 (39:40):
We've got to take another quick break. But when we
come back, it's not just Russia using sketchy social media tactics.
Could American political candidates be using fake accounts to win
your vote? And if you have questions about the show,
comments, honestly anything, you can text me on my new
community number nine one seven five four zero three four

(40:02):
one zero. Did you ever worry? Just because I know

(40:25):
you're kind of in these dark corners of, like, you know,
dealing with troll farms, the GRU, like, I mean,
also, like, real, well-funded governments who are trying
to influence outcomes in some of these very dangerous ways.
Did you ever worry, maybe this is an extreme question,
about your safety?

Speaker 2 (40:43):
That comes to mind, comes to mind. Yeah, of course,
I you know, try to be as safe as I can.
I also don't worry about it too much because I
also work with a lot of people who are I mean,
it's not a race, but you know, thinking about people
who are at much greater personal risk, it also helps both

(41:05):
prioritize and put some relativity on it.

Speaker 1 (41:09):
What does that mean?

Speaker 2 (41:11):
So someone I've worked really closely with over the years
on these questions, just for instance, the amazing Maria Ressa,
who is the CEO of Rappler in the Philippines
and who's a fantastic journalist. She's been arrested so many times,
She's been targeted so harshly by her government that you know, sure,
sometimes I worry about my own safety, but I think

(41:33):
more often than not, I worry about that of my
friends a bit more.

Speaker 1 (41:36):
Yeah. So many of these disinformation campaigns, the idea is
also to silence people. And, I mean, I
guess a lot of it is also silencing women
and silencing female journalists, you know?

Speaker 2 (41:48):
Silencing is definitely a key goal. I'm glad that you're
bringing this question, because besides my own safety or that
of my friends, I am really passionate about how do
we build technology to protect users from very well-funded
and well-resourced threats? Right? And when you think about it,
it's a very difficult problem, right? Like, what can

(42:09):
you do when you know that a journalist is targeted
by a nation state? There's a little-known feature, I mean,
little known outside of security circles. That's a feature that's really
near and dear to my heart. It's called the state
sponsored warning. And I've been working a lot on this
and thinking a lot about this. Sometimes, when a platform
knows that as a user you're being targeted, they would

(42:32):
actually give you a little notification that says, hey, we
think you're being targeted by a state actor. Why don't
you go and do these ten things, right? Change your
password, enable two-factor authentication, etcetera, etcetera. And I
think a lot about how much we should
celebrate these systems. They're not much, but they're almost
the only things that exist sometimes, and how we should

(42:53):
invest in making sure they're as strong and robust as possible.

Speaker 1 (42:56):
Yeah.

Speaker 2 (42:57):
Oh, and by the way, that is an ask: if you've
ever received a state-sponsored warning in your inbox and
you have thoughts about, like, your experience and want to
talk about it, send me an email. I'd love to
hear those stories.

Speaker 1 (43:07):
And by the way, first of all, can we
just take a step back, like how scary would it
be if you're just like checking your email and you
get like a state-sponsored warning?

Speaker 2 (43:15):
You know, it's really fun because I've talked to a
lot of people over the years.

Speaker 1 (43:18):
It's so strange.

Speaker 2 (43:19):
I mean, I think it's great.

Speaker 1 (43:20):
Like I love that you talk about like doing taking
on isis propaganda and you're like this is fun, And
you talk about someone getting a state sponsored warning and
you're like this is so fun.

Speaker 2 (43:28):
I'm like, no, you're right, this is terrible. It is horrifying.
But fun isn't the right word. One thing
that's really interesting is that people have such different
reactions to it.

Speaker 1 (43:38):
Right, But it's important, right, I would rather know and
want to know.

Speaker 2 (43:42):
It's extraordinarily important, which is why, like, I have met
users who tell me, like, yeah, we received this, but,
you know, we think it's a drill, we think it's things that
platforms tell us just to keep us on
our tiptoes. And I'm like, no, it's not a drill. Right,
if you receive that warning, please know this is not
a drill. You really do have to think about your security.
You really do have to enable two-factor authentication, do

(44:02):
those things, right. But you also have users, frankly,
for whom it's been so terrifying and often sometimes so
tragic that this becomes a symbol for them.

Speaker 1 (44:13):
Right.

Speaker 2 (44:13):
So, like, people have very different reactions to it, and
for some it really is sort of the you know,
the beginning sign of a journey that can be quite
quite horrifying and frightening and tragic.

Speaker 1 (44:26):
Yeah, as someone who's spent a lot of time looking
at influence in the twenty sixteen election, who just spends
so much time, like, what are we missing? I know,
I watched you on C-SPAN, you know, in research
for this. I took some time and watched you testify,
and, you know, I just
saw you talking about how the thing we're missing is,
it's not just the IRA. We're talking about the

(44:47):
GRU, and, like, how these are very well
funded, government-backed campaigns, and we're not really talking about that.
How also we're not even measuring like private messages of
people being targeted, like you know. I just think there's
so much talk about one thing right now. And my
concern as someone who covers this kind of stuff, is
that we just don't even look at other things. And
we scream about the same things a lot, which is important,

(45:09):
but we don't look at other things. So I thought
those two things were really interesting if you don't mind
getting into them a little bit. And then like, what's
the other stuff that you think we should be talking about?

Speaker 2 (45:17):
Yeah? The first thing is, indeed, you know, we've talked
so much about the IRA, and that's great. I mean
I say this as someone who's very deep, deep down
the rabbit hole. I would talk about the IRA day
in and day out, for months and months, without stopping,
but it's not the only actor in foreign influence, and
a lot of people when they say foreign influence, really

(45:40):
you know, their mental model is what the IRA did
in twenty sixteen, which again doesn't acknowledge that there were
many other actors, Russian actors, right. So the GRU
played an important part. There are other Russian entities who
participate in foreign interference and information operations, but of course
there are, you know, there are other governments, right.

(46:01):
So the first campaign by the Iranian regime targeting US
audiences I think starts in twenty ten, right, their first
foreign interference on social media campaign, So a lot of
this was happening also before we kind of like woke
up to it. So there are a lot more actors than
just the IRA, and frankly than just Russia, both

(46:23):
on the foreign side and also, as we talked about,
right on the domestic side. That's one thing. And yes,
as you said, I am worried that we still don't
have the full picture of how that specific Russian campaign worked,
and that there's still a lot that's missing from the record,
and working with activists who had been targeted, we looked

(46:47):
at the messages that they received, and we never talk
about those messages. Right, when we think about the Russian interference,
we kind of feel like, yes, that's a bunch of
tweets, you had to be a little bit out of
the loop to retweet a Russian troll. This is not what happened, right?
Some people were targeted personally and worked with the fake

(47:07):
personas for months and months, organizing events together and
discussing political life. And I think we don't talk about
that nearly enough. I think we're still lacking important evidence
from the record. And you know, I've worked with activists
whose messages have also disappeared, right, they only have their
side of the story. So trying to piece all of
this together is still I think an important endeavor.

Speaker 1 (47:29):
What do you think is the biggest threat going into
twenty twenty?

Speaker 2:
Ourselves.

Speaker 1:
What do you mean?

Speaker 2 (47:35):
You know, disinformation is really important. It is true
that there's foreign interference, but it's been very odd to
see the pendulum swing so hard. In twenty fifteen, when
I was saying, I think there is such a thing
as patriotic trolling, right, I think governments are actually doing

(47:55):
these information operations on social media. I think there is
such a thing as Russian trolling. It was kind of like, yeah, really,
and now every time there's something, people see Russians under
their beds everywhere, right. Like everything is disinformation, everything
is foreign interference. And I don't think that's helpful.

Speaker 1 (48:14):
I mean, and what about the home front?
I think, like, you said something really interesting in one
of the testimonies about kind of this gray area of campaigns,
and you said, I think because of our lack of
serious dialogue on what we're willing to accept on social
media or not, we're going to find an increasing amount
of gray area situations as we head into twenty twenty. Candidates, parties,
PR firms like are we going to see troll farms

(48:35):
from actual candidates? Are we allowed? Like what's happening behind
the scenes?

Speaker 2 (48:41):
You know, it's like two different problems, right. The first
one is people don't have a good grounding on what
is normal campaigning. I give you a specific example in
the midterms in twenty eighteen, there was a candidate who
had his supporters install an app, and the app would
get you an off token access to your account and

(49:02):
then will help all the supporters of like tweet the
same campaign message at the same time. But you know,
you would still have to install it on your phone
and you would have to give the token to the app.
Right when that happened, people completely lost it, being like,
oh my god, look at this, it's the Russian trolls.
They're back. All these messages are doing the same thing
at the same time, they're bots. And it was really

(49:22):
straightforward to see that it was not bots and it
was not Russians and it was just people using a
campaign app. Right, because we actually sort of lack, you know,
serious grounding on what normal people do in the in
course of a campaign, we're prompt to overreacting, and so
there's a need for a debate on like what is
okay for a campaign to do. Is it okay for

(49:43):
people to download an app and sort of give their
account to their candidate. Is it okay to use fake accounts?
Is it okay to automate some of that activity? And
on the other side, because candidates and campaigns don't really
talk about this, you do have a lot of terrible
ideas that are floating around. I do see people who

(50:05):
think it's a great idea to have a little troll
farm set up for twenty twenty with a lot of
fake accounts that are just going to, you know, help
amplify this or help drown that out.
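
[A minimal sketch of how an analyst might tell the earlier kind of burst apart from bot activity. It leans on the fact that tweets carry a source field naming the client application that posted them; the input shape below is an assumption. One named campaign app used by many distinct real users looks very different from throwaway accounts posting through generic automation clients.]

    from collections import Counter

    def profile_burst(tweets):
        """Summarize a burst of identical messages by posting client and user.

        tweets: iterable of dicts with 'user' and 'source' keys, where
        'source' is the client application name attached to each tweet.
        A burst dominated by one named campaign app, spread across many
        distinct users, suggests a coordinated-but-human campaign rather
        than bots, which tend to cluster on generic automation clients.
        """
        clients = Counter(t["source"] for t in tweets)
        distinct_users = len({t["user"] for t in tweets})
        return {"clients": clients, "distinct_users": distinct_users}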

Speaker 1 (50:15):
Like, do you think candidates now actually have, like, some
candidates could actually have troll farms of their own now?
Knowing what we know, do you think that there could
actually be troll farms here in the United States for candidates? Yes?
Any more details?

Speaker 2 (50:32):
I'm worried that this is not a discussion that we're
having with campaigns and parties and candidates. That being said,
I think it's slowly heading in the right direction. I
was very encouraged to see Elizabeth Warren's disinformation plan that
does say to my supporters and to my campaign, these
are the things we won't do right. It doesn't get

(50:54):
deeply into the details, but I think we're going to
need more of that.

Speaker 1 (50:57):
It actually says that they wouldn't do that?

Speaker 2 (51:00):
So we can pull up the details of the plan.
But it has a section of it that addresses the
type of behavior on social media that she discourages from
her supporters. I don't think that it specifically talks about
the use of fake accounts, which is interesting. I think
a lot of other concepts were kind of like misunderstood,
right so bots is a traditionally misunderstood concept that leads

(51:23):
to more complex discussion that people don't really want to have. Right,
It's like, what is the role of automation. What part
of your activity can you actually automate? What part of
it is legitimate automation? What part of it is undesirable automation?

Speaker 1 (51:37):
Are you seeing, and maybe you can or can't get
into details, but are you seeing, like in the US,
are you seeing candidates or people associated with candidates have
bots or troll farms or that kind of stuff?

Speaker 2 (51:48):
So far, I don't think that we've seen candidates and
campaigns sort of, like, officially do that. What we have
seen is a lot of people thinking it's a
good idea to use fake profiles to do political messaging. Right?
How much of that is at the candidate's direction, at the campaign's?
You know, I don't know. I'm hoping that we won't see more of that. But again,

(52:11):
I think a little bit more of a clear discussion
on the rules of the road in this area would
be helpful.

Speaker 1 (52:18):
I remember I interviewed Asa Raskin for this podcast and
he said he thought in the future, a threat could
be, this is maybe very Black Mirror, so just
go with me and then pull me back, but he said
that in the future, a threat could be, you know,
a bad actor using AI to take a combination of
the faces of the five Facebook people you use the
most or you talk to the most, and

(52:42):
targeting you with a face that you automatically trust, almost
like you just kind of trust that face because
it's a face that you're more used to. Your
brain just kind of registers it. Do you think we
could see something like that?

Speaker 2 (52:54):
Yeah, I think the technology is already on the table
for that, which is interesting. We recently did a report.
We called it FFS, I think Fake Face Swarm
was the official name, and it was a really interesting
report because it looked at a very large campaign of fake
profiles that used generative adversarial networks, which is, you know, basically

(53:17):
like AI to create fake faces from scratch, and so
all these profiles had these fake faces that were generated
by AI. And we realized, wow, this is something that
we really honestly thought was a little bit further down
in our future that we're just seeing there. But on
the other hand, the technology was there. It's very easy
to do. It's available to anyone. Honestly, on

(53:41):
this one, I think it was harder to detect than
to make. There are still sort of telltale signs, right? So
something that was interesting is when you create, or at
least with that generation of generative adversarial networks, the symmetry
of the faces was often wrong. Right? So if you
would have an earring on your left ear, the matching

(54:01):
earring on the right would actually, like, not match
at all. Right? Or if you had a face
wearing glasses, the left arm would kind of be off
if you compared it to the right one.
So there were telltale signs
like this. But still, I think with a lot
of the AI technologies that generate these types of outputs,
it is still the case that it's easier to

(54:23):
generate them than to detect them.
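
Those mismatched-accessory artifacts suggest a crude check: mirror the image and measure how different the outer bands are, where earrings and glasses arms sit. This is a minimal sketch assuming an aligned face photo; real faces are asymmetric too, so it is one weak signal rather than a detector, and it does not reflect Graphika's actual methods.

```python
import numpy as np
from PIL import Image

def asymmetry_score(path):
    """Crude left/right asymmetry score for an aligned face image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    mirrored = img[:, ::-1]            # flip left to right
    diff = np.abs(img - mirrored) / 255.0
    w = diff.shape[1]
    band = w // 4                      # outer quarters, where ears and temples sit
    outer = np.concatenate([diff[:, :band], diff[:, -band:]], axis=1)
    return float(outer.mean())

# Usage (the path is hypothetical): higher scores hint at mismatched
# earrings or glasses arms, but busy backgrounds and natural facial
# asymmetry also raise the score, so treat it as one weak signal.
# print(asymmetry_score("suspect_profile.jpg"))
```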

Speaker 1 (54:25):
People argue that privacy is kind of a blurry concept.
They say, I have nothing to hide. What do you say?

Speaker 2 (54:30):
Ah, there's an entire book to
be written about that. Yeah, that's not the point of privacy.

Speaker 1 (54:37):
What is the point?

Speaker 2 (54:38):
The point of privacy is the preservation of society and
intellectual independence, right? You don't have to have something to hide.
You deserve your privacy. It's a fundamental
value in a democracy.

Speaker 1 (54:52):
This next threat you talk about a little
bit is not just deep fakes. Could you just take
us through the idea of "read fakes"?

Speaker 2 (54:59):
And so, you know, people think about deep fakes a lot, right?
The ability for machine learning to generate a video
from scratch of an event that never happened, with a
limited training data set. I think that's important and interesting.
I also worry a lot about how that plays out

(55:20):
in the text space. Right? So there are a series
of language models, and GPT-2 is one of them,
tools today that enable you to take a short training
sample and generate a lot of believable text based
on that. And I worry a lot about what that
does to the disinformation ecosystem, right? Because when you spend

(55:42):
a bit of time studying troll farms and disinformation operations,
they often have to produce a large amount of engaging
and believable text, right, to sort of put out
on a varied set of properties or online accounts or domains
and, you know, fake profiles. And so I do

(56:03):
worry a lot about that specific threat, which I, you know,
jokingly call "read fakes."
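
Her worry is easy to see in practice: with off-the-shelf tooling, a short prompt yields paragraphs of fluent filler. Here is a minimal sketch using the Hugging Face transformers library and the public gpt2 checkpoint; the prompt is invented for the example, and the model downloads on first run.

```python
# Minimal sketch of cheap synthetic text, using the public GPT-2
# checkpoint via the Hugging Face transformers library.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled output reproducible

prompt = "Local residents say the new policy"  # invented example prompt
outputs = generator(
    prompt,
    max_length=60,           # cap total tokens per continuation
    do_sample=True,          # sample rather than greedy-decode
    num_return_sequences=3,  # three different believable continuations
)
for out in outputs:
    print(out["generated_text"])
    print("---")
```

The marginal cost per fluent paragraph is close to zero, which is what makes the two-hundred-site scenario she describes next so cheap to sustain.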

Speaker 1 (56:10):
How would it play out? Like, how do you see
it playing out?

Speaker 2 (56:13):
Well, if you run, for instance, a disinformation ecosystem where
you have two hundred sites that you're pretending have nothing
to do with one another, it becomes cheaper and easier
for you to keep two hundred sites sort of hydrated
with fresh content. I have a wonderful partner who is
a bit cheeky, and he teaches kids to use sort

(56:36):
of, like, deep fakes and read fakes and all that,
and I think that's actually sort of a good response.
I think people should play with those tools and
sort of understand what they can do and what they cannot do,
and have sort of a lot more familiarity with these
techniques so that they can more easily spot them.

Speaker 1 (56:57):
Last question. You said you're an optimist, or at least
your Twitter bio says you're an optimist mind focused on
dark patterns. Why, despite everything you've seen, are you so optimistic?

Speaker 2 (57:08):
Because people are great?

Speaker 1 (57:09):
What makes you... I mean, I guess I don't know
if I have a follow-up to that. Why do you
still think people are so great despite everything you've seen?

Speaker 2 (57:17):
Because I think that a lot of what needed to
be uncovered was hard to uncover. I think people worked
really hard to demonstrate that this phenomenon existed. I think
people worked hard to say, look, there are such things
as troll farms, this is how they work. I think
people worked hard to say, yes, you know, activists are targeted,
and this is what's happening. And I think despite the

(57:38):
problems growing in complexity and in size, there have always been
fantastic people chasing them and exposing them and, you know,
coming up with creative solutions.

Speaker 1 (57:50):
You work so closely with all the tech companies, so
do you think they're well equipped to take on this
next challenge?

Speaker 2 (57:56):
We're much better off than a few years ago, for sure.
We've come a long way.

Speaker 1 (58:01):
You're still optimistic?

Speaker 2 (58:02):
I'm still optimistic. I mean, we are in a much
better position, sort of, like, the tech industry in general,
than we were a few years ago. It's still
not perfect. There's still a lot to do, both in, like, creating
better rules, being better at implementing them, and creating
technology to be able to do detection faster.

Speaker 1 (58:18):
But maybe bringing more humans like you to the table.
I would just say, like, adding in the people who
actually have an understanding of humanity, because I think the
thing that seems to keep going missing in the narrative
is the human part. And maybe had we been paying
attention a little bit more to the psychology of hacking
and people and that kind of thing, you know, and
had there been more people like that in these tech companies at

(58:40):
the time, maybe that would have been something we could
have caught a little bit earlier.

Speaker 2 (58:44):
More social scientists in tech, more diverse backgrounds in tech.
You really can't go wrong with that recipe, for sure.

Speaker 1 (58:57):
So what did you think? We hit on the trolls,
the dark corners of the internet, and a little bit
of optimism. I would love to hear from you. Are
you liking the episodes? What do you want to hear
more of? I'm trying out this new community number: nine
one seven, five four zero, three four one zero. Text me.
It goes directly to my phone. I promise, I'm not

(59:19):
just saying that. And here's a personal request: if you
like the show, I want to hear from you. Leave
us a review on the Apple Podcasts app or wherever
you listen, and don't forget to subscribe so you don't
miss an episode. Follow me, I'm at Lori Siegel on
Twitter and Instagram, and the show is at First Contact
Podcast on Instagram. On Twitter, we're at First Contact Pod.

(59:40):
First Contact is a production of Dot Dot Dot Media,
executive produced by Lori Siegel and Derek Dodge. This episode
was produced and edited by Sabine Jansen and Jack Reagan.
Original theme music by Xander Singh. First Contact with Lori
Siegel is a production of Dot Dot Dot Media and iHeartRadio.