Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production of iHeartRadio's
How Stuff Works. Hey there, and welcome to Tech Stuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
iHeartRadio and How Stuff Works, and I love
all things tech. Again, kind of, this is another
one of those episodes where we have to put in a
lot of qualifiers. So in late September twenty nineteen, University
(00:29):
of Oxford's Computational Propaganda Research Project released the two thousand
nineteen Global Inventory of Organized Social Media Manipulation. And apart
from misspelling the word organized, those wacky Oxfordians
turned the Z into an S, so it's an easy mistake.
But otherwise it's a great report. And yeah, I'm making
(00:51):
a joke about the differences in spelling conventions between America
and the UK, because in all honesty, this report is
pretty scary and I need to get in my goofs while
I can. Now, I say the report is scary, but
it's also not exactly surprising. We've heard plenty of reports
all around the world over the last few years about
(01:11):
governments and political parties using social media as a way
to spread misinformation in an effort to manipulate people to
do whatever it is the party in question wants them
to do, or in some cases, to not do something,
depending on the circumstances. But the scale of the issue
is a truly global one and it's only becoming a
(01:33):
bigger problem as time goes on. Now, it's also important
to note how the researchers generated this report. This was
not some sort of deep undercover mission in which dozens
of security experts infiltrated various countries to monitor social media activity. Instead,
the researchers relied heavily on published accounts of governments and
(01:56):
political parties manipulating social media for the purposes of propaganda,
typically from a lot of news outlets, and they created
a process in which they would score news sources on
a scale of one to three. One being a pretty
reliable and reputable source of information, something that has
got a nice long-standing reputation for being a
(02:20):
foundational source for news, and then three is on the
opposite end of that right, Three would be a partisan,
biased or unreliable source. Articles that ranked a three were
removed from the whole set of articles before the
next step in the process began, and that next step
(02:40):
was to review all of those articles and then go
into a mode called secondary literature review, in which researchers
would focus on specific countries and do a deeper dive
into the news stories about manipulation and social media sites.
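To make that filtering step concrete, here's a minimal sketch in Python of the kind of pass the report describes: sources scored from one to three, and articles from three-rated sources dropped before the deeper review. To be clear, this is my own hypothetical illustration, the researchers didn't publish code, and the data structure and field names here are invented.

```python
# Hypothetical sketch of the report's source-scoring step. Sources are
# rated 1 (reputable) to 3 (partisan or unreliable), and articles from
# 3-rated sources are dropped before the deeper country-level review.
# The structure and field names are invented for illustration.

ARTICLES = [
    {"title": "Bot network uncovered ahead of election", "source_score": 1},
    {"title": "Ministry denies troll farm allegations",  "source_score": 2},
    {"title": "Shocking truth THEY won't tell you",      "source_score": 3},
]

def filter_reliable(articles, cutoff=3):
    """Keep only articles whose source scored better than the cutoff."""
    return [a for a in articles if a["source_score"] < cutoff]

if __name__ == "__main__":
    for article in filter_reliable(ARTICLES):
        print(article["title"])  # only the articles scored 1 and 2 survive
```

Only the articles that survive that cut moved on to what the researchers call the secondary literature review.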
This included further research that collected stuff like research papers,
government reports, and other publicly available information. Then the
(03:03):
researchers prepared country case studies for most of the countries
they covered in their initial search. Their case studies laid
out specific instances and strategies that were found in the
respective countries. The researchers then called upon experts to review
the case studies, and the experts were there to evaluate
(03:23):
the reliability of the data and also whether or not
the case studies accurately reflected the information that was available.
So not just are these facts accurate, but is the
way that the report presents the facts, is that
in itself accurate? Because you could have some accurate stats
and then still report on it in a way that
(03:43):
is not, you know, entirely representational of the truth. So
the experts were sort of the peer review process for
this report, and you could argue that the report is
itself sort of a meta study that would be a
study that pulls information from many other established sources as
opposed to an original study that does new and original research.
(04:04):
This one was dependent upon stuff that had already been
written about these various countries. One thing I think
the report does particularly well is that the researchers acknowledged
what makes this manipulation possible in the first place. The
amount of information we have access to at any given
time is truly monumental. Let's think back a few decades
(04:28):
Imagine what it would be like before the eras of
radio and television. Back when you would get your news
from print. You would get newspapers or journals or magazines
or other periodicals. That was pretty much the only way
you were going to get any news beyond what's just
going on in your immediate neighborhood. Radio and TV brought
(04:49):
with them the ability to spread news faster and in
a wider distribution. Then came cable and the invention of
twenty four hour cable news, and now we had a
whole new problem. Suddenly we had to find a
way to fill up twenty four hours of news time
every single day. When before newspapers, radio programs, television programs,
(05:11):
they would curate the most important news stories because you know,
you had a limited amount of space and or time.
But with twenty four hours, suddenly time was not as
big of a problem. In fact, it was the opposite problem.
How do you fill it all up? Then we get
to the Internet, which, like the twenty four hour cable
news channels, is also always on. And much of the
(05:32):
business that is on the Internet relies on doing a
few things, and they're all related. And this is not
going to come as news to any of you, but
I want to lay it out. So, business on the Internet,
if that's in fact where you are, is really dependent
upon generating revenue via the Internet itself. You want to
attract as many sets of eyeballs as possible, so get
(05:55):
as many people to visit your website as you possibly can.
You want to keep those eyeballs on the company's web
pages as much as possible, so you don't want people
bouncing off and leaving. And as a consequence, you also
want to serve up as many ads to those eyeballs
as possible, because that's generally how most web based businesses
(06:15):
are generating their revenue. You know, obviously you've got
other businesses like retail that use the web as a
portal to shop the inventory of the retail store. But
for anything that's specifically dependent upon the web itself for revenue,
that's generally how this works unless you're doing a subscription model,
and all of these things I just mentioned contribute to
(06:38):
ad revenue. So yeah, I know, again I'm starting with
the obvious, in that if you didn't know this already,
you probably had noticed it at the very least. But
this is why you find so many web pages that
take kind of irritating approaches, like they'll create a gallery
or slide show approach which forces you to click on
the next button to generate another page view. So rather
(07:00):
than just lay it all out in one page where
you can just scroll through and read everything, you're clicking
over and over again. Well, those clicks count as page views,
which help the company that makes the web page
market it to advertisers, or the websites that are designed
for mobile that just have super long articles that scroll
(07:23):
and scroll and scroll. You might get a picture, a
line of text, and then an ad, and then you'll
get another picture and a line of text and another ad,
and then you finally get around to finding the point
where the article actually tells you whatever the headline was
claiming at the top. And that's not even really touching
on the whole concept of click bait, in which a
(07:43):
title and thumbnail image are carefully curated to get as
many clicks as possible, sometimes with no regard as to
whether or not the final web page actually reflects whatever
the original promise was of the title and the thumbnail.
And one of the consequences of all this data around us,
coupled with the various methods companies are using to get
(08:05):
our attention, is that we don't tend to spend a lot
of time actually, you know, attending to anything. We can't. Instead,
our attention drifts over data point after data point. Meanwhile,
we've got a limitation on how much we trust information sources,
so skillful manipulators take all of this into account when
(08:27):
designing an approach to manipulate the public. The executive summary
of the report lays out the scale of the problem
right away, and according to the researchers, their work uncovered
instances of governments or political parties using social media manipulation
in twenty eight countries. A year later, the number
(08:48):
of countries had grown to forty eight, and this year
twenty nineteen, the number of countries in which at least
one political party, if not the government itself, is using
social media for manipulation purposes is at seventy countries. The
researchers also point out this doesn't necessarily mean the number
of countries with governmental agencies using misinformation online is doubling
(09:11):
year over year. Part of the increase may be due not
to more countries doing this, but rather our growing awareness
and ability to detect social media manipulation. So it's not just
that more people are using these techniques though that seems
like it's a pretty safe bet, but also that we're
getting better at detecting them, so in places where it
(09:32):
may once have been overlooked, we now know about it.
So again, it's just that our tool set
has gotten better, so that's also contributing to this growing number.
And listing all the countries of those seventy would be
pretty tedious, but pretty much everyone you would expect to
be there is there. That includes the United States and
(09:52):
the United Kingdom. It also includes Russia and China. Other
countries on the list include India, Greece, the Czech Republic, Nigeria,
North Korea, Pakistan, Brazil, Cambodia, Saudi Arabia, South Africa, Spain,
and many many more. So the researchers were casting a
broad net around the globe. In this study, much of
(10:15):
that focus has been on how governments use social platforms
to manipulate things within their own borders, so, in other words,
domestic concerns. But the researchers also found some reports about
foreign influence operations or attempts to manipulate people who are
living in other countries entirely. Now, this focus was more
(10:37):
narrow than the overall domestic focus because it's a challenge
to get a handle on how frequently this foreign influence
operation stuff is happening because platforms like Twitter and Facebook
have either limited the investigations into such things or the
reporting of any findings they've had has been pretty limited.
(11:00):
For example, those platforms have, at least in their reporting,
limited all their actions against campaigns that originated in just
seven countries, those seven being China, India, Iran, Pakistan, Russia,
Saudi Arabia, and Venezuela. And to be clear, we're talking
about just the stuff the researchers were able to find.
(11:23):
I think it's safe to say there are probably instances
of this that have yet to be uncovered, and some
countries they didn't have time to look into thoroughly. There
are one hundred ninety five countries in the world, or one hundred
ninety three if you only go with countries recognized by the United Nations. So
this is a really big deal. Seventy out of a hundred ninety five,
(11:44):
that's a significant number. Now, in this episode, I'm going to
go through the report with you guys to talk about
what they found and what it all means, and maybe
think about what we might do to protect ourselves and
those around us from being manipulated. Bad news: there's not
a whole lot we can do on an individual basis,
but we'll get there. Now, part of what makes this
(12:07):
all challenging is that the Internet isn't exactly the same
all across the world, as you guys know. I live
in the United States, and despite a few attempts to
shut down access to servers that were hosting pirated media,
because corporations have enormous sway in the United States, internet
access in the States is largely unfettered, so essentially, if
(12:31):
it's out there, you can access it in the United States.
This is an overgeneralization, but you get the idea.
There are other countries that restrict, to one degree or
another, that sort of access. In China, for example, there
is the famous Great Firewall of China. The term describes
not just the technology used, but the political policies in China
(12:54):
that restrict citizens' access to the Internet. State-approved sites
and services are fine, they can access those. The Chinese
government subjects other stuff to heavy censorship or just blocks
it outright. Controlling information is one way to maintain control
over a population, and China is one of the most
obvious examples of that happening today. Another one would be
(13:18):
North Korea. One of the big developments in twenty nineteen
saw China get more involved in foreign influence operations. Previously,
nearly all of China's propaganda efforts were confined to China
itself and its strict control over Internet access to its citizens.
But in two thousand nineteen, with the rise of public
demonstrations and protests in Hong Kong, the country began to
(13:41):
initiate misinformation campaigns on social media to attempt to undermine
public support for Hong Kong, casting the protesters as lawless
and violent. The researchers state that there's no reason to
assume China will stop using social media in an effort
to shape the public understanding of things that are of
importance to the country, So we'll probably see that country
(14:03):
continue its foreign influence operations. Now, a lot of countries
fall between these extremes I've just laid out, and you
could also argue that neither the US nor China are
truly on the very opposite ends of the spectrum, but
that's the way they're often portrayed. Now, for the purposes
of the report, the researchers decided to focus only on
(14:24):
cases in which there was a clear mandate from a
government or political party to initiate the manipulation. This is
important to distinguish because there may be many cases in
which hackers, activists, companies, subcultures, or other groups of people
could act on their own without the explicit permission or
(14:45):
mandate of a government or political party. In cases like those,
the ideologies and the goals of the group and the
government just happened to align, but there's no explicit direction
from the state to commit any acts. You may remember
the massive two thousand fourteen
cyberattack on Sony, which involved the theft of a
(15:07):
ton of confidential information within the company. A hacker group
called the Guardians of Peace claimed responsibility, and while the
hacker group's goals were in line with those of North
Korea as a whole, the country of North Korea maintained
it had not directed any hackers to go after Sony.
If that were the case, which is still a matter
(15:28):
of dispute, then it would be an example of what
I was saying earlier. So the report would focus only
on stories that link back to official government or political agencies,
and not campaigns from apparently independent groups that just happened
to align with those governments. Okay, so those are the
basics when we come back. I'll talk more about specifics
(15:51):
in the report, but first let's take a quick break.
According to the report, authoritarian governments and political parties use
social manipulation to achieve one or more of three general outcomes,
(16:12):
and the first is to suppress fundamental human rights, which
I think we can all agree is pretty horrifying. The
second is to discredit political opponents, and the third is
to drown out any dissenting opinions. Now, those last two
points show that these manipulators are relying on a tool
set used by Internet trolls. In general, trolls will use
(16:35):
all sorts of manipulative tactics to dismiss anyone they target,
and to rely on strategies to start flame wars or
other distractions to keep the targeted party from being able
to make any sort of impact. So, in some ways,
the techniques being used in social manipulation are already incredibly
familiar to us. It's just that instead of popping up
(16:56):
on a message board centered around, you know, Dancing with the Stars,
it's a government or political party trying to establish control
over a population. However, it does get more complicated than
just riling up folks on the internet. The report's introduction
starts off with the sentence, and I quote, Around the world,
(17:17):
government actors are using social media to manufacture consensus, automate suppression,
and undermine trust in the liberal international order, end quote.
Now some of that still falls in the troll wheelhouse.
Manufacturing consensus, for example, this is when a party attempts
to make it seem as though the majority of people
(17:37):
around you all believe a certain philosophy or a course
of action is the right one. It's the whole concept
of go with the crowd, right, or if you're being
particularly cynical, it's likening people to sheep or cattle. People
tend to go along with what others are doing because
to do otherwise, to go outside of that, is to
(17:59):
invite scrutiny or criticism, and a lot of us just
prefer to avoid that, so, rather than make waves, we'll
go with the flow. Trolls do this online through stuff
like sock puppet accounts. That's when a troll makes two
or more accounts for an online discussion, and the troll
will use their primary account to make whatever statement they want,
(18:19):
and then the sock puppets are used to help the
troll achieve that goal, typically by adding support in some way,
so the troll is controlling all three or four, however many
of these accounts there are, and using the sock puppet
accounts to add support and credence to whatever the troll
is saying. So to an outside viewer, it's as if
(18:41):
the troll has said something and that other people are
now chiming in to support that something. But in reality,
it's just the troll, or sometimes a group of trolls,
manufacturing that consensus, when in fact no such consensus exists
within the group at large. Well, countries and political
parties are doing the same thing, but on a much
larger scale. Governments do this by employing, either directly or
(19:04):
otherwise, agents to do the dirty work, and the report
refers to these agents as cyber troops. And some of
those cyber troops are a little extra cyber. That is,
some of the agents working on behalf of achieving the
goals of these various governments and political parties are bots.
They are programs that generate automated responses with the goal
(19:25):
of either suppressing messages in opposition to the party's goals
or elevating and escalating language that supports those goals. The
researchers identify three types of fake accounts in the report,
human, bot, and then cyborg. Now, out of the seventy
countries that the researchers looked at, fifty were found using
(19:48):
fake accounts run by bots in some way, mostly to
spread certain messages while drowning out dissent. Interestingly, human run
accounts were even more widespread, with sixty out of the
seventy countries employing them in some way to run fake accounts,
and typically the people who are running these fake accounts
would engage with other people who have real accounts, like
(20:12):
the real users of these sites, by leaving comments on
posts or sending private messages, and otherwise attempting to start
up conversations aligned with the overall goals of the communication strategy.
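The report doesn't publish detection code, but to give a flavor of how an automated account might get flagged, here's a toy heuristic of my own devising: it scores an account on posting volume and how repetitive its messages are, two signals commonly associated with bots. Every threshold, weight, and field here is an assumption for illustration, not anything from the report.

```python
# Toy bot-likeness heuristic (my own illustration, not the report's method).
# Very high posting rates and highly repetitive text both hint at automation.

from collections import Counter

def bot_likeness(posts_per_day: float, messages: list[str]) -> float:
    """Return a rough 0..1 score; higher means more bot-like."""
    # Signal 1: posting volume (assume ~100+ posts/day is suspicious).
    volume = min(posts_per_day / 100.0, 1.0)
    # Signal 2: repetitiveness (share of messages that are exact duplicates).
    counts = Counter(messages)
    repeats = sum(c - 1 for c in counts.values())
    repetition = repeats / max(len(messages) - 1, 1)
    return 0.5 * volume + 0.5 * repetition

if __name__ == "__main__":
    spam = ["Great leader strong!"] * 9 + ["Support the party."]
    print(round(bot_likeness(250, spam), 2))   # high score, bot-like
    human = ["lunch?", "see the game?", "traffic is awful", "new podcast ep"]
    print(round(bot_likeness(6, human), 2))    # low score, human-like
```

Real platforms lean on far richer signals, things like network structure and account metadata, but the basic idea of scoring behavioral tells is the same.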
The report also looked into instances of hacked accounts, in
which a person's online account would become compromised and then
pulled out of their control. Then a bot or a
(20:35):
human agent or the hybrid cyborg could post as that person.
This strategy accomplishes two things at once. It can silence
someone who otherwise might speak dissent against a government
or political party, and it can appear to lend credibility
to that government or political party by having a quote
unquote real person add to the conversation in a way
(20:58):
that furthers the political goals. Complicating matters is that
in some countries governments encourage citizens to engage in spreading
propaganda and silencing dissenting voices on behalf of the government.
This would be a strategy where you say it's all
part of being a good patriotic citizen of that country.
(21:20):
In those cases, you're not talking about compromised or fake accounts. Instead,
you're talking about indoctrinated people using their accounts to support
their respective governments. And because we're starting to see some
platforms actually make moves against bots and other fake accounts,
this could become a more common practice in the future.
(21:41):
It's a lot harder for social platforms to remove quote
unquote real accounts that happen to spread propaganda without running
the risk of being labeled as partisan or advocating for censorship.
As for the messaging, the research team identified five general
types of messages and their intended effects. So number one
(22:02):
is the straightforward pro government propaganda. So these are messages
that praise whatever power is behind the manipulation, just you know,
yay America, or yay our glorious leader, or go Sox.
Next are messages designed to discredit or defame political dissidents
(22:22):
and opponents, which might include a mixture of truth and
misinformation about the target. So this is where you have
identified some potential opponent to the powers that be, and
you use every tool in your tool chest to make
that person seem like the worst human being in the
world and no one should ever support him or her.
(22:42):
The third path is to use misinformation to distract from
important issues. In the United States, you'll hear a lot
of people complain about this sort of activity, in particular
in which let's say a government official or an agency
issues an outrageous or a controversial message, and people
will say it's an effort to pull the focus off
(23:03):
of matters of more critical importance. I'm not saying this
actually happens all the time in the United States; instead,
I'm saying people talk about it happening all the time
in the United States, that when someone does make such
a statement, one of the frequent responses is, this is
just to pull focus away from X, you know, and
(23:24):
it's not really a genuine attempt to start a real
conversation about this other thing. And the fourth type of
message is one meant specifically to polarize and divide a population,
because if you divide the people, if you push them
further to extremes, it means that the people will not
unify on anything. They're less powerful divided than they
(23:48):
would be unified. So while that can lead to other
major social problems, as you push people to political extremes,
you've also really decreased their ability to organize
on any meaningful level, and they can't really counteract
what the government is doing. The fifth type of message,
the final one, is a direct attack on dissidents themselves,
(24:11):
made in an effort to drown out their voices through
any means necessary. And the researchers also pointed out that
authoritarian regimes use social media propaganda in conjunction with other
methods of intimidation, including surveillance and threats of violence. Frequent
targets would be political opponents, journalists, and sometimes members of
(24:32):
the population or at least large segments of the population
as a whole. If such a government can intimidate those
who would otherwise speak out against it while simultaneously manipulating
the conversation on social media to be in support of
that same government, it's in a stronger position to maintain power.
The report also details the actual communication strategies governments and
(24:55):
political parties are using. So we've talked about the types
of messages that the organizations propagate, but how are they
propagating the messages? What are those strategies? The report
identifies five key ways. Number one is creating outright disinformation.
This can include fake news articles, fake videos, that kind
(25:16):
of thing, and as technology becomes more sophisticated, it becomes
increasingly challenging for the average person to determine if something
they encounter is genuine or has been faked. In some cases,
all it takes is a few edits to remove some context,
and suddenly a message can have a very different meaning
than the original intended one. So you can take video
(25:38):
of a politician making a speech, for example. You trim
a little bit off the beginning, a little bit off
the end, and you can make it sound like that
politician is saying the very opposite of what they actually intend.
In other cases, such as with deep fakes, there's the
opportunity to manufacture an entirely fake video of a person,
and this is only going to get harder as we
(25:59):
go on. This is also the most common communication strategy.
It's employed by fifty two of the countries that the
team researched, though in many cases we're talking about more
modest examples, such as fake news sites, like an
article as opposed to a fake video. The second
strategy involves mass reporting accounts or content as being against
(26:21):
the terms of service of various platforms or organizations, so
an example of this would be a concentrated effort to
remove a person from Twitter by coordinating a big effort
to report that user to Twitter with a claim
that the user had violated Twitter's policies. So it's sort
of like a brute force attack. You overwhelm a provider,
(26:44):
a platform, with requests saying this person, this page, this
entity is breaking the rules and has to be banned,
and it's all in an effort to ban that person
or that thing. We've seen instances of this recently
with people going after figures they don't like and sending
(27:05):
complaints to that figure's employer. I'm thinking specifically of, like,
James Gunn and Disney, and James Gunn was at least
temporarily removed from being able to direct movies like Guardians
of the Galaxy because of things he had done in
his past that were, you know, legit not cool. But
he had since apologized, acknowledged them, and pledged to do
(27:30):
better long long before anyone brought this up. But it
was enough to get him removed from the project for
a while, and so we have seen that this is
an effective tactic. The third strategy is to use data
to target specific groups of people with messaging that the
attackers tailored to that group of people, because it turns
(27:51):
out that if you tell a group of people what
they want to hear, that works great. So if you
identify what your target audience is and what they want
to hear, and then you convey your message in a
way that falls in line with that, you get more success.
Fourth is the awful practice of trolling and doxing. This
is all about silencing people by intimidation, like I mentioned earlier,
(28:14):
and can include revealing a person's real world address, phone number,
and other personal information, plus information about people connected to that person.
It's kind of the mafia approach of putting pressure
on a person by intimidating and threatening them. And fifth
is actually the easiest strategy. It involves amplifying messages that
(28:35):
are already out there. So in this case, a government
or political party can just add resources to boost the
signals that are already in line with the organization's goals.
They don't have to create it themselves. They can just
create a whole bunch of, let's say, fake accounts and
retweet a message that happens to fall in line with
what they believe. Now as you can imagine, the scope
(28:55):
of these efforts varies around the world. In some countries,
the researchers saw activities centered around specific events such as elections,
and then it would die down. In other countries, it
was more of an ongoing effort that the government would
perpetually support. Likewise, some countries spend a relatively modest amount
funding cyber troops, while others might dedicate many millions of
(29:18):
dollars to a single campaign. One of the larger efforts
cited by researchers was the case of Cambridge Analytica, which
I covered in a past episode of Tech Stuff. So
I recommend you go and hunt that one down and
listen to that to hear about how that unfolded, because
that was an enormous mess. All right. When we come back,
I'll go into a little bit more about what was
(29:39):
in the report and the ramifications we have to consider,
but first let's take another quick break. The researchers created
a four point scale to describe the size and capability
of cybertroop forces around the world. On the low end
(30:01):
of the scale is the designation minimal cyber troop teams,
and these are efforts that haven't been around for very long,
or they only manifest temporarily around those political events I
mentioned earlier, like elections. They tend to be limited in
what they can accomplish, and as a result, they typically
will focus on a single social media platform to maximize
(30:24):
their results, and they also focus exclusively on domestic misinformation
campaigns and not foreign influence operations. So countries that have
minimal cyber troop teams would be countries
like Argentina, Australia, Croatia, Greece, South Korea, and a few
(30:44):
others. That's on the lowest end of the scale. On
the opposite, highest end is high
cyber troop capacity. These countries dedicate a large budget to
funding online propaganda campaigns. They maintain a large permanent staff
of people in order to do that. They not only
execute comprehensive campaigns, they also research ways to do it
(31:07):
more effectively, so they're always working to improve. The staff
works full time, not just in election years or around
other political events. They focus on both domestic and foreign operations,
and countries in this category include China, Israel, Iran, Russia,
Saudi Arabia, and yes, the United States. Now I didn't
(31:28):
include the other two categories because I'm sure you can
all extrapolate that they fit between the lowest level and
the highest level of cybertroop capacity. So really it's just
stages of capability and how much these countries are spending
on those efforts. Making matters more complicated, as countries end
up developing more effective ways to leverage social media to
(31:52):
spread misinformation, they're also spreading those techniques around the world.
The researchers specifically call out a case in which Russian
operatives taught military officials in Myanmar how to manipulate people
through social media, So we're seeing the skills being shared
across territorial borders. So which online platforms are the most
(32:14):
important for people who want to spread propaganda? Well, it
should come as no surprise that Facebook leads the pack. Now,
not saying that because I think Facebook has a wretched
track record when it comes to dealing with misinformation, although
that happens to be my opinion, But that's not why
I'm saying this. I'm saying it because Facebook is just so
(32:37):
darned popular. Analysts estimate that two point four one billion
people use Facebook. The world's population is approximately seven point
five three billion people, so about a third of all
the people in the world are using Facebook. So if
you want to get a message out there, you have
(32:59):
to go where the people are, and that happens to be Facebook.
So it's not necessarily the case that Facebook is less
effective at policing these things than other platforms are. Instead,
it's more like it's a target rich environment. It's where
the people happen to be. The researchers state that as
other platforms grow in use, particularly for the purposes of
(33:21):
political discourse, they will no doubt become targets for cyber
troops wishing to spread propaganda in the future. Another thing
working against Facebook is that it's pretty easy to figure
out how Facebook works, at least from a very high level.
The entire platform is built around the concept of engagement.
(33:42):
Facebook makes money when people interact with Facebook, so posts
that inspire more interaction with the platform, whether that's in
the form of comments, sharing a post, or sending likes
or whatever. These things, by the way, are not equal,
but they're all valuable. Anyway, those posts end up getting
(34:02):
more visibility thanks to Facebook's algorithms. If a particular post
is doing well, Facebook's more likely to show that to
more people because it's already proven to drive engagement, and
engagement is how Facebook makes money. Essentially, what it's doing
is it's selling your time to advertisers. So the more
(34:22):
time you spend on Facebook, the more money it's gonna make.
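As a rough illustration of that engagement feedback loop, here's a little Python sketch. The weights and post fields are invented for the example; Facebook's actual ranking system is proprietary and vastly more complex.

```python
# Sketch of engagement-weighted feed ranking (invented weights; not
# Facebook's real algorithm). The point is that interactions feed back
# into visibility: posts that drive engagement get shown to more people,
# which earns them still more engagement.

WEIGHTS = {"comments": 3.0, "shares": 2.0, "likes": 1.0}  # assumed values

def engagement_score(post: dict) -> float:
    """Weighted sum of a post's interactions."""
    return sum(WEIGHTS[k] * post.get(k, 0) for k in WEIGHTS)

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts so the most-engaged-with content surfaces first."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        {"id": "calm take",    "likes": 40, "shares": 2,  "comments": 3},
        {"id": "outrage bait", "likes": 25, "shares": 30, "comments": 50},
    ]
    for post in rank_feed(feed):
        print(post["id"], engagement_score(post))
```

Notice that the inflammatory post wins even with fewer likes, because comments and shares, the stronger engagement signals in this toy model, dominate the score.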
So if you know that as someone who's trying to
run a propaganda campaign, you can start to build posts
that drive that kind of engagement in various ways, or
you might even game the system a little bit by
using some state backed accounts, some fake accounts to boost
(34:43):
the signal. You post something, you get a bunch of
people sharing it and liking it. Maybe some of those
people are real, maybe a lot of them are fake accounts,
and then you try and make it go viral from there.
If it's inflammatory, all the better because it's going to
make people either want to share it because it has
affirmed a belief that they hold that
(35:05):
other people find wrong, or people are so upset at
how terrible the statement is they share it to let
other people know, Hey, do you see how horrible this is?
Either way, the message keeps on spreading. Other platforms that
the researchers cited included Twitter, WhatsApp, YouTube, and Instagram, and
(35:25):
of course others will grow in importance over time. In fact,
I've already heard about TikTok being another platform that merits
special attention in the near future, if not right darn
now. Now, on the one hand, it is distressing to
think that a tool like the Internet, which has often
been seen as a means to facilitate communication and the
(35:46):
sharing of ideas and ideals in a positive sense, has
been twisted to spread misinformation in various ways to make
people behave the way some government or some political party
wants those people to behave. On the other hand, this
isn't new. Propaganda is an old, old idea. Social media
(36:07):
hasn't made propaganda possible because those ideas and approaches have
been around for centuries, but it has made it much
more efficient and scalable than it ever was before. Also,
you can tailor it to a level that you couldn't before.
And it's a bit ironic that the platforms that were
ostensibly designed to let us connect with friends online and
(36:28):
make new ones, are simultaneously the tools of groups that
are dedicated to driving deeper divides within populations. The very
things that are supposed to bring us together are pushing
us further apart. That a tool that is supposedly meant
to allow for communication can also be a tool that
suppresses it is incredibly ironic to me, and again that
(36:49):
wasn't necessarily the intent of the people who made the platform.
But because people work the way they do, and because
people who are spreading misinformation know how to leverage these platforms,
that's what's happening. And there's been a lot of pressure,
particularly on Facebook and Twitter to do something about this.
A few weeks before I recorded this episode, Twitter announced
(37:12):
it had deleted nearly a thousand accounts that it had linked
to, quote, significant state-backed information operations, end quote. That
was in relation to the protest demonstrations in Hong Kong,
and the state in question at this point was China.
According to Twitter, the purpose of those accounts was to
divide Hong Kong, diminishing support for the protests. Likewise, Facebook
(37:36):
removed a few users and groups that it had identified
as being part of a state backed effort to undermine
the Hong Kong protesters. And because both Facebook and Twitter
are blocked in China due to the aforementioned Great Firewall
of China, the implication here is that the accounts were
attempting to shape the international perception of the story because
(37:56):
people in China wouldn't be able to see it. It
also indicates that, yes, this was state-backed, because no
one in China would be able to access those platforms
without the permission and cooperation of the Chinese government itself,
otherwise they'd just be blocked off. Over the summer of twenty nineteen,
the BBC organized a Trusted News Summit in an effort
(38:20):
to devise a strategy to combat the spread of disinformation.
The company reached out to major platforms like Google, Facebook,
and Twitter to come up with a strategy to minimize
the spread of false information and to make sure that
reputable reporting could rise to the top. Some of the
solutions the group came up with included the formation of
(38:40):
an early warning system in which one company would quickly
reach out to the others in the event it identified
a misinformation campaign. So if Twitter got wind of an
effort like the one we just mentioned with Hong Kong,
Twitter executives could send out an alert through this system
so that the folks at Google and Facebook would also
know to be on the lookout.
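Mechanically, an early warning system like that is a simple fan-out: whichever member spots a campaign notifies all the others. Here's a bare-bones sketch of that pattern in Python; the participant names and the alert format are placeholders I made up, since no actual protocol from the summit has been published.

```python
# Bare-bones sketch of a cross-company early warning fan-out (the pattern
# described at the BBC's Trusted News Summit). Participant names and the
# alert format are placeholders; no real protocol was published.

class EarlyWarningNetwork:
    def __init__(self):
        self.members = {}  # name -> callback invoked on each alert

    def register(self, name, callback):
        self.members[name] = callback

    def raise_alert(self, reporter: str, summary: str):
        """One member spots a campaign; all the other members get notified."""
        for name, notify in self.members.items():
            if name != reporter:
                notify(reporter, summary)

if __name__ == "__main__":
    net = EarlyWarningNetwork()
    for company in ("Twitter", "Google", "Facebook"):
        net.register(company, lambda src, msg, me=company:
                     print(f"[{me}] alert from {src}: {msg}"))
    net.raise_alert("Twitter", "coordinated accounts targeting HK protests")
```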
(39:03):
Those efforts actually go beyond the political propaganda covered by the Oxford Report. They
also encompass stuff like anti-vaccination rhetoric, which is unscientific
and ultimately deadly because it discourages people from
immunizing children against diseases, which then raises the possibility of
serious and deadly outbreaks. The summit also called for newsmakers
(39:24):
and platforms to take more care to educate readers about issues,
educating and informing on top of reporting the news. In
the United States, DARPA, which is the agency that funds
research and development into technologies intended to contribute to the
defense of the United States, has initiated its own program
to counteract disinformation campaigns, which, you know, is great,
(39:49):
but I suspect more than a few disinformation campaigns originating
from the United States have some relation distant or otherwise
to DARPA. So take that with a grain of salt.
I guess you could say it's kind of a case
of when we do it, it's a strategic tool in
our arsenal that guarantees national security, but when they do it,
it's dirty, rotten, cheating. The researchers note that we can't
(40:13):
just look to the social platforms to fix this problem
because the problem extends beyond the platforms. Companies like Twitter
and Facebook can form policies and enforce them to help
mitigate the spread of misinformation, but they aren't ultimately at
fault for the actual content on their services. They enable
the spread of that information, they might even promote it,
(40:36):
and they're certainly responsible for that. Their algorithms can be
co opted by those who know how they work and
used to malicious ends, But the root of the problem
is systemic within governments and cultures themselves, not in technology
in general or social media in particular. So to that end,
addressing propaganda really means taking a look at the institutions
(40:59):
and systems within a political framework that allow it
to exist in the first place. Until we do that,
it stands to reason that the various governments and political
parties will make use of every communications tool they have
at their disposal to spread messages and suppress dissenting opinions.
The timing of the study is also interesting. It's in
(41:20):
the wake of Brexit, which many in the UK say
was supported in part because of a misinformation campaign, that
they were misled with false promises and pretenses, and that
social media played a big part in spreading that misinformation,
which ultimately led to a vote in favor of Brexit,
(41:41):
which a lot of people now are second guessing. Not everyone,
I mean, there are a lot of people who still
fully believe that the UK should exit the EU, but
it did muddy the waters. There have also been reports of
election interference, foreign election interference, in the United States
during the elections. And so awareness of how vulnerable
(42:04):
we all are to manipulation is on the rise. Now,
that doesn't mean our ability to suss it out has
improved dramatically. If anything, it has encouraged more tribalism, in
which groups of people inherently trust anything that aligns with
their own worldview and distrust anything that is outside of
that worldview. And to be fair, it can be quite
(42:24):
hard to identify misinformation just on the face of it.
It requires critical thinking and often a lot of research
to make sure the information you're receiving is reasonably accurate.
And as I mentioned earlier in this episode, with the
sheer amount of data we encounter in our lives, that's
not really practical. So what are we to do. Well,
we could back off on our consumption and take a
(42:48):
more critical approach to selecting our news sources. But that's
a lot to ask. It would mean making a dramatic
shift away from the behaviors we've cultivated over the last
decade or so, longer if you want to get at issues
that kind of grew out of the twenty four hour
cable news cycle. We can push for more transparent and
democratic processes in government, but that's obviously not something everyone
(43:11):
can do everywhere, at least not without significant personal risk,
and even in countries that pride themselves on being founded
on principles like that, pointing out shortcomings can lead to
some ugly consequences. Just ask anyone in the United States
who has publicly questioned any politician. You get attacked pretty quickly.
(43:31):
Doesn't matter which side you're looking at either, the attacks
will follow. I think pressuring companies to be more proactive
when detecting misinformation is a good step, but we have
to take it upon ourselves to develop strong critical thinking skills,
and we have to be willing to hold public officials
accountable when they engage in shifty misinformation campaigns and
(43:53):
we find out about it. Like the researchers stated in
the paper, nothing will change unless we tackle the root
of the problem. Just dealing with the symptoms isn't enough,
and that sums up this episode of Tech Stuff. If
you guys have got suggestions for future episodes, send me an
email. The address is tech stuff at how stuff works dot com.
(44:15):
Pop on over to the website, that's tech stuff podcast
dot com. You're gonna find an archive of all of
our past episodes there. You'll also find links to where
we are on social media. You can reach out to
me on the Facebook or the Twitter with your information.
I'm sure none of you would send me misinformation, and
you can also at our website click on the little
link that takes you to our merchandise store, where every
(44:36):
purchase you make goes to help the show. We greatly
appreciate it, and I'll talk to you again really soon.
Tech Stuff is a production of iHeartRadio's How
Stuff Works. For more podcasts from iHeartRadio, visit
the iHeartRadio app, Apple Podcasts, or wherever you
listen to your favorite shows.