Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
I don't think any of
us like to admit we can be
fooled, but when it comes to the internet it's tough out here.
Speaker 2 (00:12):
They fooled me.
We can't get fooled again.
Speaker 1 (00:14):
In 2023, Statistics Canada found that 43% of Canadians thought it was becoming harder to distinguish between true and false news or information.
That's compared with three years prior.
Meanwhile, research from a think tank at Toronto Metropolitan University found that 38% of people say they fall
(00:36):
for false news at least a few times a month.
No judgment here.
As we've heard throughout the season, it's an increasingly difficult task to figure out what's true and what isn't.
Algorithms like when you're scared or angry.
They want you to click and argue with a bot, to feel
(00:57):
panicked, like you're not safe, and some powerful people like it that way.
So learning to find and recognize reliable information is more essential than ever.
Speaker 2 (01:17):
At the collective level, there's also the worry that we are losing the ability to have meaningful debate, because people are operating in completely different information worlds.
Speaker 1 (01:31):
It's not easy, but
we're here to help you out.
This week, on What's Up With the Internet, we're going to provide you with a fact-checking toolkit so you can develop techniques for verifying information and take it into the internet trenches.
I'm your host, Takara Small, and the podcast is brought to you by CIRA, the Canadian Internet Registration Authority,
(01:54):
the nonprofit building a trusted internet for Canadians.
So, to help us break the fake, let's talk to an expert in this field.
Matthew Johnson is our guy.
He works for MediaSmarts, which is Canada's centre for digital and media literacy.
Speaker 2 (02:17):
Things like mis and
disinformation in our modern
media landscape are things that very few of us had to deal with when we were growing up, and so they're skills that really
everybody has to learn.
Speaker 1 (02:25):
Matthew is the
director of education for the
organization, and he began by telling us why getting this
right is so important.
Speaker 2 (02:34):
Mis- and disinformation matter both on the individual and the collective level.
So, first of all, it matters to us as individuals, because the internet is full of people who are trying to defraud, to
persuade and to manipulate us, but it's full of good information as well.
There's simply no going back to our old world of gatekeepers
(02:57):
and broadcast media.
When we seek out information today, the internet is where we find it, and more and more, that is even how governments, scientific authorities, public health agencies and medical associations reach us.
This is how good information is shared.
(03:19):
Not being online, or choosing not to use the internet, is simply not a strategy that can work today.
So, on an individual level, when we need information, whether that is information about things like health, whether that's information about things like nutrition or news or things
(03:40):
relating to politics and elections, we need to have the skills and the knowledge to be able to discern good sources of information from bad ones.
At the collective level, there's also the worry that we are
(04:00):
losing a sense of shared reality, that we are losing our ability to have meaningful debate, because people are operating in completely different information worlds.
And there's a lot of evidence that shows that increasingly, and it's not as bad yet in Canada as it is in some other places, but increasingly,
(04:25):
people are operating in what is sometimes called a curation bubble, which is a mixture of the choices that we make about what sources of information to engage with and the ways that
(04:46):
those sources then filter what we're seeing.
And what that means, again, is that we may be operating in completely different media and information environments, and that means that it becomes almost impossible to have a meaningful, reasoned debate about anything
(05:11):
of any significance.
Speaker 1 (05:27):
Can you emphasize the ways that apply to the average person, someone who doesn't inhabit the media space?
They are trying to piece together the world around them, but they don't necessarily have the tools that you or I would have.
Speaker 2 (05:33):
Sure, but I'd like to start by reframing that a bit, in that recognizing misinformation is less important than discerning between good and bad sources of information.
People who are taught only to recognize misinformation
(06:09):
will learn to trust unreliable sources less, but they also have less trust of reliable sources.
So it's important to think about it in terms of essentially sifting out the good and the bad.
We teach a program called Break the Fake, which teaches what we call companion reading.
It's sometimes called lateral reading as well, and the idea of
(06:33):
it is to take four quick steps, and usually you won't need to take all four of them.
Most of the time, one or two of them will do the job, and each one generally is only going to take you anywhere between 30 seconds and a couple of minutes, and the purpose of these is to find out whether a source is worth your attention, whether
(06:57):
it's worth engaging with at all, and to get the general consensus on a topic.
Essentially, if you're seeing a news story, for instance, you want to know: did it really happen, and are you getting basically the facts?
So the first of those four steps is using fact-checking tools, because there are a lot of professional fact checkers
(07:21):
out there.
Snopes is the most well-known of them, but there are dozens.
Some of them focus on different topics or different places, they operate in different languages, and we can use these to find out if someone has already either verified or debunked a story.
We actually made that even easier by creating a custom
(07:45):
search engine that searches more than 20 of these fact checkers, all members of the International Fact-Checking Network or signatories to its code of principles.
It searches them all at once and nothing else.
So that's a really good first step, because if something's already been verified or debunked by a professional fact checker, you can find out using that tool in 30 seconds.
(08:08):
If that doesn't work, or if you have reason to think that it's maybe something that isn't going to be that easily fact-checked, if it's not a simple factual claim, we recommend finding where something originally came from, because we know that today most people don't go to news sources.
They get news through social media, through websites like
(08:33):
Reddit, other places like that, but news typically today comes to us, and we know that young people in particular operate with the assumption that any important news story will find its way to them through their social media feeds.
So the really important thing is to find out where a story
(08:53):
originally came from and, generally speaking, there's not much point in trying to verify a source until you've found where it originally came from, because the source is probably just sharing something that they got elsewhere.
Once you find where something originally came from, whether it is a claim, an image, a video or a news story, then you
(09:17):
can verify it if it's one you don't already recognize.
So if you're able to follow links or do a search and track it back to a source you already know is reliable, like the Globe and Mail or the New York Times, then at that point you know enough to assume that it's probably basically true.
If you don't, then you need to do a little bit of work to verify that source, because there are a lot of legitimate
(09:40):
news sources that you've probably never heard of.
Every good-sized city around the world has at least one, typically more than one, news source, often a newspaper and a TV station that maybe have their own news websites, and no one will have heard of all of these.
But that also provides an opportunity for people who
(10:01):
spread disinformation, because it's very easy to make a website that looks like a real newspaper website or looks like the website of a real news organization.
So you need to do a little bit of searching.
Ideally it'll have an entry in Wikipedia, which is again a great way of finding out the general consensus on whether or not
(10:22):
something really exists, whether or not something is generally seen as basically reliable.
If you can't do that, you can do a search for it and exclude its own web address with the minus operator: you put in the name of the website, then a minus sign and then, without any spaces, its own web address, and that'll show you
(10:43):
every reference to it online outside of its own website, and that gives you a good idea of what other people say about it.
And, generally speaking, that'll get you to the point where you know whether it's reliable or not.
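(For illustration, using a made-up outlet name and web address, that search would look like:
Example Tribune -exampletribune.com
The minus sign, attached with no space to the web address, filters out the site's own pages and leaves only what other people say about it.)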
And our fourth step, which sometimes is going to be the first one because it works really well for things like news
(11:05):
stories, is to check other sources, to check against sources that you know are reliable.
An easy way to do that is to use the News tab on Google instead of the main search.
So if you see a claim or a news story, do a search on Google or another search engine and then click on the News tab.
And what's different about the News tab is that all of the
(11:26):
sources in it are real news sources that really exist, and also, because you're probably going to have multiple results, you can quickly compare them to see if everyone is giving basically the same story.
In some cases, there are also going to be best sources you can turn to.
So if you see a claim, for instance, about the electoral
(11:49):
process, you can go to Elections Canada or, if it's a provincial or municipal election, the elections authority in your own province or territory, to get that answer.
And that is a really important bit of skill and knowledge: knowing what are the places you can turn to, what are the best
(12:11):
sources of information on different topics?
Speaker 1 (12:16):
And what's the
fact-checking search engine you
mentioned briefly?
Speaker 2 (12:20):
So that's one that we have on our own website.
The web address is mediasmarts.ca/fact-checker, and really anyone who has a Google account can make a custom search engine that simply searches only the websites you choose.
Say you're having to do research about a particular
(12:45):
topic, maybe a health topic that's affecting someone in your family, maybe it's a hobby, maybe, if you're a student, it's a research topic that you're looking into for school.
You can build your own custom search engine that only includes the kinds of sources that you already know are reliable, and that means you're not going to get a lot of irrelevant and
(13:08):
possibly unreliable results.
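(For anyone who wants to try this, Google's tool for building one is called Programmable Search Engine; you list the specific sites you want searched, and results come only from those sites.)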
Speaker 1 (13:12):
I'm wondering how do
you combat source fatigue?
So you know you've mentioned that people can kind of trace stories or videos, or sometimes maybe even social media content, back to their source.
But for people who may be engaged with news through chat from friends and family, it may be tiring to have to engage in
(13:35):
that type of verification.
So how do you encourage people to do the work without becoming overwhelmed?
Speaker 2 (13:44):
A big part of our
approach is communicating that
these steps are quick and easy to do, because once you learn how to do them and once you understand to what degree you need to verify something, they go very quickly, because most of the time we don't need to do a really deep dive into something.
(14:06):
Most of the time we don't need to read a whole article for bias and point of view and everything like that.
Sometimes we are going to do that kind of close reading, when it's a source that we already know is reliable and is worth our attention, but most of the time we just want to find out: are the basic details of the story true?
(14:29):
So an example that I just did earlier this morning: I saw a claim on social media that every Tesla Cybertruck had been recalled.
I followed the link, and it was a source that I didn't recognize.
I did a quick search on that source: it didn't have a Wikipedia
(14:50):
entry.
I did a search for the source with its website excluded, and couldn't really find any references to it, anybody talking about it.
So then I did a news search, and I saw the same story was being covered by CNN.
It was being covered by a number of other sources that I
(15:12):
knew were reliable.
So at this point I could completely ignore that original claim and take a look at them, because I'm going to trust them more to give me full and accurate details.
Now, I described, I think, four steps there, but all of that took me about 45 seconds, because I knew those steps to take, I have practiced them and I knew when to stop.
(15:35):
And that probably is the most important part of the message when it comes to dealing with source fatigue, as you say: knowing that you're allowed to stop, that once you know enough, once you've reached a source that you know is basically reliable, or once a fact checker that you can count on has
(15:58):
confirmed or debunked it, you're allowed to stop.
Speaker 1 (16:03):
We're living in an
interesting time right now,
where there are so many AI tools available, like ChatGPT, that people sometimes use to kind of better understand what's happening in the world, but these tools in and of themselves can be quite flawed, right?
So what is your advice to people who use them?
Speaker 2 (16:31):
If we're using AI
tools to get information, it's
really important to understand what they're good at and what they're bad at, and that is true with sources of information generally.
So when we look at Wikipedia, for instance: Wikipedia is a really good way of finding out the general consensus on a topic.
So if you want to know what the consensus view is of something like who built the pyramids, if you want to know the consensus
(16:55):
on whether or not a particular news source or a particular institute or whatever is reliable, whether they're objective, Wikipedia is a great place to go for those, but it often falls down on some of the details.
Similarly, search engines are a great place to go if you want
(17:16):
to answer a specific question, but they don't do a very good job of giving you the high-level consensus view.
So I would say that, in that respect, AI tools are a lot more like Wikipedia: if you learn how to phrase your question effectively, if you are clear that you're asking them
(17:40):
for the general consensus, if you're clear to them that you want them to only consider reliable sources, and if you are clear to them that you want them to identify how far in or out of the mainstream competing theories may be, they actually
(18:01):
can be really useful.
The other thing that I would recommend is to always either use an AI tool like Perplexity, which automatically gives you all its sources, or instruct the AI when you're giving it the prompt to give you sources, and then, just as you should when
(18:21):
you're using Wikipedia, make sure you check up on those sources, make sure that the sources it's using are genuinely reliable and that they actually do say what the AI summary says they do.
So, just like Wikipedia, we want to use those AI tools as a starting point and to get a broad sense of consensus, rather
(18:46):
than getting your whole answer.
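(As one illustration, not a prescribed formula, a prompt following that advice might read: "What is the expert consensus on X? Use only reliable sources, cite each one, and note whether competing views are inside or outside the mainstream." The wording here is hypothetical; the point is asking explicitly for consensus, sources and context.)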
Speaker 1 (18:56):
And going forward, I'm really curious to know what role deepfakes and AI will play when it comes to fact-checking, because now you have developers around the world who can create these tools that people may rely on, not necessarily understanding what they do and how they work.
Speaker 2 (19:18):
Deepfakes and other things like voice cloning are a really good illustration of why we need to do companion reading before we do close reading.
So for some time, in the early days of deepfakes, people were told to look closely at them, to look for inconsistencies, to look for errors, in the same way that in the early days of verifying information online, people were told to look closely
(19:42):
at a source.
But we already know that, again, that not only doesn't work, it actually frequently backfires, because when you look closely at a source, what you're looking at is what the source is telling you about itself, rather than whether it's generally considered to be reliable.
And similarly with deepfakes, that advice about looking
(20:03):
closely at them was only useful for a couple of years.
But at this point, we can't count on the kinds of things that we used to see in deepfakes.
We can't count on there being extra fingers or weird things and, as well, it's very easy to misidentify things as deepfakes, because you often do get weird artifacts.
(20:25):
Because of issues with focus, because of issues with file compression, you will often get real videos or photos that look like deepfakes.
There are a surprising number of people in the world who actually have six fingers, so what you need to do instead is
(20:47):
find out where it came from, because the one thing that a deepfake does not have is provenance.
When we're trying to verify a photo or a video, we're tracking it back to where it came from originally.
That's why, with photos or other images, reverse image
(21:10):
search is such an important tool to learn.
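(Common reverse image search tools include Google Images, Google Lens and TinEye: you upload an image or paste its web address, and they show where else it has appeared online, which helps trace it back to its original source.)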
But we can also use the other strategies that we teach around finding the original source of something.
If it came from somewhere, if it came from a news organization, if it came from a government authority, if it came from someone that you can confirm is real and had reason to be somewhere, or you have good evidence that they were an eyewitness to something, you'll find evidence of that.
(21:33):
If it's a deepfake, it's not going to have any provenance.
That trail is going to run cold, or you're just going to wind up at someone's Instagram account.
Or sometimes, as in the case of the famous fake of the Pope in the puffer coat, you'll actually find that it was originally identified as a deepfake that was made by someone
(21:54):
who was demonstrating how convincing deepfake photography had become, but that when it got shared, it lost that context.
So you may find, once you track it to its original source, that it's actually identified as a deepfake.
Speaker 1 (22:11):
Learning the skills
required to better understand
deepfakes, and you know the world of AI and online digital literacy, takes time, so I'm wondering where these skills are being taught, not just for those who are in younger grades, but also for young adults.
Speaker 2 (22:28):
We're working really hard to get that information out to everybody.
The main pillar of that for us is our Break the Fake workshop, which is about an hour and covers all four of those steps.
We put it on whenever we can.
(22:48):
We make it available to other organizations, places like libraries, to put it on.
We also have a self-directed version on our website that anyone can click through.
We've also done videos, tip sheets and other ways of communicating that content, but we do know that it's going to be
(23:09):
most effective when this material is taught early and consistently, when people learn it early and have more time to learn it.
We are seeing increased interest in learning these skills from organizations that work with seniors, organizations
(23:30):
that work with new Canadians, and so some of them are collaborating with us.
Some of them have their own programs that they're developing.
It's definitely taking time, and that's one of the areas where we would like to see more leadership: in developing a
(23:51):
more consistent program at the provincial or the federal level, or ideally both, to make sure that everyone is learning these essential skills.
Speaker 1 (24:06):
I've been online for
quite a while now and one thing
that I've realized is that, you know, digital media literacy also requires humility.
It requires people to question what they see in front of them, but also to question whether they may be wrong in some situations, but that's not necessarily something that lends
(24:26):
itself to online culture.
So how do we discuss this, how do we deal with it, and what are your thoughts on it?
Speaker 2 (24:35):
That's actually something that we added when we recently updated Break the Fake.
So Break the Fake was originally launched in 2019.
We did an update that we launched last fall, at the end of 2024, and one of the things that we added to it was looking specifically at what's called intellectual humility, because
(24:58):
intellectual humility is the difference between genuine critical thinking and conspiracy thinking.
As you're probably aware, most people who are conspiracy thinkers describe themselves and, in many cases, genuinely think of themselves as critical thinkers, but what's actually
(25:21):
happening in most cases is that they're starting with the conclusion and finding evidence to support it, and so what is really essential is that intellectual humility, that possibility that you might be wrong, that openness to being wrong.
So some of that again comes down to simple habits.
(25:42):
So we encourage people to ask themselves three questions before you even start investigating something, and those are: what do I already think or believe about this?
So figure out what your starting point is.
Why do I want to confirm or debunk this?
So, if you know ahead of time that it's something you want to
(26:05):
be true or something you don't want to be true, that tells you that you should put a little bit more effort into one side or the other, verifying or debunking.
And the third one, and this really is the one that is the key difference, is: what would make me change my mind?
So decide ahead of time what kind of evidence would genuinely
(26:29):
change your perspective.
And if there isn't any, then you're not doing critical thinking.
If there's no circumstance in which you would change your mind, then you're not doing critical thinking.
So that's at the individual level.
But obviously there are, as you said, pressures.
There are social pressures that make it more difficult to do
(26:51):
that kind of intellectually humble critical thinking.
There are elements just of the structure, not necessarily things that were intended but consequences of the design of social media, that do make us tend to entrench our views rather than
(27:12):
reconsidering them.
And of course, social pressures, because we know that, partly as a result of the changes in our media ecosystem, but also just partly because of changes in social values, we have become more polarized.
Now, we're actually not as polarized as we think we are.
(27:36):
People generally overestimate, particularly in Canada, how polarized we genuinely are.
But the fact is that when we're having a conversation with someone, how polarized we actually are doesn't matter.
What matters is how polarized we think we are, and when we are in that situation, when we think that a particular view or
(27:56):
a particular idea is associated with our political identity, obviously we're going to be less willing to entertain the possibility that we might be wrong.
It's one of the reasons why, in our educational materials, we generally use quite politically neutral examples, because we
(28:20):
want people to learn the skills of authenticating information and critical thinking in an environment where they're not going to be swayed one way or another by their political views.
But I think it is also going to take, over time, a commitment.
I think we are going to have to commit as a society to being
(28:46):
open to listening to alternate views, and that doesn't mean listening to every alternate view.
As the saying goes, you don't want your mind to be so open that your brain falls out.
But that's why part of critical thinking and part of finding and verifying information is learning how to recognize the
(29:10):
expert consensus on a topic and understanding how people arrive at consensus: that the scientific consensus on a topic, or the consensus in a field like history, is not just what everybody agreed to believe, it's the weight of evidence; and understanding the process by which consensus changes, which really is not something that we learn, for instance, in science
(29:33):
class.
We may learn this is what scientists used to think and this is what scientists think now, but very rarely do people actually learn the process by which scientific consensus changes.
And when you understand that process, you're much less likely to fall for arguments that are based on a misunderstanding of
(29:58):
that, a sense that, well, everything is provisional.
Speaker 1 (30:03):
There's this really
interesting theory that I came
across, where the idea is you treat misinformation like a virus, and so it requires individuals to inoculate themselves through pre-bunking, and I'm wondering if you can explain this proactive strategy and if you think it's actually working.
Speaker 2 (30:24):
So inoculation
against misinformation is pretty
much what it sounds like.
It's an analogy to inoculating against a virus.
Just as vaccination works by creating a weakened version of a virus so that your immune system learns how to handle it before it encounters the real thing,
(30:44):
with inoculation you introduce a weakened version of a claim so that people come to recognize it, and there's some evidence that it can work pretty well, even in cases where people are politically or ideologically inclined to believe a claim.
(31:09):
There are some things about inoculation that can make it challenging.
The biggest concern is that, essentially, you have to directly counter the claim that they're going to encounter later, and so that means being very proactive.
(31:40):
It means identifying the most common and the most potentially harmful false claims and coming up with effective counters to them, and it also means getting those counters to people before they encounter that actual information, in the same way that a vaccine doesn't do you really any good after you're sick.
So it takes a lot of work.
It takes a lot of planning.
(32:02):
I think it can be really effective.
I think it's something that can be very effective within certain disciplines.
So I think, for instance, in the school system, there are a lot of things that could be done, for instance in science class, to pre-bunk common misconceptions and common false claims around things like vaccines or climate change or
(32:25):
the way the scientific method works.
As I've mentioned earlier, in history there are things that can be specifically pre-bunked about how hate groups misuse ancient and medieval history, or even about people's views of what life was like in the 1950s, which is often set up as a very
(32:47):
idealized time by certain people spreading particular political messages.
The broader the audience, though, the harder it is to pre-bunk specific claims and the harder
(33:13):
it is to get those pre-bunking messages to them before they encounter the false claim that you're trying to immunize them against.
Speaker 1 (33:20):
That was Matthew
Johnson from MediaSmarts, and check out the podcast description for the fact-checking search engine Matthew mentioned, as well as some other useful resources.
In our next episode, we're going to look at the institutional responses to this problem.
How is the government coping, and where can it do better?
(33:40):
Speaker 2 (33:42):
Who decides what is true?
Who decides what is OK versus what is outside the Overton window and not OK?
How do you avoid government overreach?
Speaker 1 (33:50):
Keep an eye out for
that next week and if you're
enjoying the podcast, then please leave us a review and a rating.
You can email the show at podcast@cira.ca, as always, or you can reach me online at Takara Small on Bluesky and Instagram.
Thank you for listening, and we'll see you next week.