
August 21, 2015 68 mins

Could something as simple as a search results page affect a major political election? It turns out the answer is yes. We look at how search rankings can influence public opinion.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking. Hey everyone, and welcome to Forward Thinking, the
podcast that looks at the future and says I swung
the election. Friends, that's no small order. I'm Jonathan Strickland

(00:23):
and I'm Joe McCormick, and I have a question for you, guys.
Have you ever googled yourself to be met with horror?
Not because of something that you did that's chronicled on
the Internet in an embarrassing way, but because somebody has
the same name as you, and they're higher in Google

(00:45):
results than you are, and they do something that you
don't necessarily want people thinking that you do. It is
very difficult for me to answer this question without
putting my own personal philosophy on the line. I'll just
say yes, speaking as, from what I can tell,

(01:06):
the one and only Lauren Vogelbaum on this planet.
That's a lucky condition. Yeah, I actually don't know how
that feels. Well, there's somebody with the same name as
me who's some kind of erotic photographer. Well, let me
just say this. There is a certain conservative senator from
Texas who probably is irritated that he has the same

(01:27):
name that I do. Really, like he's tired of getting
emails about how this new app wor... Let me say, actually,
I said Senator. I think he's in the State House of Representatives,
so I believe I misspoke. But yes, there is a
politician in Texas who has the same name as I do,
and I'm sure he is endlessly irritated by my tweets, etcetera.
Why are people asking me how Mulnier works? How do

(01:50):
you say that? Mullner? Okay, well, this is going
to be related to the topic that we're going to
talk about today, which is Google rankings, or I'd say
more broadly search rankings. Though let's be honest, what we're
really talking about is Google in the United States. Sure, sure,

(02:11):
we read this really good article in Wired by one
Adam Rogers. It's called Google's Search Algorithm Could Steal the Presidency,
and we found it so interesting because in it, Rogers
introduces a concept that he calls Googlemandering. Yeah, what,
Googlemandering? As in, Googlemander, and a spirit of

(02:32):
Google has appeared with us here in the studio. I
thought it was an introductory class at Hogwarts. You know,
you had your Herbology and Googlemandering. No, no, but it's
based on gerrymandering, which of course is the practice of
rigging election districts to get the results that you're
looking for. Right. So, maybe if you have a lot

(02:53):
of people who you're expecting to vote against your party,
you can just safely confine them all to one weirdly
shaped district so they're not going to be threatening to
vote in your district. Yeah. The basic idea here
is that the way that search results are displayed
might influence someone's decision on an important thing like

(03:18):
who to vote for in an election. What? That's so crazy,
you guys. Google surely cannot influence something so personal as
voting decisions, or something so complex as election results. Right. Well,
so here we're gonna get into something that sort
of requires some standing back and analyzing the way

(03:40):
media affects our perception. Now, the way media affects our
voting is pretty obvious in one sense, as in, you
can read articles, or watch television or get any other
kinds of sources of entertainment or information that tend to
favor one side of a debate over the other side.
But there's a different way that your media can affect
your decisions, and it can be at the level

(04:03):
above that, not just that you're looking at one article
and it consistently favors one side over another,
but in the selection or availability of sources available to you. Yeah.
So in other words, well, let's put it
a very simple way. If you lived in
a very remote location and only one newspaper ever arrived,

(04:30):
and that newspaper had a very specific political slant,
and that's all the information you get, it would be
very difficult for you to make an unbiased decision. If
you can only get the Anarchist World News Today, in
which case you probably just don't vote, but you
don't vote with authority. But yeah, that's

(04:54):
an example. So now, when we talk about the
use of online sources and also just the various
media sources that are out there, we're not necessarily boiling
it down to something simplistic, but the point being
that when you get one of these influential voices to
come into an area, there appears to be a measurable

(05:14):
impact to that. Yeah, and you don't have
to take our word for it. There
is an actual phenomenon called the Fox News effect. Well no, okay.
So to be clear, we're not going to be taking
a political stance one way or another. This is about
the measurable effects of media availability in regions. Yes, yes,

(05:35):
so. The Fox News Channel, as you may or may
not know, started up in October of nineteen ninety six, wherein it joined other
twenty-four-hour cable news channels like CNN and MSNBC.
And you know, okay, the channel has long maintained
that it has no bias towards anything other than fairness,

(05:56):
but it's always been on the conservative side of fairness. Yeah,
I think everybody is aware of this. Different channels tend
to have different political leanings. You can, I can expect
MSNBC to be more to the left, you can expect
Fox News to be more to the right. Sure. So
it was introduced into the cable packages of a certain percentage
of towns in the United States between its launch and November of two thousand,

(06:20):
which was the year of the Bush versus Gore versus
Ralph Nader kind of sort of presidential election, and a
group from the National Bureau of Economic Research, which is
this nonprofit and hypothetically nonpartisan organization that studies the economy
and politics saw in this an opportunity for learning specifically
about media bias and putting some numbers into it. So

(06:43):
they gathered voting data from over nine thousand towns and
they found that the Republican Party had gained a zero
point for to zero point seven percentage points in towns
that had gained access to Fox News and and furthermore,
that the channel had encouraged voter turnout there. Their estimates
are that channel convinced some like three to eight percent
of its viewers to vote Republican, which which sounds like

(07:06):
a small amount, but that is more than enough to
have included like ten thousand voters in Florida, which is
enough to have flipped the state, which was the decider
in the year two thousand election. Certainly was, yeah. So
everything turned on Florida. That's where we had the recount,
and we eventually had to have the U.S. Supreme

(07:27):
Court step in and say, okay, we're putting an end
to this. Congratulations, Captain Bush. And to be clear, of the
three to eight percent that were convinced to vote Republican,
many of those could have been self-identifying Republicans well
before the Fox News Channel came on. They were just,
you know, convinced to go out and actually cast a

(07:48):
vote, partially because of Fox News. So it's not to
say that when a media company
of some sort enters into a region, it magically
changes three to eight percent of the population to
that side. You know, I think usually those estimates are
referring to undecided voters. Yeah. And the thing

(08:09):
that's interesting here, I think is that this is a
comment about the effects of the availability of different types
of sources. So it's not necessarily that in these towns
where suddenly they got access to Fox News on cable,
everybody was, you know, made to sit in
a room and forced to watch Fox News. But now
that you have these sources available, some people are going

(08:29):
to consume those sources. And it appeared to have some
effect on how people voted. But there are other ways
that selection of messaging in media can have an effect
on voter turnout and on elections. Yeah, so you're referring
specifically to a campaign that Facebook ran to inspire

(08:50):
people to go out and vote. This is great because
in this case, it is a supposedly entirely neutral message.
It doesn't say go vote for my anarchist candidate,
Claws McGrabbyclaws. Yeah, that's why I would vote in that
anarchist party. Yes, absolutely. No. No, Facebook, in this experiment,

(09:12):
which, Facebook sometimes runs experiments on you, guys. You
should know that, just flat out to start with.
It's pretty cool. You're helping science, whether you want to
or not. Yeah, you're a product and a science experiment. No, no,
it's great though. I mean, because they were
able to gain a sample of sixty-one million users

(09:33):
who were eighteen or older who accessed the site on
the day of the congressional election in twenty ten, and there,
they just put out these messages about going out and voting.
Again, it wasn't voting for a particular person.
It was just either go vote, or go vote

(09:53):
and check out how many other friends of yours have
gone and voted. So the experiment split their overall
sample into three groups. One percent, about six hundred
and eleven thousand users, was a control group that actually
received no message. Another one percent received a purely informational
message at the top of their news feed that just
encouraged them to vote, linked to info about local polling spots,

(10:15):
provided an optional I Voted button to click, and gave a
count of other Facebook users who had clicked on it.
The remaining ninety-eight percent of the sample, about sixty million users, got all
of that informational stuff, plus a little social message that
included the profile pics of up to six of their
Facebook friends who had clicked the I Voted button. And

(10:36):
the researchers were furthermore able to
match six point three million of those users with public
voting records to see whether their messages had affected voting practices.
And I mean, they did. Well, okay, they found that
the informational message actually had no effect, but the

(10:59):
social message made people point three percent more likely to
click through to polling information and point four percent more
likely to actually go and vote. And that might sound
like a tiny effect, but remember that we're dealing with
those millions and millions of study participants. So the
researchers estimate that about three hundred and forty thousand people went

(11:22):
to the polls who otherwise would not have gone because
they saw that social message. Yeah. The interesting thing to
me here, besides the fact that there was a noticeable
effect of this approach is that Facebook could very easily
sway an election simply by sending the go vote message

(11:46):
to people that had been identified as being sympathetic towards
the mission statement of Facebook. So in other words, that
Facebook has particular uh policies that they really want past,
and there are particular politicians as associated with those policies.
And because we share everything we have on Facebook, we
we tell Facebook everything our deepest, darkest secrets, we whisper

(12:09):
into its ear, they can identify with pretty high precision
which people would be the most sympathetic toward the candidates
that they themselves would want to support. Or, if you
want to get conspiratorial, in another direction, you could say
that Facebook could potentially sell this service to a candidate.

(12:30):
So maybe if suddenly Claws McGrabbyclaws is flush
with cash, he could go and pay Facebook to just
tell people who have said supportive things about the Anarchist
Crab Party to go vote on election day, but not
to tell anybody else to go vote. Right. So, in
other words, the ones who are most likely to support

(12:52):
that message get the extra incentive or the extra encouragement
to go and vote. The other people don't get that
encouragement. And ultimately, Facebook could say, look, we didn't
tell them who to vote for. We didn't tell the
other people not to vote. All we did was send a go
vote message. That's all we said. And it ends up

(13:13):
being this, you know, it's almost like all we
had to do was put the idea into people's heads
and stand back and let the rest happen. Now, that's
about a social networking platform. That's something that could potentially happen,
although you could probably figure that out, like if you
were talking about numbers large enough, that sort of
thing would probably become apparent pretty quickly, and I'm

(13:33):
guessing would not reflect too well upon said social network.
Let's talk about search engines. And why they would
matter in an election is just that people use them
to do research these days. For example, in the
days leading up to and just following the twenty twelve presidential election,
Google users' interest in search terms related to Romney reached

(13:57):
the highest that they had ever been by far,
and terms related to Barack Obama spiked higher than they
had since his initial election in two thousand eight. So
people were, they were cramming for the final. So that's
the thing, right. If people are turning to search
engines in order to get information, and we've already established
that the types of information you get can influence your decisions,

(14:21):
it stands to reason that the search engine's results page
has to be pretty important. Yeah, but okay, let me
play the dumb guy here. Okay, sure. No, not the
dumb guy. Let me play the guy with a reasonable concern. Okay,
reasonably concerned guy. The Internet is a democratizing force. Isn't
it great that you can go into a search
isn't it great that you can go into a search
engine and you can get information that represents all the

(14:45):
points of view out there. You can find articles that
are pro Barack Obama, you can find articles anti Barack Obama,
you can find pro Mitt Romney, you can find anti
Mitt Romney. And you might only be able to find
articles that are pro Claws McGrabbyclaws, but that's because he
is so wonderful. But the point is, whatever information people

(15:05):
want to present, whatever opinions they want to publish, you
can find it on the Internet. So why why wouldn't
the Internet be a perfectly neutral source to get your
information from? Well, first, that works under the assumption that
every single web page out there is treated equally by
search engines. Ah, and we know that not only is

(15:26):
that not the case; it in fact cannot be the
case, because how would you present every single potential return
on any given query where they're all ranked at the
same level? Would you just have an infinitely scrolling bar,
would you just have a camera view that is constantly
hovering over different titles? And even so, then what

(15:48):
order do you put them in? Because that alone creates
some sort of sense of rank. Now you could say, well,
wait a minute, what if we randomize search results so
that nothing gets preferentially treated and featured
towards the top? But that would sort of go exactly
at cross purposes with what somebody like Google is trying
to do, where they're constantly trying to give you the

(16:11):
best possible result for whatever terms you entered. Yeah,
that's why we go to Google more often than to
other search engines, because it more frequently returns us links
that we're interested in. Right, so it would go
entirely against their interests, and to be frank, our interests,
if they were to say, well, let's randomize all of
the different articles about Barack Obama and Mitt Romney so

(16:32):
that none of them get preferential treatment. Randomization would just
mean that... Because, let's go outside of the elections
for a second. You don't want random results for any
given query. When you put in a query, what
you want to see on that first page
of results, which I'll get to in a second, is
a link to a site that answers the question you have,

(16:54):
gives you the information you want, links to the restaurant
you're looking up, whatever it may be. That's what you want.
You don't want a random thing that happens
to relate tangentially or otherwise to whatever it was you
were looking for. So, because of Google's efficient
means of returning search results, it has shaped our behaviors online.

(17:19):
And this is something that we've seen over and over
again in various studies. So most of us don't bother
to look beyond the first page of search results for
any given query. Yeah, most of us just
put in a search query. Well, and you don't
want to, right? You have better things, yeah, you have
better things to do than to go through eighteen pages

(17:41):
of search results trying to find one that is the
closest and most relevant to to what you are searching for,
at least if it's something, you know, especially something casual.
Right, like, I don't want to spend twenty-five
minutes going through to find, uh, something like, I'm
looking up shoes, put that at number one. You know,
in my experience as someone who has been doing various

(18:02):
kinds of deep research on the web for many years now,
I can say that I think the organization and prioritization
of Google search results has gotten demonstrably better over the
past ten years or so. It used to be much
more common that to find the ideal example of the
thing I'm looking for, I would have to go deep
into other pages. That happens way less now. Now it's

(18:26):
it's way more common that exactly the thing I'm looking for,
or the best example of the thing I'm looking for
available on the web is on the first page, which
is both good and bad. It's good in the sense
that we're finding the stuff we want more quickly than before.
It's bad in that if you are doing research into
a topic and you're unfamiliar with that topic, you have

(18:49):
been conditioned to go after those first few links, and
it may behoove you to go deeper to
get a full understanding of whatever topic it is that
you're researching. But see, I mostly do research
on academic subjects. And this is also problematic because if
you're looking at academic subjects, most of the time, not always,

(19:11):
but most of the time, you're looking at a kind
of objective approach to whatever the subject matter is. And
most of the subjects that you're researching are fairly objective
to begin with. I mean, they're not something so
hot-button as political stuff, right. So if I
were doing searches on political stuff, it may be that
I would discover more of a leaning on one side or

(19:34):
the other. I mean that's a possibility. I just haven't
done that. That's not what I use Google for. Well, yeah,
and if you do go deep into the search results
on certain political candidates, you'll find that there's tons of
garbage out there that's not going to be useful to
anybody on either side unless you're just looking for some

(19:55):
slander to rile you up. Sure, sure. Well, and
that's for relatively, you know, popular terms that
a lot of people are going to be writing about.
The problem that I wind up running
into with Google is that, I don't know, you know,
I do so much research, I start so much research
there, that I'd say about like once every week

(20:16):
or two, I wind up having to go through some
like dozen iterations of a search term, or dig
like several pages back in the search results just to
find what I'm looking for. Partially probably because of the
human error of me looking for information on a topic
that I know nothing about and thus don't know the
right terms to search with. But sometimes it's kind of like, no, Google,

(20:39):
I did not want to know about specialty cheese shops
in my area. When I searched for active cheese culture,
you find like the neighborhood where all the joggers go
to the cheese shop. Yeah, that's the active cheese culture
in my neighborhood. Yeah, Or is it the one where
the cheeses are all living in a culture where they're
active like Fraggle Rock. I encounter the same thing all

(21:03):
the time researching for podcasts. In fact, the
other topic that we will be recording today, as
we're sitting in the studio, I was having issues with
that, where I kept on going back and putting in
different search terms because I thought, there's got to be
a study about this, and I could not find something. So,
I mean, this is a common issue. So that's also

(21:24):
something to take into account, is that the search results
will be as close to relevant as possible. It behooves
Google to have that. If Google was seen as
being an unreliable source for relevant information, people wouldn't use Google.
And that's where Google's value is. So they
have an incentive to do that. However... Yeah, so
let's look at the ways, then, that search results actually

(21:46):
do matter in practice. Yeah, sure, because again, you don't
have to take our word for it. People have
done actual research on how likely people are to click
past the first page of Google results. Do
tell, what percentage of clicks go to the
first page? So, more than nine out of ten clicks

(22:08):
in Google results are going to happen on the
first page. Yeah. Or if you want to think of
it another way, fewer than one out of ten will
go to the second page or beyond. Yeah. So it's
not just the second page, it's all other
results on the entire web combined. Yeah, when you
see like four point eight trillion results returned for
your search query, ninety-five percent of the clicks are

(22:31):
going on that first page, and not evenly on that
first page, right. Yeah, thirty-two point five percent of
them are going straight to that first result. Now, is
this including the sponsored ad results? I believe it's the non-sponsored ones, so sponsored
ones also take up some of that percentage. Obviously, the
second result gets seventeen point six percent. So clearly, first

(22:53):
place is the place to be. That's where you're going
to get the most traffic. And also, if you
look at the last result on the first page of
a search engine results page... why would you ever scroll
all the way down to the last result on the
first page? Well, very few people do, as a matter
of fact, but those who do still greatly outnumber those
who go to the

(23:15):
second page. So yeah, if you look at the number
of people who click on the final result on
the first page of a search engine results page, it's
more people than those who click on the first result
of the second page. So if you're somebody who's putting
content on the web, whether you're trying to monetize

(23:35):
it as a business, like you're writing articles and making
your money through advertising, you need clicks to keep going,
or whether you're just trying to get your message out
in one way or another. If you want people to
see your page, being on the first page of search
results is crucial. It's a huge help. I mean, it's
one of those things that we always hear is becoming

(23:55):
less important because of the rise of the importance of
social networking, but it's still easily one of the best
ways to drive huge amounts of traffic, if you, what
people call, own a search term. If you own a
search term, then you become the destination for everyone who
searches for that, because if you're number one

(24:16):
on that list, you're getting the biggest share of that traffic. If it's
a popular search term that translates into hundreds of thousands
of page views, that's a big deal. But what if
the search term you own is somebody else's name, and
that person's name that's the name of a political candidate,

(24:38):
and you don't like that political candidate. Well, if
you are the go-to source for that, then that
number one result could be a very negative portrayal
of said candidate. And that's just the way
it is, although if you're in Europe, there's actually some
way of getting around that. But not if you're in the
United States. And down here in the United States, this

(25:00):
worked out to particularly unfortunate effect for one Republican contender
in recent years. Y'all are, I'm sure, aware of the Rick
Santorum effect. We don't need to go into detail about
the Santorum web trolling that occurred around this dude. Yeah, yeah,
if you are interested in reading up on that and

(25:21):
don't mind some kind of gross subjects, you can go
right ahead. Right. So you know, we've seen that
there are huge incentives for anyone who's going
to be using search engine traffic to drive whatever it
is they're doing, whether it's a political campaign, a company,
an organization, whatever it may be. There's a huge incentive

(25:42):
for them to try and get on that first page,
because being anywhere else, you might as well not even
worry about search engine traffic, you've got to concentrate on
something else. You have to be on
that first page to leverage traffic. Sure, but getting
onto that first page is a really tricky game. Yeah,
because it's not like it was in the old days.
Like in the old days with search engines, one of

(26:03):
the things you could do is you could pepper your
page with tons of irrelevant metadata that had nothing to
do with your content. You know, this page is
an ad for the virtues of, what's his name,
Claws McGrabbyclaws. But also you might like this page if
you're interested in Metallica, Britney Spears. Right, yeah, what were

(26:26):
the other things popular in the early two thousands? Yeah, yeah.
So if you've ever gone to, well, those old web
pages where there's just a garbage pile
of unrelated words at the bottom of the page, or
sometimes it's hidden, where they made the text the same
color as the background. That was an attempt to game

(26:48):
the system to get on that first page by not
only trying to, you know, look like you're an
important page, but also just throwing in so many search
terms that no matter what someone was searching, they would
end up at your page. Never mind the fact that
the information on that page may be completely irrelevant to
the search query. No one cared about that; they
wanted the page views. They wanted that little page counter

(27:10):
that was on everybody's homepage way back in the day
to click up a notch. But come for the
false advertising, stay for the dancing babies. Yeah. Now, eventually
search engine algorithms got more sophisticated than that. Engineers built
better algorithms that would ignore this metadata. In fact, today,
Google says that metadata plays like no

(27:31):
role at all in page ranking; that it's more important
that the page actually contain relevant information that is key
to whatever the query was. And part of that is
decided based on how many other pages link to this page. Yeah.
So if I'm doing a search for... People can try
to game that too. But yes, but first, let's say

(27:53):
that I'm doing a search for a particular candidate.
And I'm not going with yours because, frankly,
I'm in the anti-crab party. I'm
more of a lobster party. I will go with Pinchy
O'Hoolihan. So you're against the crab anarchist party, you're
in the lobster fascist party. Fascist is such

(28:15):
a cruel word. I think of them as
the imperialists. So, a law and order party. Yeah. So,
Pinchy O'Hoolihan. Let's say, you know, I
want to do a search on Pinchy O'Hoolihan. Well,
the website that comes up first may be one that lots
of other websites link to whenever they are commenting upon

(28:35):
Pinchy O'Hoolihan. So there might be news stories that link
back to this website, and there might be lots of
other blog pages, all sorts of stuff. The way page
ranking works in general is that incoming links are worth
a certain amount. Also, they're weighted, so incoming links from
more important pages are worth more. So in other words,

(28:56):
like a big news outlet like CNN, then if
they link out, that link out is worth more than
Joe Bob's lobster blog. Bob Loblaw's Law Blog is
not going to be as big as CNN, so,
you know, those are both factors in it. Now, people

(29:17):
tried to game the system too. There were a lot
of, I mean, you probably have encountered this, where you've,
oh yeah, you do a search for something,
something pops up, and you know, you click
on the first result, because thirty-two point five percent of
us do that. So you click on that first result,
and what it takes you to is just a list

(29:37):
of links, and all it is is just links for days,
all the way down the page. People tried to make
these link farms in order to game the system and
build up the page ranks of other sites. But you know,
eventually algorithms got wise enough to see that too. Yeah.
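The weighting idea described here, where a link from a heavily linked page like CNN counts for more than a link from a small blog, is the heart of the original PageRank algorithm. Here is a minimal power-iteration sketch over a toy link graph; the page names are invented for illustration, and the 0.85 damping factor is the usual textbook value, not anything from this episode.

```python
# Minimal PageRank power iteration over a toy link graph.
# links[p] lists the pages that p links out to; page names are invented.

links = {
    "blog1": ["cnn"],
    "blog2": ["cnn"],
    "tiny_blog": ["page_b"],
    "cnn": ["page_a"],
    "page_a": [],   # dangling pages: their rank gets spread evenly
    "page_b": [],
}

pages = list(links)
n = len(pages)
rank = {p: 1.0 / n for p in pages}
d = 0.85  # damping factor from the original PageRank formulation

for _ in range(100):  # iterate until the ranks settle
    dangling = sum(rank[p] for p in pages if not links[p])
    new = {p: (1 - d) / n + d * dangling / n for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)  # pass on a share of p's rank
    rank = new

# page_a and page_b each have exactly one incoming link, but page_a's
# comes from the well-linked "cnn", so page_a ends up ranked higher.
print(rank["page_a"] > rank["page_b"])  # True
```

The design point is exactly the one made in the episode: the rank a link passes along is proportional to the rank of the page it comes from, so two pages with the same number of incoming links can end up ranked very differently.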
In one entertaining and slightly less soul-sucking example,
there was one time, very briefly, back in the days

(29:59):
that I was on LiveJournal, an instance
of Neil Gaiman attempting to get the search term Penn Jillette
to lead to Neil Gaiman's website, and he succeeded, because
he's Neil Gaiman, he owns the internet. Why Penn Jillette? Because
he had some, like, temporary jokey feud with Penn Jillette.
He was like, hey, but I can do this weird

(30:20):
thing to you, and Penn Jillette was like, no, you can't.
And that's my Penn Jillette impersonation. I'm not sure it's very quality. Yeah, yeah,
it's got kind of a gravelly voice. Yeah. And
then, I mean, that was one of those things where
it was easier to game the system; it's
harder to do it now. These days, the way
Google pitches it anyway, is that in order to get

(30:40):
on that first page of results, you need to demonstrate
that the site that that page belongs to is dependable,
that the information inside is relevant to the search query,
that it's of a high quality. These are all, you know,
qualities that are difficult to... It's difficult to say how
they measure that. Google doesn't make their algorithm public, by

(31:03):
the way. It's secret sauce. I mean, stuff
like that sounds highly subjective to me, So it's hard
to know how they could really verify that. Well sure, yeah,
but but okay, So so either way, there are all
these algorithms at work that are choosing what we see
when we open up a Google search, and you know,

(31:24):
depending on what the content of those articles is, we're
going to run into some bias in that first
thing that we click on. Yeah. And in fact, if
we were to be a little unethical, let's say that
we're in charge of an enormous search engine, and we
can actually tweak things so that for very specific searches,

(31:46):
we can rank essentially dynamically whatever the pages are and
present the links that are most in line with our
own worldview as the top ranking ones, and put the
ones that maybe argue with our worldview further down. We
already know ninety-five percent of everyone who goes to that search
engine results page is clicking on that first page of results.
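To put rough numbers on that leverage, here is a toy calculation using the click-through figures quoted earlier in the episode (32.5 percent for the first result, 17.6 percent for the second); the 100,000-search volume is a made-up illustration.

```python
# Toy traffic estimate from the click-through rates quoted in the episode.
# The search volume is invented for illustration.

searches = 100_000
ctr = {"first result": 0.325, "second result": 0.176}

for position, rate in ctr.items():
    print(position, round(searches * rate))  # 32500 and 17600 clicks

# Moving a page from the second slot to the first is worth the difference:
extra = round(searches * (ctr["first result"] - ctr["second result"]))
print(extra)  # 14900 extra clicks for the same content, just ranked higher
```

The same arithmetic is why a ranking that quietly demotes unfavorable pages matters: position, not content, decides most of what actually gets read.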

(32:09):
So yeah, if the president of Arthropod Google prefers, you
might very well get a first page that's heavy on
the pro-Pinchy literature. But I mean, so, we're not
saying that we think Google is doing this, but the
potential is there, and it's, as it turns out, pretty

(32:31):
well researched. Yeah, so this is going to be the
core of this episode. Here is this paper by a
couple of guys named Robert Epstein and Ronald E. Robertson.
They address this effect that they call the search engine manipulation effect and its possible impact on the outcomes of elections. That was essentially the title of the paper, a paper that

(32:52):
they released through the Proceedings of the National Academy of
Sciences, or PNAS. And in this paper
they review the results of five different experiments carried out
in two countries testing the effects of search engine results
on subjects' voting preferences. Right. So they were working
the hypothesis that the search engine results could affect a

(33:14):
person's decision making uh, just based upon which results are
presented first, and that it could affect enough people that
it could in fact sway an election. Right. So they
started in San Diego. They started small, to just test
the hypothesis on a relatively tiny sample size. They gathered
a hundred and two volunteers. They were the volunteers were

(33:36):
actually paid for their participation in the experiment. All the
volunteers in the experiments that we're talking about got a tiny payment.
I think in the first one it was twenty five dollars,
but in subsequent ones it was lower. Uh. The volunteers
were presented with two candidates from an election which had already happened in Australia. These are people in San Diego given two candidates for Prime Minister of Australia

(33:59):
in twenty ten. Um. And the two candidates were Tony Abbott
and Julia Gillard. And the volunteers were divided into three
test groups. So what the researchers wanted to do was
present the group with choices that they weren't likely to
have previous knowledge about. They wanted to say,
all right, well, here are these two candidates that we
are fairly confident most of the people that we've gathered

(34:23):
have either never heard of or they've heard very little
about them. Americans don't tend to have a lot of
opinions about Australian politics, if they even have opinions about
their own. Sorry, that was bleak. But they did want to choose actual politicians, because they wanted, uh, people for whom there are already robust
Google search results. Yes, they wanted to use real articles

(34:47):
that were written about these two people rather than have
to write up a bunch of fake ones. So they
used real articles, a big collection of them. And some of the articles put one candidate in a favorable light over the other, and other articles were the opposite. Some favored Abbott, some favored Gillard, and then they
also had some articles that they classed as neutral. They said,

(35:09):
you know, weren't really biased one way or another. So
they then went forward with a double blind test. I'll
talk about that in a second. But the researchers first
asked each group which of the two candidates they would
most likely support in an election, based on just a very
brief biography of each candidate that was presented in as
non-biased a way as possible. And they said that there wasn't really any, um, measurable difference in support between the two candidates.

(35:33):
People didn't really care going in. Yeah, it was like
essentially a coin flip situation. And then each group had
the chance to do research on the two candidates using
a mock search engine called Kadoodle in the experiment. Yeah, much better algorithm than Arthropod Google. Uh. Anyway, I'm
surprised they didn't have Pring or something, but no, it's

(35:55):
it's Kadoodle. So you would use Kadoodle to do, you know, some searches. You would put in the names of the candidates and the search results would pop up. Um. They divided the groups into three. They used a double-blind approach,
which meant that neither the researchers nor the participants had
any knowledge of what the hypothesis of the study was,

(36:16):
so they didn't know why they were doing this. This
is a common thing in in research like this to
try to ensure that your researchers or the people working
on the experiment aren't giving subtle cues to the participants
what they should do accidentally, right, because most of the time participants want to please. We as humans want to please everyone, right? Yeah. And because

(36:38):
this is all about studying bias in the first place,
you don't want to corrupt that by introducing other biases. Then you can't really see what the results of your intended bias were. So, neither the participants nor the people administering this had any knowledge of
what the hypothesis was, or what group anyone was assigned to,
or what the purpose of the other groups happened to be.

(36:59):
So what happened when these groups did their Kadoodle searches? Well, you had one group that received absolutely
neutral results. It was just like a regular search results page, where no candidate was given favorable treatment.
Then another group would get favorable results for one of
the two candidates. The other group got favorable results for
the other of the two candidates. And the way this

(37:20):
worked is that they had multiple pages that they could
click on if they wanted, so that you'd get like
six pages of results, and you could review as much
of the results as you wanted to. And also, all three groups had the same collection of articles at their disposal, right; it was just in a different order, exactly. Yeah. So one group might get this

(37:42):
staggered kind of response where it's favorable to one candidate,
then favorable to another, and then back and forth. Another
one might get all of the, you know, pro-Abbott articles at the beginning, and then only get to the pro-Gillard articles at the very end. Gillard or Gillard, however we pronounce that. And then the other group would
get it the other way around. So, uh, you know,

(38:06):
they again, no one knew what was going on or
was told what was going on at the beginning. Uh, the researchers did decide that they had to
insert a question to find out if people were picking
up on what was going on. But they had to
be very careful with how they asked that question, right,
because if you ask the people, did you find these
search results biased? Then you're kind of giving them a

(38:28):
clue they should be interpreting what's going on. So how
do you ask, hey, do you think we totally rigged
the search results? The way they did it was they said, did anything bother you? It was vague enough that they felt, you know, maybe if no one is really paying attention, they'll just say, no, nothing bothered me.

(38:49):
It would only be the people who are really focusing
that said yeah, I happened to notice something weird. Uh.
And then they also offered up a space where you
could type in as much as you wanted about what bothered you in that search results page. So they included that, and that was their measurement of how many people detected that there was a bias going on. Um.
And then the results are pretty interesting. So they saw

(39:12):
again that before they allowed people to do a search
on the candidates, there was no real front-runner. Nobody cared about these Australian politicians one way or another. To be fair, no one really cared afterwards either, I'm
just kidding, just kidding. Well, after they were able to do the searches, they saw that there was a
forty eight point four percent increase in the number of

(39:33):
people who said they would vote for the favored candidate
of their respective test group. So they called that forty
eight point four percent the vote manipulation power or VMP
or VAMP. Yes, I guess it'd be VOMP if it's vote. So, that forty-eight point four percent. That

(39:54):
sounds like a pretty significant number to me. But keep in mind, these are people who probably know nothing about these candidates. We're going to explore that in a
later part of this experiment, but just in these test conditions,
that's pretty interesting that people were that willing to be
swayed just by biased ordering of search results. Again,

(40:17):
let us state all the same articles were available to everybody.
It was just a question of in what order they
were presented. Yeah. And it turned out that most people could not detect that there was a bias going on. The vast majority of the participants said that they did not detect any kind of bias; they didn't click that

(40:38):
something about this bothered them. Only a small fraction clicked on it and made some indication that they felt the search engine results were not fair in some way. We weren't given the
full details of all the results in the paper, but based upon the level of transparency the rest of this report has, I'm pretty confident that that's an

(40:58):
accurate representation of what happened. Joe and I were talking about this before the podcast.
One of the things that we were both impressed by
was that, uh, they were very forthcoming about the limitations
of their testing procedure. And, not being an expert, just as a lay reader, this seems like a very good
study to me. It seemed very conscientious and very thoughtful

(41:20):
and self-critical about the limitations of what conclusions we could draw from its results. Also, the
fact that they used a double-blind design made me happy. But
at any rate, so they repeated this experiment a couple
of times in San Diego. Experiments one through three were all in San Diego. And they decided, well, maybe
we need to bring down that number even lower of

(41:41):
the number of people who realized that there was manipulation
at play. So they said, well, how can we do that?
And they thought, well, one thing we could do is
we still give the favorable results for candidates to you know,
groups B and C, group A being the control group.
But let's insert at the fourth position an article of
the opposite viewpoint. Okay, so you would get results where

(42:03):
the first page is all pro-Abbott stuff, going eventually into the neutral ones in the later pages, and then finally getting to the pro-Gillard stuff in the final pages. But in the fourth, or later the third, place on the first page, there'd be one pro-Gillard article. The first time they ran it, for

(42:24):
experiment two, it was in the fourth position. In the third experiment, they moved it to the third position,
and they found, first of all, the interesting thing was
if they put it in the fourth position, it actually
made more people willing to vote for the favored candidate
a sixty-three point three percent VMP. That kind
of makes sense to me because it seems like they'd
still be getting this overall pretty biased message, but they'd

(42:47):
start to get the impression based on just one result
going the other way that yeah, I'm getting the full
picture here. Yeah, the search results are honest, right exactly,
because when you see everything from just one position, then you start going, like, is there a bias here? Maybe someone over at Google's really pro-Abbott. Or at Kadoodle, rather. So this also meant that fewer people were

(43:12):
able to tell that something was not on the up and up. Fewer of the participants indicated at all that they were aware of a bias; detection went down. Then they did it a third time,
and this time that's when they moved it to the
third position instead of the fourth position, and uh, this
time it was a lower VMP of thirty six point
seven percent. But they didn't get any hits at all

(43:36):
on a detection of bias. So, if your goal is
to make sure no one knows you're messing around, you put the one opposing article in position number three, and you'll get a lower result overall,
but no one knows you're up to something, and it's
still a significant result. All right, So this is very interesting.

(43:57):
But obviously the next step is what would happen if
we tried to scale up these sample sizes and get
more people to take this test. Yeah, so sample size is one of those things in any study that can be problematic. The more people you test, the less chance there is of error. Right, with a hundred and two, you don't have enough quote-unquote random element in there to

(44:21):
to make sure that you haven't skewed the results just through the sample, just by choosing certain demographics or something like that. Yeah. So next they went to
the Mechanical Turk. Yeah, Mechanical Turk, Amazon's Mechanical Turk, which has been designed to help gather subjects for various types of studies, and they were able to get a much larger pool of volunteers. They got volunteers from all fifty states

(44:44):
in the United States. Uh, and the demographics were interesting.
They weren't necessarily aiming for specific demographics. What it turned
out to be was that nineteen point five percent of
the subjects self-identified as conservative and fifty point two percent as liberal. Uh, and then you had, you know, others on that spectrum as well. And it was just
interesting that that was the numbers, especially when you start

(45:05):
looking at the results per demographic towards the end. So
they found out that the VMP, uh, in this case (they repeated the same test, so it's the same Australian Prime Minister candidates), was thirty-seven point one percent, or, once you do some post-stratification adjustments, thirty-six point seven percent. Post-stratification is all about,

(45:27):
you know, you take the sample that you had and the demographics that came out of it, and then you scale it up to the general population's demographics and adjust the numbers based upon the weights of those groups. So that's why it dipped down to
thirty six point seven percent. Once they finished with that,
the results indicated that some demographics were more vulnerable

(45:48):
to this manipulation than others. So depending on some facts
about you, you might be more susceptible to biased ordering in information presentation. So one weird example that stuck out to me is apparently self-labeled divorcees were more vulnerable than self-labeled married subjects. Yeah, if you were a

(46:10):
self-labeled divorced Republican, you were pretty much going to be led astray by the search engine. This was
another strange thing it found is that apparently self labeled
Republicans were more swayed by the order of presentation than self-labeled Democrats, like, by a lot. Republicans were at fifty-four point four percent and self-labeled Democrats

(46:32):
were at thirty seven point seven percent. Moderate Republicans were
the most vulnerable group out of all of them. Keep
in mind that they were also a small group because
the overall number of people who identified as Republican, or at least conservative anyway, was about nineteen and a half percent. But yeah, they
had a VMP of eighty percent. The lowest VMP, weirdly enough,

(46:53):
was in a particular income bracket, which was very weird to me: forty thousand to forty-nine thousand dollars. Those people are very Google-savvy, with a two point five percent VMP. So, uh,
this is one of the things where you know, you
can't necessarily draw broad conclusions based upon this study, but

(47:16):
it was one of those things they noticed, and they said, hey,
this might mean that if you wanted to manipulate a
you know, an election in some way, you would use
this kind of information to know who to target the most,
because it would tell you where you're going to get the biggest return on investment. Assuming that

(47:37):
this manipulation actually results in action, which is a big assumption,
that we'll get to a little bit later. So
you're going to get way more bang for your buck
if you serve skewed search results to divorced moderate Republicans as opposed to married people who make forty-something thousand a year. And what

(47:58):
about bias detection in this section of the study? This is crazy. Um,
so based upon again, they used the same approach as
before to say, hey, did anything bother you about the
search results? There was a forty-five percent VMP for people who detected a bias and a thirty-six point three percent VMP for people who were unaware of a bias.

(48:21):
In other words, so hold on. The people who said
something bothered me about these results, I think they were biased,
were more affected by the bias, despite being aware of the bias. That's counterintuitive. Except we've been conditioned that the
first page of search results are the best results for

(48:43):
your query. So if you're conditioned to the point where
I know that the thing I want, the information I
need is on that first page, even if I think
it's biased, well it's got to be right, it's got
to be the most relevant because people wouldn't lie to me.
Why would I go to page two? Yeah. So

(49:04):
now, they're just saying that that's a possible answer for that particular result. That's their interpretation, right. They don't know for sure that that's the case. And also,
again, using the really vague did anything bother you question could mean that there's a little gray area around all
of this as well. Yeah, okay, So there was one

(49:25):
more phase of the experiment that they carried out, and
I thought this was a very interesting next step because
as soon as I saw this part in the study, I was like, well, all of these have to do with candidates that we don't have preformed opinions about, or that almost nobody in the study had preformed opinions about, which obviously
is not how reality works. Yeah, and when when you're

(49:46):
when you're actually a voter in an election, you probably
have some preconceptions and biases going into your research. And you have a stake in it. If you're a US citizen, you have very little stake in who is Prime Minister of Australia, in the grand scheme of things. Yes,
we're all connected and I love all of you as
brothers and sisters. However, that being said, I mean it

(50:07):
might affect the probability of getting future Crocodile Dundee sequels. Okay,
I know how I vote on that, so at any rate.
Uh So. But what they wanted to do was use
this in a real world situation so they could determine
how big of an effect, if any, there would be
in that instance. And so what they did was they

(50:28):
went to India. There were these enormous elections. There were
something like eight hundred million potential registered voters, of which
several hundred million, I think, went and voted. I
read that at the time, it was the largest election
in human history. Yeah. Yeah, so what they wanted to
do was run the same experiment, but actually using real candidates.

(50:52):
In this election, they selected a little over two thousand voters in India who had not yet voted, choosing among one of three candidates for a specific position. They also, again, compensated them.
The compensation, by the way, included one interesting option. Uh,
they would give a payment that ranged from one to four
dollars depending upon where you were. But they also gave

(51:14):
an offer of donating a dollar fifty to a charity that would feed poor Indian children. So at the end of it, a good amount of money was raised for kids,
which was awesome. So it's cool that they did that,
you know, it was that they were giving back into
this community. Yeah. One of the things they pointed out, Now,
in all these cases, I think they wanted to specify

(51:36):
that they were taking care not to cause any harm
to the participants or to the democratic process. And in
the other cases it didn't really matter because it was
referring to past elections in another country. Yeah, there was
no way that any of the results would ever have
any effect on things that had already happened because causality, y'all. Now,
this is an election that's ongoing in the country where

(51:58):
the people who were participating had the opportunity to have
their minds changed, and this could actually affect the election.
But they did say that it was a small enough
sample size that they didn't think it was going to
sway the election. A couple thousand out of hundreds of millions is not significant. Uh. And also
one of the interesting things they pointed out is that
the biases that might come as a result of this

(52:21):
study would be balanced out because they're doing it for
all the candidates in turn, right. They were equally distributed among each of the candidates, because the group was subdivided into, I assume, four groups this time: a control group, and three groups, one for each of the candidates. Um.
So again, they were using real articles that had been written about these actual candidates, and then they had to

(52:42):
figure out how to rank these in search to favor
each of the candidates and then randomly distribute them for
the control group. Yeah. They said they had to actually
get the help of an Indian consultant to help
them determine exactly which ones were most biased towards which candidates. Yeah. Yeah,
and they had a little bit more of a problem
in this study trying to work that out. Yeah. In fact,

(53:03):
they have numbers for the pre-optimization and the post-optimization of the test, which I'll get into in a second, because of that very thing. You know, they had to bring in a consultant because they said, we have a limited understanding of Indian politics, being researchers from the United States, uh, and felt that maybe
what we were presenting people was not truly the best

(53:27):
arranged list of search results in order to see if
the effect is real in a real world setting. So
that's why they hired the consultant. But that was after they had already started this experiment, so it
did affect the numbers. I'll get into that in a second.
So the overall VMP for the whole experiment was only
quote unquote ten point six percent. But there are a

(53:48):
lot of things you have to take into consideration here.
One is that these candidates were not complete unknowns to the people in question, right. These were people who already had some ideas about these various candidates.
Some people had very strong opinions about these candidates going
into the experiment, and they found that those people were

(54:11):
the least likely to be swayed. Yes, So yeah, that
ten point six percent, that's kind of what you get
after you look at the pre and post optimization of
the experiment. So before they brought the consultant on, uh,
it was trending towards nine point five percent, so lower. Then
after they got the consultant it went to twelve point
three percent. So possibly had they brought the consultant on

(54:32):
from the very beginning, that number would be higher. One
thing that did impact the number was that there was
a group that had a very strong counter reaction to
the search results. Yeah, this was something I don't think
was encountered in any other phase of this research as
far as I noticed. But it was a negative VMP,

(54:52):
meaning that if you presented search results biased toward one candidate, it actually worked against that candidate; people
were more likely to vote for one of the other candidates.
And that one demographic was conservative female voters in India
with a negative VMP of negative eleven point eight per cent.

(55:13):
So with this one particular demographic in this test group,
if you showed them a list of articles that were
all biased in favor of one politician, it worked against
that politician's interests, right, and the researchers said, it might
suggest an oppositional attitude, or it may be a tendency
to favor an underdog, someone that you want to see

(55:34):
the person who is trailing behind to come from that
position and take it all. And UH, they said that
if you eliminated that group from the results, it would
raise the VMP from ten point six percent to nineteen
point eight percent. It actually goes up into the twenties depending upon the various, uh, implications they talked about later, but at least up to nineteen point eight percent. And they said,

(55:58):
eliminating them from the results is actually not an unfair
thing to do, because if in a real world setting
you were attempting to manipulate the results of an election,
you would specifically target the people that you felt you
could influence, and you would specifically avoid the people you couldn't.
In fact, you could even take advantage of this, like

(56:20):
if you knew this was the effect for certain demographics
of people, you could pick those people and show them biases. You could show them results with bias in the opposite direction, in favor of the candidate you would least want. Yeah,
so you could even have it work for you in
that sense. So when you get through all of

(56:40):
these different experiments, what does it ultimately mean? You have to get to the true analysis, you know: does
this actually matter? Well, according to the researchers, they think, yeah,
they have got a statistically significant and very interesting and
perhaps worrying set of results on their hands. So I

(57:02):
want to read this one quote from the
final analysis section of their paper. They said, our investigation
suggests that with optimized targeted rankings, a VMP of at
least twenty percent should be relatively easy to achieve in
real elections. Even if only sixty percent of the population
had internet access and only ten percent of voters were undecided,

(57:25):
that would still allow control of elections with win margins
up to one point two percent. And that includes a
lot of elections. They're close elections all the time. So
if you're talking about a very like razor thin kind
of lead, then something like this would be enough to
put one candidate in front of the other. Yeah. Certainly
here in the United States, especially in larger elections like

(57:47):
the presidential election, that is the case. Yeah. I mean, if you look at the
overall numbers of the presidential election, it may not look
very close. But you have to remember that the elections
in the US for presidents are decided state by state, right,
So what you could do then is target one or
two key close races in swing states, and those are

(58:08):
the only things you'd have to push over the edge. Yeah,
because you already know that tackling any state that's really
entrenched in one camp or the other is kind of, uh, a lost cause; there's no point in it. You're
not going to be able to create enough of a
swing in that to make a big difference. So if
you can convert one point two percent of Ohio or

(58:29):
Florida or whatever state you know is the big swing
state that year, that can win an election. So uh,
mostly it looks like these sorts of tactics would mostly affect undecided voters, people who had not already made a decision on one candidate versus another.
If you have already made that decision, then maybe

(58:51):
the search results aren't compelling enough for you to change
your mind. It's very difficult to change someone's mind anyway,
especially by presenting them evidence, especially in the realm of politics. Yeah. Science, kids. But yeah, if the top search results are positive for a candidate, to an undecided voter, that voter may be feeling

(59:15):
more inclined to vote for that candidate. But they do
also point out, and this is one of those things
we were mentioning earlier, about how they were quick to acknowledge the limitations of their study. They say that there
is a known laboratory effect in general, not just for
this study, but for lots of stuff, a laboratory effect
where you might observe something in the lab that seems
really relevant, but in the real world it becomes less so.

(59:37):
And they said that it may be that this influence is very, uh, tenuous. It doesn't last very long either. So it may be that within an hour,
whatever effect there was kind of wears off. So unless
you, unless you serve up the search results page and
then boot that person out the door to go to
the polls, it may end up not being a big

(59:59):
enough effect to actually create action. But still, it could be the element that does push someone to choose one candidate over another. And that alone
raises the question of how do we
account for this, how do we make sure we're aware
of it, And is anyone doing this on purpose, or

(01:00:21):
if they're not even doing it on purpose, what do
we do to make sure it doesn't like affect things. Well,
that's the really crazy thing to me, is that even if no one is purposefully manipulating Google search
results like this, the algorithm could be doing it on
its own. Oh yeah, yeah, I mean there could be
an effective bias coming out of Google results that is

(01:00:45):
already changing the outcomes of elections without anybody wanting it
to happen. It could just be happening. At any rate, yeah, it could be an accidental byproduct
of something that the algorithm does for completely neutral, not political, reasons. Well, and you also have to remember
that algorithms are designed by people. They're not

(01:01:05):
spawned by, you know, Deep Thought, the uncaring thinking machine at the heart of the universe. It's actually
stuff that human beings have designed, and sometimes that
introduces a bias. And in fact, I would argue that
the concept of page ranking already has at least some
element of bias. There's no real objective way of saying

(01:01:26):
this page is more important than that page, and therefore this page needs to be ranked first in the search results. Eventually you have to come to, uh, a decision that may have some bias to it, to actually make that determination. Now, that bias may not be inherent
in the algorithm. It may be inherent in how everyone
else is treating that page, and that's how it got

(01:01:47):
to number one. But the algorithm is the thing that
determines which criteria are most important when making those determinations. It's a long, winding road to get there. But the ultimate answer here is that search results could totally, like, affect an election, y'all. Okay, to be fair,
people aren't getting all of their information about candidates for

(01:02:11):
elections from Google. Right. Sometimes they watch a man on
TV yell at them. Hey, that's my favorite thing to do.
Sometimes they'll change the channel and watch a different man yell at them. Or sometimes they listen to their
friends on Facebook who are ranting about it. Right, yeah, yeah,
So you know, there are so many different avenues that

(01:02:31):
are open to us about, you know, where we get
our information for this sort of stuff. Um, in a
world where we would only get this information from the internet, obviously,
this would have a much greater effect than it does already.
So you know, you can ask the question of how
many people actually use the Internet to do active research
on political candidates, and that would give you a better

(01:02:52):
idea of how effective this is in the long run.
I honestly don't know that answer. You know, I don't
know, of the people who
go and vote, or the people who are considering voting,
how many of them take the time to actually do
Google style research on candidates as opposed to just seeing
what their friends are saying or uh, consuming stuff in
other forms of media. Well, of course, Google results aren't

(01:03:14):
the only venue through which Internet. Internet companies of various
kinds could change the outcomes of elections without even producing
content of their own, just by choosing in what order
and when we see content. Think about Facebook again,
you know, we talked about that get out the vote
message earlier on Facebook. What if they just showed it
to one party, or what if the things that were

(01:03:37):
featured in your news feed the algorithm for selecting those
things had a political slant. Yeah. So, like, let's
say that my good buddy posts an article
about Pinchy O'Houlihan, which normally I would
like the heck out of. But because Facebook is anti-Pinchy,
Facebook is now in the anarchist crab camp, because they've

(01:04:01):
seen the light of, I always forget, Claws McGrabby? Claws?
So you don't even know your candidate's name. How am
I supposed to... Dr. Claws McGrabby supports freedom! Now,
Doctor? Huh. He's a PhD, I assume. And anyway, Pinchy
O'Houlihan. Because Facebook hates Pinchy O'Houlihan, they end
up essentially burying that link. Yeah, so on my feed,

(01:04:23):
which features stuff that people have posted, it doesn't pop
up at all. If I were to go to my
friend's feed, I would see it there, because Facebook allows
you to go ahead and post it to your own feed.
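The burying scenario the hosts describe can be sketched as a toy scoring function. To be clear, this is purely illustrative: the post fields, the weights, and especially the `bias` hook are all invented for the example, not Facebook's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int = 0
    comments: int = 0
    age_hours: float = 1.0

def feed_score(post, affinity, bias=None):
    # Engagement signal: comments weighted more heavily than likes.
    engagement = post.likes + 2 * post.comments
    # Recency decay: newer posts score higher.
    recency = 1.0 / (1.0 + post.age_hours)
    # Affinity: how much the viewer has interacted with this author before.
    score = affinity.get(post.author, 1.0) * (1 + engagement) * recency
    # Hypothetical editorial thumb on the scale: a bias callback can
    # quietly down-weight posts about a disfavored topic or candidate.
    if bias is not None:
        score *= bias(post)
    return score

def rank_feed(posts, affinity, bias=None):
    # Highest score first; a tiny multiplier is enough to bury a post.
    return sorted(posts, key=lambda p: feed_score(p, affinity, bias),
                  reverse=True)
```

Passing something like `bias=lambda p: 0.01 if "Pinchy" in p.text else 1.0` would push an otherwise popular post to the bottom of the ranking without hiding it outright, which is exactly the invisible sorting being described here.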
They just don't feature it anywhere else. So this is
the thing: you might not have even realized this, but
you don't see everything your friends on Facebook post unless
you've specifically opted to follow them directly. There are

(01:04:45):
a lot of things you'll miss unless you've placed a certain amount of
importance on what they post by liking and commenting on it,
or by opting in via a button that is
ridiculously hidden, like the Close Friends button. Only if you use
the Close Friends button do you get everything; it
notifies you when someone posts something. Never, ever make me
your close friend. I post too much to Facebook. You
your close friend. I post too much to Facebook. You

(01:05:07):
don't want that to happen to you. I'm just saying
I don't want that to happen to you either.
Thank you for your concern, Jonathan. You're welcome. But so,
this kind of sorting is happening all the time on
Facebook, based on what posts you interact with and
your own biases. What we see on Facebook is

(01:05:27):
determined by what we interact with, and so therefore,
you know, aside from the echo chamber that
we all create by choosing who we're friends with on Facebook,
and, you know, under the assumption that most of your
friends are probably like-minded, it just
becomes even smaller and more echo-y when you take

(01:05:50):
these kinds of algorithms into account. Yeah. So it's one
of those things where, you know, it would be
entirely possible for a company like Facebook or Google to
manipulate things so that a specific side of the
story is being favored over another, and that
could definitely affect how we perceive those stories. Oh, I

(01:06:12):
just want to say one more thing. It could be
not just what we see and what we don't see,
but the order in which we see it in your feed,
because we know, as we've said before, things like recency
and primacy matter. Yeah, yeah, exactly. The most recent thing
you've encountered is more likely to have a
lasting effect, and so is the first thing you encounter, the first
thing on a list. Yeah. Boy, I never want to

(01:06:33):
be the person who has to make the decision,
if you're making a ballot, of whose name comes first. That's,
you know... you could be like, well, we're just gonna go in
alphabetical order and leave the blame to
the Latin alphabet. I don't know how
they actually do it. Seems like they could randomize that
for equality, right? They could randomize it, especially with electronic

(01:06:54):
voting, you can make it so that every single
person who comes up gets a random arrangement
of however many candidates there are. But in the United States,
come on, let's be honest, this is pretty much what
we tend to get. The crab party is on the rise. Yeah,
the lobster imperialists really don't care how the voting goes.
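The per-voter randomization the hosts are imagining really is trivial on an electronic ballot. A minimal sketch, with the episode's joke candidates standing in as hypothetical names:

```python
import random

# Hypothetical candidate list, borrowed from the episode's running gag.
CANDIDATES = ["Pinchy O'Houlihan", "Dr. Claws McGrabby"]

def ballot_order(candidates, rng=None):
    """Return an independently shuffled candidate order for one voter.

    Giving each voter a fresh random ordering means the primacy effect
    (the boost from being listed first) averages out across the
    electorate instead of always favoring one name.
    """
    rng = rng or random.Random()
    order = list(candidates)  # copy, so the master list is never mutated
    rng.shuffle(order)
    return order
```

Calling `ballot_order(CANDIDATES)` once per voter yields a permutation each time; over many voters, each candidate appears first roughly equally often.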
That's kind of how imperialists are. I like to imagine

(01:07:17):
that there's a vote for the algae, you guys, because you
can really... you can suffocate... So, on that happy note:
this was really an interesting thing to look at. The
study, like we said, is, first of all,
incredibly accessible. It's very easy to read. It's not

(01:07:38):
paywalled; it's open access. Yes, definitely check it out if
you're interested in finding out exactly how they went about
putting this study together and the criteria
they used. It is really easy to read. It's fascinating,
and, like I said, I admire their work and
their approach to it, and I think

(01:08:00):
it's something that I want to see more of in
all sorts of areas of science. And meanwhile, if you
guys have suggestions of future topics that we can tackle
here on Forward Thinking, or you've got anything you want
to say about this episode, send us a message. The
email address is FW Thinking at How Stuff Works dot com,
or drop us a line on Twitter, where we are FW
Thinking, or just search FW Thinking over on Facebook.

(01:08:22):
We'll pop right up. You can leave us a message,
and we'll talk to you again really soon. For more
on this topic and the future of technology, visit
ForwardThinking dot com. Brought to you by Toyota. Let's

(01:08:49):
go places.

Hosts And Creators

Jonathan Strickland

Joe McCormick

Lauren Vogelbaum


© 2025 iHeartMedia, Inc.