
April 8, 2024 22 mins
With just months until the November elections, the campaigns are heating up, and so is the political rhetoric, both on and offline. Election interference on social media is nothing new at this point, but another major player is making its presence known for this election cycle: artificial intelligence. Dr. Weiai "Wayne" Xu, Associate Professor at UMass Amherst, focuses his courses and research on political communities and discourse online, along with artificial intelligence's impact on how we disseminate and learn information. He talks with Nichole this week about what we know, what we're still learning, and best practices for gathering the facts through the elections.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
From WBZ News Radio in Boston, this is New England Weekend. Each week we come together and talk about all the topics important to you and the place where you live. It is so good to be back with you again this week. As always, I'm Nicole Davis. We are just a few months out now from the November elections, and as expected, the campaigns are heating up, and so is the political rhetoric, both on and offline. Now these days, it is certainly nothing new that the Internet is rife with hackers and spammers and scammers and spoofers and whatever you want to call them. But over the past year or so, another major player has been coming into the spotlight pretty quickly. We're talking about artificial intelligence here, and what I wanted to do with this next segment is chat with an expert about how AI has been and will be shaping our political landscape, and how we can make sure that we get the facts about what's really going on with political platforms and so on and so forth, without falling for a spoof or a deep fake. Wayne Xu is an associate professor at UMass Amherst, and he specifically researches online political discourse communities and how artificial intelligence is influencing how we get our information. He talks about all of that, so he's here on the show with us this week. Professor, thank you so much for being here, and we'll start with this: election influence from around the country and outside the country is nothing new. How do you see AI taking this to a whole other level?

Yeah, so I think we can look at both the supply and the demand side of the information ecosystem, and with the introduction of generative AI, obviously there's a lot of conversation about deep fakes, that average users can very easily create multimedia content using a prompt. Obviously, a lot of platforms, you know, the AI chatbots, have these guardrails in place to promote safe practices. But also you have a lot of third-party apps that are connecting to those large language models or services provided by the mainstream AI chatbots, such as, you know, Gemini or ChatGPT, and we don't know, you know, how many of those safeguards they have built in, referring to the third-party apps. So that's the kind of supply side, and people can create disinformation or propaganda content very easily.
The demand side is that, you know, in my observation, you know, increasingly the AI chatbots will become the gateway to information and opinions. Right? So, for example, if I'm having unexplained symptoms and I want to do a quick search, instead of going to Google, I might go to ChatGPT and describe what I'm experiencing, and I want to get a quick result. And obviously the reasoning is that the large language models can synthesize a massive amount of information more effectively. Right? So I want to get a full picture of something, and, you know, that applies to elections. Right? If you want to know more about the candidates, and you want to rank candidates based on your policy preferences, you can go to AI chatbots and just write a simple prompt and ask the machine to rank candidates.

Let's say if you want an America-first agenda or policy, you know, this is what gives you rankings, right, if you just ask AI about it. The challenge, though, is that we don't know much about what training data goes into those models, and, you know, those platforms are relatively not very transparent about the model outputs and the magic, the black box, behind it. Right? So as those platforms become the mainstream gateway to information, an important place for information, I think there needs to be, you know, a check and balance. And so far we see, you know, the untracked popularity of those AI chatbots, but also we see limited efforts to, you know, push those platforms to be transparent.

Well, the Internet itself, it's kind of a Wild West, and you have to be very careful about discernment and the information you're reading. It requires a little bit of work on your part as well to make sure the information you're taking in is proper information. You have to do your own vetting. And combined with the fact that a lot of these AI chatbots were updated back in, what, twenty twenty-one, twenty twenty-two, and haven't been updated since, can we even really rely on these chatbots to give us the most up-to-date, proper information for us to do this research?

We kind of have this perception that those large language models were trained back in twenty twenty-three or twenty twenty-two, but it's important to realize that those models get updated fairly frequently, and also a lot of AI chatbots are equipped with real-time search data. So, like, if we're talking about ChatGPT, you know, the more advanced versions, or Google Gemini, they use the models to navigate through the real-time search results, so in a way the outputs might be up to date. But it's very important to see what search results are fed into the model, right? So what pops up in the first search results, and this has to do with, you know, search engine optimization, so the results that rank higher in the search engine results will naturally kind of skew the findings of the output from the model. So I think we need to consider both the model but also how the search engine works alongside the model.
We had many headlines recently about that issue in New Hampshire, the deep fake call where some people thought President Biden was on the other end of the line, for some reason telling them, don't vote in the primary, I don't need your vote, save your vote. That was tracked back to an operator in Texas, of course, not the White House. And we've got these video deep fakes as well, which are very concerning, these generative AI videos. I've seen some that are extremely lifelike, to a scary point. Tell me how we're starting to see more and more of this in the election cycle.

Well, I definitely think we need to have a conversation about that, right? But also there's a lot of potential in those tools that is waiting to be unleashed that we haven't seen. We saw some, you know, promo videos generated by Sora, the latest text-to-video model. But the territory might be very different, you know, a few years from now. I think now we are kind of just observing to see what kind of potential or capacity those models have. At the same time, you know, as a researcher, my role is really just to kind of see how people do, you know, two things: how they respond to the deep fakes, but also how they develop their own, you know, critical data literacies, or AI literacies, in, you know, discerning those problematic and artificial contents. Right? So people can be creative and resilient, right? And so we would expect the average users to get better at, you know, fact-checking, and you'll be surprised by the fact-checking potential of those large language models. But also we need to see how users will be creatively using them. It's true that those large language models can create deep fakes, but those deep fakes might possibly be checked by the large language models for, you know, accuracy and authenticity. So we kind of... we will wait to see how society will respond to it. There's a lot of conversation about regulation, but I think at this point in time it's really just important to understand how the technology works and to have sort of multi-stakeholder models to govern the space. You not only have the tech firms, you know, regulators, or average users; you have the civic groups that are using it for their own work but also are more aware of the limitations and the problems those things could create. So I think, I would, you know, say that there's more research to be done, but also there will be more observation to be done about the potential impact of those tools.

Yeah, and there's obviously an issue of trust as well, because I think there's already a lot of anxiety and distrust when it comes to the political system, especially following the twenty twenty elections, January sixth, and all that. Whether it's justified or not, the distrust is there. So I don't mean to sound reactionary when I say this, that's not where I'm going at this point. But how are people supposed to trust the process when we're starting to have to second-guess a lot of these pieces of media that are in front of us? How do we tackle that?

I was thinking about this phenomenon called news deserts, right? I'm sure that, as a media worker, you're very familiar with these large pockets of American communities that are not served well by local media. And for a lot of large language models, their very capacity to generate answers based on prompts is based on the massive amount of training data, right? So what goes into training the models, the machine, really matters, really determines the output. And going back to the trust issues, right, so there's, you know, widespread distrust of mainstream media, and we also need to look at the data on the distrust towards local media, and in general, I think people tend to trust local media more so than they trust mainstream national media. But we also have this, you know, news desert problem, and I think, especially when it comes to local issues, the performance of those models could be dependent on the availability of data from local media, right, or local sources, which would drive the quality of the models. But, you know, if you look at... yes, the distrust is there, but large language models are built with neutrality in mind, right? So, you know, if you ask a question, they try to give you information from both sides. So in a way it is less toxic than latching onto one particular medium, one particular part of the media outlets, or a specific part of the media ecosystem. So in a sense, you know, the large language models, where they are equipped with real-time search data, might provide, you know, more comprehensive views and less biased views of, you know, political events. So I'm more hopeful about that, but I'm also hopeful about people realizing that they have the capacity to fact-check, and they might come up with creative ways of, you know, prompt engineering to get output that is less polluted by, let's say, you know, political parties or anyone with malicious intent.

So it sounds like there is some benefit to having this involved in the political cycle. It's just all coming back to discernment, and it's all coming back to checking your facts.

Exactly, exactly. I think that, you know, obviously there's a critical AI literacy to teach people: people need to be informed about what those machines can do or cannot do, right, and know the sources that go into training those models.

But I'm comparing those models to the very toxic partisan media ecosystem that, you know, people are in when they are trying to understand what's going on. You know, for media researchers, we want to look at things from both sides, right? We want to go to the very toxic sites; we also want to look at the mainstream sites. But a lot of people, just because of a limited attention span, will tend to kind of latch onto the sources that we tend to rely on, that conform to our existing viewpoints. And, you know, if you compare to that media habit, the AI chatbots, because of their ability to synthesize across ideologies, across different sites, might provide a fairer approach to, you know, information seeking or gaining understanding about current affairs.

No, that's true. I guess it's all about your narrative and how you choose to read into the facts that it gives to you. Right? Yeah. So what are some municipalities and state governments doing to try to protect the integrity of the elections now that artificial intelligence is a major player?

At this point, I think that obviously there's a lot of conversation about platform accountability, right? So the platforms that are providing AI chatbots and other generative AI services need to be, you know, kept in check about, you know, the potential impacts of their tools on the elections.
I think there needs to be a space for the third sector, right, the civic groups and academic researchers. So far, as academic researchers, I'm obviously advocating for more access to the data, you know, more of, you know, a seat in the conversations. So far, you know, when we are trying to understand what's going on in the digital world, we face, you know, fairly limited data access, especially with a lot of platforms trying to do away with the API access to their platform data. So the research, you know, the research communities suffer because of the lack of data. So to address a lot of issues surrounding local elections and local politics, there need to be, again, multi-stakeholder models where you have, you know, public officials, you have the governments, you have the tech firms, but also you have the researchers and civic groups, providing a high level of transparency for the academic and community partners to evaluate the models, to evaluate the generative AI tools, to have them participate in the decision making and future regulations. I think that would be one way to go, but I think there's a lot to be resolved before we actually arrive at, you know, that place where we can advocate for, you know, transparency and openness and platform accountability.

The deep fake videos are already here, that's the thing; the audio clips, the videos, they're already here. So how can not just municipalities and governments, but also people, protect themselves from maybe getting influenced by a bad actor who might not have our best interests at heart?

You know, I'm teaching several courses where I actually incorporate generative AI in the curriculum. I want students to develop a solid understanding of those tools. Right? So it's natural to be scared of, you know, the impact of those machines. But also, if you gain more confident understanding, or more solid understanding of those tools, it actually, well, you know, helps you better discern, you know, artificial content from the real ones. I think what people can do is, you know, embrace it, use it, and, you know, just poke holes in it, and just fact-check everything that comes out of the machines. Do some comparative work, you know; if they have time, if they have interest, look at and use different tools, and if you are really invested in the outcomes, the answers from those machines, those models, use different tools, like, compare notes, compare the outputs from those different models and tools, and, you know, then do the fact-checking, right? Look at, you know, the sources that are being cited in the output. Right? So there are actually AI chatbots that are equipped with, you know, citations, providing citations for the answers. For the deep fake videos and all that you're asking specifically about, the

deep fakes: we have been working, you know... in the past few years, we have observed how, you know, disinformation spreads on the digital platforms, right? And when it comes to local communities, right, the universal fact-checking practice may not work, because each community has its own politics and cultural norms, and so what is critical is that we would expect that communities would come up with their own community-oriented solutions. And there might be community-based civic groups and NGOs that have people who understand the tools and who are dedicated to, you know, building a more, you know, fair and transparent and equitable digital space. They can participate in monitoring this space, right, in providing... or in observing the trends, in monitoring potential problematic content, and they can play the role of... I mean, they could be the digital vigilantes, so, you know, watching out for potential deep fakes. But this, again, you know, there's no universal technological solution to it. I mean, you could expect, like, some third-party apps to come out that help you. There could be an add-on... it could be a browser extension or an app on your cell phone that does the fact-checking of whatever you see online and compares that to their models to see whether there's a deep fake or not. But those technological solutions might work; they are not the ultimate answer, right? The ultimate answer really comes down to rebuilding that trust. As you mentioned, you know, even in the current realities of public institutions

being discredited and distrusted, there needs to be a space for a third party, for third sectors, you know, civic groups, academic researchers, to chime in and to play a role as a monitor, as a potential, you know, source of insights.

All right, last question, then I'll let you go. For somebody who is just sitting there scrolling on TikTok, or whatever, I don't know, pick your platform, there are too many these days. You know, when you're scrolling, you see a video and you're like, wait a minute, that doesn't seem quite right. What are some of your suggestions for somebody who wants to be a little more mindful of what they're seeing on social media, especially as we make our way toward November? What would you suggest people look out for that might lead them to believe this might not be completely truthful?

Right, right. So I'd like to kind of borrow a line from the QAnon research, right. And I think that, you know, this is where, you know, some possible technological solutions might be helpful. You know, when we're talking about images that we see, or deep fake images, we could do a reverse image search and find out, you know, if that image has already been circulating and what was the original source of that image. And I bet that, you know, in the future there will be tools that come out that allow you to check if that image is artificially, you know, created, or if it's digitally doctored. So there will be tools out there; just be aware that, you know, there will be tools out there that people can use for fact-checking. But I think there also just needs to be broader awareness of the source, like who created that image, like who is, you know, amplifying that content, right? So knowing about the potential ideological leanings or agenda of the source, right? So we're not just looking at the content, and this applies to every platform we're talking about. There's got to be a person who shared that. It could be your friends, but who was the original source of that image? So do a bit of additional searching, and look at who the people are, what people are talking about, and who is talking about it. That gives you some sense. So sometimes it just takes extra miles, it takes additional steps for people to find out. Maybe you're not... eventually, it's hard to verify a lot of claims; it's really hard to verify the authenticity of the content. But you will get a more, you know, fair picture, like who is driving the conversations, right? So that gives you some cues about the possible intent behind the circulation.

All right, well, professor, this has been really good information, important info, folks, as we make our way toward November. Thank you again for your time.

Yeah, thank you.

Have a safe and healthy weekend, and please join me again next week for another edition of the show. I'm Nicole Davis from WBZ News Radio on iHeartRadio.


© 2024 iHeartMedia, Inc.