Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
LANDESS (00:04):
It's predicted that this year we'll see artificial intelligence transform industries and redefine human interaction with machines, but it's also facing challenges to meet ethical commitments. I'm Mike Landess. To further discuss the benefits and potential downsides of AI, UT Tyler Radio connects with University Assistant Professor of Communication, Dr. Eric Gustafson.
(00:25):
Endless possibilities with AI and huge ethical challenges, am I right?
GUSTAFSON (00:29):
Absolutely, and thanks for having me on, Mike. I think when we get to the conversation of ethics, oftentimes we've already had those changes happen, right? And the questions surrounding ethics, I think, really are at a flashpoint, on college campuses particularly, and then sort of
(00:50):
leaking out into other industries as well, where we first need to figure out what even are those challenges. What are the questions that we need to ask? People in a number of different industries, very brilliant people all around, have isolated some of those questions. We still have them tinted with the different emotions that come from something so new, and we've yet to completely remove
(01:15):
ourselves from the equation, or at least do the best we can to assess these things on their own merits as opposed to on our own fears or excitements about them.
LANDESS (01:32):
Perhaps the flashpoint of this international conversation came with the demonstrations of what ChatGPT was capable of. Tell us more about the upsides and downsides of this free-to-use AI system.
GUSTAFSON (01:39):
Well, for one, ChatGPT, if you've ever peeked into it, is immensely helpful. It's a great tool and a great aid for helping to find information, to conduct research, to learn. But it also poses a lot of different issues, because it replaces those skills that we used to have to do on
(01:59):
our own: those research skills, identifying what is credible and what's not. And with every new technological development, we often get a new way of knowing, a new way of coming to know or creating knowledge, which means we lose a pathway.
(02:20):
But I guess we are in the stage of figuring out: is that new way that ChatGPT proposes, or allows us to sort of come to knowledge and create knowledge, good, bad, ugly, somewhere in between? What do we do with it?
LANDESS:
Sounds like it can be all of those things.

GUSTAFSON:
Yeah, I had a great colleague who once said, "Yes, but."
(02:45):
Or in communication studies we always say it depends, it's contingent on context. All the time, it depends.
LANDESS (02:53):
In a day and age in which we are bombarded daily with all kinds of information that may or may not have been professionally vetted, how will we know for sure, going forward, what is true and what isn't?
GUSTAFSON (03:05):
I'm not sure we will, especially with the upcoming election, when we think about the different sorts of campaign
(03:30):
messages, and how that can exacerbate those difficulties of identifying what's true and what isn't. I'm not sure we will for a little bit.
Right now we have different fact-checkers or AI checkers that have developed rapidly in concert with these technologies, but we also have programs that actually strip AI-generated content of the markers that would be caught by the detectors.
So in concert we have all these technologies and software
(03:52):
specifically sort of running together, and they're leapfrogging well ahead of our questions right now.
LANDESS (03:59):
I'm thinking of the 1950s and '60s sci-fi movies in which machines would take over the world. They'd start talking to each other, and then they didn't need humans anymore. I mean, is that even technically possible?
GUSTAFSON (04:14):
I think some of the most interesting predictions, or sort of explorations into what the future will be, seem so far-fetched that we can't fully grasp them. They seem so fantastic, so far off, and yet if you transported
(04:34):
someone from the '50s or '60s to here, they'd probably go, "What is all of this?" And I think we will have the same realization in another 50 or 60 years. So I think all of those, at the time of their writing, seemed so far-fetched. And then, all of a sudden, in the '50s and '60s, we were already there.
(04:54):
You know, if we talk about the Turing machine and think about sort of this birth of computing, this birth of the first artificial intelligence, if you will, we already had it then, and it's finally exploding now, 70 years later. I think we'll see that with a whole host of other technologies that
(05:17):
are running alongside this.
LANDESS (05:19):
It's said that security and privacy are essential requirements for developing and deploying AI systems, but that's also the biggest problem facing AI. It would feel a little bit like the foxes are essentially guarding the henhouse at this point in time. Is that an overestimation of it?
GUSTAFSON (05:36):
I don't think it's an overestimation. I think it may be a way of characterizing exactly what's always been the case, which is, when we look at technological developments, especially those that push the frontiers of our understanding of the world, we often see war and security at the forefront.
One of the technologies running alongside artificial
(05:58):
intelligence that is said to supercharge it in the next decade or two is quantum computation, which represents a fundamentally different way of computing from classical computation. But our original impetus for developing quantum theory and quantum mechanics was to create the atom bomb.
It also helped us create MRI machines, helped us create the
(06:22):
transistor, which was the fundamental unit for classical systems. But oftentimes we do see that the foxes are the ones funding the research that makes these things possible, and oftentimes pushing those boundaries.
LANDESS (06:43):
And we get to find out about it later.

(06:59):
So we're just months away from what promises to be a very contentious presidential election. You mentioned this a moment ago. In theory, a voter in Smith County could get a phone call with Joe Biden's voice asking them to vote for Donald Trump. It may sound ridiculous, but technically that's possible.

GUSTAFSON:
Right. Last semester, I had a student create an AI-generated podcast, and he used the voices of Joe Biden and Donald Trump for the two podcast members, and you would be shocked at how good it sounded.
It's absolutely a possibility.
It's more than a possibility.
(07:19):
It's most likely a probability.
And to your earlier question, can we tell the difference? Sometimes not, sometimes yes. It'll develop along with that human touch of the people in these campaigns saying, oh, if we tweak that, this will sound better and maybe no one will know.
(07:41):
It's really hard to tell, but it'd be very interesting. Yeah, exactly.
LANDESS (07:46):
Deepfakes with video are a little scary. I see some of them being used for comedy bits; you see those on the Internet. They involve those two men, the president and Donald Trump. But someone with enough money and a determined enough agenda could certainly get a lot of misinformation and disinformation out there.
GUSTAFSON (08:03):
Absolutely.
I think with prior elections we've seen that explosion of this idea of fake news, and to our college students, to younger individuals, it's second nature now to question these things, whereas some of us who are older had a touch more faith in them.
With video and audio, what we're seeing is fake news sort
(08:29):
of pushed to its extreme, and it might reverse into its opposite, to use sort of a phrase from Marshall McLuhan, a media scholar from the 20th century.
If we take this so far, we push it to its opposite. Instead of simulating these voices and crafting a message that creates trust, we put all these messages out there and none of them create trust, thereby accomplishing the opposite of
(08:52):
what we thought.
LANDESS (08:53):
There's been a call for international collaboration and ethical standards. How quickly could safeguards be established and put into place, should that happen?
GUSTAFSON (09:03):
For the election? I'm unsure. For the legislative system in general, there have been talks that have started. The European Union just pushed an act through that levies significant penalties around AI transparency and sort of attempts to safeguard these things, but legislation always
(09:34):
travels slower than technological development. So whether or not the right safeguards were put in place prior to the election will be something we'll see soon.
LANDESS (09:43):
Just over 30 years ago, the World Wide Web went into the public domain, a decision that fundamentally altered the entire past quarter century. Are we fretting over the unknowns about AI in the same way that some did about the Internet years ago, or are the concerns about AI more substantial?
GUSTAFSON (10:01):
I think if you look to any technological development, you're going to find anxieties with it. If you go back to the ancient Greeks, Socrates bemoaned literacy because it removed knowledge out of the human mind. And how do we know if someone's smart if they can't remember it? Try to tell a college student that today.
And the Internet is a great example too, because we find the
(10:23):
roots of it first being developed by the military to share documents in the late 1960s, and then in 1983 they switched a protocol, which is what we consider sort of the birth of the Internet.
And then, probably 20 or 30 years later, we see it integrated into every single facet of our
(10:45):
lives, right?
So I don't think the concerns about AI are unwarranted. I think we're just now realizing that this has been a long time coming, and oftentimes we just need to... if only we could learn about these things before they exploded onto the scene, I guess.
LANDESS (11:06):
Any final thoughts
you'd like to share about AI?
GUSTAFSON (11:10):
You know, AI, it's the next big scary thing. When we ask ourselves questions about it, it's not productive to say this is awful or this is amazing. It's more productive to weigh both of them. AI represents the tip of the iceberg for me.
LANDESS (11:25):
Thanks for listening as UT Tyler Radio connects with Dr. Eric Gustafson of the University's Department of Communication. For UT Tyler Radio News, I'm Mike Landess.