Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:09):
This is Facepalm America. I'm Beowulf Rockland. FacepalmAmerica.com is where you can go for more information about the show, to listen to past episodes, and to connect with us on social media. I have been thinking a lot, as probably you have been, about artificial intelligence, AI. We've been hearing about it constantly in the news, and you have probably created images or pictures
(00:36):
or texts or scripts or essays or what have you using things like ChatGPT and other AI generators. If someone with as little technical expertise as myself is doing it, then probably just about everybody with a computer and an Internet
(00:59):
connection is doing it. So I think we should get a little bit more information about what we're actually talking about here and what the implications of the technology are. And I'm very pleased to say that we have an expert on it. Susan Gonzalez is the founder and CEO of AI and You, a group that encourages AI literacy. She's also a member of the National AI Advisory Committee,
(01:25):
where she advises President Biden and the administration on the National AI Initiative. Susan Gonzalez, welcome to Facepalm America. Thanks so much for being here. Thank you. I'm glad. This is a great time to be having this conversation. It definitely is a great way to kick off the new year in twenty twenty four. First of all, could you define the kind of AI,
(01:47):
the kind of artificial intelligence that we're talking about today? Because the term gets thrown around an awful lot, and I think it's important to distinguish the AI that we're focused on now in twenty twenty four from the types of functions that have been incorporated into computers and the Internet for a long time. Now, what is it that we're talking about when we talk about AI today? I
(02:07):
mean, off the top of my head, you know, the sort of thing I think of is ChatGPT. But what's the functional difference between that and earlier kinds of technologies? Sure. Well, really we have been using, or interacting with, I should say, predictive AI every day.
(02:28):
You know, someone asked me, well, what was the first time I interacted with AI? And my response was, well, what time did you pick up your device? Did you use your phone to open it? Did you get on the internet? That was the moment you started using AI as we have known it. Predictive AI really is based on making predictions. So there's a reason, for example, your favorite streaming site is suggesting horror movies
(02:53):
because you watch a lot of horror movies. It's predicting, based on your behavior, that you're going to like another horror movie. So we've been using this for years. Most people think, or a lot of people think, that generative AI, as you mentioned, was the birth of artificial intelligence, and really nothing could be further from the truth.
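To make the "predicting based on your behavior" idea concrete, here is a deliberately tiny sketch in Python. It is not how any real streaming service's recommender works; the titles, genres, and the simple most-watched-genre rule are all invented for illustration.

```python
# A toy sketch of predictive AI as described above: nothing here is a real
# streaming service's algorithm, just a minimal illustration of predicting
# a preference from past behavior. All titles and genres are made up.
from collections import Counter

watch_history = ["The Haunting", "Scream Night", "Ghost House", "Space Comedy"]
genres = {
    "The Haunting": "horror",
    "Scream Night": "horror",
    "Ghost House": "horror",
    "Space Comedy": "comedy",
}
catalog = {"Blood Moon": "horror", "Laugh Track": "comedy", "Court Drama": "drama"}

# Count how often each genre appears in what the viewer already watched.
genre_counts = Counter(genres[title] for title in watch_history)

def recommend():
    # "Predict" that the viewer will like whatever catalog title matches the
    # genre they watch most: behavior in, suggestion out.
    best_genre, _ = genre_counts.most_common(1)[0]
    return [title for title, g in catalog.items() if g == best_genre]

print(recommend())  # ['Blood Moon'], more horror, because that's the pattern
```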
(03:14):
I mean, we have been using it for years. I first learned about AI when I was at Facebook like eight years ago. This new technology was allowing the blind community to access the platform, and I thought that was really cool, and I thought there was a lot more to be offered, specifically to marginalized communities. So when you talk about today, the AI that we're
(03:37):
usually talking about is generative AI, which you mentioned, or gen AI. This is really simple. It sounds complicated, a large language model, but really what that means is that now the computer can take a prompt from us, whether it's text or an image or a video, and can create a
(03:58):
response for us. So, to be clear, this is not an Internet search engine. This is not a cut and paste where you go in and you cut and paste an article. Generative AI is a creator, and it's very different from predictive AI. But it's also very different from an Internet search, where we get a list of all kinds of articles and we have to research.
(04:23):
That's not how this works. And just let me wrap this up too. With generative AI, keep in mind it runs on what's called training data. Essentially, what that means is there's a virtual library. So if you go into ChatGPT or Bard or any of the other generative AI tools that are out there and you type in, write me a five hundred word essay
(04:45):
on the American Revolution, it will take that information and it will go out and access the training data. That could be Wikipedia or library resources or whatever resources that company made accessible to reply to your request. So that's one thing that makes it very, very different from an Internet search.
(05:09):
But most importantly, people need to understand that, you know, these are incredible tools and the responses can be incredibly fascinating, right, and they can be wrong, right, and when they're wrong, that's called a hallucination. So everyone just needs to be aware of it.
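As a rough illustration of "the computer creates a response shaped by its training data" rather than pasting a document, here is a toy Python sketch. It is a bigram model over a few invented sentences, nothing like a real large language model in scale or method, but it shows generation as prediction rather than lookup, and why generated text can read smoothly and still recombine facts incorrectly.

```python
# A deliberately tiny "generate from training data" toy. The training text is
# invented for illustration; real systems are vastly more complex, so treat
# this only as a loose analogy for the ideas discussed above.
import random

training_data = (
    "the american revolution began in 1775. "
    "the revolution ended in 1783. "
    "the colonies declared independence in 1776."
)

# "Train": learn which word tends to follow which word in the data.
words = training_data.split()
next_words = {}
for a, b in zip(words, words[1:]):
    next_words.setdefault(a, []).append(b)

def generate(start_word, length=12):
    """Produce new text by repeatedly predicting a plausible next word."""
    out = [start_word]
    for _ in range(length):
        choices = next_words.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
# The output reads fluently but may recombine facts incorrectly
# (for example, "the revolution began in 1783."), a miniature reminder of
# why confident-sounding generated answers still need to be checked.
```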
Absolutely. I had been reading an article in The Guardian online in which the columnist was talking
(05:32):
about trying to find various quotes from Marcel Proust and realizing at a certain point that ChatGPT, or whatever engine it happened to be, was giving false responses, that it was just kind of making up things, you know, on the fly. What
(05:57):
is it that causes it to respond in that way? And I mean, it seems pretty clear why we have to be aware of that, but what is it about the functionality of the AI that causes those hallucinations and why do they
show up so often? Well, like any other technology, right, this is new and we are learning from it. So, for example, one thing
(06:20):
that we've learned thus far is how biased the responses can be. So what happens is that, with the training data that I mentioned, if this data reflects old, biased information, or information, for example, only reflecting one population, the answer you get is going to reflect that data.
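Here is a minimal, made-up Python illustration of that point about training data: if the data is skewed toward one population, a frequency-based prediction inherits the skew. Nothing here is a real system; the example sentences are invented.

```python
# A toy illustration of "the answer reflects the data": skewed example
# sentences feed a simple frequency count, and the skew becomes the output.
from collections import Counter

training_sentences = [
    "the engineer said he would fix it",
    "the engineer said he was busy",
    "the engineer said he agreed",
    "the engineer said she would fix it",
]

# "Train": count which word follows "the engineer said" in the data.
follow_counts = Counter(s.split()[3] for s in training_sentences)

# "Predict": the most frequent continuation wins, so the imbalance in the
# data shows up directly in the answer.
print(follow_counts.most_common(1)[0][0])  # 'he' (3 to 1, purely from the data)
```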
(06:44):
So while it sounds like, and it is, an incredible tool, undoubtedly, there are some risks, and I think one thing in particular, especially for your listeners, is when it comes to deepfakes. Yes, and the election, right. So deepfakes weren't around last election because the technology wasn't there. Well, it's here now, and there's a lot of misinformation out there, particularly during the election.
(07:08):
Yeah, and that's something that I want to get into. But I guess it kind of comes down to a long-standing computer principle, which is garbage in, garbage out. I mean, if you're using bad information, especially to train a program, you're going to get bad information out. So, you know, if a person is lying about something, then the system that
(07:29):
that person builds or is connected to may well be lying as well. And that's important to keep in mind. Talking about deepfakes, I mean, it really does seem incredible how well AI systems are able to create fake voices. You know, I know I've managed to
(07:53):
mirror mine online. It seems like a pretty basic, straightforward thing to do if you read a few hundred words into a microphone into a computer with the right set of programs. And it really does pose a set of challenges, because people just instinctively believe images that appear to be real
(08:16):
online, and if those are created not just, in a few cases, by very talented individuals. We've always had, you know, fakes, but if they're able to be created en masse by AI, then we have a very dangerous situation. Is there something that we can do technologically to limit
(08:37):
the spread of that, or is this just primarily about training people better to engage with the media that they consume? Well, what comes up for me is something, you know, my dad would say: the horse has left the barn, right? And so what has happened is that this technology has advanced so quickly that unfortunately there are no guardrails. Everything just moves so quickly.
(09:03):
And here we are in a presidential election cycle, not to mention all of the other elections happening around the world. And so, four years ago, your average person could not create deepfakes. Deepfakes, by the way, are defined as just false information, misinformation, fake information. But because the technology is so advanced, you and I could do that.
(09:24):
In fact, I hired my seventeen-year-old family member. I wanted to understand more about deepfakes, and I said, here, will you create one and tell me about it? Well, he was at home for the summer, he hadn't even started college, and it took him about eight hours to create a deepfake. And I said, do one really extreme, like Trump supporting the right to choose and Biden supporting the NRA.
(09:48):
And he did it. It was very rudimentary in eight hours, but he said, oh gosh, if I spent a few more hours on it, no problem. So that's the scary part, right, is that with this election, we as voters need to understand that we have to protect our vote. And, I hate to present a problem without a solution, but unfortunately there's no solution because it's happened so quickly. There are no guardrails,
(10:11):
there are no laws, there are no consequences for anybody creating deepfakes and sharing misinformation. This may change by the next election, but this is where we are today. So the most important thing we can do as voters is to question everything, question everything we see. And by the way, this doesn't have to be online. You can get an
(10:33):
AI-generated fundraising letter; an AI-generated phone call could be completely fake. So unfortunately it puts us as voters in the position of having to question everything we see, read, or hear, and rely on resources, you know, locals like you, for the real stuff. So you are advising the President and
(11:03):
the administration on this sort of thing. With regard to deepfakes specifically, and there are a lot of other issues, I know, but what are you advising him to do about things like this? Because if there are only limited things that you can do, like, what direction do you point him in? Well, what's interesting about this committee is that the
(11:28):
charter is to advise the President and the White House on the National AI Initiative. That is very different from advising Congress, who needs to regulate the technology. So the White House and the President cannot regulate; Congress can. And the thing, in a way, is that Congress, unfortunately, was not paying much attention to regulating
(11:52):
AI until, really, generative AI became a thing, and now everybody's actually paying attention to it. So, on one hand, what we're working with the White House on is, and the President recently released an executive order which reflected some of our recommendations, how do we help guide companies to create, you
(12:15):
know, truly responsible AI. Right. As an example, one of the most recent recommendations, which I co-authored and I'm super excited about, we just released last month to the White House. It's called Enhanced AI Literacy for the United States, and it's talking about the need to educate people about the basics. So at the opening of your segment, you mentioned,
(12:37):
oh, I'm on it, everybody must be on it. Actually, they're not. You know, a study came out about two months ago on ChatGPT and it reflected that seventy percent of users are men. That's a problem, right. Maybe it reflects the tech industry and the men in tech, but it's still a problem, not to mention marginalized communities.
(12:58):
Right. So if you go back to the election, the last couple of elections were basically decided by women and people of color, so one would have to conclude that women and people of color are very vulnerable to being targeted online with misinformation to sway votes. Those are the kinds of conversations that need to be had so people are aware of why this matters. So, you know,
(13:20):
look, I'm an independent and I'm a woman of color. I'm probably going to get tagged as someone that could be swayed with misinformation online. Yeah, that's how it's going to work. The first part of your response there is particularly frightening to me, because you say that Congress is responsible for regulating it, which I know is true. And I also remember the last time I saw,
(13:41):
you know, a tech CEO, I think it was Mark Zuckerberg, get up in front of Congress, and watched some of the fundamental misunderstandings that members of Congress have about technology, and it just blows my mind. How little they know about it, and how they can be in a position to regulate it, is the frightening thing
(14:05):
to me. I mean, you are there to guide the President. Are there similar organizations that are in place for Congress? How do they approach it? I mean, the main thing that seems to have come out of Congress, you know, controlled by the Republican Party at this point, seems to be, like, how do we deal with, you know, fake
(14:30):
shadow bans that are certainly put upon us by, you know, leftist organizations like Google, which is just on a different planet from reality, as far as I'm concerned. I struggle to understand, like, yes, we need to educate people, but you also need to educate regulators. And what's the best
(14:52):
way of going about that? Absolutely, and actually I have been making my way through Capitol Hill to do that, to meet with Senate and House AI Caucus members. My goal is to highlight the need to educate marginalized communities about AI. Number one, because it's a fact that
(15:15):
people that are in jobs that require repetitive skills, let's say administrative assistants or paralegals, or even any type of work on a factory line, those jobs, you see it over and over, will be impacted and potentially flat-out replaced with generative AI. Right? So what are we doing about that?
(15:35):
Right? And how is AI impacting workers? Nobody's really talking about that yet. Everyone's talking about how new jobs are going to be created with AI, the data science or machine learning. Well, that's not me, that's not your regular person, right? Well, what about the regular person? If I'm an admin assistant and chances are my job's going to go
(15:56):
away, then should I be upskilling? What should I be upskilling in, or what should I be learning? Those are the conversations that need to be had. But the best thing about what's happening now with the White House and this committee is that the White House is absolutely willing, and clearly they are listening, and they are engaging with the tech companies specifically because, let's face it, the
(16:18):
tech companies are huge. Yeah, how else do you put it? Right? Huge, meaning hugely influential. Their influence on Capitol Hill, and they're going to guide the regulation. And exactly, they're going to guide it, you know, for better or for worse, in their own self-interest, and folks like you need to be thinking about what is in the public interest. And right, exactly,
(16:41):
yeah, that's exactly it. Now, to your point about Congress, you know, yes, they trend a little on the older side and may not be as tech savvy, but you know what, there are some that are younger and that are on the AI Caucus and that are interested in taking on some of these issues directly. So those are the folks that I'm talking to, that
(17:03):
I'm more hopeful for. But it is a problem. You know, last time I was in DC, someone said, you know, how can we expect Congress to regulate AI when most of them don't even know how to sign on to Facebook? Right, right. Yeah, and I mean no disrespect, it's a fact. Or don't understand how, like, search results,
(17:25):
you know, favor those results which are more popular and aren't just reflective of some political bias. They don't understand basic things like that, and they build entire planks, essentially, in their platform around that misunderstanding.
(17:45):
It's just mind-boggling to me. And it really speaks to a need for additional education. Is this kind of an extension of what you've been doing before this at AI and You? I mean, what is that company? And tell me about the relationship between that and what you do on the committee. Right, well, AI and You is my day
(18:08):
job. The committee is a volunteer position, which I'm happy to do. I launched AI and You, which is aiandyou.org, after I left Facebook, specifically to address the chasm between the AI ecosystem over here on the left and the rest of the world over here on
(18:29):
the right, just regular people like us. Yeah. And so I created AI and You specifically to educate marginalized communities, meaning women, people of color, disabled people, et cetera, about the basics of AI. We cannot expect people, for example, to understand, oh, you should be aware of algorithmic bias. Well, if you don't know what an algorithm is, why would you even care about how biased it can be?
why would you want why would youeven care about how bias it can be.
So we've created content in English and Spanish, in very easy to understand language, like short videos on what is an algorithm, how to use AI to help you find a job, you know, what do I need to do to protect my privacy, how is AI related to that? So we offer a lot of resources on our platform.
(19:17):
We launched an education campaign on misinformation, so we have a special video on that. So this is really to raise awareness, and importantly, what we need to do collectively is to prompt your listeners and their family and friends to be curious about AI. Now is the time to do that, because mostly people think AI is robots and it's coming sometime down the road. And yet at the
(19:42):
same time, you know, you're doing grocery shopping online, well, that's all AI. You know, we didn't used to be able to do that. We're doing that because of the advancements of these technologies, and if we repeat history as we saw with the digital divide, the marginalized communities are going to be severely impacted. They will, by nature, because a lot of
(20:03):
the mid-level jobs and the admin roles and paralegal roles, by definition, are filled by a lot of people from marginalized communities. So what are we doing to protect them and to educate them, and also so they can educate themselves? Right, because if we don't think about that, we're going to have a lot of problems economically that are going to come back and bite us in short order.
(20:27):
So yeah, I would just encourage any member of the public who isn't familiar already, or even if you are, and especially, you know, Republican members of Congress: aiandyou.org is a very good place to go to get a good, solid introduction to what AI is, because you're going
(20:47):
to need to know, because you're going to have to be regulating this stuff. I really thank you for coming on the program today. You made me even more aware of some of the challenges that we face, and I'm glad that you're in place to help guide us on them as a nation. Susan Gonzalez, again, the founder and CEO of AI and You and also a member
(21:11):
of the National AI Advisory Committee, where she advises President Biden and the administration on such issues. Thank you so much for being with us today on Facepalm America. Thank you. I want to thank you for listening today on Facepalm America. Please go to FacepalmAmerica.com for more information about the show, and thank you to Ace Elson and Rosabel Hine, who are the producers of
(21:33):
this program. Please, if you get a chance, rate and review us on Apple Podcasts or Spotify. It helps grow the program. And until next time, enjoy the day.