April 4, 2023 34 mins

Robert sits down with Noah Giansiracusa, math professor at Bentley University, to talk about the reasonable and unreasonable fears people have over AI.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, welcome to It Could Happen Here, a podcast about things falling apart. And today it's kind of going to be a conversation about: is shit falling apart? Are we all about to be devoured by a rogue AI? Is your job about to be devoured by a rogue AI? These are the questions that we're going to, you know,

(00:26):
talk around and about and stuff today. And with us today is Noah Giansiracusa, a math professor at Bentley University. Noah, welcome to the show. Thanks for having me. And I'm reaching out, we're talking right now, because there's an article that was put up in The New York Times on March twenty fourth, twenty twenty three, titled You Can Have

(00:46):
the Blue Pill or the Red Pill and We're out
of Blue Pills, which is a fun title by Yuval Harari,
Tristan Harris and Aza Raskin. And it's an article that is kind of about the pitfalls and dangers
of AI research, of which there definitely are some. I
enjoyed your thread on the matter. I thought it was

(01:07):
a lucid breakdown of the things the article gets right
and the areas in which I think they're a bit
fear mongery. So yeah, I think that's probably a good
place to start, unless you wanted to start by just
kind of generally talking about where you kind of are
on AI and what you kind of think, you know,
the technology is advancing towards right now. Yeah, I mean,

(01:29):
I think I can probably answer both those questions at the same time, because part of why I enjoyed writing that thread dissecting the article is I just had the strangest feeling reading it that I agreed with it so much in principle and yet somehow objected to it so much in detail. Yeah, and thinking about that article helped
me think about my own feelings on AI, which you know,
every day of the week is slightly different because so

(01:50):
much news happens. Yeah, I found myself overall deeply frustrated
that I agree with the central conclusion, which is that
maybe we shouldn't be just like plowing headlong into this
and should be more careful when we screw
around with technology like this, which I agree with and
I feel like should have been the thing we did
with like I don't know, Facebook, Twitter, like all of

(02:12):
these things. Like, my obsession is less with like the specific dangers of AI, and more with how we keep letting these guys, who are fundamentally like gamblers with venture capital money, really put our society through the wringer without ever asking, should we like do any research on maybe how social media affects children, and like all of these different things. And it's right that, like, yeah,

(02:35):
we should be concerned about what these people are going
to do with AI, but also why, why now? Why just now? Yeah. And that raises a really good point, which is, what's different now versus what we've been experiencing with social media? And just to give your listeners some context, one of the three authors on this New York Times article

(02:56):
is famous for writing this book Sapiens that's a sweeping
history of humanity, and the other two are actually most
famous for the Netflix documentary The Social Dilemma. So they
really are in this camp of warning people about social
media algorithms. And exactly as you're saying, that's sort
of this thing that we've been dealing with, probably quite poorly,
and now we're kind of moving on to the next

(03:16):
societal risk, which is AI. So that raises a really important question of what's different now. And I think that's one of the things the article's trying to address, which is, many of the problems that we already have with algorithms, data-driven algorithms, and even AI as it's used in social media are still happening now, but somehow things feel like they're spiraling out of control. Yeah, and I think,

(03:38):
I mean, honestly, I think a lot of this just has to do with, culturally, what our touchstones for AI were going into this, you know, which are Skynet, you know, like, that sort of thing. And you do see, I feel like the uncredited fourth author on this particular article is James Cameron, because there's pieces of it throughout

(03:58):
this. It opens actually pretty provocatively: "Imagine that you are boarding an airplane. Half the engineers who built it tell you there is a ten percent chance the plane will crash, killing you and everyone else on it. Would you still board? In twenty twenty two, over seven hundred top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future AI risk. Half of

(04:20):
those surveyed stated there was a ten percent or greater chance of human extinction from future AI systems." Which, yeah, let's zoom in on. Yeah, yeah, let's talk about that, because what I tried to do in my thread was go through all the claims and assertions and really pause and say, hold on. But that's a great one to start with, because there's a lot to dig into right there. Yeah. So,

(04:41):
first of all, there's a huge difference in that airplanes are based on science and physics and things that we understand pretty well. There's a lot to it, and there's been millions of flights, so you have a lot of data. You know how many planes crash and how many don't. Maybe one engine goes out; you can do the statistics and see, you know, whatever percent of planes without the engine still land safely. The problem with AI is we're

(05:05):
just guessing, right? There's no way to know one hundred years from now or ten years from now what it's going to do, what the real risks are, so we speculate. And that's not uncharted territory, right? When nuclear weapons were first introduced, people had to guess and speculate. But the danger, I think, is putting it in that same category as

(05:25):
things like airplanes or climate change. I like to think about climate change. When you see these, you know, the IPCC, I forget the acronym, in these reports, that's based on thousands of scientists digging into thousands of published papers and all this data, really modeling the environment. There's a lot of meat and substance to it. The problem with AI is it's mostly people, I hate to

(05:46):
say it, but like me or like you, just kind of guessing and thinking, maybe this will happen, maybe that'll happen. The reasonable thing to say, if you're in their shoes, is like, yeah, I have concerns that AI could

(06:07):
cause serious negative externalities for the human race. Perfectly reasonable statement. It is physically impossible to say there's a ten percent chance, exactly because it's never done that before. You know, I'm a math professor, and I'm the first to say numbers don't have some intrinsic meaning, right? If I just say something has maybe a fifteen percent chance, I'm just making it up, I'm pulling it out of my ass. Yeah, it doesn't make

(06:29):
it true. So it's this, it's a general pet peeve I have, of sort of giving a false sense of precision by using numbers that you don't really know where they came from, or they're just made up. So that's one issue: these numbers are made up, and asking a thousand people to make up numbers isn't necessarily any better than asking one or two. You know, if

(06:51):
the number's made up, it's made up. So that's one issue. Yeah. I also do think, and I'm not the first, I saw someone, I think it was Ben Collins, who writes for NBC, on Twitter make a note that, like, well, the fact that all of these statements about, like, how dangerous they are, about human extinction, are coming out of people in the AI industry has started to kind of feel like marketing. That's right. Yeah, exactly, it's a little

(07:13):
bit of buzz marketing going on here. And I think you mentioned social media, and the authors of this article mentioned social media, and we have to look to the past, right, to understand the future. I think that's the only way to do it. So, one of the biggest scandals in social media was Cambridge Analytica. And as, you know, we probably remember, this was this data privacy scandal where a bunch of data was collected from

(07:34):
Facebook users that shouldn't have been, you know. People didn't realize that the data had been collected, they didn't approve it, and it was used by this election company, or this political company, that was trying to profile people and influence campaigns towards Donald Trump, towards Brexit. So this was a

(07:54):
huge scandal, and, you know, Facebook was fined five billion dollars or something, very justifiably. But I would say what it was in retrospect was a data privacy issue. People's personal data was leaked when it shouldn't have been. The problem was there was so much fear and fearmongering over it that people felt this data was used by

(08:14):
these sort of algorithmic mind lasers to kind of know us in such great detail and get us, trick us into voting for Donald Trump, and targeting us. And the jury is still kind of out, but most of the evidence looks like Cambridge Analytica wasn't that effective. They just couldn't do it. And it turns out you can know a lot about a person, a lot about their data,

(08:34):
and it's really hard to influence them, to change them. So what happened, I think, was there was a lot of alarm spread, rightly so, about the tech companies: they have too much power, too much data, they know too much about us, and this horrible thing happened. The problem was a lot of the alarmism then actually reinforced this aura of power, of godlike power, that the tech

(08:55):
companies have. People criticizing them actually gave them more potency than they deserved. And then suddenly Google and Facebook and all, they had, it wasn't sudden, but it kind of built up, they had this aura that our algorithms are so insanely powerful, and we have to make sure they stay in the right hands, and we can do so much. And that's unfortunately what I see happening

(09:16):
now a lot, and that is kind of the setting for critiquing this article. Yeah, I absolutely agree that this stuff is risky, AI. I absolutely agree that we could go down a dangerous path. But once we start leaving firm ground and speculating wildly and using the Terminator stuff that you described, yeah, even if you think you're criticizing the tech companies, you know what you're doing? Giving them the

(09:37):
biggest compliment in the world, saying that you guys are godlike and you've created these mighty machines, created a deity, which is very similar to the language this article has at the end. And I think it's kind of worth, like, as you're bringing up, there are real threats. There are real threats that are immediately obvious. The threat that a lot of writers are going to

(09:58):
lose their jobs because companies like BuzzFeed decide to replace them with, you know, ChatGPT or whatever. The fact that a lot of artists are going to lose out on work because their work has been hoovered up and it's being used to generate art. Like, these are very real and very immediate concerns. They're not hypothetical. We don't have to theorize about the AI becoming intelligent for this to be a problem. These are

(10:19):
things we have to immediately deal with because it puts
people at risk. It's the same thing with like, you know,
there's a lot that gets talked about with Cambridge Analytica,
with kind of like the different Russian disinformation efforts. But
when I think about the stuff that was happening in the same period that worries me more, one of the things that occurred is, because there was so much money

(10:42):
to be made if you could get certain things to go viral on YouTube, companies used tools that weren't wildly dissimilar from some of these to basically generate CGI videos based on kind of random terms that they knew were likely to trick the algorithm into trending. And God knows how many children were parked in front of these, like, very unhinged videos for hours at a time, that, like,

(11:03):
they would start watching some normal kids' musical video or something, and then they're watching, like, the disembodied head of Krusty the Clown bounce around while, like, some sort of nonsense song gets sung. And it's like, well, what is that actually going to do to kids? Like, we don't know. That's unsettling, though. Yeah. And that's the kind of thing,
you know, And I'm sure there will be obviously, Like

(11:24):
one of the things that this article is not wrong
about is that if we kind of leap forward into
this technology with the kind of abandon that we're used
to giving the tech company, there will be unforeseen externalities
that we can't predict right now that will be very concerning.
I just don't think it's Skynet. Yeah. And that's what was so challenging, not just with that article,

(11:44):
but with, I think, the moment we're having, is I do agree very much in spirit. I agree with the recommendations. We need to slow down, we need to be more judicious and cautious, we need to really consider these things. But again, if we overhype the technology, we may be doing ourselves
a disservice by empowering the very entities that we're trying

(12:06):
to take power from. As an example of that, can I read a quick quote from the article? "AI's new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, AI is seizing the master key to civilization, from bank vaults to holy sepulchers." That's right, and that

(12:28):
I mean, that is funny, and you're right to laugh. Let's actually zoom in a second. And I think this is such a tempting trap, that AI is super intelligent in some respects, right? It's done amazing at chess, amazing at Jeopardy, amazing at various things. ChatGPT is amazing at these conversations. So what happens is it's so tempting to think AI just equals super smart

(12:51):
because it can do those things, and now, look, it can converse, that it must be this super intelligent conversational entity. And it's really good at, you know, taking text that's on the web that it's already looked at and kind of spinning it around and processing it. It can come up with poems and weird forms. But that doesn't mean it is super intelligent in all respects. For instance,

(13:13):
one of the main issues is, to hack civilization, to manipulate us with language, it has to kind of know what impact its words have on us, and it doesn't really have that. It just has a little conversation in a textbox, and I can give it a thumbs up or thumbs down. So the only data that it's collecting from me when it talks to me, any of these chatbots, is: did I like the response or not? That's pretty

(13:36):
weak data to try to manipulate me, you know. It's so basic. That's not that different than when I watch YouTube videos. YouTube knows what videos I like and what I don't like. Would you say that YouTube has hacked civilization? No. It's addicted a lot of us, but it hasn't hacked us. Yeah. Well, people have hacked YouTube, and that has done some damage to other people. But, like, the thing is,

(13:59):
And that's part of why, while I have many concerns about this technology, it's not that it's going to hack civilization, because, like, we're really good at doing that to each other. Like, there's always huge numbers of people hacking bits of the populace and manipulating each other, and there always have been. That's why we figured out how to paint. Like, I do think that there's an

(14:23):
interesting conversation to be had about this. Part of why people are kind of willing to believe anything is possible with this stuff is that, for folks who were just kind of living their lives with a normal amount of attention paid to the tech industry, it seems like these tools popped out of nowhere a couple of months ago, right?

(14:43):
It feels like, oh, there has just suddenly been this massive breakthrough. And the reality is that all of this stuff, you know, ChatGPT, these different AIs that everybody's talking about, this is technology that people have been pouring resources into for years and years and years and years and years, and that's why it's able to do some of these amazing things that we've seen. But I don't think it means that in a

(15:04):
month it's going to be a thousand times smarter. It's a process of labor, and it was finally ready to be unveiled to the extent that it has been. Maybe. That's right. And a good example is GPT-4, which recently came out. There was GPT-3 before, and ChatGPT, and there was so much speculation that GPT-4 was going to be, again, this godlike thing

(15:26):
that just, you know, brings us to the singularity. And honestly, it's done better at tests. You know, I forget the numbers, but maybe one of them got a twenty percent grade on some tests and this one got an eighty percent. So that is a significant improvement, right? If you're a teacher and your students improve that much, you should be happy, right? But, as you said, is that a thousand times better? No, even though the machine is

(15:48):
much bigger, much more data. And it just shows that, yeah, like, the reality is this is incremental progress going at a very fast rate, very unsettling even for those of us following the field closely, where we're experiencing that kind of vertigo that you're describing, that whoa, where did this come from? So even within the field. And you're absolutely right, if you're just at home, you know, not paying attention for a week or a month or a year, suddenly the

(16:10):
stuff pops up. It is disorienting. But one thing I think that's helped me at least kind of clarify, not even answering what the risks are, but just understanding the different camps, why certain people are reacting differently, and why even the people afraid of AI seem to be now fighting amongst each other and why it's getting fractured, is: are you more afraid of this AI used as

(16:33):
a tool by people, or are you more afraid of it kind of taking on its own autonomy and kind of going rogue and doing its own things? And I'm very much afraid of people using it. I think big companies are going to use it and there's going to be a lot of problems, just like we saw with social media. People will get addicted, democracies will be flooded with misinformation, it'll be weaponized by various actors, there will be

(16:57):
bot accounts. So I am very concerned about it being used, basically, it performing the job it was told to do. But it'll be told to do dangerous jobs, either making money or making discord. There's another group of people that are more worried about the AI somehow deciding on its own to do things, to take over. And that's where, you know,

(17:17):
I can't rule it out, but that's where I kind of am skeptical. Let's focus on how people are using it for now, for the foreseeable future. I don't think we need to worry yet, at least, about the AI somehow having a life of its own and stabbing us in the back and enslaving us, because there's just so much that can go wrong before you even get to that point. Yeah, and that's exactly, like,

(17:41):
it's a threat triage kind of thing, where, like, is it theoretically possible that one day human beings could create an artificial intelligence that is capable of having its own agency, that is malicious? Yeah, sure, I guess. Like, I mean, maybe. But, man, there's a lot of us that are very malicious right now, that are actively trying to harm other people at scale. I'm concerned about how

(18:04):
they will use AI to do that. I think botnets are a really good example. One of the things that this newest generation of AI tools allows is more realistic and intelligent bots than I think have been accessible at scale before. And that's a very real concern. Um, I will say, when I kind of, sorry, when I kind of wargame this back and forth with myself,

(18:26):
One thing that is oddly comforting is, like, well, the shared commons that we all inhabit of, like, ontological truth is already so shattered that, like, there's only so much damage I feel like adding additional bots and additional disinformation can really do. Um. I do have one thought

(18:48):
on that, though, because I've been digging into that too. I've been, you know, trying to ponder how to feel about that, because a lot of this, I don't know, you know, is what I'm trying to make sense of. I do think, if you go back to, like, twenty sixteen, earlier versions of the Internet leading up to Donald Trump's election, I think there was a lot of Wild West to Google,

(19:10):
to social media, to all these things. Right? Fake news was just, like, piling up to the top of Google search results. That election was so monumental, and such a seismic shockwave through tech, that, since fake news and misinformation might have played a role, they really had to do something. And I think some companies were more effective than others. I think Google put a lot of effort into making

(19:30):
sure authoritative sources rise to the top. So what that means is, when you now go online and you google for medical information, the top results you get are WebMD or some official CDC or government thing. They're pretty decent, reliable. It's not to say there isn't all that crap on the Internet, but Google has done a pretty good job of having the good stuff float to the top, and

(19:51):
that's the information that people see. So what I'm worried about is, now we might be kind of resetting ourselves back to twenty sixteen, where, when you're talking to these chatbots that are trained on all the internet, yeah, I don't know if the WebMDs and the CDC type of information is necessarily going to float to the top. Maybe they'll work that out. But I'm also worried that

(20:13):
OpenAI or Google or Microsoft or wherever, they'll have ones that are pretty reasonable and kind of, you know, tuned to appeal to a lot of people. But Elon Musk might build his own competitor, one that might be really tuned to elevate the right wing side.

(20:38):
So I have been messing around, as, I mean, you have been doing so in a much more rigorous manner, I'm sure, but I've screwed around with a couple of different AI chat and search engines. I use Phind sometimes, I've been playing around with Bing, and one of the things I've noticed is that, you know, if you ask it, like, hey, summarize for me why the Battle of Hastings mattered, you'll get a reasonably

(20:59):
decent answer. But if I ask it, like, I don't know, specific questions about myself, I noticed at first when I did it, I would get some really weirdly, like, colloquial vernacular from it explaining things, and I realized it was just pulling answers directly from questions that fans had asked about me on the subreddit that this show has.

(21:19):
And so when I think about, like, ways in which to game the system: well, you make a bunch of bots. You have them post questions and answers that are, you know, supportive of this specific product line or whatever on a subreddit, and hope that it gets picked up, like, scanned by an AI, and that becomes part of its, like, answer for, you know, what happens if, you know, I can't stop itching, or whatever. I don't know, like, but

(21:40):
obviously you can see ways in which these can and will be gamed to some extent. You know, it's always kind of a Red Queen sort of situation, where you have the disinformation people and the people fighting disinfo. You're always running as fast as you can just to stay in place. That's right. And that brings up another issue, which is, I do feel like this is possibly

(22:01):
really tipping the balance, in that it takes a certain amount of resources to create misinformation, and it takes a certain amount of resources to debunk it. Right? A journalist has to sit down, Snopes has to write a little piece about it. And the problem is, with this AI, it's suddenly just dropping the price of creation down to essentially zero. Anyone can create an essentially limitless supply of quasi-information that

(22:26):
may or may not be true. But the problem is: is the price of journalism, of debunking, also going down? Maybe by fifty percent, right? Maybe it takes you half as much time to write an article. It's not going to zero. No. So that's the balance: creating stuff has gotten a lot cheaper; detecting, debunking, doing proper journalism has gotten a little bit cheaper. So I'm worried about

(22:46):
that. Journalists are already stretched thin. And this is by far my biggest concern, because it's not just this, that's obviously a significant factor in it. There will be more disinformation; there will not be more journalists, in part because I think AI is going to take jobs at a particularly low level. It's not going to replace, you know, prizewinning columnists at The New York Times, and

(23:09):
it's not going to replace, like, guys like me who have a very long and established career of doing the specific thing that we do. But I think back to when I got started as a journalist, as a writer. It was as a tech blogger, and I had X number of articles that I had to get out per day. And obviously, like, my boss was essentially trusting

(23:29):
that, with that many articles, I'd have a few that did well on Google, and that brings in traffic, and that brought in money. And there's a degree to which you're just kind of doing SEO shit. But it's also, I conducted my first interviews for that job, I went to trade shows for the first time, I did my first on-the-ground journalism for that job. It taught me how to write quickly and in a polished manner. And I was not writing anything that was, like, crucial

(23:51):
to the development of humankind. But it made me into the kind of person who was later able to write things that were read by people all over the world, and that had an influence on people. And I worry about the brain drain, not just among journalists, but among writers, among artists, you know, people who do illustrations and stuff. Eventually, musicians,

(24:13):
at least some kinds of musicians, will probably also run up against this, where the stuff that made it easy for people breaking in to get a little bit of work, work that would hone their skills and allow them to live doing the thing that they're interested in, is going to disappear. And more and more of the stuff that we kind of casually, low-level consume, not

(24:34):
our high art, not our favorite movies, not our favorite books,
but the stuff that we encounter when we stumble upon
a web page or like in a commercial or whatever,
will be increasingly made by AIS, and that AI will
be pulling from an increasingly narrow set of things that
humans made because less humans will get that in tree
level work, and that is there's something concerning there that

(24:55):
is something that worries me about the future of just creativity. Yeah, and I think, I mean, two points. One is just to kind of be devil's advocate a little bit, because I do sympathize and I think you're right. But to be a little bit devil's advocate: it might be, on the flip side of the coin, that there's people that feel like they have artistic imagination and desires but

(25:15):
lack the technical ability, and suddenly they can paint, so to speak, by using these AI image generators. Maybe someone has some form of dyslexia, or English is their second language, or they're even, you know, a native speaker without any of these obstructions but just finds the writing process difficult, and maybe AI enables them to be a writer, to contribute.

(25:37):
So I could see, you know, there's going to be the pros and the negatives, and I don't know where the balance is, but I think you're right. That's thinking from a sort of passion project view. From a professional view, I do see the profession narrowing. If journalists are expected to work twice as quickly because they're all using chatbots, there's probably going to be

(25:58):
half as many of them, right? I mean, that's the economics. But this brings up a bigger issue, which is, I do think what you're hitting on is there are these long-term risks, that maybe AI is gonna fuel this rebellion of robots and this, you know, maybe. But again, we have a social, political, economic world

(26:19):
we live in, and I just think, let's really focus on the issues we have now. That's not discounting the future. It's not like, let's burn a bunch of carbon-emitting fuels because who cares about climate change, that's our grandkids' problem. Yeah, this is different. It's like, let's think about the jobs, the world. I mean, another way to put this is, if we mess up our economy and mess up our democracy, by people losing jobs and mass

(26:42):
protests and losing trust in the government, and there's just an erosion of truth, we're not going to be able to handle climate change or any of these big AI, you know, singularity type of risks. So what I feel like is, let's focus on what keeps our economy and our sanity and our humanity well. Let's keep this

(27:03):
fabric of society together now, so that we're more equipped in the future to handle all the risks, AI and otherwise. But this goes back to what you're saying, which is, these are real issues in the short term, and if we don't address them, if we get distracted by the long term, we're not going to be ready to address the long term. Even if we think about it now,

(27:23):
we'll be so distracted and so dismayed. Yeah, so I think we have to be practical here. I agree. And I also, I think it's a valid point that you make, that while these are tools that will reduce options for some people, they are also tools that create options, that can be used for the creation of art, of culture. I do think some people

(27:44):
I know have brought up Photoshop when I talk about my concerns with AI, and are, like, you know, there were a lot of people, draftsmen and whatnot, who were concerned when Photoshop hit, because it was a threat to some of the things that they did for money. And Photoshop, as a tool, and tools like it, effectively have created whole forms of art that didn't exist, or didn't exist in the same fashion, before. And that's

(28:07):
not nothing. I think it's kind of worth, I don't want to be kind of just on the edge of tragedy here. You know, there's a lot of different ways this could go, and they're not all bad. I think we're all used to calamity right now, so much so that we potentially expect it in situations where it's not the inevitable outcome. Well,

(28:30):
I mean, I think one way to kind of boil a lot of that down is: we can adapt. We just need time to do so, to many things. And what's really challenging and frustrating now is the pace is so fast. It's not just an illusion, and it's not just, oh, if you don't pay attention to AI; it really is fast. It's very, very hard for us

(28:50):
to adapt. So, just thinking of the Internet, we got a lot, like, individuals as users and tech companies got a lot better at dealing with clickbait. Right? YouTube had tons of bait in it. They figured out ways to demote that to some extent. We got a lot better at keeping fake news out of the high search rankings in Google, like I mentioned. A lot of these problems that came up were not perfectly addressed, not even close. But there was

(29:13):
significant progress, and that's often understated. But if these problems are coming so fast and so intense, it's a lot to adapt to. And that's really the challenge, the pace. And I think we're seeing a very, very breakneck pace. That's really hard. Now, does that mean you're on the side of, like, Elon Musk and some of those folks who just signed that letter, being like,

(29:34):
maybe we should put a pause on AI research? Because, you know, I'm not one hundred percent against it. Again, I kind of am, like, man, I wish we'd been having this conversation when Facebook dropped or YouTube dropped. But I don't think that's a realistic thing, I'll say that. But I do. Look, yeah, yeah, so I would say, no,

(29:54):
I'm not, I'm not in favor of that. For one thing, I mean, in a very practical sense, you think all these companies that are putting billions of dollars into these investments in AI are all going to sit around saying, you know what, let's just not do this for a few months? Of course not. So here's what I think: they're not going to slow down. What's going to happen

(30:15):
is going to happen. Even if some players decide to be responsible and slow down, guess what that means: the only people plunging ahead are going to be the irresponsible ones. So, what I think we need to do is, I don't think we can really slow that down, so what about the flip side? I think we need to accelerate public education on artificial intelligence. I think we need to accelerate government legislation, regulation, international cooperation. I don't think we

(30:41):
can solve this by slowing AI down. I do think we need to find a way to speed up our democratic processes. It's taken us how many years to pass basically nothing about social media in the US, and some mixed results in Europe. Yeah, that's the problem, right? If we could work faster, then I think we could keep up. And I think that's actually the

(31:02):
long-term, like, practical survival thing that I hope we get from this, is like, yeah, we've always needed to be more careful about the things that we expose billions of people to suddenly. It should have happened before now. But I hope that, I hope the fact that AI, because of James Cameron, is

(31:24):
coded into our brains to be something that triggers a little bit of panic in people, I hope that, rather than reacting with panic, it leads to a more intelligent and considered state of affairs when potentially embracing technologies that are going to change life for huge numbers of people. That's right. And I think we have an opportunity here to experiment and explore and try,

(31:45):
and that is kind of what I was aiming for. And that thread is, again, I love that article that, you know, you mentioned at the beginning. But if we start going down this road of hype, there is a danger that we're going to fall into these traps. And I think, let's stay grounded, let's stay practical, let's really identify the risks. Not that I'm some guru and know what they are, but it's almost easier to see what's

(32:06):
not true than what is true. Yeah. And that's, I think, let's all try to police each other and make sure we're focusing on practical things that really are manageable, that really are genuine risks that are impacting people, that are impacting people today, and especially ones that are impacting marginalized populations. Yes. So, I think, let's hope we learn these lessons. And yeah, I am not optimistic, but I'm

(32:29):
not as cynical. I think there's a lot of important discussions happening now. Let's just say there's a lot more discussion now than we had with social media, and maybe that's a good thing. Yeah. Well, I think that's a good note to end on. Noah, did you have anything you kind of wanted to plug before we roll out here? No, I just, I think it's a great topic that everyone can be involved in, and

(32:52):
my plug is just: don't be intimidated. Don't be afraid. I am writing a book, that's not going to come out for a couple of years, that's trying to help empower people to kind of be part of these conversations. But that's far off. I just want to say, broadly, don't be intimidated, and don't fall for this narrative that sometimes happens in tech communities that, oh, you know, I'm

(33:12):
not a tech person, I don't have a chance to understand this. This stuff affects all of us, and how it affects you matters, and your opinion matters, and your voice matters. And we're all part of social media, we're all very soon going to be part of AI and chatbots. So don't, don't be afraid to join the conversation. You don't need any technical background, because I think the subject is just as much sociological as technical. It's about people.

(33:35):
I think that's a great point to end on. Thank you so much, Noah, really appreciate your time. And everybody else, have a nice day. You have a nice day too. Also, thanks to you, it was lots of fun. It Could Happen Here is a production of Cool Zone Media. For more podcasts from Cool Zone Media,

(33:56):
visit our website coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. You can find sources for It Could Happen Here, updated monthly, at coolzonemedia dot com slash sources. Thanks for listening.
