Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome
to Tech Stuff. I'm your host, Jonathan Strickland. I'm an
executive producer with iHeartRadio, and how the tech are you? So,
last week I said I would do an episode about
(00:25):
doctor Geoffrey Hinton, the so-called godfather of AI. My
dog is very interested in this, as he whines in
the background. And as I publish this, we're into the
fifth month of twenty twenty three, and I still feel
pretty good about calling this the Year of AI. While
artificial intelligence has obviously been a discipline for decades, with
(00:49):
lots of impressive displays and exhibitions and developments throughout the years,
the buzz around and attention to AI fields has really hit
a high point this year, largely driven by stuff like
large language models, or LLMs, as well as the chatbots built
on top of them that seem to be pretty knowledgeable,
(01:12):
almost human in their capabilities. Plus you throw in some
image and video and audio capabilities that allow us to
use a machine to create all sorts of stuff, and
you got yourself something that the average person can at
least recognize as AI. A lot of AI applications historically
have been so far behind the scenes that you might
(01:36):
not even recognize it as artificial intelligence, or you might
not think of it in that context. But now we're
getting to a point where there's at least the appearance
of machines behaving similarly to people within certain contexts, and
it becomes way easier for the average person to say, Wow,
hang on, what's going on now? I say that because,
(01:58):
as I have pointed out in this show so many
many times, there are lots of different aspects of artificial intelligence,
some of which have been around for many years, and
some of them have even been causing problems for many years.
See also facial recognition technology and the fact that bias
in systems can lead to really terrible consequences in the
(02:19):
real world. And today I wanted to talk about how
some of the folks in the AI field are voicing
concerns that they have around AI and AI's evolution,
and also its deployment and how it could be a
destructive tool in the future. Now, if you've been listening
to me for a while, you know I try to
take a very thoughtful approach to this. I think it's
(02:42):
important to understand the capabilities of AI, and it's also
important to understand the potential misuses of AI, or at
least the unintended consequences of using AI. But I also
want to try to avoid FUD, that is, fear, uncertainty,
and doubt, which can err on the side of being
(03:03):
an alarmist. So I think that we should be concerned,
but so far I haven't been ready to push the
panic button just yet for AI. But maybe that's about
to change, because while I've been trying to wrap my
brain around this, a person like doctor Geoffrey Hinton has
come forward with his own concerns about AI. And if
(03:24):
doctor Hinton is concerned, I should probably listen. And that's
because doctor Hinton has been at the cutting edge of
AI development for years, for decades, particularly in fields like
artificial neural networks and deep neural networks. He
recently resigned from his position at Google, where he had
(03:45):
been working in AI research, at age seventy five. He's
certainly at a point in his life where retirement would
seem pretty natural. You would just think that he would
come to the conclusion of, yes, it's time for me
to rest. But his decision was made at least in
part so that he could speak out about AI and
the dangers he considers to be important without considering how
(04:08):
it would impact Google. And that's from doctor Hinton himself.
He posted that on Twitter, where he said he was
doing this without considering how it would impact Google.
He was addressing a New York Times article that implied
he had left Google so that he could criticize Google
in particular. He was quick to say that he felt
Google had been pretty responsible in its pursuit of AI,
(04:31):
at least arguably until relatively recently. So let's learn a
bit about doctor Hinton and his background, the work that
he pursued, and what his concerns around AI actually cover,
and maybe along the way we'll figure out some questions
that we need to answer at some point, implications that
(04:52):
need to be considered, and perhaps choices we absolutely should
not make if we want to create helpful AI that
provides a net benefit rather than something that you know,
creates the terminator or how or whatever. And I am
being a bit flippant, but there are reasons we should
have some concerns, even if they don't involve single minded
(05:14):
cyborg soldiers. Geoffrey Hinton was born in nineteen forty seven
in London, England. He attended the University of Cambridge and
graduated with a degree in experimental psychology in nineteen seventy,
which is an interesting starting point for someone who would
become deeply involved in computer science, and the background in
(05:35):
psychology is probably an important component for someone who would
contribute to the advancement of artificial intelligence in general and
neural networks in particular. In nineteen seventy eight, he earned
a PhD in artificial intelligence at the University of Edinburgh.
He transitioned into being an AI researcher, but it was
kind of a tough go in the UK. There just
(05:57):
really wasn't that much support and funding for AI research
over in the UK, so it was hard for him
to make much progress. So he decided to immigrate to
the United States, where he first worked as a researcher
at the University of California in San Diego, and then
he moved on to Carnegie Mellon University and he worked
(06:18):
at Carnegie Mellon as a professor from nineteen eighty two
to nineteen eighty seven, but by the late eighties he
made a decision to relocate to Canada and you might say, well,
why would you go to Canada when you were already
working in AI in the United States. I mean, the
US has spent billions of dollars in research and development
(06:39):
in the technology field. Well, his primary reason was because
the main source for research funding in AI at the
time came from the Department of Defense, and doctor Hinton
wasn't comfortable with the idea of working on machine intelligence
that was through military backing, because the presumption is whatever
(07:00):
work you create is ultimately going to be put
to use by the Department of Defense, and it's reasonable
to assume that at least some of those uses could
be weaponized, and Hinton didn't want to contribute to work
that could later be used to harm or kill others,
so he would rather sidestep that and get funding from
(07:21):
other sources. So he settled in Toronto, Canada. He took
on more academic roles. He continued to be a professor. He
also continued to work in the field of AI research,
specifically in artificial neural networks and deep learning approaches, and
we will talk more about those in just a little bit.
In twenty twelve, he co-founded a company with two
(07:44):
of his students after publishing a paper on deep learning.
So his paper got the attention of some really smart
people around the world, and before Hinton knew it, he
was being courted by some really big companies, companies that
had super deep pockets and wanted to hire him and
(08:04):
a couple of his students on to work in the
field of AI research. So he got one offer from
the Chinese company Baidu that would have had him and
his two students work for the company for a few
years in return for twelve million smackaroos' worth of compensation.
That's a healthy salary. But Hinton also had other folks
(08:26):
who were potentially interested in his work, and he also
figured it would be far more lucrative if he created
a company with his students, if they made a company
together that could then be acquired, so instead of getting
hired as individuals, they would have a company that would
have to be purchased. And so that's when he and
(08:46):
these two students incorporated DNN Research. The DNN stands
for Deep Neural Networks. Hinton then took this brand new
company which really just had three employees, including himself, and
had no products and no services, no business plan, nothing
other than the fact that it was incorporated, and then
(09:09):
he put it up for auction. The actual auction took
place in Lake Tahoe during a conference on AI and
machine learning, and Baidu participated, but so did Microsoft, Google,
and an AI research company called DeepMind, which a couple
of years later would become a Google subsidiary of its own. Now,
(09:32):
DeepMind was the first company to bow out of the auction.
It just did not have the resources of these three
giant tech companies. Microsoft then followed. Actually, Microsoft kind of
bounced in and out of the auction a couple of
times before finally throwing in the towel, and the bidding
war came down to Google versus Baidu, and it
(09:53):
just kept going and going and going. Once the price
hit an astounding forty-four million dollars, and keep in
mind DNN Research had only been around for a very
short while and had no products or services to its name,
doctor Hinton called the auction closed and the company went
to its new owner, Google. Reportedly, the people
(10:14):
at Google were actually surprised that he stopped the auction
at that point, because they figured that he was leaving
millions of dollars on the table that the bidding war
would have continued between Google and Baidu, and he
could have gotten more for it, but doctor Hinton was
more concerned with working for Google rather than for
Baidu, and felt that forty-four million dollars was more
(10:36):
than enough. So that's a pretty you know, mature approach
as opposed to let's take every cent we can grab.
Also in twenty twelve, he won an award when he
co-invented a deep learning model called AlexNet, named
after the other co-creator, his student, Alex Krizhevsky. This
(10:57):
was bigger than just a two person operation. By the way,
it's not like Alex Krizhevsky and doctor Hinton were the
only two to work on it, but they were the
leads on this project and it was named after Alex. Alex,
by the way, was also one of the two students
who was part of DNN research, the other being a
student named Ilya Sutskever. And my apologies for the butchering
(11:20):
of pronunciation, but the learning model AlexNet focused, quite literally,
I guess you could say, on image recognition and participated
in a competition in which the model proved to have
an eighty five percent accuracy rate. And while it's a
trivial thing for a human to look at a photo
and say something like that's a bunny rabbit. It's not
(11:41):
so trivial to create a way for computers to be
able to do the same thing. So this eighty-five
percent accuracy rate was like a stake
in the ground saying, we have made a massive leap
ahead with machine learning and artificial intelligence. It was one
of the reasons why DNN Research was so highly sought after,
(12:03):
and AlexNet wasn't just an impressive approach toward machine learning.
It really got enough buzz that money began to pour
into deep learning projects everywhere, not just with doctor Hinton
and his students. Like, we literally started to see more
development in the discipline as a whole because this was
(12:24):
such an impressive display. From twenty twelve until just this year,
doctor Hinton worked in AI research and deep learning, in
particular over at Google. One of his two students, Ilya Sutskever,
actually would leave Google to join a little AI nonprofit
called OpenAI. I mean, originally it was a nonprofit,
(12:45):
and technically the nonprofit part of OpenAI is still
a parent organization, but really the for-profit arm of
OpenAI is in the news way more frequently these days.
In twenty eighteen, doctor Hinton was a co-recipient of
the Turing Award. This is a prestigious honor for those in
the computing field. Some people even refer to it as
(13:06):
being the equivalent to a Nobel Prize. And now he's
stepping forward with concerns relating to the work he dedicated
his life to. Now we're going to take a quick break.
When we come back, we're going to talk about deep
neural networks and what they do within the realm of
machine learning. But first these messages. Okay, we're back. Let's
(13:40):
talk about doctor Hinton's work and deep neural networks. Now,
as you might imagine, this subject gets really complicated. It's
really nuanced, it's technical, and as I'm sure you have
no need to imagine, my understanding of deep neural networks
is pretty limited. I mean, you could call it surface
level and I wouldn't be able to disagree. So we're
(14:00):
going to paint this topic in broad strokes. And I'm
doing this not to dumb it down, but rather to
do my best to kind of get across the general
way it works without making too many egregious errors. Along
the way. So first up, the goal of a deep
neural network is to provide a learning mechanism that mimics
(14:23):
the human brain, but using a computer rather than a
human brain. So, for the purposes of an overly simple
thought experiment, imagine you've got a black box. It's an
opaque black box. Now, one side of the box allows
you to put something in, and the other side of
the box allows stuff to come out. And let's say
(14:46):
that you are putting in one thing and it transforms
in some way inside the box and comes out as
something else. That's the general thought here, because I'm feeling
a little peckish and a little puckish. Let's say that
you decide to use as your inputs the ingredients
you would need for a pizza, and you're shoving that
(15:08):
into the box. So we're talking stuff like pizza dough
and some sauce and some cheese and any toppings you like,
and you shove that into the input in the box,
and then the output shoots out a cowl zone. Well shucks,
you think, unless you're Ben Wyatt from Parks and Rec,
in which case you celebrate because you think calzones
are superior to pizza in every way. But assuming you're
(15:30):
not Ben Wyatt, you say, that's not what I wanted.
I wanted to get a pizza, not a calzone. So
you have to open up the box, you have to
adjust some stuff inside it, you have to close it
all up and try it again, and you keep doing
this over and over until you get a pizza, a
properly cooked and prepared pizza. This is sort of similar
(15:50):
to how computer scientists perform supervised learning with artificial neural networks,
because that box represents what we call hidden layers. There
could be lots of hidden layers, and these are layers
of computer nodes that serve as artificial neurons, and pathways
form between these different nodes as they process information. So
(16:15):
when you put input into the system, that input goes
to a node and it begins to sort the data
based on some criteria that the system has been trained
on, whatever the purpose of the system actually is. It's kind
of like, you know, let's say it's for recognizing bunnies,
since we use that example earlier. So you feed it
a whole bunch of images, and the node takes the
(16:37):
data and passes the data to another node a layer
down. It chooses the node based
on some transformational function at that artificial neuron. Right, so
you can think: data comes in, the neuron performs a
transformational function on this data, and based on that result, it
(16:59):
goes to one node or another, and then the process
repeats, and it does this again and again until it
comes out the output side, where, like in our example,
we figure out whether the machine is able to recognize
if a picture has a bunny in it or not.
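If you want to see that flow in miniature, here's a tiny Python sketch of a forward pass through one hidden layer. Everything here is made up for illustration, the layer sizes, the random weights, the four numbers standing in for an image; a real image recognizer has millions of learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up weights for a tiny network: 4 inputs -> 3 hidden nodes -> 1 output.
# In a real system these values would be learned during training.
w_hidden = rng.normal(size=(4, 3))
w_output = rng.normal(size=(3, 1))

def sigmoid(x):
    # The "transformational function": squashes any number into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def forward(features):
    # Data flows in, each layer applies a weighted sum plus the
    # transformational function, and the result flows to the next layer.
    hidden = sigmoid(features @ w_hidden)
    output = sigmoid(hidden @ w_output)
    return output  # e.g., a score for "this image contains a bunny"

# Pretend these four numbers summarize an image.
print(forward(np.array([0.2, 0.9, 0.1, 0.4])))
```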
So you feed millions, tens of millions, hundreds of millions
of pictures to this system to train it. When you
(17:22):
start off, you might be doing this with a bunch
of images that you've already determined whether or not there
are bunnies in them. So you've got a controlled set
of data that you're feeding, just for the purposes of
training your system. And at the end, after it's sorted
through those images, you evaluate the system to see how
(17:43):
well it did in figuring out whether an image had
a bunny in it or not. Maybe in some of
the pictures it misses a bunny. Maybe in some pictures
it thinks there's a bunny there and there's not. And
then you might go in and start to adjust the
weights on those artificial neurons. This is the thing that
creates that transformational function. You might tweak those transformational functions
(18:06):
a little bit. You might start closest to the output
and work your way back. That's called backpropagation. And
what you're trying to do is adjust all these settings
so that it is more accurate the next time you
feed all the images through, and you do it again,
and you might do this dozens or hundreds of times
(18:26):
in an effort to really refine your model so that
it gets better and better at identifying the pictures that
have bunnies in them.
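Here's a toy Python sketch of that training rhythm: predict, measure the error at the output, nudge the weights to shrink it, and repeat. To keep it short it trains a single artificial neuron on made-up data rather than backpropagating through the many layers of a real image model, but the loop follows the same idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up training set: 100 examples, 4 features each, with labels
# of 1 ("bunny") or 0 ("no bunny"). Real systems use labeled images.
features = rng.normal(size=(100, 4))
labels = (features[:, 0] + features[:, 1] > 0).astype(float)

weights = np.zeros(4)
learning_rate = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Each pass: predict, measure the error, adjust the weights a little
# in the direction that reduces the error, then do it all again.
for epoch in range(200):
    predictions = sigmoid(features @ weights)
    error = predictions - labels
    gradient = features.T @ error / len(labels)
    weights -= learning_rate * gradient

accuracy = np.mean((sigmoid(features @ weights) > 0.5) == labels)
print(f"accuracy after training: {accuracy:.0%}")
```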
And then ideally you get the system to a point where you could just feed it
raw data, like you haven't even looked at these images.
You're just dumping millions of images in and you're letting
it sort it through. And because it has reached the
(18:47):
level of accuracy that it's at, because you've trained it
for so long, you don't even have to worry so
much about whether or not it caught all the images
or if it misidentified some. There's probably going to be
some error in there, but if your accuracy level is
high enough, then it's possibly good enough for whatever purpose
you've built it for. And yeah, the more data you
(19:09):
use to train your machine learning model in general, the
better it will perform, because it'll start to eliminate things
like outliers. And while image recognition is just one of
the more famous uses for deep neural networks in machine learning,
it is clearly not the only one. The one we've
been hearing about a lot lately involves large language models
(19:30):
or LLMs, like I mentioned at the top of the show.
So imagine feeding millions or even billions of documents to
a neural network that's trained to recognize patterns in language.
So you're feeding all sorts of stuff to this model,
and as you do, the system quote unquote learns how
(19:52):
words follow each other, like which words are likely to
follow other words. You probably wouldn't go so far as
to say the system understands a language like English,
but it does have an incredibly sophisticated statistical model that
breaks down how likely one word is to follow another.
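Here's a toy Python sketch of that statistical idea: count which words follow which in a scrap of text, then pick a likely next word. A real large language model is vastly bigger and considers far more context than one previous word, but the spirit is similar.

```python
import random
from collections import Counter, defaultdict

# A tiny stand-in corpus; a real model trains on billions of documents.
text = "nuclear power nuclear bomb nuclear radiation nuclear power plant"
words = text.split()

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def likely_next(word):
    # Sample a next word in proportion to how often it followed
    # the given word in the training text.
    counts = follow_counts[word]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

print(likely_next("nuclear"))  # "power", "bomb", or "radiation" -- never "penguin"
```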
So you can think of it a little bit like
(20:12):
a word association game. You've probably played something like this
at some point or another. Someone gives you a word
and you're supposed to say the first word that comes
to your mind. So if I were to say the
word nuclear, you might think power or bomb or radiation.
You probably wouldn't think penguin or Chesterfield sofa; those are just not
(20:35):
likely to pop up. Statistically, it's unlikely. Well, you can
kind of think of the large language model as being
an enormous version of that. So as these large language
models process increasing amounts of information, and as the neural
network experiences refinement over countless learning runs, you end up
with a system that is capable of doing some pretty
(20:57):
extraordinary things, at least on the surface. Like, it can
pull information together to answer questions about practically any topic. Unfortunately,
it can also invent answers by following a statistical probability
when it doesn't actually contain the answers to the question
you asked. This means you can end up with an
answer that isn't accurate at all, but it follows a
(21:21):
statistical model where each word is, from a probability standpoint,
the perfect word to go at that point in a sentence,
which is a weird thing to think about, right, Like,
the answer you get isn't right, but each word is,
statistically speaking, the best one to put in that place
in lieu of any actual information. Now here's another thing
(21:45):
to consider. These tools can do stuff like build code.
This code isn't always reliable, it's not always right, but
sometimes it is. So maybe you use a tool like
GPT to look over the code that was made by
a group of engineers, and you do it to search
for errors, like you're using this to look for mistakes
(22:06):
that were made in the code. Or maybe you use
it to see if there's a way to make the
code that was written more elegant or efficient.
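As a rough, hypothetical sketch of what that might look like, here's a call to OpenAI's chat API using the twenty twenty three era openai Python package; the model name, prompt, and code snippet are all just illustrative, and anything the model says back needs a human sanity check.

```python
import openai  # pip install openai (2023-era interface)

openai.api_key = "YOUR_API_KEY"  # placeholder

code_to_review = """
def average(numbers):
    return sum(numbers) / len(numbers)  # crashes on an empty list
"""

# Ask the model to act as a reviewer. Treat its answer as a
# suggestion, not a verdict -- it can be confidently wrong.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Find bugs and suggest improvements:\n{code_to_review}"},
    ],
)

print(response.choices[0].message["content"])
```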
Maybe you figure you've reached a point where you don't even need
human engineers because the AI agent performs at a standard
that's high enough to replace them. Maybe you think it's
(22:27):
even better than what human engineers can do, and that
it's far faster, and that you can therefore develop and
deploy software at a pace that you couldn't before. So
the IT industry is in a particularly delicate place as
companies begin to explore how AI could augment or potentially
replace people. I go back to what IBM's CEO recently said.
(22:51):
He said that for the nearly eight thousand job openings that
the company has now put on a hiring freeze, he
might never hire a human to take one of those jobs. Instead,
he might rely on automation and AI to cover that job.
So it's not quite the same thing as firing someone
and then replacing them with a robot, but it is
(23:12):
giving a robot a job instead of a human being. Okay,
let's switch gears. Let's talk about AI in the arts,
because that's also a really relevant conversation right now. So
last year we already started to see debates about the
validity of AI generated images. Should an AI generated image
be considered art? We saw people submit AI generated paintings
(23:38):
into competitions, some of which ended up receiving awards and
then were subsequently either stripped of those awards, or, you know,
people got in trouble for using AI even when they
were you know, admitting to it in an effort to say, hey,
we're trying to start a conversation about AI and its
role in arts. So is art actually art if the
(24:00):
image is a product of a complex series of decisions
that aren't driven by imagination or creativity, but rather some
really weird statistical model that's so complicated that no one
really understands it. Or is it just a meaningless image?
You know, maybe it's an image that mimics specific artists,
(24:23):
but in itself it's nothing more than just a picture.
I mean, you know, drawing a perfect circle freehand with
no tools is really really hard for a human to do,
but it's a piece of cake for a computer. So
should we be astounded by a computer's ability to generate
a perfect circle? What about a computer's ability to mimic
the style of say, you know, Picasso or Dali. Beyond
(24:47):
visual arts, there are examples like writing and music. There's
the case of the song Heart on My Sleeve that
features the deepfake voices of Drake and The Weeknd,
so it sounds like Drake and The Weeknd on the song,
but these are just computer-generated voices. So what happens
when people can create new songs that feature an imitation
(25:08):
of an established artist's voice or style. You could have
fun finding out what it would sound like if the
Beatles wrote a song in the style of the Ramones.
But this kind of distraction can become really harmful to
actual human artists. Honestly, what this illustrates is a need
to create more comprehensive right to publicity and right to
(25:29):
personality laws to protect people from being imitated without their consent.
Going a bit further, recently, Spotify had to purge a
whole bunch of songs from its streaming service because AI
was gaming the system. So there's this company called Boomy,
and Boomy lets you create a song based on a prompt,
kind of similar to how ChatGPT will create a
(25:52):
text response to a prompt you type in a little
text field. So you could type something up like country
song in the style of Hank Williams with vocals like
Billie Eilish about going home after being away for many years,
and then Boomy would take this prompt and generate a
musical track for you, and then Boomy would actually release
(26:15):
that track on streaming services like Spotify. Now that's already
a bit sus because if you're using styles and voices
that actually originate with other people without their involvement or consent,
there's a problem with that. Even if there's not an
obvious law that you're violating, it's still an ethical issue.
(26:37):
But don't worry. It gets worse because someone maybe it
was Boomy, maybe it was one of Boomy's customers, I
don't know, but someone was trying to boost streams to
these AI generated songs because Boomy's business model was you
can create a song. You can use our AI to
(26:57):
create a song based on your prompts, and then we'll
post the song to streaming platforms and then we share
the royalties that are generated from the AI song. So
if your AI song is really popular, then you get
a payout, but Boomy takes a cut, So some money
goes to the user whose prompts served as the starting
point for that song, and the rest goes to Boomy. Well,
(27:20):
streaming royalties really don't amount to very much, so if
you want to start generating royalties, you need to get
like a crazy number of listens to a particular song.
So what better way to get a revenue bump than
to create a bot that artificially hits replay on a
track to get that number up into the stratosphere. And
(27:44):
if you think about it, it's a case of robots
making the music and robots listening to it. How insane
is that? Now, artificially running up those numbers hurts everyone
in the long run, even if the streaming platforms didn't
pick up on it. And don't worry, they did. Well,
if you did this long enough the industry would have
(28:05):
to revisit how royalties are paid out. The whole business
would change, and ultimately that could hurt legitimate artists in
the process, you know, artists who aren't relying on bots
to artificially drive up the popularity of their music. There
are a lot of negative consequences to that kind of scheme.
But the platforms have already begun to remove those types
(28:26):
of tracks in response to suspicious playback numbers. Like there's
nothing inherently wrong with creating a track like that, at
least not from a legal standpoint, but, you know, illegally
boosting the numbers, that is an issue. Okay, I've got
more to say about this, but we're going to take
another quick break and then we'll get back to doctor
(28:47):
Hinton's specific concerns with AI. Okay, so I mentioned AI
in the creative fields of things like, you know, the
visual arts and music. We've also got the current situation
(29:08):
here in the United States as I record this, it's
in May of twenty twenty three, I think I said
that at the top of the show, and the Writers
Guild of America, or WGA, is on strike. So this
union represents TV and film writers, and as they're on strike,
they cannot do any work in those fields. They can't
(29:29):
take any meetings, they can't discuss projects, nothing. One of
their many concerns, it's not the only one, but it
is one of them, is the role of AI in
the writing process in Hollywood. So the fear is that
studios will start to turn to AI in order to
do stuff like generate script ideas or maybe even a
(29:51):
first pass at a full story treatment, and then they
would turn to human writers to polish that idea up
into something that's conceivably watchable. But see, if you're
hired to do that, If you're hired to come in
and do a rewrite or a punch up on a script,
you make less than you would if you were writing
(30:11):
a new script from page one. So, in other words,
the fear is that studios will lean on AI to
avoid having to pay people to come up with great ideas.
They'll just use the AI to create ideas, and then
the humans have to turn these AI generated ideas into
something that's theoretically going to be a hit, and those
(30:33):
ideas aren't always going to be great. Now, to be fair,
the stuff humans make is not always great. See
also pretty much anything coming out of The Asylum. That studio
seems to be run by committee, specifically for the purposes
of creating trash. But it's a real concern, right,
the worry that, oh, you're going to undercut writers, You're
(30:54):
going to make it even harder to make a living
to be a professional writer in Hollywood, because you're going
to take out one of the more lucrative parts of
the job by shifting that over to AI, and then
everyone ends up making even less money while cost of
living continues to go up, and the studio ends up
(31:15):
doing it all in the justification of cutting costs and
increasing profits. It's those kinds of concerns that partly led
doctor Hinton to resign from Google. But again, that's
just part of it. There is this concern about how
AI once was intended to be a thing to augment
(31:36):
a person's capabilities in their job, but there's this legit
fear that it could be more of a replacement than
an augmentation. But there's more to it. So imagine a
scenario in which an AI agent is not only able
to design code to build a program, imagine that it's
(31:57):
also able to execute that code. So it's not just
creating software, it's able to run that software. Now, imagine
an AI agent that creates code intended to improve the
AI itself. Now, this is one of those concepts that's
really popular in a lot of science fiction, and it
also shows up in variations of the Singularity. I mentioned
(32:20):
the singularity recently in an episode, but I didn't define
it in that episode. So let me do that right now,
because honestly, you don't hear the term as frequently as
you did like a decade ago. But the idea of
the Singularity goes something like this: we eventually will reach
a point in technological development where there is a tipping point,
(32:44):
and after that tipping point, things will be evolving and
advancing so quickly that it becomes impossible to define the
present at any given moment, because from one minute to
the next, so much is advancing and changing that there's
no... like, the present is just change. That's it. It's
an era of incredibly, unimaginably rapid and constant evolution, and
(33:09):
it will encompass not just our technologies, but potentially even ourselves.
So some versions of the Singularity incorporate an idea where
humans integrate technology into themselves and augment their abilities, like
boosting their intelligence or giving them incredible skills, kind of
(33:31):
like The Matrix idea of, I know kung fu, right,
like that sort of thing. Some versions of the Singularity
instead say, nah, humans just kind of get rid of
our fleshy, mortal bodies and we become digital beings. We
find a way to transport our consciousness into the digital realm,
(33:53):
and we become one with machines and potentially one with
each other. It gets really speculative fictiony when you start
talking about the Singularity, and I am not convinced that
that kind of thing is ever going to happen, But
I do see the potential danger of having machines design
their own code and then be able to execute that code.
(34:14):
That could include things like malware that is able to
bypass antivirus detection because it's built on a new model
and it's not based off some previous version of malware
that could potentially be detected. That's a real possibility. It's
something to really be concerned about. We've also seen already
with AI hallucinations, how these systems can present misinformation as
(34:38):
if it's the real deal, and with such unintended consequences
in a pretty innocent application of AI, you are left
to wonder what kind of problems would occur with coding? Right,
we've already seen a problem that can occur just with
simple text based interactions. What kind of problems could occur
if we start to depend upon AI to build code? Now,
(35:03):
I think in a lot of cases we would just
end up with bad code. Like we'd have stuff that works,
but we'd also have stuff where, for some reason or another,
the AI introduced code that doesn't do anything, or it
causes the whole thing to crash, and so we just
end up with software that doesn't really work in those cases.
But there's enough doubt there to make us pause. Like,
(35:24):
maybe the code would work, but maybe it would do
something malicious or ultimately harmful. I'm assuming that's how doctor
Hinton feels based on his statements post resignation. I don't
want to put words into his mouth, but this is
kind of what I'm inferring based upon what he said.
Hinton is worried that deep neural networks and similar machine
(35:45):
learning techniques could be put to use in harmful, aggressive ways.
I mean this dates back to his decision to try
and avoid taking funding from the Department of Defense. Right,
he's worried about the stuff like AI controlled machines that
could be used in warfare, and we've already seen some
elements of that with things like drones. So it's a
(36:06):
reasonable fear to have, and a lot of experts in
AI have struggled with this and have, you know, campaigned
to have bans of a kind put on AI-controlled
weaponry and warfare materials. And there's a real fear that
in some countries, the push to create such tools will
be very hard to avoid, that there won't be these
(36:28):
checks in place or people concerned. It will be more
of just an overall drive to develop those kinds
of tools and to thus dominate by having those tools
in your arsenal. That can lead to a situation where
everybody else rushes to weaponize AI because they're worried that
everyone else is already doing it and that they're going
to get left behind and thus be in a vulnerable position.
(36:51):
So it becomes kind of a self fulfilling prophecy. And
in that case, it's the AI experts that we have
to rely on to push back against that trend, the people who
are actually building the systems. We have to hope that
they will do so in a way that won't perpetuate harm.
But that's a big hope to place on that particular
(37:14):
group of people. Now, not everyone is as worried about AI,
at least not in the short term. Stanford researchers recently
published a paper titled Are Emergent Abilities of Large Language
Models a Mirage? So, an emergent ability refers to a
system developing some sort of skill or function that
it was not formally trained or programmed to do.
(37:37):
For example, let's say you train a large language model
to answer questions that are posed in English, but then
you find out that it's also able to translate responses
into Spanish perfectly, even though you didn't design the system
to quote unquote understand Spanish. The researchers at Stanford concluded
that these apparent emergent abilities are in fact mirages. They
(38:01):
are not real; they are illusions. So the researchers are
saying that companies like Google and open ai might look
at the results of a model and then use a
metric that suggests the ability that was displayed was an
emergent one. But if they had chosen a different metric,
if they had looked at the output from a different
point of view, in other words, the illusion of emergent
(38:24):
behavior would fade.
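Here's a little Python sketch of that argument with invented numbers. Suppose a model's per-token accuracy improves smoothly as the model gets bigger. A smooth metric shows steady progress, while an all-or-nothing metric, like demanding an exact five-token answer, sits near zero and then appears to leap, which can look like an emergent ability.

```python
# Invented numbers: per-token accuracy creeping up with model size.
model_sizes = [1, 2, 4, 8, 16, 32]                # billions of parameters
per_token_accuracy = [0.5, 0.6, 0.7, 0.8, 0.9, 0.95]

ANSWER_LENGTH = 5  # every token must be right for an "exact match"

for size, p in zip(model_sizes, per_token_accuracy):
    exact_match = p ** ANSWER_LENGTH  # all-or-nothing metric
    print(f"{size:>3}B params | per-token {p:.2f} | exact match {exact_match:.2f}")

# The per-token column improves gradually; the exact-match column
# hovers near zero and then seems to jump -- same model, different metric.
```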
So it just depends on how you look at it, whether it looks like, oh,
this system is doing something it wasn't designed to do,
or oh, by looking at it this way, we see
it's performing exactly as it was designed to do. So
we're getting real Obi-Wan Kenobi here with the certain
point of view stuff. All right, but how worried should
(38:47):
we be about AI? Sadly, I think the answer to
that is really complicated. I wish I could just give
you a definitive answer, from terrified to miffed, but I
think it largely depends upon who you are and what
you do for a living, and how much you depend
upon automated technology. Honestly, I think, for example, that folks
(39:08):
who write code have a legitimate reason to be concerned,
not because I think AI is going to do their
job better, but rather because I have very little faith
in software company executives to avoid the temptation to push
their chips in on a big, long shot bet on AI. So,
in other words, I worry that business leaders are going
(39:28):
to make some poor decisions in an effort to cut
costs and maximize efficiency, and then get rid of human
engineers and rely on AI to build code, and then
we're going to end up with a really rough period
of subpar software. Now, in the long run, we might
either see companies that previously had discarded their human programmers
(39:51):
return to them and say, gosh, it turns out, yeah,
we need you, because the stuff that AI is
making is not consistent or good quality. But then again,
we might see AI-generated code improve to a point
where, you know, it is superior to what humans were making.
It's really impossible to say right now; we can't say
for sure which direction it's going to go in, and
(40:12):
it may even be more messy than that. Right, it
might be that in some cases the code generated by AI
is superior and in other cases it's inferior. It may
not be, you know, an industry wide thing we can
have a firm statement about, and maybe I should put
more faith in companies. I just know I've seen a
lot of decisions and a lot of different organizations over
(40:35):
the years that have proven to be really short sighted
strategies and ultimately harmful all in the name of returning
shareholder value. So I guess you could call me a cynic,
but I just feel like I've seen it a lot,
so I would not be surprised. And in fact, that
IBM CEO statement of potentially filling around seventy-eight
(40:57):
hundred jobs total, I think is the real estimate, with
AI instead of with humans. That kind of speaks to
my worries. I do not think that we are on
the precipice of AI spiraling out of control and becoming
this malevolent superhuman intelligence that's going to ultimately decide to
(41:19):
get rid of all of us. However, that's just my opinion,
and Goodness knows, I don't have the experience or the
expertise of someone like doctor Hinton, so I'm taking this
from what you could argue is a largely uninformed opinion.
I think that's a fair assessment. I do think AI
is posing real problems right here and now, and that
(41:42):
we have to consider those problems and we need to
address them, either in how we are developing and deploying AI,
or how we create legislation to protect humans who otherwise
might see their livelihood threatened. I do think we need
to revisit right to personality and right to publicity style
laws to make sure that the laws incorporate things like
deepfake video and audio.
(42:04):
You know, right now, we have protections in place if someone uses
your likeness without your permission. There are very specific rules about that.
It's not like if your image pops up, you know,
because someone took a photo and you happen to be
in the background. It's not like that's a case that
you're going to have a really strong, you know, legal
(42:25):
backing on if you want to protest the use of it.
But let's say you're a celebrity and someone runs your
image next to a product that you did not agree
to endorse. There are protections for that, but those protections
are largely for just likenesses like your image. It doesn't
necessarily cover things like the sound of your voice or
the style of music you create. The laws need to
(42:49):
be rewritten or tweaked in order to cover those cases, because,
I mean, it's a new world where that sort of
thing is possible. So right now, there's no recourse
for someone who hears a song that sounds
like they sang it, but they didn't. There's nothing really
they can do. And you have to be careful with
(43:10):
how you word such laws because there are things like
you know, parody being protected by fair use. So if
you wanted to create a parody of a song and
you hire someone who sounds kind of like the musical
artist you're parodying, that shouldn't necessarily be illegal. But if
you're trying to pass it off as if it were
the artists themselves, that's a different story. So yeah, it's complicated,
(43:35):
it's messy. It's complicated not just on the tech side,
but on the legislative side, the cultural side, military as well.
I do think that doctor Hinton has some legitimate concerns.
I think some of them, at least I hope anyway,
are a little premature. I hope the things he's
(43:56):
worried about are far enough out into the future that
we can actually take steps to prevent bad outcomes and negative consequences.
We'll never prevent all of them, because some of them
will be completely unintended, but I would like to see
them minimized at the very least. In the meantime, I'm
not going to panic about AI, but I'm giving it
(44:18):
a lot of side eye so it knows it needs
to stay in line. That's it. I hope you are
all well, and I'll talk to you again really soon.
Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio,
(44:41):
visit the iHeartRadio app, Apple Podcasts, or wherever you listen
to your favorite shows.