
October 30, 2024 17 mins

What would YOU like to hear about on Bloomberg? Help make shows like ours even better by taking our Bloomberg Audience Survey https://bit.ly/48b5Rdn
Watch Carol and Tim LIVE every day on YouTube: http://bit.ly/3vTiACF. Dr. Terrence Sejnowski, Professor at the University of California at San Diego, discusses his book ChatGPT and the Future of AI: The Deep Language Revolution.
Hosts: Carol Massar and Tim Stenovec. Producer: Paul Brennan.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios, podcasts, radio news.

Speaker 2 (00:07):
You're listening to Bloomberg Businessweek with Carol Massar and
Tim Stenovec on Bloomberg Radio. Look, I said we're going
all in on AI. The author comes to our four
o'clock hour today. Alphabet just reported, we got that news
just now, Caroline Hyde and Mandeep Singh joining us.
Microsoft's GitHub agreeing to bake artificial intelligence models from Anthropic

(00:27):
and Google into a coding assistant used by millions of
software developers. And then there was that Wall Street Journal
report earlier today, Carol. Elon Musk's xAI is in talks
to raise funding at a valuation of forty billion dollars.

Speaker 1 (00:40):
Yeah, that's real money. So much of the reason we
are talking about AI is because of Geoffrey Hinton, Nobel
Prize winner in physics for work in AI, known as
the godfather of AI. He was on the most recent
episode of Wall Street Week with David Westin.

Speaker 3 (00:54):
These big AI systems create sub goals. Now, the problem
with that is, if you give something the ability to
create sub goals, it will quickly realize there's one particular sub goal
that's almost always useful, which is, if I just get
more control over the world, that will
help me achieve everything I want to achieve. So these

(01:15):
things will realize that very quickly. Even if they've got
no self interest, they'll understand that, if I get
more control, I'll be better at doing what they want
me to do, and so they will try and
get more control. That's the beginning of a very slippery slope.

Speaker 1 (01:29):
A slippery slope indeed, perhaps. Our next guest worked with
Geoffrey Hinton developing the Boltzmann machine, the first
learning algorithm for multilayer neural networks. And if you're scratching
your head, you're going to get an explainer in
just a moment.
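For readers who want a one-line version of that explainer: the Boltzmann machine's textbook learning rule, not quoted in the episode but standard in the literature, adjusts each connection weight by comparing correlations measured with the data clamped against correlations from the model's own free-running samples:

\Delta w_{ij} = \eta \left( \langle s_i s_j \rangle_{\text{data}} - \langle s_i s_j \rangle_{\text{model}} \right)

where s_i and s_j are the binary states of units i and j, \eta is a learning rate, and the two averages are taken in the clamped phase and the free-running (sampling) phase respectively.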

Speaker 2 (01:42):
We're very pleased to have Dr. Terrence Sejnowski with
us today. He's Francis Crick Chair at the Salk Institute for
Biological Studies, Distinguished Professor at the University of California at
San Diego. He's also president of the Neural Information Processing
Systems Foundation. He joins us from San Diego. His new
book is out. It's called ChatGPT and the Future of AI,

(02:03):
The Deep Language Revolution. Terry, good to have you with us.
Congratulations on the book. You have been thinking about this
stuff for more than three decades at this point. I
think a lot of people have been talking about it
and thinking about investing in it for two years at
this point. Are you just saying every day, finally people

(02:25):
are taking seriously what I've been looking at for a generation?

Speaker 4 (02:30):
Well, it's great to be here to have this opportunity.
My book launched today, and I'm really excited about it.
I think that what's happening right now is really unbelievable
in terms of the breadth and the depth and the excitement.
And so I was there at the beginning. Geoffrey Hinton

(02:52):
and I collaborated in the nineteen eighties. All the learning
algorithms that are being used today for these large language
models and deep learning were developed by us back in
that era. And of course what we didn't have back
then were computers that were fast enough that could scale

(03:12):
up these models, to be, you know, able to
solve these very difficult problems in artificial intelligence. But
you mentioned the Neural Information Processing Systems Foundation. I'm the
president. We organize the biggest AI meeting, and in December we're
expecting sixteen thousand researchers to descend on Vancouver, right after, by

(03:36):
the way, Taylor Swift. She's the big headliner on Sunday,
but we have the rest of the week.

Speaker 1 (03:43):
So I want to ask you, how did you think
about neural networks and large language models in the nineteen eighties?
How do you think about them today?

Speaker 4 (03:52):
Well, we actually had a premonition that these large language models,
I should say, not large, they were small language models
back then, were really good at language. And that was a
particular project, a summer project for a graduate student in my
lab, called NetTalk, which was trained on a dictionary

(04:13):
to be able to pronounce English text. You know, if
you give it an article from the Wall Street Journal,
it would pronounce it in an understandable way. And
this, in linguistics, is a very difficult problem because English
is so irregular. There are a lot of regularities, but
you also have irregularities, and then you have rules for
the irregularities. But it really was amazing that a small,

(04:37):
tiny network with just a few hundred units and tens
of thousands of weights, the parameters, the connections between the
units could do that. It was like an amazing compression
of complexity. And now we know that these large language models,
the deep learning networks, they love language, and they are
capable of things that we never could have imagined.
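To make that scale concrete, here is a minimal sketch of a NetTalk-style network: a sliding window of letters feeds one hidden layer that predicts a phoneme class for the centre letter. The window length, hidden-layer size, and phoneme count below are illustrative assumptions, not the original 1986 configuration, and the training step (backpropagation on dictionary text-to-phoneme pairs) is only noted in a comment.

import numpy as np

# Minimal NetTalk-style sketch: a sliding window of letters predicts one phoneme
# for the centre letter. Sizes are illustrative placeholders, not the original model.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "   # lowercase letters plus space as word boundary
N_PHONEMES = 50                            # assumed number of phoneme classes
WINDOW = 7                                 # letters visible at once; centre letter is transcribed
N_INPUT = WINDOW * len(ALPHABET)           # concatenated one-hot encodings
N_HIDDEN = 120                             # "a few hundred units" in spirit

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (N_INPUT, N_HIDDEN))    # most of the "tens of thousands of weights"
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_PHONEMES))

def encode_window(text, centre):
    """One-hot encode the letter window centred on position `centre`."""
    x = np.zeros(N_INPUT)
    for k in range(WINDOW):
        i = centre - WINDOW // 2 + k
        ch = text[i] if 0 <= i < len(text) else " "
        if ch not in ALPHABET:
            ch = " "
        x[k * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return x

def predict_phoneme(x):
    """Single sigmoid hidden layer followed by a softmax over phoneme classes."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))
    logits = h @ W2
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Example: a (random, untrained) phoneme distribution for the centre letter of "hello".
probs = predict_phoneme(encode_window("hello", centre=2))
print(probs.argmax(), probs.max())
# Training would fit W1 and W2 by backpropagation on dictionary (text, phoneme) pairs.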

Speaker 2 (05:00):
Well, we have a few minutes with you now,
and then we're going to come back and have even
more time with you. But that's really what I wanted
to talk about: the idea of super intelligent AI. What
are we not thinking about? What's the threat out there?

Speaker 4 (05:18):
So you know, my good friend Jeff is very concerned,
and I think he's one of the smartest people I've
ever met. And if he's worried about it, then there
is some concern. However, I think that even if
you're concerned, it's very difficult to know when that's going

(05:38):
to happen, if it ever will happen. And there are
super forecasters out there, and this is from The Economist magazine,
who are much better than most people, experts at
predicting, you know, if and when there may be a
catastrophic or existential threat, and it turns out that, in fact,
they're not as concerned as the experts in AI.

(06:06):
I'm happy that someone is thinking about the worst case
outcome because if not, then if it ever happens, we're
in trouble. But right now, I'm more concerned about trying
to understand how they work mathematically and also to learn
more from the brain. After all, these were designed back then.

(06:27):
Jeff and I looked at the brain, the only existence
proof that you could solve any problem in AI. And you know,
we tried to build something that was based on similar principles.
So now we can continue. There's a lot more in
the brain we can learn from.

Speaker 2 (06:40):
But paint that picture for us, because I think a
lot of people are worried about doomsday scenarios here, and
if Geoffrey Hinton is worried about that stuff, I mean,
should we be worried about it?

Speaker 4 (06:52):
I think that we should be cautious. That
is to say, we should be constantly thinking along
the lines that Jeff is, in terms of what could
possibly happen, and, you know, be cautious and put in
precautions so that it can't happen.

Speaker 2 (07:12):
What sorry, I just yeah, go ahead.

Speaker 4 (07:18):
What I'm really concerned about are the unintended consequences, things
that you cannot predict. Something may happen that you know,
no one thought of, even Jeff.

Speaker 1 (07:27):
Yeah, and like, you know, we have learned, certainly, right,
great financial crisis, pandemic, like the unthinkable can
happen, and you throw technology into it and you just
kind of don't know where it's gonna go exactly. Okay,
so now I'm terrified. Okay, Terry, don't go anywhere. We're
gonna do some news. I do wonder what...

Speaker 2 (07:50):
Yeah, I think we have time for one more question
before we go.

Speaker 1 (07:54):
Well, so you know, okay, we have a minute and
then we're gonna take a break and come back. But
I just do wonder. You know, when you talk to people,
do you say, wait, this is really going to be
net net a good thing?

Speaker 4 (08:07):
Look, all new technologies have good and bad consequences, and
you try to mitigate the bad, and you have
to balance them, you know. Yeah, and right now it
looks like the good is way way ahead of the
bad in terms of the impact it may have on
us and society and businesses. But you know, like I say,

(08:28):
we have to be careful because we don't really know
where it's heading.

Speaker 1 (08:32):
You know what worries me too, and we will talk
about this maybe when we come back. I feel like
we throw around a lot of words, not you, but
all of us in general, like, you know, whether
it's, you know, AI, generative AI, and, like, you know,
and I don't know that we really understand what's going on,
and so it's hard to know what it could possibly become.
So we're going to pick your brain a little bit

(08:54):
more Terry when we come back.

Speaker 2 (08:55):
It's Terry Sejnowski. He's Francis Crick Chair at the Salk Institute
for Biological Studies, Distinguished Professor at the University of California
at San Diego, President of the Neural Information Processing Systems Foundation.
The new book out now: ChatGPT and the Future
of AI: The Deep Language Revolution. More with Dr. Sejnowski.

Speaker 1 (09:13):
Right after this, I want to get back to our guest.
We're talking with Dr. Terrence Sejnowski. He is Francis Crick
Chair at the Salk Institute for Biological Studies, Distinguished Professor
at the University of California at San Diego. He's also
president of the Neural Information Processing Systems Foundation, and he's
joining us from San Diego on this Tuesday. His new
book, ChatGPT and the Future of AI, The Deep

(09:34):
Language Revolution. I got to ask you because I am
still trying to understand and I get worried that we
throw these words around. Certainly not you, but we as
we try to understand this with now, you know, not
having full comprehension of what art artificial intelligence, the large
language models that we're talking about today, where it takes us.

(09:54):
Is it as subtle, as evolving, and as life changing as
the Internet was for us?

Speaker 4 (10:02):
So this is something that is emerging. And,
since the book was sent to press in the summer,
I have a Substack where I have tried to
fill in with, you know, the new things that are happening.
And I'm preparing a new, twelfth version of the blog

(10:22):
on the question of whether AI is overhyped or underhyped,
and, you know, I've thought a lot about this,
and you know, I think that it depends on the timescale.
I think that on the short timescale it is overhyped.
There's no doubt about it. There's just so much out there.
I mean, every day the newspapers are filled with AI

(10:43):
and your program. But I think in the long run
it's actually underhyped. I think the real change in
the Internet, for example, didn't occur within the first ten years.
It was much later. Again, unintended applications emerge that,
you know, have enormous impact on our lives, like social media.
So I think the same thing's going to happen with AI.

Speaker 1 (11:04):
But is it different? Like, I guess, I'm like,
what do you mean? Like, the Internet is, I
wanted to say, comfortable, but it's not, because there are some
really bad things that happen and we know that, right,
and that's the battle we have with social media. And
we want to talk to you about kind of regulatory
oversight of AI in a moment. But I'm just
trying to understand. You know, it does feel so

(11:27):
seamless and just such a part of everything we do.
But it hasn't necessarily replaced a ton of jobs. It's
created jobs, it's replaced some jobs, I guess you could say.
I'm just trying to understand, like, on what scale do
you put it? You mentioned the Internet, so is it
apples to apples or is it something else?

Speaker 4 (11:48):
No, well, first of all, it uses the Internet.
So, I mean, that's like the machinery that you
need to scale up and reach a large population.
But it's more intimate than the Internet in the following
sense, that it talks to us, right? I mean, it's

(12:09):
as if an alien landed on the Earth and could
talk to us in English, and it knew everything about,
you know, humans, history, everything, and the only thing
we can be sure of is that it's not human. But it's
really quite remarkable. Let me give you one example of
something that I was really surprised at when they did
a study of whether people who needed cognitive therapy preferred

(12:33):
real humans or AI. They preferred AI, which was really
quite remarkable. I didn't expect that. And part of the
reason is that the AI is not judgmental like humans.

Speaker 1 (12:46):
Well, wait, but doesn't it depend on the data? Like
we talk about, it wasn't...

Speaker 2 (12:50):
Getting trained on judgmental data.

Speaker 4 (12:54):
It was, you know... Actually, it's a good question. What
was it trained on? I think that it was fine
tuned with, you know, data from real subjects that
were talking with a doctor. But even without that,
I'll tell you something, again, that's shocking, which is that it

(13:15):
actually has empathy. These large language models can empathize with humans.
And why is it? How is that? Well, it actually
absorbed a lot of text out there, novels, you know, letters,
and Reddit and so forth, where empathy was
part of the discussion, and so it absorbed that

(13:37):
too well.

Speaker 2 (13:38):
I wanted to hear a little bit of your thoughts
on what we heard from Elon Musk a little earlier today.
He actually participated in a surprise conversation at the Future
Investment Initiative to discuss the future of AI.

Speaker 5 (13:51):
It's most likely going to be great, and there's
some chance, which could be ten to twenty percent, that it goes bad.
The chance is non-zero that it goes bad. But overall,
I'd say the glass is eighty percent full
is one positive way to look at it.

Speaker 2 (14:08):
Maybe ninety percent, okay, eighty or ninety percent positive. The
question I have for you, professor, is do we need
an international regulatory body? Do we need the largest, most
powerful governments around the world to create some sort of
standard to ensure or help ensure that this goes the

(14:29):
right way?

Speaker 4 (14:31):
Well, as you know, in the EU they passed an
AI Act which is like one hundred pages long and
you know, incredibly detailed, and it's already obsolete. AI is just
moving blazingly fast forward and, you know, you're trying to catch
up with it. But I do believe that it's absolutely

(14:54):
essential that it be regulated, and it should be regulated
by the people who are building it. The government, okay, is in
the business of protecting people. And we'll see how that
plays out. But in, for example, genetics, this happened, you know,
back in the sixties and seventies. They had a meeting where

(15:16):
they came together at Asilomar, and they came up
with containment rules and regulations for doing experiments under
careful protection so that nothing leaks out, nothing gets out.
And I think we need to do the same.

Speaker 1 (15:34):
Okay. So when does... As you said, ten years for
the Internet to really kind of make its impact and
presence really known and maybe, you know, get integrated into our lives.
So is it a decade before we see LLMs and
AI at this level integrated into our lives?

Speaker 4 (15:56):
We are at the stage that aviation was at when,
at Kitty Hawk, the Wright brothers made the first flight.
It was ten feet up and one hundred feet long,
and that really was, you know, something that then
took decades and decades to build. And the most difficult thing,

(16:16):
by the way, that airplanes, you know, the design of airplanes,
had to solve was control. How do
you make it go where you want to go without
crashing? And that's something that, again, is like what we're going
through right now with AI. And yes, it will take decades.
It's not going to happen overnight.

Speaker 3 (16:37):
I know.

Speaker 1 (16:37):
It's just, it's kind of fascinating. We have a million
more questions. Is it going to take all the jobs?
Is it going to create jobs?
Is it going to take jobs? We've just got thirty seconds,
and yes...

Speaker 4 (16:45):
Yes, you know, I get asked that question whenever I
give a public talk, and my answer is that you
won't lose your job, but your job's going to change
and you're going to need new skills, and you know
it will morph over time. Now, you know these are
people who are in the workforce now, but you know
young people coming up, they'll have no trouble whatsoever finding

(17:07):
new jobs in this new industry.

Speaker 2 (17:09):
We gotta get you back on the show, Dr. Sejnowski.
Really appreciate you.

Speaker 1 (17:13):
Just say, Terry, what we always want to know is,
like you know, people like us, you know anchors.

Speaker 2 (17:17):
We don't have time for him to answer that question.
We don't have time for him to answer that question, Carol,
I'm sorry. I've got to give the book a plug.

Speaker 1 (17:23):
Well, come back hopefully.

Speaker 2 (17:24):
The new book, ChatGPT and the Future of AI:
The Deep Language Revolution, Dr. Terry Sejnowski. He was there
at the beginning. He knows it all. He's Francis Crick Chair
at the Salk Institute for Biological Studies, among many other things.
This is Bloomberg Businessweek.