Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Now here's a highlight from Coast to Coast AM on iHeartRadio.
Speaker 2 (00:05):
And welcome back, George Noory, Doctor Bart Kosko. Bart, seems
like the world has gone crazy with artificial intelligence chat systems. What's going on here?
Speaker 3 (00:12):
Sure has. Well, they've gotten more powerful. They're now
up to more than a trillion parameters in their brains, in their synapses and their wiring.
And that's a good thing in that the applications are
more powerful. It's a bad thing because all the inherent
problems in these feedforward black boxes, as we call them,
(00:36):
simply get worse. The black boxes get bigger and blacker,
and they hallucinate and they do a lot of other things.
But I think, George, it's because of the language interface.
I mean, the real advances in AI that are ongoing,
and will continue to be for a long time, are
not so much on the language front, but in the tie-in
to the language. It's a front end with words,
(00:56):
it's a back end with words, and in between it's
straightforward nineteen-eighties, nineteen-nineties processing, but greatly sped up,
with just a lot of parameters added now, courtesy of
chips from Nvidia and elsewhere, chips that were really
initially designed for imaging and for graphics and game playing
(01:17):
and that sort of thing. But the other types of
AI will continue to be out there, applications of
data and statistics, I think a little safer. You may
not realize it, for the better and for the worse:
medical applications, and just about anywhere you use data, because
that's essentially what we mean in this context by AI,
access to a lot of data. But that language front
(01:38):
end has always been a big deal, and it's a
little bit illusory. You know, when someone uses language to
speak to you, you impute to them human-like characteristics,
like when you watch a Disney cartoon, you think that
Bambi is like a person. So we have that effect.
But I can assure you there is no person, no
homunculus, inside those big black boxes.
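A minimal sketch in Python of the shape described here, a front end with words, plain feedforward arithmetic in between, and a back end with words. The four-word vocabulary and the random, untrained weights are illustrative assumptions, not any real chat system:

    # Sketch of "front end with words, back end with words,
    # straightforward processing in between." Toy vocabulary and
    # random untrained weights are illustrative assumptions only.
    import numpy as np

    vocab = ["cold", "cool", "warm", "hot"]
    one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))   # the "in between": plain matrix math,
    W2 = rng.normal(size=(8, 4))   # 1980s-90s style, just scaled up today

    def respond(word):
        x = one_hot[word]                     # front end: word -> numbers
        h = np.maximum(0.0, x @ W1)           # one feedforward layer, no loops
        return vocab[int(np.argmax(h @ W2))]  # back end: numbers -> word

    print(respond("cool"))  # output is arbitrary: the weights are untrained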
Speaker 2 (02:01):
When you coined the phrase fuzzy logic, what did that mean?
Speaker 3 (02:05):
I want to clarify that: my PhD advisor, the late
great Lotfi Zadeh, coined that phrase in nineteen sixty five.
He was the subject, by the way, of a Google Doodle.
He died in twenty seventeen. Fuzzy logic means thinking in
shades of gray. Fuzzy is maybe not the best adjective.
Sometimes the word is vague; that's where the boundaries are
(02:27):
not black and white, like the name of the series
Twilight Zone refers to that fuzzy area of twilight between
night and day, or the rose that's both pink and not pink.
And if you look at most human concepts, from cool
air to green grass and those kinds of things, rarely
do you get something that you could really say is
one hundred percent one way or the other. But classical
(02:50):
binary logic, the modern mathematics from a hundred-plus years ago anyway,
is based on that black-and-white assumption; it ran with that as a first approximation.
But that's not how we think, that's not how we talk,
and it's not how we reason. When you park your car
in a parking lot, in a parking space, you don't
have to get it exactly, precisely right in a little
rectangular slot. It's approximate. It's fuzzy. And we came to
(03:14):
learn in the sixties, seventies, eighties, and nineties that demanding
that extra precision was just too costly and unnecessary.
So what happened with fuzzy logic when it became popular?
I got involved in it forty-plus years ago, and
the Japanese started doing applications. Small-scale ones were actually the
first commercial applications that we had, and you probably own one.
(03:38):
Like I have a Subaru Outback car; it has a
fuzzy transmission to smoothly shift the gears. And a student in
my class this week told me he had a neural
fuzzy rice cooker. But thousands of products, largely commercial products
out of Japan and South Korea, consumer-electronics-type products,
and many, many other systems are under the control of fuzzy
logic, because it's not a complete black box like the neural
(04:01):
networks you're referring to that we call AI today, and not
like the old black-and-white decision trees that we called
AI for about thirty years. It's sort of a compromise,
a gray box, reasoning with shades of gray,
and it has only grown over time. But the essential
difference between fuzzy logic and old AI is that old AI
in effect tried to get people to think, or to model
(04:24):
how people think, the way on-off binary switches work, where
fuzzy logic tries to get, and is getting, computers to
think more like people think.
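A quick Python sketch of that thinking in shades of gray. The temperature ramp defining "cool" below is an illustrative assumption, not a standard definition:

    # Fuzzy membership: "cool air" as a matter of degree, versus the
    # all-or-nothing answer of classical binary logic. The 5-25
    # Celsius ramp is an illustrative assumption.
    def cool_degree(temp_c):
        if temp_c <= 5:
            return 1.0                # fully cool
        if temp_c >= 25:
            return 0.0                # not cool at all
        return (25 - temp_c) / 20.0   # shades of gray in between

    def cool_binary(temp_c):
        return temp_c < 15            # binary logic: yes or no, no middle

    for t in (4, 12, 18, 30):
        print(t, round(cool_degree(t), 2), cool_binary(t))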
Speaker 2 (04:33):
Why do so many people, Bart, seem to be afraid
of AI?
Speaker 3 (04:38):
You know, there has been a lot of hype, and
I think, just like when you watch that Disney movie,
whatever it happens to be, you impute to it human aspects.
When I ask people what they mean by AI, number one,
they don't know what the adjective artificial stands for. And
the name was something the late great AI expert John McCarthy
just sort of threw out in the late fifties,
(05:01):
and so that's a problem. And intelligence, they don't know
what that stands for. Just a real quick point: there
is a split between the computer science folks and the
engineering folks, actual engineering folks like me. We have renamed
it, to distinguish it, CI, which may not be as catchy,
for computational intelligence. By intelligence we mean behavior that tends
(05:23):
to increase or decrease a performance measure, like the accuracy
of a test or something like that. And by computational
we mean something on a computer. So what do you
fill in with, if you don't know what the adjective,
the artificial, and the noun stand for? You fill in with
what you remember of a lot of good and bad
old science fiction movies, some great ones, like HAL the
computer in two thousand and one, A Space Odyssey, to
(05:45):
the Terminator fantasies and so on. And a lot of
the nature of two-hour dramas like that is
to have a scary problem and then fix it at
the end. So a lot of the sense of the
overtaking of the robots and so forth comes from the Terminator movies
and the like. I mean, there's some risk of that,
but it's pretty slim, George, and there are other problems out there.
(06:06):
That's just your background associative memories, as
we call it, filling in, teeing off of that word AI.
Speaker 2 (06:14):
Where do you see AI going, Bart, in the next
fifty years? I know it's a long ways out there.
Speaker 3 (06:20):
Yeah, it's a tough call. And if you go online,
there's a series called Closer to Truth, and I taped an
episode with the late great founder of AI, the true
father of AI, Marvin Minsky, and one
of the questions was: what do you think it will
be like at fifty and one hundred years and farther out?
We do think there will be supplements to the brain,
uploads to the brain, at least implants, a more direct brain-computer
(06:41):
interface, and when they hit, they'll hit as fast as
the smartphones hit. But well before that kind of thing happens, George,
we're just going to have a kind of AI creep everywhere,
because what we're really talking about is applying lots of
data to whatever problem we're looking at. And there are some
very good things about that. Your systems will become more accurate,
the scheduling systems in stores knowing how much food they need
(07:04):
to buy based on how much is wanted that day.
The algorithms will get better and better and more precise. And
there are some bad things. We already have a big privacy creep,
and no one realized how much personal privacy we
would lose once everyone had a smartphone and
was taping at will, even though a lot of that's
not legal, at least in California, for example, without consent
and so on. So it's a mixed bag, and a
(07:27):
lot of us are very concerned. I'm also a lawyer,
in the law school at USC besides the engineering school. We're
concerned about the loss of digital privacy and the law
that's changing right in front of us, without, let's say,
judges and justices really being fully up to speed
(07:48):
on what's going on. And the law that we have
is based on paper, old paper, and stuff in dictionaries
and law books and things like that. That's not the
way the system is working today. And there's a huge
clash as we move, in effect, from atoms, or things,
to bits, or information. It's a big clash. And we're
(08:08):
trying to update the law, I mean at the criminal level,
at the surveillance level, and also, now that you're hearing
more about it in all the lawsuits, at the civil level,
but so far we haven't been very successful with that.
Speaker 2 (08:21):
Are you impressed with the pace of AI?
Speaker 3 (08:24):
I am impressed with the development of computer chips, which
is behind a lot of it, especially in the large
language models that people are using as chat systems,
and I've been contributing to them. We have a portfolio of
patents, for example, at USC that we're taking to market,
so to speak, and so, rightly or wrongly, I'm contributing
(08:45):
to this process and have been for a long time,
and it's got problems, and we're trying to fix those
problems in a variety of ways. Fuzzy logic is one
way to help out, to deal with the black box.
Let me just say real fast what the problem is with that.
You can just expose a neural network system, and these
are the algorithms from the eighties, to lots of pictures
of your face, and very soon it will learn to
(09:07):
recognize your face in many, many versions of it, noisy versions,
your face at night, your face with a killer mustache
or half covered in a blanket or something like that.
And that's a good thing, and your brain does that too.
So it has that ability to become a kind of
pattern computer, to pattern-recognize you. The trouble is generalizing
that way beyond the scope of what you trained it on,
(09:29):
so you don't know what it's learned, and when it
learns something new, you don't know what it has forgotten.
That was for the small computers we were worried about,
and it was hard to release these earlier systems, for fear
in the commercial world of lawsuits, because you really needed
ninety-nine percent accuracy and you weren't getting that. Well,
now we have these giant black boxes, say, for example,
(09:50):
the chat and other systems that are now up to way
beyond a trillion parameters and growing, and they don't
just suffer from the problem of garbage in, garbage out;
they have the problem of garbage in, hallucinated garbage out.
Or even when it's not garbage in, when it's really
good stuff going in, known patterns, the system hallucinates in ways
(10:11):
it doesn't tell you it's doing. It's as if
an expert is lying to your face and you can't tell.
So one of the things it doesn't do, even the
chat system, is say, when you ask it a question,
hey, I'm giving you an answer, George, but
I'm only eighty-six percent or twenty-three percent confident
of that answer. It spits out all answers as equally
confident, no matter what its training. Now, the advance of
(10:33):
fuzzy logic, what I call Fuzzy Logic two point
zero, and I discuss this in the re-release of the
book Fuzzy Thinking, is we've gone from the earlier fuzzy
system of rules to the way the rules map over
to probability, which is the language of AI. And a
consequence of that is we can take some of these
(10:53):
neural black boxes, not all of them, but we can take
them and train on them and build a system of rules.
That gives us a kind of logical audit trail from inputs
to outputs, to know what's being said, and a probabilistic
description that tells us, for every output, what the probability
or the uncertainty is in that output. Or we
can open up its brain and look at which of
(11:15):
the rules fired. Now, you have to pay for that:
you need more computation if you want a better XAI system,
as we call it, explainable AI. But that's one way to
do it, and it's hard to keep up with the
big black boxes. But that's been the contribution of fuzzy logic.
And with the ongoing explosion in Moore's law of
(11:36):
computer chip density, the number of transistors on our circuits
has still been doubling about every two years since
nineteen sixty. We thought it would run out; it's still
going strong. And I think we're about to shift from
electrons to photons, from electricity to light, for a lot
of these computations, and some form of Moore's law may continue.
So these kinds of gains, which are just scaling up
the old algorithms, are likely to continue at least for
a decade, if not much more.
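A small Python sketch of that gray-box idea: a handful of fuzzy rules whose firing strengths form an audit trail, and an output that carries an explicit confidence rather than uniform certainty. The rules, membership ramps, and confidence measure are all illustrative assumptions, not the published Fuzzy Logic 2.0 method:

    # Gray-box sketch: fuzzy rules with a visible audit trail and an
    # explicit confidence for each output. Rules, ramps, and the
    # confidence heuristic are illustrative assumptions.
    def cold(t): return max(0.0, min(1.0, (15 - t) / 15))
    def warm(t): return max(0.0, min(1.0, 1 - abs(t - 20) / 10))
    def hot(t):  return max(0.0, min(1.0, (t - 25) / 10))

    rules = [("if cold then fan off",  cold, 0.0),
             ("if warm then fan low",  warm, 0.3),
             ("if hot then fan high",  hot,  1.0)]

    def fan_speed(temp_c):
        fired = [(name, fn(temp_c), out) for name, fn, out in rules]
        total = sum(w for _, w, _ in fired) or 1.0
        speed = sum(w * out for _, w, _ in fired) / total  # weighted blend
        confidence = max(w for _, w, _ in fired)  # strongest rule's degree
        return speed, confidence, fired           # answer plus audit trail

    speed, conf, fired = fan_speed(27)
    print(f"fan speed {speed:.2f} at confidence {conf:.2f}")
    for name, degree, _ in fired:
        print(f"  {name}: fired at degree {degree:.2f}")

Unlike the black box it approximates, this kind of system can report both how sure it is and which rules produced the answer.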
Speaker 2 (12:04):
Bart, I think one of the greatest inventions for mankind
is the smartphone. I mean, the things that they do,
text messages, sending pictures through the air. It's incredible.
Speaker 3 (12:16):
It is, indeed. And as I said, I think when
we get to the point, and we're not there yet, where
we can bridge this gap in the brain-computer interface,
between what's going on in your head and what's going
on in the smartphone and those kinds of systems, that's
the next really big advance. And for example, it's one
of the fourteen objectives, or main projects, of modern engineering
(12:40):
from the National Academy of Engineering, called the Grand Challenges
for the twenty-first century: to reverse engineer the brain.
And that will surely happen. Whether you and I live to see
it is a very different matter. But when it does,
I think it will take off like a California brushfire,
and you'll have the wireless access, in effect, to your brain
that you have right now to your smartphone. You may
(13:02):
not always want that, in fact, but it will be truly
revolutionary. We're not anywhere near that yet,
just because brains think in these on-off spikes, and
you have one hundred billion neurons in your brain, each
connected on average to about ten thousand other neurons in
a totally asynchronous way, and that's a completely different
structure than the way these chat systems work. For example, the
(13:26):
chat systems have the form of spaghetti, whereas the human brain,
or the insect brain or the blue whale brain, has
the form of a fish net. Spaghetti, even though it's
all wound up, you could straighten it out; it goes from
left to right. We call that feedforward. Whereas in the
fish net, the thing is linked up; in the case
(13:47):
of your brain, instead of each node being connected just to four
other nodes nearby in the lattice of the net, again, each neuron is
connected to on the order of ten thousand others, so
the organizational principles are different. That's the next frontier, computational neuroscience,
coming very soon to a computer chip near you.
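A last Python sketch of the two wirings contrasted here, spaghetti versus fish net, with the interview's own numbers as a back-of-envelope check. The sizes are toy values chosen for illustration:

    # "Spaghetti": straighten it out and signals flow strictly left
    # to right through layers. No loops, no feedback.
    import numpy as np

    rng = np.random.default_rng(1)
    layers = [rng.normal(size=(8, 8)) for _ in range(3)]
    x = np.ones(8)
    for W in layers:
        x = np.maximum(0.0, W @ x)   # feedforward pass, layer by layer

    # "Fish net": a symmetric web where every node links to many
    # others, so activity can circulate asynchronously rather than
    # flow one way.
    n = 8
    net = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            net[i, j] = net[j, i] = 1.0  # bidirectional links

    # Back-of-envelope from the interview: a hundred billion neurons,
    # each wired to about ten thousand others.
    print(f"{1e11 * 1e4:.0e} connections")  # 1e+15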
Speaker 1 (14:06):
Listen to more Coast to Coast AM every weeknight at
one a.m. Eastern, and go to coasttocoastam dot
com for more.