
October 11, 2024 10 mins

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Now we're going to do something much more important than that,
and that is we're going to ask CU physics
Professor Paul Beal. Paul, what band, when it comes on
the radio, do you change the station?

Speaker 2 (00:14):
Ooh boy, you got me there. I don't know. If
it's too heavy metal for me, I think I'd change
the station on you.

Speaker 1 (00:24):
Okay, if you come up with a specific band name
at any point during our conversation, you let us know.
The reason that Paul is...

Speaker 2 (00:31):
My ignorance here is rather complete.

Speaker 1 (00:35):
That's fine, that's not... We don't have you here to
be an expert on rock and roll. We have you
here to be an expert on physics. Paul. By the way,
if you don't know, he's our most frequent show guest,
but I probably shouldn't assume that you've heard him before,
because maybe you're just joining the show for the first time.
Paul is a professor of physics at the University of Colorado.
He is our show's most frequent guest and we just

(00:56):
love having him on. And he talks about science, almost
all physics, but sometimes other science, in ways that people
can really understand and make you want to go to
CU and study physics. So I want to have
Paul on to talk about the Nobel Prize that was
just awarded in physics. It was split between a couple
of guys, and why don't I just open it to
you here, Paul, just tell us what the prize was

(01:20):
awarded for and who won it, and then we'll get
into more detail.

Speaker 2 (01:23):
Okay. So the prize went to John Hopfield at Princeton
and Geoffrey Hinton at the University of Toronto, and they're
physicists who won the prize for creating artificial neural networks
that enable machine learning, and that's the key
component of artificial intelligence algorithms.

Speaker 1 (01:43):
So it wasn't obvious to me why a Nobel Prize
more or less for artificial intelligence falls into the physics category.
And I was thinking, well, maybe computer stuff just goes
in the physics category. But you said that's not the
right way to think about it.

Speaker 2 (02:01):
Yeah. So both of these folks are physicists and they
built algorithms based on models in physics in the field
called statistical mechanics, and these models mimic the way neurons
in the brain store and then access information. So statistical
mechanics, which is my research field, is the mathematical theory that

(02:24):
underlies thermodynamics, and my way of describing it is you
can't hope to know what every molecule in the room
is doing, but you can hope to know what they're
doing on average, and that's the "statistical" in statistical mechanics.
So it's the field that underlies thermodynamics.

Speaker 1 (02:41):
So how does that lead to artificial intelligence? Because that
connection isn't obvious to a layman.

Speaker 2 (02:47):
Okay, So the models that they picked were specific models
that were initially designed to describe magnetism. So let's think
in terms of bits, because that's the way they're used in
computer science. So for a one bit, the physicist would say, okay,

(03:07):
I can model that as thinking of an electron spin
in a magnet pointing up, and a zero bit would
be like the spin pointing down. And if almost all
of the bits are one, that means all of the
spins are pointing up, and that means the north pole
of the magnet is on the top and the south
pole on the bottom, and the reverse if it's mostly zeros. Now,

(03:30):
in the models they use, instead of the spins all trying to
point in the same direction, they interact with each
other so randomly that they are happy to be in
billions and billions of different configurations that all have more
or less the lowest possible energy that the system can have.
And it's that energy landscape that they use to store

(03:54):
the information for this computer memory that they're creating.
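The spin-and-energy picture Paul describes can be sketched in a few lines of code. This is a minimal illustration, not anything from the episode: the spins are +1/-1 values standing in for one/zero bits, the couplings `W` are invented random numbers mimicking the disordered interactions he mentions, and the energy function is the standard pairwise form used in these statistical-mechanics models:

```python
import numpy as np

# Spins stand in for bits: +1 for a "one" (spin up), -1 for a "zero"
# (spin down). The couplings W are made-up random numbers; in a real
# magnet they would come from the physics.
rng = np.random.default_rng(0)
n = 6
W = rng.normal(size=(n, n))
W = (W + W.T) / 2          # interactions are symmetric
np.fill_diagonal(W, 0)     # a spin doesn't couple to itself

def energy(spins, W):
    """E = -1/2 * sum_ij W_ij s_i s_j, the energy of one configuration."""
    return -0.5 * spins @ W @ spins

config = np.array([1, -1, 1, 1, -1, 1])
print(energy(config, W))
```

With random couplings like these, many different configurations end up near the lowest energies, which is the "billions of configurations" landscape he refers to.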

Speaker 1 (04:00):
Okay, I still I still don't get it. But I
think maybe what you're saying is that what we're talking
about here is not so much the building blocks of
you know, how a how a computer picks the next
word uh in artificial intelligence, but rather the maybe is
it that some of these new systems are working with

(04:22):
billions or even trillions of parameters to to under to
come up with something that seems like intelligence. I'm still
not I'm still unclear on how we got from what
you said to chat GPT.

Speaker 2 (04:33):
Okay, I have an analogy. So this energy landscape
in this model is kind of like the Badlands
in South Dakota. It has many, many, many deep canyons
unconnected to each other, and the goal of the model
is to in effect store the information at the bottom
of one of these many, many, many canyons. So, for example,

(04:57):
let's take images as an example. So a human will
look at a photograph, and if they've ever seen that
person before, or many photos of that person, they've
sort of learned what that person looks like.
They see a photo of them that they've never seen before,
could be blurry, could be when they were younger, but

(05:17):
they would look at it and say, oh, that's Uncle Walter.
You know, without spending much time, your brain somehow
digs out this information, and the information is stored in
the neurons in the brain spread all over the brain,
and the neurons are strongly connected with each other. So
what they created is a model where these bits of
information are strongly connected to each other, and many, many,

(05:42):
many different images can be stored in that model so
that each of them is the lowest energy state
for that particular image. You do that by
making the system learn those images: you keep,
in effect, showing the system many,
many, many images and make each of those a local

(06:05):
energy minimum in the model. And then when you give
it an image of that person, even if it were
blurry or they were younger, it will go
down the canyon that is the image that's closest to it,
and it is widely separated from all the others, and so
it'll quickly decide, oh, that's George Washington. And so you

(06:29):
can do that with words, you can do that with sounds, you
can do that with images, anything that you can encode
as a set of ones and zeros.
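That storage step, showing the network patterns until each one becomes its own energy minimum, can be sketched with the Hebbian learning rule used in Hopfield networks. The tiny plus-one/minus-one "images" below are invented for illustration:

```python
import numpy as np

# Two tiny "images" encoded as +1/-1 bits (made-up patterns).
patterns = np.array([
    [ 1,  1, -1, -1,  1, -1],
    [-1,  1,  1, -1, -1,  1],
])
n = patterns.shape[1]

# Hebbian rule: add outer(p, p) for each pattern, so every stored
# pattern carves out its own low-energy "canyon".
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)

def energy(s, W):
    return -0.5 * s @ W @ s

# Each stored pattern sits lower in energy than a corrupted copy of it.
for p in patterns:
    corrupted = p.copy()
    corrupted[0] *= -1
    print(energy(p, W), "<", energy(corrupted, W))
```

The point of the rule is exactly what's described above: after "showing" the network each pattern, that pattern is a local minimum of the energy, so nearby (blurry) inputs fall toward it.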

Speaker 1 (06:36):
So it's essentially a form of pattern recognition.

Speaker 2 (06:41):
It's pattern recognition, right. And the goal is you
just give it this new image, and the algorithm finds
the image that's closest to that image, even if it's
not exactly the same. So it's not a matter of
finding an exact match, it's finding one that has all
of the key characteristics of the images it already knows about.
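That recall step, sliding down to the nearest canyon rather than hunting for an exact match, can be sketched the same way. The stored pattern and the "blurry" input here are invented, and the update rule is the standard asynchronous Hopfield dynamic:

```python
import numpy as np

# One stored "image" as +1/-1 bits (invented), with Hebbian weights.
stored = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = np.outer(stored, stored)
np.fill_diagonal(W, 0)

def recall(s, W, sweeps=10):
    """Repeatedly align each spin with its local field; each such
    flip lowers the energy, so the state slides down its canyon."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# A "blurry" version of the stored image: two bits flipped.
noisy = stored.copy()
noisy[0] *= -1
noisy[3] *= -1
print(recall(noisy, W))  # recovers `stored` exactly
```

Even though the input doesn't match any stored pattern bit for bit, the dynamics settle into the nearest stored minimum, which is the "that's Uncle Walter" behavior in the analogy.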

Speaker 1 (07:02):
Wow, all right, this requires more bourbon than usual. I
guess my next question for you then, because I'm sure
I'm not going to fully understand this in just a
few minutes talking with you. When these guys who
just won the Nobel, these scientists, or physicists, were
working on this problem early in their careers, were they

(07:26):
working on it with something like artificial intelligence in mind already,
or were they solving a different problem to begin with,
And then what they learned was adapted to artificial intelligence.

Speaker 2 (07:40):
Well, they were actually trying to mimic the way the
brain operates. And the first thing that they decided was
most important was how does human memory work? And in fact,
Hopfield was a biophysicist of some renown, and so he
was coming from the biophysics side to create a model
that mimics the way the brain stores the information across

(08:02):
all the neurons in the brain.

Speaker 1 (08:05):
Wow, wow. All right, let me just broaden this out.
We've got about two minutes here. As long as
you and I are talking
about AI, what do you personally
believe about sort of the balance, or the range, of
what AI is going to offer humanity, in terms of
a scale from incredible gains in well-being to peril

(08:31):
where the robots are going to kill us all or
do you think it really covers all of that?

Speaker 2 (08:39):
Well, I think it does cover all of that.
You can ask this question about any revolutionary
new technology or invention: is it good or bad
for society? And the answer is yes.

Speaker 3 (08:53):
And our goal and mandate as human beings is to
try to put it into the good category most of
the time and try to avoid the bad outcomes that
can come from any revolutionary technology.

Speaker 1 (09:10):
I've got to say, I mean, Nobel Prizes in the
hard sciences to me are always worthy and fascinating
and incredible, and this is right up there with some
of the most incredible. I'm not a physicist, so
it's hard for me to judge the way you would
judge, looking at one physics prize after another, and it's
probably stupid to even compare. But for me, just the

(09:34):
implications of their work for the future are so enormous
that it's really something to think about.

Speaker 2 (09:44):
So, yeah, the implications of this, I mean, they
did this work in the early nineteen eighties and it
was another twenty years before any of that technology began
to become what we now refer to as artificial intelligence.
It appeared very slowly, so the advances have been sort
of slow and are getting faster and faster. And

(10:09):
one would always worry that we're the frog in the
pot, that it's been coming on us slowly enough
that we haven't noticed how it's affecting our society, and
we need to be a little more cognizant of, you know,
where it's taking us.

Speaker 1 (10:25):
Great point. We'll leave it there.

Speaker 3 (10:26):
CU

Speaker 1 (10:27):
Physics Professor Paul Beal, our favorite and most frequent show guest.
Thanks as always, Paul, have a wonderful weekend.

Speaker 2 (10:34):
Okay, thank you.

Speaker 3 (10:34):
Awesome.

Speaker 1 (10:35):
All right, we'll see you.

The Ross Kaminsky Show
