
October 21, 2024 82 mins
AI, Humanity’s Evolution, and Our Place in the Cosmos

Guest: Professor J. Craig Wheeler, Samuel T. and Fern Yanagisawa Regents Professor of Astronomy Emeritus at the University of Texas at Austin

Summary: In this episode of the Boundless Podcast, Richard Foster-Fletcher, Executive Chair of MKAI, welcomes Professor J. Craig Wheeler, a distinguished astrophysicist and author of the forthcoming book The Path to Singularity: How Technology Will Challenge the Future of Humanity. Together, they explore humanity’s place in a rapidly evolving technological landscape, discussing topics like AI, climate change, space colonization, and the implications of technological advancements on society. Professor Wheeler provides his insightful perspective on humanity’s future, contemplating how technology could shape or even redefine our species in the years to come.

Key Talking Points:
  • The Journey to Now – Professor Wheeler discusses his background in astrophysics, what inspired his exploration of humanity's evolution and technology, and how his astronomical perspective shapes his views on our future.
  • Technological Impact – A discussion on AI, genetic engineering, and how these technologies will fundamentally change human life, including the concept of the technological singularity.
  • Opportunities for Equity – Climate change, overpopulation, and AI as global challenges that demand equitable solutions. Could AI offer an alternative form of intelligence to guide ethical decision-making?
  • Ethical Considerations – The potential risks of unchecked technological advancements, with a focus on AI and genetic modification, and the ethical responsibilities of scientists.
  • The Way Forward – Speculation on humanity's future as a multi-planetary species, the need for biological change, and the role of AI in either aiding or threatening our existence.
  • Closing Thoughts – Reflections on responsible innovation, merging technology with humanity, and the importance of broad awareness around technological risks.
Key Quotes:
  • "If climate change significantly reduces our population, who caused the problem becomes academic. It will affect us all."
  • "We may need to think about death in a very different way if we develop the technology to live forever."
  • "AI has immense potential for good, but the key is awareness and ensuring that we don’t let it run unchecked."
Guest Bio: Professor J. Craig Wheeler is an American astronomer. He is the Samuel T. and Fern Yanagisawa Regents Professor of Astronomy Emeritus at the University of Texas at Austin. He is known for his theoretical work on supernovae. He is a past president of the American Astronomical Society, a Fellow of that society, and a Fellow of the American Physical Society.

Episode Links & Resources: 

Become a supporter of this podcast: https://www.spreaker.com/podcast/the-boundless-podcast--4077400/support.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Professor J. Craig Wheeler. Craig, welcome. It's wonderful to meet you.

Speaker 2 (00:05):
Oh, it's a pleasure to meet you, Richard, looking forward
to it.

Speaker 1 (00:08):
Craig is the Samuel T. and Fern Yanagisawa Regents Professor of Astronomy Emeritus and Distinguished Teaching Professor Emeritus at the University of Texas at Austin, and past chair of the Department. I'm going to pause there and tell you a
little bit about what Craig has written. A book is coming out very soon, in November twenty twenty four, called The

(00:31):
Path to Singularity: How Technology Will Challenge the Future of Humanity.
And this book is an incisive summary of the evolution
of humanity to its critical current stage and foresight into
our technological future from the broad perspective of yes, an astrophysicist.
We are developing computers that are more capable than we are. We are peering into our own brains,

(00:52):
says the blurb. We have the capacity to direct our
own biological evolution. We are altering the ecological balance of
our environment, and we're venturing into space. What happens next, as we race towards a technological explosion, is what the book covers. It strikes me, Craig, that for thousands
of years, we've marked our own homework as a species,

(01:13):
so we've considered the good of humans above all others.
Should you ask a person on the street are humans
good or bad? They're likely to say good: overarchingly, we're good. If you asked a whale if we were good or bad, if you could ask a whale, they'd probably say that we're not so good. I've wondered for
a long time whether there's a lack of alternative intelligence

(01:36):
on our planet that can challenge the human perceptions of
what's right and what's wrong. Does AI offer us a
path to that? Do you think?

Speaker 2 (01:44):
Yeah, we are the dominant intelligence on the planet. There's
no question of that. We can make tools and do
all sorts of things, and accumulate our knowledge, write it
down so future generations can benefit from the past knowledge,
and that all builds on itself. That's part of the exponential growth of knowledge: it accumulates like compound interest, not simple interest. And so that's just a

(02:09):
fact of the matter. But it is also true, I think, that we've always thought of nature as a thing to exploit, and now we're seeing, in the context of
climate change that there are limits to that. So one
of the things that I speak to in this context
is talking about Malthus. So Malthus noted that in a

(02:34):
biological context, species will sometimes or even often proliferate until
they're using up all the resources, and then they die
off and then they will maybe build back up again.
And there was a discussion back in the sixties that
Paul Ehrlich at Stanford made popular, talking about the population bomb,

(02:56):
that we were producing people so fast that we would
use up resources. And that turned out not to play out then, because there were green revolutions of various kinds; we found ways to feed a lot more people. So that particular crisis went away. As I'm writing about this topic, I'm thinking, isn't climate change the

(03:22):
next chapter in that? So, yeah, we solved that food problem in the sixties, but now we're seeing that the environment is biting back even more. Malthus is much denigrated in many quarters, by people who don't think we have a population problem and

(03:45):
don't want to worry about limiting our use of fossil fuels and whatnot. But I think the case can be made that we are still facing, in climate change, a classic Malthusian disaster. I write about that a little bit in the book, and I expect to get some pushback on that, because I know there are people

(04:05):
that think Malthus is next to the devil somehow. So
I'll throw that out as a controversial thought that I
was happy to put down in black and white.

Speaker 1 (04:16):
Yeah, the population question has been debated many times, and the limits to growth. Of course, in many countries the people...

Speaker 2 (08:24):
Yeah, people are rightfully worried that if populations decline, and there are some indications of that starting to happen, that will be terrible for the economy. And I think that's true. But the question of how many people the earth can support in a healthy manner may require fewer

(04:48):
people than we currently have. And if that's true, then
somehow we have to think through with smart economists, how
do we maintain an economy where people's quality of life
is good and stays good and gets better without devastating
the environment at the same time and having fewer people

(05:08):
if that's what's required. And again, the idea that one can entertain the notion of having fewer people is anathema in many quarters, so I expect to get some feedback
on that too.

Speaker 1 (05:21):
Yeah, those are two interesting analogies. And it does strike me that our economy is predicated on growth regardless of population, which is going to fail at some point.

Speaker 2 (05:33):
It's fine, but...

Speaker 1 (05:35):
Can't grow forever no matter what, so that at some
point would have to crash.

Speaker 2 (05:41):
That is the point that I make in the book,
that we're on a finite planet, finite surface. And yeah,
and this has bothered me my whole life. Now, I'm not an expert in economics, but there's the premise that you must have growth to have a proper society

(06:01):
and things getting better. And that has worked: we have
had growth, and people by and large throughout the planet
are much better off now than they were one hundred
years ago, two hundred years ago. But as you just said,
there are limits. It's a finite planet with finite resources,
and we have to eventually face that. So I think

(06:22):
the idea that our population could grow exponentially forever won't work. And now that it's trying to curl over a little bit, there are people saying, oh my gosh, that's a disaster, and indeed there are issues associated with that.
So I touched a little bit on well, if we
fill this planet up, can we just go to Mars?
And I think the answer is we could go to

(06:44):
Mars, and within fifty or one hundred years we'll fill it up and overpollute it too. So I think the idea of going to Mars as a solution to climate change and population issues here on this planet, I think that's not right. I don't think that's the way to go.

Speaker 1 (07:00):
Yeah, we're going to use up this planet, then probably use up the next planet, and so on.

Speaker 2 (07:04):
That's the idea. Space is very big; there's a lot of room to go out there. But to those who say, okay, we could just grow all we want forever: I'm just a little more conservative than that.

Speaker 1 (07:15):
I'm going to ask you about that in just a moment,
but just before we move on from the point: obviously, in nature, if there are too many foxes, there aren't enough chickens for them to eat, and then they starve, and that rebalances the population. But interestingly, that's every fox's responsibility, and their impact is almost the same across every fox. But of course, the pollution comes from just a few human beings, really. There are

(07:37):
a few companies and a few humans that pollute huge
amounts and then massive population growth in some countries, but
those people barely make any imprint on the world at all.
But to your point, that actually doesn't make any difference.
If climate change is going to significantly reduce our population,
who caused the problem is academic; it's going to affect

(07:57):
us all.

Speaker 2 (08:01):
Yeah, of course, it has been the Western world, the developed world, that is mainly causing the issues with climate change, and it's people not in the developed world who are going to be suffering from it.
It's a very complicated issue.

Speaker 1 (08:16):
Yeah, which is mostly geographic, and had the geography been
a different way around, it would have been a different
set of humans that got the opportunities to pollute. And
it's just the way that history worked out.

Speaker 2 (08:27):
I think in developed countries, the rate of growth of population does tend to level out as people get more successful. One of the most effective things to do is to educate women, and the next thing you know, they're not having as many babies as before, and I think that's a good thing. It's a natural kind of feedback. Looking at

(08:48):
the way things are going right now, that's apt to
happen more slowly in Africa. So I think we're going to have, and these aren't just my ideas, they're things I've read about, a continued population explosion in Africa, and we as a global society are going to have to think about that. And that's not saying anything bad about Africans; that's just the way the demographics are going.

Speaker 1 (09:09):
And so you perceive that we will top out and then start to rebalance. Have you seen a number?

Speaker 2 (09:17):
Yeah, but it's going to go at different rates in
different places, in different times. So I don't think there's
going to be a single clear answer to that. And
I suspect problems are going to get more severe and
more people will wake up to the issues and address them,
and then I hope out of that will come some
healthy solutions. Right now, it's a bit of a controversial mess,

(09:40):
as so many things are.

Speaker 1 (09:42):
And we'll talk about your role as an astronomer and what prompted you to write this book. But you did mention Mars there, and as part of your research and reflection on futurism, do you see a multi-planetary future for humans? Do you think that's realistic?

Speaker 2 (09:58):
Okay, yeah, I talk about that a little bit; it's one of the key questions. Certainly multi-planetary: I have no
doubt we're going to go to Mars. I think a
thousand years from now there will be ancient cities on Mars.
You know, we're just moving in that direction. We can
talk a little bit about what the dynamics of that
are because they're very interesting as well. Whether there are

(10:20):
other habitable environments in our Solar system is not clear.
Maybe we build big space stations and all of that.
Science fiction has speculated about that for a long time.
We just launched the Europa Clipper, which is going to
go out and fly around Europa and probe its oceans
and see. It won't directly detect whether there's anything alive there,

(10:42):
but that's what all the scientists are pursuing. They want
to know. First you determine the ingredients, like we've determined, yes, there was water on Mars. Did life form there, then? We don't know yet; we're still working on that. But the Clipper is a great thing. We will expand out and fill the Solar System, and at some level, on a long enough time frame, the question of whether we ever go anywhere

(11:04):
else is a big one, and it depends basically on
the speed of light, because with the physics we know
right now, you can't fly faster than the speed of light, and relative to the sizes of our galaxy and the universe, that's rather slow. And so if we can't develop physics

(11:27):
that will allow us to push the hyperdrive button the
way they do in Star Wars and oooh, off you go.
If we don't develop that kind of physics, then we're
probably restricted to our solar system for a very long time,
would be my guess.

Speaker 1 (11:46):
And do you believe that there is microbial life in
our solar system?

Speaker 2 (11:51):
Scientists are loath to use the word belief in this context.
So I suppose the simple answer is yes: I will be surprised if we don't find microbial life elsewhere.
But we have zero evidence for it now, and one
has to follow the facts as you know them, and
there's just no evidence for that now. But our technology

(12:13):
is limited, and we're still trying to do a better
job of that. And I think as we discover more
exo planets around other stars, and we develop the techniques
to study their atmospheres and see whether they're out of equilibrium,
that would indicate things exhaling into the atmosphere that will
come over the next again, same kind of time frame,

(12:34):
ten or twenty years, I think. But right now there is
no evidence. But you see the same physics everywhere, you see the same chemistry everywhere, proto-life in meteorites. I think most astronomers expect that there is life, at least simple microbial life, out there, but again, no evidence yet,
so be careful.

Speaker 1 (12:53):
And if we did find it in the icy waters there, or the upper atmosphere of Venus, or maybe in several places, would it be fair to say that, therefore, across the galaxy, life is common?

Speaker 2 (13:08):
I think that would be the natural extrapolation. It would
be better to prove that in some way. But yeah,
if you find that it is not just restricted to
our planet, and on our planet, life finds a way
to live wherever it can, all sorts of crazy niches,
very salty, very acidic, frozen ice in the depths of

(13:31):
rocks down in mines in South Africa that haven't seen
sunlight in a million years. Life is everywhere. And so the indication is that life follows Darwinian evolution, taking advantage of your environment and learning to exploit it, which is how we've gotten to where we are now. It's not just a human thing; it's the way life works.

(13:55):
Probably it's everywhere, if we can show that it's anywhere besides Earth. That would be a very natural thought to have. Yeah, but it's more important to have evidence.

Speaker 1 (14:05):
I suppose if we just know about life on Earth, then we think it's rare; and even if we learn about life in the Solar System, it could still be rare, on the same logic by which we thought Earth was unique before. Do you enjoy talking about the great filter at all?

Speaker 2 (14:19):
Which is the great filter?

Speaker 1 (14:21):
It's a suggested resolution of the Fermi paradox, in terms of why we potentially won't find any intelligent life.

Speaker 2 (14:30):
I think, you know... I don't know. I talk a bit about the Fermi paradox in the book; even though it's much discussed, I don't know if I brought anything new to it. But here's my take on it. I think Fermi's point was that even flying at the slow speeds of chemical rockets, if

(14:51):
you could fly to the nearest planet, populate it, develop
an industry, and launch new rockets from there to elsewhere in the galaxy, you wouldn't have an individual fill the galaxy, but you could have your species fill the galaxy in about two hundred and fifty thousand years, was the estimate,
which is about the time it takes the Sun to
rotate around the galaxy, just coincidentally. So it could be

(15:17):
by that hypothesis that the universe we see around us
is just swarming with extraterrestrial life. And the point there
is that we've got all sorts of sophisticated ways of
looking for that. There's optical radiation and radio radiation, infrared radiation,

(15:38):
ultraviolet radiation, X-rays, gamma rays. There are all sorts of ways we probe the universe these days, and there is
not any sign of life out there at all. So
it is not true that life has expanded to fill
the galaxy and extraterrestrial life is all about us. There's
no hint of Klingon battleship exhaust no matter where we look.

(16:02):
And so I don't know what that means. Maybe it means that the filter is that any intelligent society kills itself off in some way, with nuclear weapons or artificial intelligence. But I guess at that point I'm agnostic. I see the possibilities. The
evidence says we don't see any now, so it's not

(16:25):
just obviously abundant, and that's just where we are.

Speaker 1 (16:29):
Yeah. The great filter suggests that if we don't see evidence of intelligent life, which we don't, it could be that there is a filter that all developing species cannot get through, which may mean they develop a technology and that technology kills them, such as nuclear war or climate change or, potentially, people say, artificial intelligence. So when you look at ourselves, you have

(16:51):
to wonder: are we that side of the filter or this side of the filter? Have we got through the filter, or is it still to come?

Speaker 2 (16:57):
Yes, well, that's the question, and you're absolutely stating it properly. I don't have anything terribly new to say about that topic. One of the things that I will add is this: I do wonder, when we look out there and we don't see any evidence for life, maybe we don't have the technology yet.

(17:18):
And so my example of that is: maybe advanced technological species have learned to communicate with neutrinos, which are very penetrating and go everywhere, and we just don't have the technology to do that. The same as I can imagine an aboriginal person in the South Pacific somewhere without a radio.

(17:44):
We're talking to each other by radio all the time, and if you don't have that device and that technology, all that's just sweeping past you. The world on a radio, as you tune it, is just full of life. But if you don't have that device, if you don't have that technology, then you don't know anything about it. And it may be like that: there are ways of communicating somehow. Again, it would have to

(18:08):
be communicating faster than the speed of light, and there
are some hints from physics that I discussed a little
bit, that conceptually maybe something like that is possible. But if it is, it takes a very advanced civilization to develop that technology. Then we're just, you know, aswarm in life, but in our ignorance of it.

Speaker 1 (18:30):
That's a possibility, and it would take enormous amounts of power as well.

Speaker 2 (18:35):
Yeah, yes. You may have to tap the whole power of your sun, or maybe the whole power of
a galaxy or something. There are arguments against that as well.
There was a very clever paper published in an astronomical
journal about ten years ago now pointing out that even
if you trap all the energy of your sun, as

(18:56):
a Kardashev Type II civilization would, you absorb that energy, you do stuff with it, but you have to thermalize some of it, you have to entropize some of it, and that energy will be reradiated in the infrared, at longer wavelengths than the human eyeball has evolved to see.

(19:16):
But we now have detectors in the infrared. And he went out and surveyed and looked for Kardashev-like civilizations radiating in the infrared, and he saw zero. That's part of the negative evidence. They could be there, slightly dimmer than our technology can detect, or something. But to first order,

(19:36):
it was a very clever thing to do, and the answer
was no sign of such a thing. Even if you're
tapping the energy of your star, especially if you're tapping
the energy of your star, some of that has to
leak into the infrared.

Speaker 1 (19:49):
Yeah, it's an interesting thought, isn't it. That not only
would you have to be an advanced intelligent species who
could harness the power of your sun; you would have to hide it completely as well from any other surrounding galaxies.

Speaker 2 (20:04):
Exactly.

Speaker 1 (20:05):
From where we are, it's twice as big an ask, isn't it, really?

Speaker 2 (20:09):
Maybe we will do that a million years from now. That's another topic: whether "we" will still be we, a million years from now.

Speaker 1 (20:18):
How long do you think we will stay in our current biological form, as we are now?

Speaker 2 (20:23):
Ah, that's a good one. That's a good one. Uh.
We're in a fourteen billion year old universe. We're living
on a five billion year old planet. Life somehow came
to be. I talk about that a little bit. Nobody
really knows how it came to be and got more

(20:44):
and more complex and developed the intellect we have now.
But Homo sapiens, as you and I and the folks listening participate in it, are only a few hundred thousand years old, three or four hundred thousand years old. A million years ago, there were no Homo

(21:05):
sapiens on this planet. So looking forward, my guess, and it's a guess, but I'm very confident of it, is that there will be no Homo sapiens on this planet a million years from now. And that's partly based on just Darwinian evolution. Things are going to change: we'll adapt to

(21:26):
climate change, we'll move into space, we'll develop different genetics that allow us to live in space. We will be a recognizably different species in a million years even if it's just biological evolution. But it's not going to be just that, because we now know how to fiddle with our own evolution with techniques like CRISPR and all the genetic

(21:48):
work that's going on, and the mapping of the human genome and all that. It's just setting the stage for being able to tinker with our own biology. And the question of how fast we tinker with our own biology, to the point that we are no longer Homo sapiens, that could be as short as decades or as long as one hundred thousand years.

(22:10):
I don't know, but I think it will happen. We're going to have the technology, and somebody somewhere, or many somebodies, are going to use it, and we will start designing our own evolution. And I think that's just inevitable.
These things have their own momentum, and that momentum is
going in that direction, and I suspect it's probably unstoppable.

Speaker 1 (22:32):
So a million years from now, we don't exist in
this form anyway. That's the projection.

Speaker 2 (22:37):
So maybe there will be creatures that are our descendants still here, but not Homo sapiens as we recognize ourselves now. The other aspect of that is: never mind that we can tinker with our own genes. There are many people wondering whether we have to somehow merge with our machines, an AI implant in our heads or something, to defend

(22:58):
ourselves against AI becoming artificial general intelligence, ready to take over things. If we're going to do battle with them in some way, we need to merge with them.

Speaker 1 (23:13):
Don't bring a knife to a gunfight.

Speaker 2 (23:15):
Eh, something like that, Yeah, bring your own gun.

Speaker 1 (23:19):
Genetic changes would be essential if we look at a long enough time frame. For example, one hundred million years ago we couldn't have breathed very easily on the planet; we'd have had to adjust ourselves to have lived then, in dinosaur times. And maybe in the future, all living on Mars, genetic modification of the human body would be fairly essential, correct?

Speaker 2 (23:40):
Yeah, I think so. But again, I think those changes will be artificially induced. I don't think we will wait around to evolve. We're going to engineer ourselves. That's my guess.

Speaker 1 (23:49):
Yeah, evolution doesn't have a goal. Evolution moves at its
own pace. Raptors existed for a very long time on this planet. There's no evidence that they were evolving toward intelligence; they had a very long timeline to do so, and didn't seem to get any more intelligent or adapt. So there's nothing suggesting we'll change automatically.

Speaker 2 (24:06):
But we do have goals. Homo sapiens have goals, and they will apply them.

Speaker 1 (24:14):
A thought about evolution: you can extrapolate evolution to include technology. We've taken the Darwinian description of evolution, that we've adapted ourselves to the environment, to an extreme. Of course, we didn't stop there; then we started adapting the environment to us. But that has not been successful, and we've meddled with it,

(24:35):
as I think you mentioned earlier, to the point now where our environment is potentially going to destroy us. Is that still evolutionary, or is that anti-evolutionary?

Speaker 2 (24:44):
We have benefited from that. So, sorry, but you're getting a little glib about that. I understand where you're going, but we have benefited from it. It's just that if it
runs to too much of an extreme, then it starts
biting back at us, which is what we're seeing now.

Speaker 1 (24:59):
But is any of what I described traditionally, scientifically, evolution, or is it...

Speaker 2 (25:04):
I'm debating in the back of my head your definition: whether you restrict evolution to a biological thing, or you allow it to expand into the entire enterprise of what's going on. I don't know; it
is a matter of definition. But I think at some
level we are going to merge with our technology, and

(25:27):
whether that's becoming hyper-conscious or something, I'm not sure,
but that is part of the human condition, is to
work with your technology and absorb it at some level.
I don't know. I guess I don't have a firm
answer to that.

Speaker 1 (25:42):
And if you picture the world a million years from now, where we have changed ourselves biologically with CRISPR or whatever the new techniques are, and potentially merged with machines to make us faster, smarter, stronger: what's the rest of the world doing? What are the snakes and chickens and buffalo doing at this point?

Speaker 2 (26:01):
Ah, very smart chickens. No, I don't know whether they evolve along with us.

Speaker 1 (26:08):
It was an evolution question. It's just I suspect habitation.

Speaker 2 (26:12):
The other species, unless we decide to get involved, are going to continue to evolve on a biological timescale, hundreds of thousands of years or something, and that will continue to happen. But I don't preclude humans saying: we figured out how to fiddle around

(26:32):
with our own biology. Let's go after the chickens and
the whales and the porpoises. Maybe we can make the
porpoises and whales smarter, and we can talk to them, and they can tell us: hold off, folks, you've gone too far, or something.

Speaker 1 (26:49):
I think the whales are already telling us that; maybe they are communicators. I think it is always tricky when you haven't got vocal cords and fingers and thumbs to express yourself quite the way that we do.

Speaker 2 (27:02):
There are little instances of this, but we are so close to being able to use AI to translate the sounds that the whales and the porpoises make. I can't prove that, but I'm excited to think that we may soon have the ability
to listen to and understand what those creatures are saying

(27:26):
I've also read that that may just allow us to perturb their lives even more than we do now, if we can start to read their minds or something, when it's not a two-way street. So you have to
be careful about all these steps. But AI is the
kind of thing that can assimilate complicated signals and make

(27:49):
sense out of them, and that's the state of the art now,
so I think it may be going in that direction.

Speaker 1 (27:54):
We've already seen some remarkable progress on that. We've established
that whales and dolphins each have names and call out by that name, and it's not a word.

Speaker 2 (28:03):
I mean, it's a very quick sound.

Speaker 1 (28:07):
Yeah, and I think they experimented once with some dolphins, where they played the calls of a deceased dolphin to the other dolphins, who got very excited and gathered around the speaker. And I thought that was especially cruel, actually. It is very cruel: they were led to believe that this dolphin was back.

Speaker 2 (28:25):
But then the ethical side of that is: are you doing some moral or ethical damage by suggesting to a bunch of porpoises that their dead friend is alive?

Speaker 1 (28:40):
It's very disrespectful of their intelligence.

Speaker 2 (28:44):
Maybe. It's just something to be thought about. There are just so many nuances to all of this as we make our way in the world.

Speaker 1 (28:53):
Yeah, as a more-than-part-time vegetarian, I think there are two aspects, aren't there? In being cruel, there are lines that we cross, for sure.

Speaker 2 (29:05):
Yeah, I'm conflicted on that, and I don't think about things I probably should in terms of eating meat and whatnot. So what about plants? Do you think plants are conscious?

Speaker 1 (29:17):
Yeah, so I've thought about that. I used to call myself a vegan, and then I decided that I would eat some seafood, and I'd go down the food chain to things like prawns and oysters and so on, which are clearly conscious but probably don't have the lives that other animals would have. And then you look at octopuses, or octopi, which are very intelligent. And you look

(29:37):
at lobsters, which live longer than we do in many cases. So you start to have a sort of species-by-species view. And then, of course, everything's alive, everything's living, and plants experience consciousness and talk to other plants, and obviously we don't understand their pain. I think we talk about, you know, dolphins having language and plants communicating, but it truly

(30:01):
is completely different, and the psychology is completely different. You cannot anthropomorphize human emotions and feelings onto a plant, and so pain isn't pain. It's not that I feel pain here, so a plant would feel pain there,
of course not. But you've got to eat something, and
so I think in the end we just say these
are the things I eat, and these are things I

(30:21):
don't eat, and try to draw a line. But drawing some
sort of superiority line over other humans, I think, doesn't
make sense.

Speaker 2 (30:28):
And different people draw those lines in different places, and
I think that's just the reality of life. Yes, we
have to eat something. Maybe we'll learn to grow our
own food. Maybe that's microbes and stuff, and you can
pluck your hamburger off the wall and have a good
time with it. Artificial food like that is
clearly coming. That's going to be part of our environment

(30:49):
in the future. I don't know quite when, but there are
smart people working very hard to make that happen.

Speaker 1 (30:55):
Then we have to assume that lab grown meat doesn't
have any consciousness.

Speaker 2 (31:01):
Yes, it's a complicated issue. I don't want
to oversimplify it, but that would be a way out
of some of these moral and ethical dilemmas that a
lot of people feel. And so I think there will
be pressure to do that for that reason if no

(31:21):
other.

Speaker 1 (31:22):
Yeah, look, some people will grab a cockroach
and put it in the soup and eat it, and other people
wouldn't go, you know, within thirty yards of it. These
perceptions are fascinating. So this book that you wrote as an astrophysicist,
you wrote it because you're concerned? Is that correct?

Speaker 2 (31:42):
It was, and I am. And some of my
concerns came out and became defined as I was trying
to teach the class in the first place and then
write the book. But it came more so just from
kind of intellectual curiosity. And I can tell you a

(32:02):
small anecdote about where it really got started.
There's a program put into place at the University of
Texas at Austin where some professors will pick a book
to read and announce that to the incoming freshman class

(32:25):
and then talk to those students who read that book
and want to talk about it. It's a small group; there are like two thousand
incoming freshmen, but you might talk to a dozen of
them or so who are interested in that particular topic. And
I guess a step beyond that, or before that: I
can't remember where I read this now, but I was

(32:47):
reading the words of a pastor, a religious man, and
he said, I believe in Darwinian evolution. I believe that's
how mankind came to exist on the planet. But it
is clear to me that mankind, Homo sapiens as we

(33:08):
know them now, is the ultimate. We're it. And I
read that, and I realized I'd not thought deeply about this before,
but I think I'd just assumed, since I first heard
about evolution, in grade school or high school or
something like that, that you were catching us in a
particular phase and we would continue to evolve. There was

(33:30):
never any doubt in my mind about it. I didn't
even ask the question. But here's this learned man making
this statement. That got me thinking about it. So one
of the things I did in this book reading program
was to assign the students to read Darwin and we
just talked about evolution, and I raised that question, do
you think Homo sapiens is the end all and be all?

(33:52):
Or are we still evolving? And the students were way
ahead of me. Some of them had already studied
biology and knew that things like CRISPR were coming along.
I don't know whether we used the phrase at the time,
because I learned that a couple of years later, but
they were already very much aware that we were coming

(34:13):
to an era where we could affect our own biology.
And as an astronomy professor, I was completely oblivious of this,
and so that got me thinking, and then I ended
up teaching a course on that to raise that issue:
Darwin, and where are we now, and where are we going?
I taught that for five years, and then took those

(34:34):
notes and turned them into the book. But it
all goes back to that preacher saying that mankind
isn't evolving anymore, that we're the peak.

Speaker 1 (34:42):
But he has no choice but to say that, because,
of course, the Bible says that man is created in
the image of God.

Speaker 2 (34:48):
Yes it's complicated.

Speaker 1 (34:49):
You can't get better than God, can you? So he gets
a bit stuck.

Speaker 2 (34:52):
Well, as a scientist, I see how evolution worked.
I'm just a...

Speaker 1 (34:58):
Pragmatist. And I can see that God is created in
the image of man, no comment. But that critique is
anthropomorphizing evolution, suggesting that it has a
goal of some kind, or intentions. We have intentions,
but evolution would definitely not have intentions.

Speaker 2 (35:14):
I think that's true, and that is my founding assumption:
that evolution pokes and prods in every single direction it can,
and the things that can survive and thrive will win
and procreate. And that's how evolution works. And no, it
does not have a goal; it doesn't have a leader.

Speaker 1 (35:34):
It's a force, like gravity.

Speaker 2 (35:38):
It drives things, striving to procreate. And that's not a conscious
will necessarily. It's just, I don't know quite how to
think about that. You can't attribute a conscious

(36:00):
decision for a cell to split and become two cells.
That's not a decision that cell made; it somehow got wired
into it. But then that gives the ability to have
slightly different genetics, and then you can combine that into
a more complicated thing, or more complicated beings that themselves come

(36:22):
with an instinctual urge to procreate. It's our genes:
our genes just use our bodies as a means to propagate the genes.
We're the accident; the genes are down there
saying, I must propagate, I must make more of my
own, and hey, being a cell in a human being

(36:43):
is an effective way to do that. I'll try that,
or a chicken or a whale or a corn cob
or whatever.

Speaker 1 (36:51):
Yeah, we're into Dawkins territory.

Speaker 2 (36:53):
Now, the selfish gene hypothesis.

Speaker 1 (36:55):
But it strikes me that you and I may not
want to die. I would suggest the primary reason is
the human ability to project the future. We would
think about all the happy times that we would have in
the future and all the pain that our friends and
relatives would suffer from our loss, and that would cause
us great distress, and that would consciously be a great
reason for us to want to stay alive. But you take

(37:16):
a cat or a rabbit. It absolutely wants to stay alive,
there's no question about that. But it's unlikely to be
reflecting on the loss of its friends. That hardwiring
is in us as well, isn't it? We're just projecting on
top of that.

Speaker 2 (37:30):
Somehow that is hardwired in, that you fight to stay
alive. From the genes' point of view: I don't want my
host body to die before my host body has a
chance to reproduce, however it does that, splitting cells or sexual
reproduction or whatever. I want you to stay alive long

(37:51):
enough to reproduce, because I want my genes to propagate.
And again, that's not a conscious decision, but it's a
framework to think about it. And then, over
billions of years of the evolution of life, that gets
really hardwired into the genetic structure itself, apparently. And so, yeah,
we all strive to stay alive, and it's for the

(38:14):
reasons you said. I would add to that: now that
I've had a chance to think about some of these things,
I'm really curious to know what the future holds, and if
I die, I won't know, and that's frustrating. So there's
lots of reasons to try to stay alive.

Speaker 1 (38:30):
Do you think you will stay alive in some form?

Speaker 2 (38:33):
Oh, that's a tricky one too. So one of the
other things that is going on right now in technology
is the drive to extend humans' health spans, at the
very least to stay healthy up to the point you
drop off, or even to extend lifespans. There are people like
Ray Kurzweil who say, I want to live long enough to

(38:55):
live forever; that I'll live long enough that the technology of
biology gets good enough that I can repair myself and
stay young and live forever. And that's a very strong effort,
both technologically and spiritually. And I'm not sure exactly how
widespread it is, but it's a very strong trend among

(39:15):
people who are thinking those thoughts. And I completely understand that.
I'm not sure whether I want to live forever, but
now that I've written this book, I'd like to participate
in some of that, just to see how it goes.

Speaker 1 (39:29):
Would you be happy just to experience it?

Speaker 2 (39:31):
If we really are going to extend human lifetimes, for instance,
in an appreciable way, then there are some ancillary issues
that we need to think through. Because death is a
major part of our culture right now, because we're used
to people dying. It's the way it works. You're born,
you live a good life, you die, leaving behind sadness

(39:54):
and disappointment and whatnot, as you were saying. But if
we all start living forever, now you're back to this
question of how many people the earth can provide for. If
everybody who's born lives forever, where do we put them all?
And I've got a slightly facetious way of defining the issue.

(40:15):
If we attain that technology where you can live forever,
what do we do with the babies if all the
old folks don't go away? It's part of the
natural order now that you age out of the system
and fresh new people can come in and propagate the
species and develop new technology, all that exciting stuff. And

(40:38):
that will just change drastically if nobody dies.

Speaker 1 (40:43):
That's part of the balancing that we talked about before, isn't it?
New ideas are able to come through, and the status
quo can be broken and we do get change. If
nothing else, the dictator cannot live forever.
We know that for certain; they will eventually disappear.

Speaker 2 (41:00):
People are striving to live forever. So how do you
plug that into your thinking?

Speaker 1 (41:06):
And do you think, let's say, Ray Kurzweil will be one
of the first people who could live on in some
form of consciousness, or replicable consciousness?

Speaker 2 (41:15):
Then there's also, yeah, downloading your brain into a machine
of some kind and living there. I don't know quite
what that would be like, but the fact that the
technology is heading in a direction where we might be
able to do that is certainly true. And then, what
would it be like to wake up and find that
you're not in your young, healthy body but in some

(41:37):
kind of a box in a Microsoft data processing center
or something.

Speaker 1 (41:42):
And if that was the case, would you be content
just to experience the world or would you personally be
more interested in still contributing to the world.

Speaker 2 (41:52):
Oh boy. Between the two, I would have to put
experiencing the world first. That's what we're wired to do.
If one could make a contribution, that's even better. I
don't want to, yeah, oversell my contribution to the world,
but I have certainly enjoyed experiencing it. I'd like to
continue doing that for a while. It would just be

(42:16):
very different if you were in a silicon chip
somewhere rather than in this corporeal body. Good, bad, or different,
I don't know. Very different. It would be different.

Speaker 1 (42:31):
We'll touch on regulation, I'm sure, in our conversation,
but this is a huge one in terms of rights
and regulations, because then we start a whole new thing. Okay, now,
not only are you carrying on in some form of aliveness,
but what about your estate? What about your money? Are
you able to hold on to that?

Speaker 2 (42:51):
Now your kids want it and you're still alive.

Speaker 1 (42:54):
Are you able to carry on?

Speaker 2 (42:57):
There are just lots of complicated issues that come with
the issue of people living a long time, or forever
in particular, to go to the extreme case.

Speaker 1 (43:07):
It's certainly worrying to think about a dictator who could be
in position forever.

Speaker 2 (43:14):
You could turn that around and take what you and
I are talking about just now and say that you
and I are wishing for death. That's not exactly what
I mean, just that if people live forever, there are going
to be issues that have to be thought about, how
you handle them. But you don't handle them by ignoring them.
And my impression is that most of the live forever

(43:35):
people are not thinking through what the implications are.

Speaker 1 (43:41):
Oh, and the elements of life, the stages of life,
the entropy of life, obviously shape our lives. I'm forty-two
and I'm fit, so I look and think, I
want to make the most of this decade. I want
to get to fifty and really have used that energy wisely,
because I might have different energy at fifty, and I
certainly will at seventy and then ninety. But that's fine,

(44:01):
that's part of being alive. I do wonder, if we had
this idea that you could live forever, whether for Romeo
and Juliets, and siblings or parents or whatever, the experience
of loss could be really difficult if you were losing
somebody who, in theory, could have lived with you forever.

Speaker 2 (44:19):
Yeah, absolutely. I have no more to say about that particularly,
but you're right. Yes, it would be. So the only
way you could die is by stepping in front of
an automated vehicle or something. And if you're going
to live forever and then you have an accident, that
would be a heightened tragedy at some level, somehow. Yes,

(44:39):
I don't know. I'm just, in my own limited way,
laying some of these issues on the table to
be thought about. I don't really know what the answer is.
The answer is certainly not, let's stop doing research on
life extension. That's not the answer. The answer is: go ahead
and do your research, but let's think about the implications
and what we might have to do to accommodate it

(45:02):
if you're successful.

Speaker 1 (45:04):
Yeah. And I think we've been doing that anyway. And
there are some misunderstandings about the length of life, because with
infant mortality in the past, the average was perceived
to be much lower than it actually was. But we have
still added twenty or thirty years, probably.

Speaker 2 (45:20):
Oh yeah, sure. People a few hundred years ago
tended to be dead at your age, never mind mine.
This is good: I've had another forty years on top
of you, and I've enjoyed pretty much every one of them,
and I'd like to keep it going.

Speaker 1 (45:34):
Yeah, we're grateful for modern medicine. I think almost
all my family members would be dead right now.

Speaker 2 (45:39):
But with modern medicine, you get to be one hundred
and fifteen. Don't you want to go to one hundred
and twenty-five?

Speaker 1 (45:44):
And yeah, I do. But I want to be quite
fit as well, so I need the health span and
the lifespan.

Speaker 2 (45:52):
Yeah, so partly, I guess the suggestion here is, either
by force or just the evolution of things, we may
have to think about death in a very different way
if we can develop the technology to live forever. And
I don't know exactly what that means, changing our psychology
about it in some way.

Speaker 1 (46:15):
On the book: you mentioned Ray Kurzweil, and your book is
titled with the singularity in it. Is that just choosing
a title amongst all the different topics in there, or
was AI quite prevalent and front of mind for you
as you wrote it? And do you want to say
what you mean by the singularity, just in case?

Speaker 2 (46:32):
Yeah. So the singularity is a term that goes back
to the mathematician and author Vernor Vinge, who used it
in a science fiction novel, and it comes from mathematics.
It's a mathematical term for basically dividing by zero, so
everything goes to infinity. That's what a mathematical singularity is.
There are singularities in the middle of black holes, as

(46:53):
Roger Penrose, the famous scientist, showed. And then
it was metaphorically picked up in this other context. And
I'm not sure that Ray Kurzweil invented it. No, I'm sorry,
it was Vinge who invented it in this novelistic context,
I think, and then Kurzweil picked it up and made

(47:14):
it particularly famous. But the idea is, we get to
a stage of our technology where our machines, artificial intelligence,
whatever manifestation that is, are more competent in every aspect
of human activity, and so then we're tied. But at

(47:34):
that point the artificial intelligence is going to race ahead,
basically at the speed of light or the speed of
moving electrons around in a silicon chip or something, and
we're still advancing at the speed of biology, which takes
tens of thousands, hundreds of thousands of years to change.
And so if we get to that point of equality,

(47:55):
the artificial life is going to take over, and who
knows what happens. That's part of what we're thinking about
and talking about. But that's what is meant by the
singularity in this context: the time when machines get
as competent as we are and then race ahead.
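
[Editor's gloss, not part of the conversation: the "dividing by zero" picture can be made literal with a toy growth model. The equation and the constants k and C0 are illustrative assumptions, not anything Wheeler states.]

```latex
% Toy model (assumed for illustration): a capability C whose growth
% rate is self-reinforcing, dC/dt = k C^2, versus ordinary
% exponential growth dC/dt = k C.
\[
\frac{dC}{dt} = k\,C^{2}, \qquad C(0) = C_{0}
\quad\Longrightarrow\quad
C(t) = \frac{C_{0}}{1 - k\,C_{0}\,t},
\]
% The denominator reaches zero at the finite time
\[
t_{*} = \frac{1}{k\,C_{0}}, \qquad C(t) \to \infty \text{ as } t \to t_{*},
\]
% which is the "dividing by zero, so everything goes to infinity"
% idea. Plain exponential growth never diverges in finite time;
% the blow-up requires the feedback of capability improving itself.
```

In this reading, the metaphorical singularity corresponds to the self-improvement feedback (the C squared term), not merely to fast exponential progress.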

Speaker 1 (48:12):
Okay, And just revisiting that question, then, was that the
main purpose of putting this book together or is it
just a means of referencing the types of content that
you will include.

Speaker 2 (48:23):
Well, by the time I had picked up this idea
and did the first little group discussion with the students,
I then had the idea of teaching it as a course,
and I used Kurzweil's, what was it, the two thousand and
five book, The Singularity Is Near. I used that as
a textbook because it's full of clever ideas. It illustrates

(48:43):
the basic points very clearly, has some very interesting extrapolations,
and is a fascinating book, and so we used that. And
so I wasn't thinking of writing a book at the time;
I was just teaching a class, and it was great fun,
because you learn new things and the students were very interactive.
It was a fairly small class. But as I taught

(49:06):
that class, I realized there were things that Kurzweil didn't address,
like, what are the implications of saying this? What are
the implications for capitalism? He doesn't talk about that. He said,
maybe you can get rich in the future if you
invest in technology, but otherwise he didn't talk about that.
He didn't really talk about how all this might affect democracy.

(49:32):
And so I ended up with an urge to address
those things. We talked about them a bit in my class,
and it was putting all of that together, ideas that
had come up during the class, these notions that Kurzweil
didn't address all that well, that made me think, maybe there's a book
in there. So that's how it came about. It was a
slightly messy process, rather than saying, I want to write

(49:54):
about AI; it was just clear that it had to play
a role in it. I don't quite know when CRISPR-Cas9
was invented; in principle I could remember. I
write about it a bit in the book, but I didn't
hear about it until like two thousand and thirteen or
fourteen, after I started teaching the class, and it just

(50:15):
came on me like a bombshell that this is what
my undergraduate students in that book discussion were talking about:
that we can change our own evolution with these techniques.
And I took that as a prime example of how
technology advances exponentially, and something you didn't know about yesterday
all of a sudden has become this overwhelming thing the

(50:36):
next day. CRISPR is just one of many things one could
cite in that regard. And so I wanted to write
about the genetics too. I talked about genetics
with a friend of mine. I had dabbled in astrobiology
for a while earlier in my career, so some
of those thoughts were in my mind, and they all
just assembled as I started thinking about the book, and

(51:00):
space travel and other things. And, as an astronomer, I'm
thinking about that anyway. The book was a chance to
dump a bunch of things in there, and I
surprised myself as I went along, writing things and realizing that, oh,
I've actually been thinking about that topic for twenty
or thirty years, and here's what I think about it

(51:21):
and put it in the book. It wasn't part
of my thinking when I started writing it, just
things I realized I'd pondered for a long time, informally,
non-expertly, as an interested human being, not as an
expert in it necessarily.
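
[Editor's gloss, not part of the conversation: the "bombshell the next day" feeling Wheeler describes is a generic property of exponential doubling, where the most recent doubling outweighs all prior growth combined. A minimal numeric sketch, with made-up doubling numbers rather than data from the book:]

```python
# Toy illustration of exponential technology advance (assumed
# doubling once per period; the numbers are invented for the sketch).

def capability(n_doublings: int, start: float = 1.0) -> float:
    """Capability after n doublings, starting from `start`."""
    return start * 2 ** n_doublings

# After 10 doublings, capability is 1024x the starting point.
total = capability(10)  # 1024.0

# The single most recent doubling (512 -> 1024) added more than
# the nine previous doublings combined (1 -> 512), which is why a
# technology can seem invisible one year and overwhelming the next:
# the curve always looks flat just before it doesn't.
last_step = capability(10) - capability(9)  # 512.0
all_prior = capability(9) - capability(0)   # 511.0
assert last_step > all_prior
```

The same arithmetic holds at any point on the curve, which is the sense in which exponential growth keeps producing "bombshells" no matter how long it has been running.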

Speaker 1 (51:38):
Yes. We talked about climate change briefly earlier on, and
we touched on evolution. I'll just try a comparison here,
see if it works. We've experimented with our climate,
if you like, and now we could be wiped out
by climate change, but there's no intention from our climate
to harm us. We've experimented with coding and data and statistical

(52:03):
models to the point at which we have something we've
coined artificial intelligence, and there are some suggestions that artificial
intelligence could wipe us out. But is the next
sentence relevant as well? That, like the climate, there is
no intention there; it's just something that we've meddled with
and it could do us harm. Or are we
going further than that in this conversation, that actually there's

(52:27):
a will and intention, a consciousness, decisions that are made
by a completely separate, sentient intelligence? Because that's a leap
from my perspective. But can you take me there from
your perspective?

Speaker 2 (52:42):
Yeah, I think a little bit more of the former.
I don't think there's necessarily any intention to it in
quite the way you're asking. So I guess the way
I characterize it is, there are smart people in research
labs and in businesses who see a cool thing they can do,

(53:09):
and so they work on that cool thing. And
some of this is how artificial intelligence was starting off. I
think the history, as I read it, is pretty clear
that people weren't thinking about implications the way they've come
to pass. They were just doing something cool in the
lab for the intellectual excitement of it, and that then
feeds on itself: you learn something new, and then

(53:30):
you can build on that, and off it takes. And
on the commercial side, there are people who can make
a profit on it, and so they work on that,
because they can make a profit. And that's the way
our capitalistic system works. And I'm all in favor of
that, until it runs off the end of the track
a bit. But I don't think anybody intended it. It wasn't a

(53:51):
global intention in any way. It's just people doing cool
stuff, and that will lead to developments that in some circumstances
run away, so we need to be careful about them.
But I don't, maybe I'm not quite interpreting what you
mean by an intention, but I don't see any intention
that way, which is also why I think it's unstoppable.

(54:14):
You're not going to stop people from doing cool stuff
in the lab, and it's going to build. And so
here we are: all of a sudden ChatGPT drops
on us out of nowhere. But it wasn't nowhere. There
was this gradual building of that technology over decades.

Speaker 1 (54:30):
Yes, for sure. I think it's almost like an emergent property,
isn't it, from our consciousness, that these ideas will arise?

Speaker 2 (54:40):
Yeah. And that's a fascinating topic that I also talk
about a bit, because it may have to do with the
origin of life, and it may be where AI is going.
Emergent properties are a very important principle in what's
going on, and so the question of whether consciousness in
machines will be a naturally emergent property in machines, the

(55:02):
way consciousness is apparently an emergent property in life, in us
in particular, is a fascinating question. I don't know exactly what the answer is, but it's
a lot of fun to think about.

Speaker 1 (55:15):
And maybe I get confused when I listen to people
like Nick Bostrom, who was speaking to the podcast host
Sam Harris recently. On that show, they made it
very clear that if you weren't appropriately concerned about artificial
intelligence as an existential threat, then you were foolish; that was
the message I was getting. That you just didn't get it,

(55:35):
you were in denial, or you weren't clever enough to figure
it out somehow. They were quite direct with that, and
they also didn't give me a path to what they
were suggesting was going to happen, or even actually define
clearly enough what they were talking about. Because I'm told that
there could be an AI in the future that treats
humans the way that we now treat dogs. Sometimes we

(55:57):
look after them, sometimes we kill them, and the dog
doesn't know what's going on. And that to me is
purely science fiction. And if we can't even understand our
own consciousness, then to suggest that we're going to create it,
in a sort of Frankenstein's monster style, in a machine;
anything's possible, but is that more or less likely for

(56:19):
me than aliens landing tomorrow and saying, hello, we're a
superior species?

Speaker 2 (56:24):
Interesting. I disagree with you there. I think, again, going
back to the aliens: we look around, we see no
evidence of aliens anywhere, so I think it's very unlikely
aliens are going to land tomorrow. I'm maybe not
completely hard over on Bostrom's view there, but I do think,
if you think about it a bit, that there could
be bad circumstances in the future with this technology, and

(56:50):
therefore I think it's really important that we, collectively, everybody,
think about it, have discussions like you and I are
having right now. People could agree or disagree with us,
but I think it's important to have the discussions to
raise the issues, to be aware that there could be
some kind of machine apocalypse in the future, and what

(57:13):
do we do to prevent that? And the solution is
not obvious. It's a combination of trying to identify which
directions are healthy, which directions are unhealthy, put regulations into
place to try to limit that, and all that's going
on right now, and it's much more in the public
discussion now than it was when I started teaching this

(57:33):
class ten years ago, so I think that's a
very healthy thing. The other thing I argue
for in the book, which is very basic, and it
doesn't really tell you how to address this Harris-Bostrom
notion, is just to be aware. Because it's very easy
to go about your daily life, where we're worried

(57:54):
about having a job, feeding your kids, whatever, all these
things that dominate most of us. But to
be aware that these technological things are happening ever more
rapidly, and that they are encroaching on us, and they can
have negative effects. And I think the more that just

(58:15):
general people are aware of that, the better. So awareness is one
of the things that I try to emphasize in the book.
And if you want to learn
how to be aware of these things that are going on,
then one thing you can do is read the book,
because that's a theme that I stress all the way
through it. And I'm still trying. When I learn new things,

(58:37):
this is something I started in the class. I would
ask the class to bring examples of new technology to class,
and we would talk about them a bit and then go
ahead and talk about the theme of the day. And
I've continued that, so I've been posting on X and
LinkedIn and whatnot, just little tidbits of how technology is
advancing, to try to bring that awareness. And another example

(59:00):
of things: a particular kind of automobile. If you don't
think about Bentleys a lot in your day-to-day life,
you may or may not see one driving by. If
you start thinking Bentley, then you'll start noticing Bentleys more
often, because you're now just aware of it. And I
think this... I think...

Speaker 1 (59:23):
I could.

Speaker 2 (59:24):
I was trying to think of another brand of British car.
That's not the greatest example.

Speaker 1 (59:29):
It's been a while since there was a British car.

Speaker 2 (59:31):
Unfortunately that too, that too, I'm sure there's a better example.

Speaker 1 (59:36):
Yeah, we're aware of what you meant, though.

Speaker 2 (59:39):
Yeah, but once you start becoming aware, you will notice more.
That's just part of the human psychology. And so my
urging and my quest at some level is to encourage
people to be aware of these issues and see how
fast things are coming in and realize, oh, this is
part of a big pattern here. This is not just
a little isolated thing. And so awareness of these changes

(01:00:04):
is an important mission that I now have with the
book and these other little postings that I don't do
as frequently as I should.

Speaker 1 (01:00:12):
Yeah, yes, just for my clarity, and there may not
be a yes-or-no answer to this, but everything
that you're describing in terms of the level of
concern that we should have: is that related to a
machine having consciousness or not?

Speaker 2 (01:00:29):
It is, but it's not the only thing. So one
of the other things that I've tried to emphasize in
the book is that there is a human tendency to focus
on one thing of concern. There are so many concerns, but
we focus on one thing of concern and don't quite realize
other things are going along as well. So my example

(01:00:51):
of that is climate change. Lots and lots of
people are concerned about climate change, no question; that's a
healthy thing, I think. Lots and lots of people are
thinking about AI and what direction it is taking,
and I think that's a healthy thing. There are people

(01:01:11):
worried about, are we going to have an age of
designer babies now that we can tinker with our own evolution?
And another one I will throw into the mix, one that
I thought I'd come up with in writing the book, and then
I realized other people are writing about it too, is issues
of mind reading and mental telepathy. And my point in

(01:01:34):
that context is, if you just focus on climate change,
or you just focus on AI, or you just focus on
the genetics, you're missing the story, because all these things, everywhere,
all at once, whatever that movie title was, are all
going to be advancing so fast we can barely, if

(01:01:57):
at all, adjust to them, all four of those topics,
in the next ten or twenty years. So if you
just focus on one, that's maybe better than nothing. But
I think it's important to realize this: there's a whole
sweep of things going on that are going to be
overwhelming us.

Speaker 1 (01:02:16):
Yeah, and I take a passing interest, or more than
passing interest in everything you've just described, But I take
a strong interest in AI, both personally and professionally.

Speaker 2 (01:02:27):
I would think a little more about genetics, my friend.
So what do you think about mental telepathy?

Speaker 1 (01:02:36):
I think about Neuralink quite a bit.

Speaker 2 (01:02:39):
Let me tell you what to think about mental telepathy.
We are clearly advancing to a stage where we can,
with some accuracy, record what's going on in human brains.
There's an experiment where people were watching
a video and the experimentalists, with the use of AI,

(01:03:00):
and this is definitely an AI-catalyzed thing, can interpret those
signals and recognize what video the person was
looking at. That's a pretty amazing thing. It's not accurate,
detailed mind reading, but it's definitely mind reading. That technology
is with us. Elon Musk has got this Neuralink project.

(01:03:21):
They put needles in your brain and measure what individual
neural firings are doing. And that's gotten to be a
pretty sophisticated science. It's got a ways to go, but
things that have happened just in the last couple of
years have been quite dramatic. And then you can use
those signals, among other things, so that's mind reading at
some level. It definitely is. You can use those signals

(01:03:45):
to control a prosthetic, an arm for somebody who's been
injured, and now they have a robot arm they can
control just by thinking. You can in principle control a
robot, a cobot, where you link to the robot, and
the robot could be in the lab next to you,
or it could be on the moon, and you could

(01:04:05):
tell it what to do just by thinking. And, this
is more rudimentary, but people are developing techniques
where they can implant ideas in your brain: they know
what the readout is, they reverse engineer that, and they put
that feedback in. Some work done at MIT has involved sort
of implanting dreamlike states and then putting ideas into people's heads,

(01:04:30):
and that's another step in that direction. If you
put all that together, it means the technology could be coming,
and I think, because it's cool technology, people are going
to keep working on it. It's not something you could
just stop. Where you can read minds and write to minds,
that's basically mental telepathy, the thing we've thought

(01:04:52):
about for a long time. But with AI and our machines,
we can now do that. And then the question is:
what do you do about that? Who owns that data?
States like Colorado and California have just passed rules
that say anything you measure out of somebody's head is
part of their private data, and you need to be
very careful about how you use it. And I think

(01:05:13):
that's an exceedingly healthy thing to do, because the
very fact that there were high-tech companies that fought
those rules in California means those high-tech companies know
perfectly well that this is valuable data they can make
a buck off of. And I don't begrudge them making
a buck, but you've then got to put some restrictions
on how they do that. We've seen that with

(01:05:36):
the previous twenty years of companies sucking up data
and doing stuff with it. Some of it's been
very healthy and some of it's been not so healthy:
misinformation and disinformation and all of that. Anyway, about
mind reading: I don't want you reading my mind. If
I want you to know what I'm thinking, I'm happy

(01:05:56):
to talk to you about it. And I don't want
to read your mind; I don't really want to know
what you're thinking beyond what you express. There are just gigantic privacy
issues if we get into this AI-induced mental telepathy stage,
which I...

Speaker 1 (01:06:12):
...think is great. I'm a man; everyone knows what I'm
thinking about.

Speaker 2 (01:06:19):
I can take a guess, but I won't. I write
about it in the book, but I won't tell you what it...

Speaker 1 (01:06:24):
Is only for ninety percent of the day, but fascinating topics.
And I do know a lot about data collection, as
you might expect. And yeah, they do recognize these technology
companies the power of these devices so Fitbit. For example,
when Google acquired it, Google were banned from using bit
data for ten years, and then in a number of
years from now they will be able to use that

(01:06:44):
And just from putting on an Oculus Quest headset, the company
that owns that data, which in this case is Meta,
can tell a lot of health information about you,
both from wearing it once and from the continuous wearing of
it and trends over time. You can really understand somebody's
health quite clearly from the movement and the...

Speaker 2 (01:07:01):
...access. It is a very good thing for expert doctors,
AI or not, to know exactly what your health is.
But you can imagine that it can get violated and corrupted.

Speaker 1 (01:07:10):
And some would argue that Steve Jobs is dead because
he didn't take his health seriously: he didn't get the
checkups he needed, didn't go to the doctor enough, he
ignored things. And yeah, so AI is potentially wonderful for
early detection of disease.

Speaker 2 (01:07:22):
Absolutely. I don't want to sound like an AI negativist;
there are lots and lots of good things AI can do. But
you know there are avenues that are not so healthy,
and that's where the awareness comes in.
Let's sort out what's good and what's not good.
Let's put some limits on the not-so-good stuff.

Speaker 1 (01:07:44):
And I see this dichotomy of being amazed
by AI and what it can do, but also, working
in the industry, one's clearly aware that
it's not magic. Particularly if we
take something like ChatGPT, there's quite a
lot of Wizard of Oz perspective going on when
you do peer behind the curtain. For example, Amazon's Go

(01:08:04):
stores were supposed to be fully autonomous and run by AI,
and then they admitted that they had several thousand people
in India watching the cameras and working here. Apparently, self
driving cars are being driven seventy percent of the time
remotely by people and other locations because they can't drive
most of the time.

Speaker 2 (01:08:21):
But then check back with me in ten years and
see where that technology stands. So you're right, a
lot of it is still rudimentary now, but you can
see what direction it's going in.

Speaker 1 (01:08:30):
But I share your concerns, just from what we've said
as well. We humans are exponentially more complex than
these technologies are. These technologies are incredibly primitive, I promise,
compared to us and compared to the future, and imprinting
emotions into a human brain when we don't even understand
the human brain is both dangerous and quite weird.

Speaker 2 (01:08:51):
Really, it is weird. But I'm telling you there are
rudimentary steps in that direction, and they're not going to
stop because it's really cool science.

Speaker 1 (01:09:03):
Yeah, but it strikes me that people...

Speaker 2 (01:09:05):
One year, ten years, one hundred years, I don't know.
I'm just pretty convinced that's the direction things are going.

Speaker 1 (01:09:13):
My concern is that people do understand that AI is
a huge risk; they just understand the wrong risks,
for me. They understand risk in terms of the Terminator
and the sci-fi-type risks quite clearly, and they
talk to me about those a lot, quite regularly.

Speaker 2 (01:09:29):
So there are some concerns in that direction. I talk
a little bit about, yeah, autonomous killer weapons, and
right now those are regulated, and I think that's a
healthy way to be. But it's not just that; there's
more. Nobody knew when the Internet was invented that
we'd end up in an era of virulent misinformation and disinformation.

(01:09:54):
Nobody saw that coming, and yet it just emerged naturally
out of Facebook and all those kinds of things.
So it's trying to be foresighted about those kinds of
unexpected consequences that I'm urging people to think about.

Speaker 1 (01:10:12):
And I think we're fighting the same cause. There are lots
and lots of issues, of course, that need lots of attention.
One I wrote about recently was universal basic income.
If we see mass job displacement, we might find that
companies like OpenAI, which is backed by Microsoft, start
paying these universal basic incomes. Now, I'm not sure whether

(01:10:33):
they would intend, and they have trialed this, of course, it's
not just speculation, whether they would intend to pay these
to people whose jobs they have been displacing in Sri
Lanka or Bangladesh, or whether it would just be universal
basic income for Americans, by Americans. Every single tech
company of any scale outside of China is, of course, American;
there's nothing in Europe to speak of. So that's interesting.

(01:10:54):
But if that was the case, and we had a
future where Sam Altman's OpenAI was effectively providing the
living of ten million Americans, and then OpenAI suffered
some financial distress, it would be unlikely that the government
would let OpenAI fail, because it supports so many
people's lives. And that would be very concerning for competition.

Speaker 2 (01:11:16):
To me, your train of thought there is exactly the
kind of thing I'm trying to encourage: to start with an
extrapolation and then see where it goes. Whether we'll
go in exactly that direction, I don't know, but that
absolutely is one of the possibilities, and we need to
think about it and think about what we would do.

(01:11:37):
And part of that, then, and your awareness of it, is
intelligent regulation. We don't want to shut OpenAI down now,
but we don't want one man, Sam Altman, to end
up being responsible for the economics of the whole world
because he controls the UBI. How do you get a grip

(01:11:57):
on that? It depends partly on regulations, as I've said.
But then the regulations are going to be made by
the people you elect to do that kind of thing.
So it comes back to voting at some level. It's
important to be aware of these issues and then vote
for people you think can handle them. In the
current election we're having here right now, I have some

(01:12:20):
glimmer of what direction one of the candidates would go
in, and no glimmer at all of what the
other candidate would do in regard to these technologies that
are coming at us. It's just not discussed, as far as
I can tell. Now, I could say a little bit
more, but I don't want to go into politics too much.
But it is important to try to vote for people

(01:12:43):
that you think can intelligently assess and address these problems.

Speaker 1 (01:12:48):
They're big thoughts to have, for sure, and that's
a big ask of our politicians, who tend to be
stuck on the day-to-day.

Speaker 2 (01:12:55):
It's putting a lot on their shoulders. But if they
don't want to shoulder that burden, they shouldn't be running
for office.

Speaker 1 (01:13:02):
What does a utopic future look like for us?

Speaker 2 (01:13:05):
Oh, utopia. I'm a more middle-minded person. I doubt
it's going to be a complete dystopia or utopia; I
don't quite think in those terms. But yeah, you would
aspire to a situation where, for instance, everybody on the
planet lives at the comfortable level of life that you

(01:13:30):
and I do, and we're a long way from that
right now. So that is partly a question
of redistributing resources in a fair way. You don't
want to strangle the capitalistic advance, which leads to so
much as it is; I'm not anti-capitalist at all, but
there need to be some rules. I think of it

(01:13:52):
in terms of, sort of, American football, but it could
be real football in general. You have a field of
a certain size, and you've got regulations about when you're
offside and kicking the ball down the field, and without those
the game would be completely different. If it was just, okay, you
have an arbitrary number of guys here and an arbitrary
number of guys there, go do something, kick the ball into

(01:14:15):
the goal. The whole structure of soccer is that they have
formal rules they have to play by, and then they do
it superbly well. That's why it's such an exciting game. And this
is a global enterprise; it's not just about the United
States by any means, even though we tend to control
a lot of the big companies.

Speaker 1 (01:14:32):
For sure.

Speaker 2 (01:14:33):
I'm sorry I drifted off topic there a little bit.

Speaker 1 (01:14:35):
Oh, I know, I see where you were going. And
I think, first of all, that's a great answer:
that we do consider ourselves to have a utopian life,
you and I, in terms of the freedom and the
health and the food and the good air and families
and comforts.

Speaker 2 (01:14:48):
It could probably be a lot better. But first of all,
I wish better for everybody, and then you can look to
see whether we need more fresh air or more parks and less this,
that, and the other. But yeah, you and I live
a pretty good life, and I aspire to spread
that around. Whether that life is utopia, I don't know. Is
it utopia if nobody has to have a job for

(01:15:12):
whatever reason? I'm not sure. I've enjoyed my job,
being an astronomer.

Speaker 1 (01:15:18):
I'm quite keen; I'd have a job again easily, and...

Speaker 2 (01:15:21):
It's been intellectually, and otherwise, very stimulating to do what
I do, and it has provided me an income
to have a fairly comfortable living as well. If somebody
just paid me an ample amount every year so
I didn't have to have a job, I don't quite
know how I, as an individual, would react to that,
or how society would react to that. But that is a possibility.

(01:15:44):
It needs to be thought about. UBI, things like
that, have to be thought about.

Speaker 1 (01:15:49):
No, I'm with you. And it might be that you
and I would have a wonderful life under UBI,
enjoying reading and studying and lecturing and whatever else
we might find our time for, while others would be
in a state of suffering, losing the meaning of their
life and the purpose, the routine, that gives them immense
happiness, or whatever form that might take.

Speaker 2 (01:16:09):
I did ask my students that question at one point
in class: if there were UBI and you got
a decent amount of money that you could live on,
what would you do? It was a very gender-specific thing. Basically,
all the guys in the class said they'd play Fortnite
all day, all the time. This was more than

(01:16:30):
ten years ago, when Fortnite was all the rage.

Speaker 1 (01:16:34):
Yes, I remember.

Speaker 2 (01:16:35):
Whether that would really be true or not, I...

Speaker 1 (01:16:36):
...don't know. But I've never had the pleasure.

Speaker 2 (01:16:39):
I don't remember what the women said. I think they
were more thoughtful about it.

Speaker 1 (01:16:43):
My life is wonderful, but it would be even more
wonderful if, say, Russia wasn't invading countries right now, and
if there wasn't so much doom and gloom about climate change.
If you read the reports about climate change, it
is a tough read. That is a bad day; if
you spend your day reading that, that is not fun. Obviously,
health span and lifespan are always things that could be brighter.
But for me, I quite enjoy the activities of the day,

(01:17:06):
like washing up and making food and things like that;
I find them quite purposeful. But I think that the
greatest joy is turning your talent into some sort of
tangible meaning in your life. And a lot of people,
unfortunately, have great lives but never really are able to
use their talents. And that's what excites me about artificial intelligence:
does it allow more people to find their greatness in

(01:17:28):
the world.

Speaker 2 (01:17:30):
And I don't know exactly how that will come about. That's
a lovely thought. I don't quite know how you would
use AI to manifest it, but that's a wonderful goal:
to make everybody more self-fulfilled and productive. That's
what it is, and yeah, that's how I want to
live my life. I understand maybe not everybody

Speaker 1 (01:17:52):
Does. But yeah, it makes me happier, because I can
do more projects that I find very meaningful. We're able
to do more with AI because it allows us to
be more productive. I'm able to have less staff, and
I find it very stressful to manage people, so I'm
able to substitute some of what they do by
being supported by AI. I guess it could just

(01:18:15):
be as simple as developing something with AI that then
gives you that income. When you think about e-commerce
and so on, we've gone from having to lease
a physical store, to having a digital store, to having
a digital store where everything is planned and written for
you with AI. These are great leaps forward.

Speaker 2 (01:18:31):
As a writer, I don't look forward to that. I don't want
AI writing for me, thank you. It's just something I
enjoy doing; it brings me pleasure to write, to move words
around and rearrange things. And yes, I could then feed
that to ChatGPT and see whether it would change
it or even improve it. But I'm very jealous of
my prerogative to arrange my own words in my own way.

(01:18:55):
I'm with you, it's a subjective thing; not everybody's going to
think about it that way. Of course, AI could help write
my email, but damn it, I spend ten minutes writing a
four-sentence email because I want to make sure it's
saying what I wanted to say. I don't know. So
I do not use GPT heavily right now in my
own life, just to express that inconsistency.

Speaker 1 (01:19:19):
I think some writing is utilitarian. We work with teachers,
and a lot of the time they have to just
produce things that need to be shared out and do
the job; it just happens to be work that involves words.
I developed an online store in the background while
I was working in a corporate job, and it sold
matcha green tea through Shopify. But it was very challenging
for me to know what to write, how to

(01:19:41):
describe this tea so that it would sell. That's a
very challenging thing. So I wrote these things all myself.
And then I started talking to my customers. I did okay;
quite a few customers came and bought the tea,
and they were just amazing people. They were writing
me emails and giving me philosophical thoughts, and
I thought, what an amazing group of people. And eventually
the penny dropped. And I say this without any ego,

(01:20:03):
but the penny dropped: the way I'd written the website
was appealing to a certain type of person,
a person like me. It must be, in essence,
because I wrote it. And so I would never have
had that experience if I'd used ChatGPT to write a
utilitarian post saying this is matcha tea from this place
in the world and it does this for you. It
wouldn't have attracted the kind of people who brought joy

(01:20:23):
into my life at that point.

Speaker 2 (01:20:26):
That's a wonderful insight. You never know; human
interaction is a very complex, nuanced thing.

Speaker 1 (01:20:33):
Yeah. But then Shopify says to me, what color
do you want the add-to-cart button to be?
And I laugh. I think: the best color is whatever converts,
obviously. Why would I care? Make it polka dot
if you want. I have no idea what the
correct color is, but of course Shopify does

(01:20:55):
know the correct color.

Speaker 2 (01:20:56):
You got very taken with the idea of getting the right color.

Speaker 1 (01:21:01):
Yeah, there is clearly a color on this website that
would make some difference to the sales, even if it's tiny,
and so on and so forth. So I think
on a Shopify store like that, you've got some elements
where your creative writing absolutely gives you a competitive advantage.
In other cases, you just need to be able to
call on the data and say, look, just put this
in the right place on the screen, and choose the
order of these photos. I don't know, should I show

(01:21:23):
the person drinking the matcha first, or should I
show the packet first? I don't know; I'll never know.
But the data knows.

Speaker 2 (01:21:31):
People have studied that; people are expert at...

Speaker 1 (01:21:33):
...that, yeah, and they can tell me with the data.
So I think if it helps people to be more
successful like that, on activities like that, then it
does free up more time. They're not spending a
whole day tweaking a website; it's working much
better for them, and it frees up their time to find
more joyful endeavors. Craig, it's been truly wonderful to speak

(01:21:54):
with you.

Speaker 2 (01:21:56):
Really enjoyed it. I appreciate the invitation, and I appreciate
the way it came out; it's been a very fun conversation.
I appreciate that you've thought about all this so much already.
You're one of the people I don't necessarily need to
speak to; it's the choir that you preach to out
there that can benefit.