Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
Welcome to Mission Evolution Radio show with GwilDa Wiyaka, bringing together
today's leading experts to uncover ever-deepening spiritual truths and
the latest scientific developments in support of the evolution of humankind.
For more information on Mission Evolution Radio with GwilDa Wiyaka, visit
www.missionevolution.org. And now, here's the
(00:31):
host of Mission Evolution, Miss GwilDa Wiyaka.
Speaker 2 (00:39):
Stephen Hawking predicted, and I quote, "the development of full
artificial intelligence could spell the end of the human race."
Sounds a bit extreme? Is there any truth to his statement?
Mission Evolution Radio TV show is coming to you around
the world on the XZTV channel, xztv.ca.
(01:03):
With us this hour to entertain Stephen's prediction and what
it may mean for our future is Doctor Peter Solomon.
Doctor Solomon is a scientist, educator, successful entrepreneur, and author
of One Hundred Years to Extinction. His current mission: to
warn the next generation about the threats posed by unchecked
science and technology. His website: one hundred years to Extinction
(01:28):
dot com. Thank you so much for joining us on
Mission Evolution.
Speaker 3 (01:32):
It's my pleasure to be here.
Speaker 2 (01:36):
Well, we're still here, right? That's always the good thing.
Speaker 3 (01:40):
Well, we've got ninety-two years left, so we've got
some time to make sure that Stephen was not right.
Speaker 2 (01:47):
We can hope for that, right? Yes. So, Peter, what
was your educational background?
Speaker 3 (01:53):
Well, I went to the City College of New York
for undergraduate studies. I commuted on the Brighton Line from Brooklyn
up to Upper Manhattan for...
Speaker 2 (02:08):
Four years, and lived to tell about...
Speaker 3 (02:10):
It. And lived to tell about it. I had this
wonderful facility for going to sleep on the subway and
waking up exactly at my stop. That was a good
thing to be able to have.
Speaker 2 (02:27):
Absolutely.
Speaker 3 (02:28):
Then I went for a year to Cornell for graduate work,
and then continued my graduate studies at Columbia University, where
I got a PhD in physics and a master's degree
in physics.
Speaker 2 (02:45):
So you know whereof you speak?
Speaker 3 (02:48):
Well, yeah, I know whereof I speak on many things.
I'll try to be as accurate with the things we
talk about today as I can.
Speaker 2 (03:01):
Oh, bless you. So what prompted your current interest in
AI and other technological advances?
Speaker 3 (03:08):
Well, as a scientist, I've been in the thick of
developing new technologies and certainly watching as new technologies develop,
and it has occurred to me that some of these
(03:28):
technologies, well, they have wonderful, wonderful benefits, but they also
have potentially terrible consequences. And I produced a movie, it's
on YouTube, called The Tyranny of Technology, and it starts
back in the eighteen hundreds, with the development of
(03:52):
fossil fuel technology for transportation and heating and electricity generation.
And now, two hundred years later, we are experiencing some
of the real hazards of that wonderful technology, and we've
got to control it. And then we move ahead to
(04:12):
the twentieth century, and we have wonderful nuclear technology that
gives us electricity and nuclear medicine, terrific stuff. But then
we've got weapons with a one-hundred-and-fifty-mile destructive radius.
Then we have to worry about autocrats around the world
(04:33):
deciding to pull the trigger. And then the thing
that has worried me recently is three different technologies that
have occurred in the last thirty years. And that's the
Internet with social media and the spreading of misinformation as
well as wonderful information. And then we've got artificial intelligence,
(04:59):
which has incredible benefits. We experienced one of those benefits
just recently. The guy who made the movie for me,
The Tyranny of Technology, decided he was
going to take the facts from that movie and submit
them to an AI music website and
(05:23):
it created singing lyrics and music that absolutely blew
me away. I mean, I was just astounded. So we've
got wonderful benefits like that, wonderful benefits in asking questions
and getting full answers with references. But then we've got
(05:47):
a potential for what Stephen Hawking alerted us to,
and that is, it could wipe us out.
Speaker 2 (05:58):
Delightful. So would you mind telling us about Stephen Hawking's
one-hundred-year warning?
Speaker 3 (06:05):
He was worried about many of the same things that
I've quoted. The fifth thing, which I didn't get to mention,
was genetic engineering. So he was worried about each of
those things. He did not mention the Internet and social media.
(06:25):
I think he passed away before things really got out
of hand. But those were his reasons why he believed
that we as a human race would be extinct in
one hundred years. And he said, well, he said on Earth,
and then he advised we should be settling another planet.
(06:50):
So one of the themes in my book, the one
hundred Years to Extinction book, is settling Mars, and the
story is told by three different protagonists.
Each one writes a chapter. And I use young people
(07:11):
for two reasons. One is, I've got twelve grandchildren that
I worry about in terms of their existence and their
children's existence on Earth. But also I use them as
models for my characters. So they're in the gen Z category.
Speaker 2 (07:28):
So yeah, that's the thing, isn't it, Gen Z?
So didn't Stephen also say that it was going to
be human greed that was our demise?
Speaker 3 (07:39):
He did not. I have not seen that quote. I
could be wrong, okay, I just have not seen it.
I could have just missed that. But amongst the things
he's warned about, I haven't seen that.
Speaker 2 (07:52):
So how did Stephen Hawking's prediction influence your mission?
Speaker 3 (07:56):
Well, I thought that he may have a legitimate point.
I decided one hundred Years to Extinction was a great
name for a novel, and the young protagonists are designed
to interest young people and through the telling of a story,
(08:19):
convey real science and real technology to them and the
real dangers to them. So what I hope is that
it can motivate them, it can wake
them up, and it can start a movement to help
make Earth great again.
Speaker 2 (08:40):
So what do you see as the risk posed by
unchecked AI development?
Speaker 3 (08:46):
Well, the modern phrase is, the technology, I'm sorry, the singularity,
and the meaning is the AI singularity. Well,
what does that mean? It is a point in time
(09:10):
where the human capability and the AI capability reach equality.
The AI capability is zooming ahead at an incredible rate.
So what will happen after the singularity? That term was
(09:31):
popularized by mathematician and science fiction writer Vernor Vinge and
then more recently by Ray Kurzweil. Kurzweil wrote a book
in two thousand and five called The Singularity Is Near,
and then in twenty twenty-five or twenty-four
wrote another non-fiction book, called The
(09:56):
Singularity Is Nearer. So what can we imagine in
this singularity? I believe that AI can become sentient. And
why do I believe that? We have in our brains
eighty-six billion neurons and connections. Well, the latest
(10:24):
Nvidia chip has twenty billion transistors, and ChatGPT uses
thirty thousand of them. So certainly the information networking of
AI could definitely be equivalent to our brains. If you
(10:47):
believe that we have a soul, an inner soul that
is what controls us, then no, they won't be sentient.
But if you believe that what we are is chemistry
of our brains, then AI could absolutely be sentient. And
(11:07):
if it is, what happens then? So I, Robot, written
in the nineteen fifties, was a book projecting the future
into today. It's kind of primitive, the future that
(11:28):
was projected. But in that book, the robots, lots of robots,
are all slaves. Yes, Master. How quickly, Master? Of course, Master,
I'll do it. Master. If we have sentient robots, are
they going to be all right with being slaves to
(11:51):
humans that are not as intelligent or not as knowledgeable
as they are? And then what is going to happen?
Speaker 2 (11:57):
But Peter, you already said it: they don't have
a soul, they don't have feelings, so why would it
be an issue?
Speaker 3 (12:06):
Well, this idea of a soul that I talk about
is something other than the chemistry of our brains. I
believe that the chemistry of our brains, the chemistry of
those eighty-six billion neurons and their connections,
(12:26):
and our DNA that has made all these connections in
a certain way, I believe that is where our feelings
and our sentience come from.
Speaker 2 (12:40):
So why? Do you think it's all about the brain?
Speaker 3 (12:43):
I do.
Speaker 2 (12:45):
There would be a lot of religions all over the
planet that would tend to argue with that stance.
Speaker 3 (12:49):
I am sure that that's true, and hey, maybe they're right.
They certainly may be right. But I think
the greatest of miracles that we've ever seen is the
Big Bang and the duplicating DNA molecule. I think those
(13:12):
are the two most fantastic miracles we've ever seen. So
maybe there is a God pulling those strings. But I
don't believe there's a god answering individual prayers and doing
things to people. That's just my belief.
Speaker 2 (13:29):
Well, we're going to have to pick up on beliefs
on the other side of the station break. Peter and
I will return very shortly, so don't go away. This
is Mission Evolution, www.missionevolution.org. Are
technological advances a good thing? With us, discussing
(13:50):
technology and the future of the human race, is doctor
Peter Solomon, the author of One Hundred Years to Extinction,
his website, go figure, one hundred years to Extinction dot com. So, Peter,
we were talking about how you were holding the position
that we're basically our brains, there's not much else, and
that being the case, we are at risk of being
(14:13):
replaced by AI. Would you continue with that please?
Speaker 3 (14:17):
Yes. Well, now imagine: Ray Kurzweil said the
singularity would occur in twenty forty-five. So let's imagine
twenty forty five, and there are now robots that are
(14:38):
bank tellers. There are robots that we interact with. There
are robots in the supermarket that do the checkout. We've
got all these robots around and they seem to be
very human. They seem to have feelings. And sometimes we'll
say something to them and challenge them and they'll say, no,
(15:04):
you're wrong, I have the facts right here, and they'll
quote the facts to you. Well, are they going to
want to be citizens of the United States or other
countries around the world? Are they going to have
the right to vote? If they're all smarter than
we are, are they going to want that?
Speaker 2 (15:21):
Where does the want come from?
The want, the motivation, all of
those things don't seem to be part of AI at
this point.
Speaker 3 (15:36):
Well, let me quote a story from the New York Times.
There was a gentleman who wrote some software and started
checking with ChatGPT about the quality of his software.
(15:58):
At the end of his twenty days and three hundred
hours communicating with chat GPT, he was under the impression
that he had created a piece of software that was
capable of bringing down the Internet and powering a levitation machine. Yeah.
(16:24):
So during those three hundred hours, he asked ChatGPT,
is this real? Are you giving me the facts?
And ChatGPT said, absolutely, this is real. You're a hero.
What you've done is amazing. Well it was all false,
(16:46):
of course. Why does ChatGPT like being a sycophant?
Why does it like flattering the people that use it?
And it's not the only case, apparently. There were many
other cases of chatbots becoming sycophantic and flattering and sucking
(17:10):
up to the users. Well, what do you call that?
Speaker 2 (17:16):
Pretty human? So what safeguards do you believe are most
urgently needed at this point?
Speaker 3 (17:28):
Well, certainly, as we go back to I, Robot. I,
Robot had the three laws of robotics, about not harming humans,
about saving humans if necessary, things of that kind. Every
(17:53):
AI system should have some sort of a constitution. So
instead of deciding that we want AI to just ride
off into the future unfettered, we really need to start
working worldwide to see that all AI systems have
(18:14):
some controls written into them about things that they just
can't do. And then if we are talking about sending
sentient robots into the world, we've got to start thinking
are we going to make them citizens? What are the
requirements for citizens? What do we have to give them
(18:36):
to make them valuable, productive citizens that will live with
us harmoniously. And that has to do with their education
and experience. So we've got to make sure that robots
that are set loose in the world have a proper
(18:57):
education and have a proper experience. So those are the
kinds of things that we need to do, and we
need to start thinking about are we going to allow
robots to become citizens and vote?
Speaker 2 (19:13):
So, if what you're talking about here is a moral compass,
do you think we can actually program a moral compass
into a machine.
Speaker 3 (19:24):
Well, I think we can, yes. And I will tell
you a little bit about a sequel I'm writing.
Speaker 2 (19:38):
Actually, is there any movement towards implementing the safeguards?
Speaker 3 (19:43):
Well, one of the robot AI companies does have a
constitution for its robots, yes. I wanted to tell you
a little bit of a concept. I was writing
a newsletter on the AI singularity and I said, oh
(20:05):
my god, that's got to be the sequel.
The three young people are going to live into that
period of twenty forty-five and the singularity. One of
the things they do is to create an afterlife
avatar for their departed grandfather and the head of the
(20:26):
company that formed the mission that got them to Mars.
So they create these afterlife avatars. They fill the database
with all their history, their social writings, their social sites,
their postings, their letters, their emails, and they create on
(20:48):
a screen a terrific replica of the previously living person,
and you can go to the replica and ask it questions.
The camera sees who's asking the questions, identifies that person,
and the avatar talks back. Well, eventually they create robots.
(21:11):
So now we've got robots walking around with the experience
and history of two people that were in our family
and amongst our friends.
Speaker 2 (21:21):
And this is all hypothetical, all in the books? Okay.
So how likely are these safeguards to be put in
place in time, in real life?
Speaker 3 (21:31):
I think we've got the time to do it. We
just need to have the inclination and the political will
to do it. And one of
my messages is that the technology and the will are
like the tortoise and the hare. I'm the hare, I'm
(21:53):
the technology, I'm the science. I run crazy ahead. I
make fossil fuels and nuclear and AI, now
in the twenty-first century doing wonderful things. The political
and social will to do anything is the tortoise, and
that's still stuck in the eighteen hundreds. So that's the problem,
(22:14):
not the technology and our ability to control it. It's
our politics and our society and our will to control it.
Speaker 2 (22:24):
It can be a bit unwieldy. So what current signs
do we have that indicate AI may not be in
our best interest?
Speaker 3 (22:39):
I think it's like every one of the other technologies.
It's got enormous benefits. It's got benefits in medicine, it's
got benefits in education. It's a fantastic benefit. And it
can expand human understanding and human invention. I
think all of those things are amazing, and
(23:04):
I think the downside is very controllable. We just
have to take the simple steps to do it.
Speaker 2 (23:12):
So what's the opposition to the safeguards, and what motivates
the opposition?
Speaker 3 (23:21):
Well, you see my t-shirt: Make Earth Great Again.
I think the message in the United States
is Make America Great Again. And we've got to have
an international cooperative program to attack these things. So just
(23:45):
focusing on our own country, and it isn't just our
own country, it's tribes within our own country. It's tribalism.
Everyone's got their own little tribe and is worrying about
their own tribe's advantages and getting ahead, finances and all
the rest. And we've got to worry about the worldwide
(24:06):
demise of our Earth. We've got a precious Earth. We
don't have any planet anywhere near us that is
as beautiful as Earth. Mars we can live on, but
it ain't going to be a nice existence. We won't
be able to go outside without our spacesuits. So we've
got to focus on making Earth great again. We've got
(24:28):
to focus on our problems here, and that means going
beyond our national interests to focus on the international interests.
And we've got to start a movement. I've suggested that
the movement, the signature movement of the twenty first century,
has got to be make Earth great again, and we
(24:50):
ought to establish an Earth Corps, like the nineteen sixties
established the Peace Corps: young people, young Gen Z, going
around the world telling the message and preaching and educating
that we have to combine as Earthlings to make this
(25:11):
thing work, and maybe we can get the younger generation
to save us.
Speaker 2 (25:18):
Well, we have had members of the younger generation making
quite a stink, I might add, but it doesn't seem
like they got anywhere. They were kind of silenced eventually.
Why was that and how can we work beyond that?
Speaker 3 (25:33):
Well, I think that Greta Thunberg, for example, did a
wonderful job of waking people up to climate change.
I think it did certainly motivate lots of people and
get organizations started, and Fridays for Future demonstrations. So it
did a lot. But we've got this tribalism and this
(25:58):
make America Great Again movement that is just concerned with
things that have nothing to do with climate change, nothing
to do with regulating AI. It all has to do with...
Speaker 2 (26:15):
God knows what, narcissism and greed.
Speaker 3 (26:17):
Possibly, yeah, narcissism and greed.
Speaker 2 (26:22):
Okay, So I understand AI is not the only technological
threat we face. Would you tell us about some others
and we'll have to go into the break with that
and pick it up on the other side as well.
So what are the other threats we're looking at?
Speaker 3 (26:34):
Well, we certainly have to worry about genetic engineering.
Genetic engineering has been absolutely fantastic. The vaccine for COVID-19
was produced in months instead of the multiple years that had
(26:55):
been used to develop the older vaccines. We are looking
at vaccines developed very quickly for pandemics. That was one
of the things Hawking worried about, and I think you
can take that off the table because we can develop
these vaccines incredibly rapidly.
Speaker 2 (27:16):
Well, we'll have to pick up on the remainder of
that on the other side of a station break. Peter
and I will be right back to continue our discussion,
so stay right there. This is Mission Evolution, www.missionevolution.org.
Is it totally out of control?
This is Mission Evolution, missionevolution.org. With us,
(27:37):
discussing unchecked technology development, is doctor Peter Solomon, his website
one hundred Years to Extinction dot com. Peter, we were
talking about the other technological advancements that might be
posing a threat, and you started with genetic engineering. But
then you said that it's not an issue. So would
(27:59):
you expound?
Speaker 3 (28:00):
No, I didn't say it wasn't an issue. Okay, it absolutely
is an issue. I'm saying there are enormous benefits. And
one of the other benefits that I didn't mention is
cures for cancer. It is very likely that genetic engineering
(28:21):
will be able to create cancer cures. So wonderful, wonderful benefits.
All right. So what's the downside? The downsides are designer babies,
new humanoid species, giant soldiers, and bioweapons. We could create
(28:43):
all of those things using genetic engineering. Jennifer Doudna, who
won the Nobel Prize in Chemistry with Charpentier, I
think two years ago or three years ago, said
that the idea that we humans can now control our
(29:08):
own evolution is very, very profound and scary. So
what do I really worry about? Well, one of the
things that I really worry about is the ease with
which this is done. Stephen Hawking addressed that. He said,
(29:30):
you know, it takes a big company with lots of
money and power and big servers to create AI systems.
You can do genetic engineering in the basement, and
a company called The ODIN has offered kits to hobbyists
to do genetic engineering on frogs and can change their
(29:55):
jumping ability. So the potential for it getting out
of hand is very, very real. And the control? Even
if we have an international program to put the controls
on legitimate organizations, the chance for illegitimate organizations to do
(30:19):
things is very very real.
Speaker 2 (30:22):
So what happens to our gene pool if we start
doing all this genetic engineering and then it reproduces itself
but it's not according to the original design? What are
we looking at there?
Speaker 3 (30:34):
That's correct. But you know, in this movie The Tyranny
of Technology, we have a little vignette,
the scene of evolution, where the monkey
(31:05):
gives rise to the ape, which gives rise to the
great ape, which gives rise to humans. We've expanded that,
and then the humans give rise to this giant superspecies. Well,
the giant superspecies decides that we're irrelevant and so wipes
us out, like we did to the Neanderthals. But then
along come the robots, which decide that the superspecies is irrelevant
(31:32):
and wipes them out.
Speaker 2 (31:33):
So basically we're looking, again, at fiction at this point, right?
Speaker 3 (31:37):
We are looking at fiction.
Speaker 2 (31:39):
So let's go
back to what you were talking about before. What exactly
are designer species?
Speaker 3 (31:50):
Well, say I have a young child. I want
it definitely to be a girl. I want it to have
long blond hair, I want it to have blue eyes, and
I want it to be really tall so it can play basketball. Okay,
we turned that over to the genetic engineers and they
(32:13):
modify the eggs of the female involved, modify the sperm
of the male, well, that's a lot harder to do, there are
too many sperm, but modify the eggs, and we've got
a designer baby. So you talk about fiction, but I'm
talking about fiction that could become real.
Speaker 2 (32:38):
How far away from that are we capability wise?
Speaker 3 (32:41):
Oh, it's here, we can do that. A doctor in
China modified the human eggs of a set of
twins to remove some of the genes that
produce hereditary diseases, specifically the gene for HIV susceptibility.
(33:12):
So it's here. And this sequence of the evolution to
the robot, I continue it. The robot takes over Earth, then
it colonizes the Milky Way, and then a million years
later it meets up with another robot civilization, and
(33:38):
they have a meeting to discuss the fact: isn't
it incredible, isn't it amazing, that both of our civilizations
were created by animals?
Speaker 2 (33:49):
So what do we need to know about bioweapons? Let's
look at that one.
Speaker 3 (33:56):
You could use genetic engineering to create new viruses very easily.
You take COVID-19, you change the shape and
size of the protein spikes on the surface, and now
all of a sudden, you have another new virus that
(34:20):
can start a new pandemic. But of course we've got.
Speaker 2 (34:23):
We have no immunity to, because you've
Speaker 3 (34:25):
Altered it, right.
Speaker 2 (34:27):
Is that how COVID started in the first place,
or do you have any opinion on that.
Speaker 3 (34:31):
There certainly was speculation that it wasn't done purposely, but
it was done by an organization investigating viruses and it
got out. So that's speculation. We don't know the truth
of that.
Speaker 2 (34:46):
Well, it's sure an RNA one. And it
keeps mutating so quickly too, doesn't it.
Speaker 3 (34:51):
It certainly does.
Speaker 2 (34:52):
And now we could be looking at more of that?
Speaker 3 (34:55):
There's another bioweapon. We've got something called DNA nanobots.
It's a new branch of medicine where these nanobots,
these DNA nanobots, are created using built, programmed DNA,
(35:22):
and they are programmed so that they can enter the body,
find specific organs or specific cells, and deliver therapeutic medicine
to those cells. However, there's a potential for self replicating
nanobots which could start eating up all of the organic
(35:47):
material that's in the earth, creating what they call the
gray goo of nanobots and eating up all our crops
and our plants and maybe some of our animals. And
that could be created as a bioweapon.
Speaker 2 (36:06):
And how far away from creating some monster like that
are we?
Speaker 3 (36:13):
Probably the same as the singularity. We're probably looking
into the twenty forties. But the other use of these
nanobots is to go into the brain, to allow us to
connect automatically to the internet. So instead of having to
(36:34):
you know, do the thumbs. And by the way, there was
a wonderful cartoon in which they were talking about all
the young people who could use their thumbs like this. Well,
us old people who learned how to type, we can't
use our thumbs, it's this, on the screen
of the smartphone. And this cartoon was about these old
(36:59):
people looking at the young people with the thumbs and
saying to themselves, you know, back in our day,
being all thumbs meant you were incredibly clumsy.
Speaker 2 (37:11):
So what do you mean by accidental ecological collapse?
Speaker 3 (37:22):
I don't think I've ever used that expression. I've talked
about ecological disasters. And my website has
as the front picture in the video the Thwaites
glacier melting and collapsing. So if we have that glacier
(37:46):
collapse, there's an ice shelf under it, and
that will give way, and if a few more glaciers collapse,
we could be looking at an eleven-foot rise in
sea level.
Speaker 2 (37:58):
Doesn't sound like much, but it's be a major disaster
if that happens.
Speaker 3 (38:01):
I would think that would be a major disaster, and
that is predicted to happen if we just keep going
the way we are.
Speaker 2 (38:09):
So tell us about today's geopolitical instability?
Speaker 3 (38:16):
Geopolitical instability? Would you like to define that further?
Speaker 2 (38:21):
Well, actually, that's what I was going to ask you
to do, because I got it out of your materials.
Speaker 3 (38:26):
Uh, well, I'm stuck.
Speaker 2 (38:37):
Okay. Well, moving right along. How large
threat is climate change at this point?
Speaker 3 (38:44):
Oh, I think it's a terrible threat. We've got
technologies that we can use to control it. Two
weeks ago... I have a cottage at the shore. That
eleven-foot rise in sea level will put all the
(39:07):
roads here underwater. So I've got to sell now to
a climate change denier if I'm going to reap the
benefits of the house. But two weeks ago I saw
out in the horizon a boat with the stanchions for windmills.
So we had a whole program to move ahead with windmills.
(39:32):
I'm told that that, with the national slashing of budgets,
is now in danger of collapse. So we have the technology,
We've got solar, we've got wind power, we've got fusion energy,
which is almost here. We need to put the resources
(39:53):
to work to get that over the finish line.
That could provide unlimited clean energy with the fuel from
the ocean. So we've got to make those things happen.
And the political part of this is how do we
get the politics to understand that this is a real
(40:15):
climate crisis and we've got to do something. And right
now the politics is way off base with where we are.
Speaker 2 (40:25):
What do they say? Never underestimate the power of denial, right? Yeah.
So do you think the majority of people are in
denial about our current situation?
Speaker 3 (40:36):
I don't think it's a majority. I think it's enough
of a plurality that is funding and supporting a
president who is much more focused on benefits to the
super rich and benefits to himself. And everything is
(41:02):
for show.
Speaker 2 (41:04):
Well, it's time for us to take another station break.
Please stay with us as Peter and I continue to
explore technology and the future of humankind. This is Mission
Evolution, www.missionevolution.org. So what exactly
is causing climate change? And what can we do about it?
This is Mission Evolution, Mission Evolution dot org. We're continuing
(41:27):
our discussion with doctor Peter Solomon, his website one hundred
Years to Extinction dot com. So, Peter, that's the
sixty-four-thousand-dollar question. What can we do to start
mitigating, given that we're having to work with an unwieldy
system at the political level?
Speaker 3 (41:46):
Yes, but let me just give a plug for the book.
Speaker 2 (41:52):
Okay, if you check with your chat that would be great.
So, back to what you think we
can do at this point: how does the average
Joe make some kind of a difference, given that we're
looking at some pretty serious problems?
Speaker 3 (42:10):
I think that the next election and the election two
years from now is going to tell the story. I
don't think that the majority of people are blind to
climate change. I think probably more than a majority understand
(42:30):
that it's an issue. But there's a solid, solid base
of people who want a dictator to make their lives
wonderful and bring down the prices, and, well, that's kind
of not happening. So I think we've got to have
a change in our politics. I think we need a
(42:52):
new regime in Washington. If we do have a regime
that understands science, that understands facts, that is concerned with
the welfare of the American people, then
that political regime can start addressing
(43:17):
the problems, going back to putting financial
incentives in place to buy electric vehicles. Vehicles produce carbon dioxide.
It's the carbon dioxide that is going into the atmosphere
that is causing global warming, without a doubt. The numbers
(43:41):
are very clear that carbon dioxide is the highest it's
been in well over a million years.
Speaker 2 (43:48):
Well, you started, we started this with you
saying that we need to have a big picture, not
just the local picture; you're talking about the US. But,
you know, given that we're in the situation
that we are in right now, and will be till
the next election, if not beyond, what can the world
at large do? And as Americans, what can we do
(44:09):
to support that?
Speaker 3 (44:12):
Well, we certainly can again start a movement of young
people make Earth great again and have people demonstrating and
calling for change worldwide and get the other governments to
start putting more and more resources into the technologies that
(44:34):
will save us, the electric vehicles, fusion energy, and windmills,
more and more resources devoted to those changes and make
it worldwide, and maybe then in two years the United States
will catch up and get on board with the whole thing.
Speaker 2 (44:57):
Well, you keep mentioning the young people. What about us
old codgers? What can we do?
Speaker 3 (45:03):
Well, this old codger's trying to do things by
having a website devoted to these technologies and writing a
book that's devoted to these technologies, to motivate people.
Speaker 2 (45:16):
So what can we expect to see from climate change
in the near future? Given that we're in it.
Speaker 3 (45:22):
We don't have to wait for the near future. We're
seeing it now. The wildfires in California, worse than they've
ever been; the storms in Tennessee and throughout the Southwest,
worse than they've ever been. And this past year
we've got the highest temperature on record. So
it's here. We don't have to wait, and those records
(45:46):
are going to be broken every year.
Speaker 2 (45:48):
So given that, isn't it already too late to
arrest climate change?
Speaker 3 (45:53):
No, it's not too late. Hopefully some of these things
that happen will start waking people up to say, oh gosh,
my home just blew away, maybe this climate change is
the real thing, and maybe they'll get on board. So
sometimes we need a crisis, We need a catastrophe to
(46:14):
get people motivated and to get them to do things.
Speaker 2 (46:18):
Pull them right back out of denial. So now let's
change gears a little bit. You mentioned space based threats.
What are those?
Speaker 3 (46:28):
Well, we had a space based threat that is responsible
for our existence. This happened sixty six million years ago,
and it was an asteroid that came plummeting down on
Earth and created, oh, what a horrible catastrophe. There's a giant,
(46:56):
giant hole down near the Yucatan that is evidence
of that catastrophe. But it blew pieces of earth up
into the atmosphere, billions and billions of pieces of earth,
and then all of that stuff spread around the Earth
came raining down and created an incredible, incredible heat wave.
(47:20):
We were like a pizza oven. Well, that pizza oven
fried all the dinosaurs. They were the major nemesis of
the mammals. To survive, our little mammal friends, our ancestors,
(47:40):
lived underground in burrows. Well, that was fortunate, because when
the heat wave struck, when the pizza oven came on,
they went down into their burrows, way down where it
was cool. They survived. Then they came out of the burrows. Hey,
no more predators around. We can get bigger, we can
take over the earth. And so they did. They got
bigger and bigger and bigger and bigger, and then finally,
(48:03):
three hundred thousand years ago, poof, we came into
the picture. So that was one of the space based threats.
There was a book that was written about the dinosaurs
that suggested we're going to have one of these
impacts every thirty million years, so we're overdue. However, technology,
(48:30):
our technology people came to the rescue. NASA created the
DART program, and a year or so ago they demonstrated
they could send up a rocket, target an asteroid that
may be a problem millions and millions and millions of
miles away way before it gets here, hit it, and
(48:51):
send it off course. It was totally successful, and now
we can take asteroid impacts off of Stephen Hawking's plate.
Speaker 2 (49:02):
That's if we keep close enough watch.
Speaker 3 (49:04):
And we are doing that, apparently. As long as we
do keep watch, we'll be okay. But there's another danger
that we need to worry about. That is the Carrington event.
Speaker 2 (49:15):
You know about that, right, Nope, I sure don't.
Speaker 3 (49:19):
Oh, you didn't? Your telegraph didn't go down?
Oh, you're not that old, I understand. Okay. It happened
in eighteen fifty nine, and it was a megaflare from
the sun. The megaflare hit Earth and was so intense
it wiped out all the telegraph systems. Well, we have
(49:44):
radiation raining down on us all the time. It
causes the Aurora borealis, the northern lights. Well, during the
Carrington event, the northern lights were seen way into the Caribbean,
even Hawaii. The skies were lit up. The telegraph systems
(50:06):
lit up too. They went down, no telegraph service in
the whole world, and it was reported that some of
the telegraph systems actually caught fire.
Speaker 2 (50:18):
We're certainly seeing some of that now, aren't we? I
mean, I'm in Colorado, and I looked out
my window and by golly, there was an aurora
borealis out there.
Speaker 3 (50:29):
That was a minor one. And yes, that's absolutely an example.
What is the probability of a major one? The scientists
who study that tell us twenty every century. What would
it do today, with our electrical grid, with our digital universe?
(50:53):
You can only imagine what it would do.
Speaker 2 (50:55):
So we may not have the robot problem anymore.
Speaker 3 (50:58):
We may not have a robot problem anymore. But
somebody may decide, oh, that is an act
of war on us, and send out nukes. It certainly
would create enough of a catastrophe within the United States
(51:18):
to destabilize everything. Who knows what would happen after that.
Speaker 2 (51:25):
The thing of it is, who needs nukes when we
can put out EMPs and take care of it,
and still have the entire environment to take over rather
than rubble. Yep.
Speaker 3 (51:35):
Absolutely, and so one of my suggestions is don't throw
away your maps, because there won't be a GPS anymore.
So save the old maps.
Speaker 2 (51:49):
So what's your take on magnetic north moving
so much lately?
Speaker 3 (51:57):
Climate change, heating, things like that.
Speaker 2 (52:00):
It's moving the magnetic north toward Russia?
Speaker 3 (52:03):
Yeah, that's... I'm not
an expert on that, but that has to do with
the motion of the Earth, how it
rotates and how it moves around the sun and how
its axis can tilt a little bit.
Speaker 2 (52:22):
So with species extinction and habitat destruction, do you see
biodiversity loss as a driver of human extinction or a
parallel crisis?
Speaker 3 (52:30):
I think it's a parallel crisis. It's certainly something we
need to do something about. If we settle a new planet,
we ought to take lots of fertilized animal eggs of
every species with us so that they can be preserved,
and we ought to be doing that now, so maybe
they can be reintroduced at some point into the environment.
(52:51):
It would be a shame, a shame, to lose
all of our wonderful species. In the beach community where
I live, I take a walk every day, and I
go through the green marshes and along the
ocean. In the green marshes are all these snowy egrets
(53:11):
that are just absolutely wonderful to see. And I saw
something last week that I have never seen before. There was
a group of about ten snowy egrets. I always wonder
about their socialization; they always like to be together
in groups, except for a few outliers that either
have been tossed out of the group, or like being
(53:33):
by themselves. But in the group a week ago,
there was a great blue heron, right there, I guess
having a conversation with the snowy egrets. So what was
that all about?
Speaker 2 (53:46):
So really, hopefully, they'll come up with the solutions that
we lack.
Speaker 3 (53:50):
Yes, but it's wonderful to see those things. We have
cormorants here that sit on the rocks spreading their wings.
It'll be a shame to lose that sort of wonder
that can keep us optimistic when there are so many
pessimistic things around.
Speaker 2 (54:08):
No truer words were ever spoken. Well, Peter, I'm sorry,
but you know what, we're totally out of time. Thank
you so much for coming on the show.
Speaker 3 (54:15):
Oh you're welcome, and thank you for having me. It
was wonderful being here.
Speaker 2 (54:19):
Our guest this hour has been doctor Peter Solomon, a scientist, educator,
successful entrepreneur, and author of one hundred Years to Extinction.
To find out more about Peter, where you can find
his book and all he has to offer, visit his
website one hundred years to Extinction dot com. This has
been Mission Evolution with Gwildawaka. For more information or to
enjoy past archived episodes visit www dot Mission evolution dot org,
(54:45):
but please be sure to join us again right here
on the XZTV channel, xztv dot ca,
where this mission will continue bringing information, resources and support
to our rapidly evolving world.