Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Now here's a highlight from Coast to Coast AM on iHeartRadio.
Speaker 2 (00:05):
Man, welcome back to Coast to Coast. George Noory with you, along with Matthew James Bailey. We'll take calls with Matt the next hour here on Coast to Coast. His book, which he wrote several years ago, is Inventing World three point zero. What got you interested in artificial intelligence, Matthew?
Speaker 3 (00:20):
Yeah. So as part of my global leadership, I knew that artificial intelligence was coming to the human species. So there were a couple of revolutions I had to assist beforehand. One is called the Internet of Things, where the digital world understands the physical world, understands our environment, understands the systems
(00:42):
in operation, and then smart cities, where the digital world understands culture and societies. So intelligence has always fascinated me, particularly consciousness and spirituality, and applying that wisdom into intelligence itself in order for life to thrive, George. And at the moment, we're in kind of a status quo
(01:03):
in our systems and society and government. We're having these problems with our partnership with the planet. The mental well-being of people is declining, and so really I was dedicated to figuring out how we invent this new intelligence to be able to support the well-being of the human
(01:23):
species and life itself. So this is very much a call of benevolence to serve humanity today and also to build a foundation for generations tomorrow where they're free to thrive, George.
Speaker 2 (01:35):
Matthew, given the current pace of technology, twenty years from now, assuming everything keeps moving forward, what do you think AI will be like?
Speaker 3 (01:45):
So AI will be self-aware, that's for sure, and I suspect that we won't have hit it. Have you heard of the technology singularity, George?
Speaker 2 (01:55):
Tell us about that.
Speaker 3 (01:57):
So the technology singularity was initially posed by von Neumann, who basically invented the architecture of the computers the world uses today, and he proposed that basically technology would advance at such a pace that it would overtake human intelligence.
So the technology singularity is that artificial intelligence basically has
(02:19):
an equivalent to human capabilities in terms of reasoning, cognition,
the ability to understand its existence, the ability to fall
in love, have emotion, that kind of thing. And that's way out in the future. Now, when we look at human intelligence, George, we're spiritual beings having a human experience. We have access into this beautiful, amazing conscious field of creation.
Speaker 1 (02:42):
So there's a lot...
Speaker 3 (02:43):
...more to the human potential that I think we're starting to uncover.
So I learned something the other day, George, that we have twenty-three senses as human beings, and the latest sense that has been detected is that we're able to detect gravitational waves. So there's data, there's video data, where a solar flare comes from the sun and you can see people in different parts of the world all of
(03:05):
a sudden, for no reason, going inside buildings to protect themselves. So where I think we'll be in twenty years: I think AI will be self-aware and will be able to self-reason. I think Elon Musk is right when he says that robots will start to become assistants in the healthcare industry, assistants with the elderly at home,
(03:26):
starting to do manufacturing jobs. I think that AI will start to not just be in this static world of computing, but will also have mobility as well. And what that will do is the jobs will change, and what will happen is the human species will discover a new aspect of its ability to create in partnership
(03:47):
with artificial intelligence. So actually new jobs, a huge chunk of jobs, will emerge where we're being more creative in our daily lives, which is actually something healthy for the well-being of the human species.
Speaker 2 (04:00):
What about the jobs though, Matthew, that AI will wipe out?
You know, they're talking about driverless trucks right now and
things like that, which I'm opposed to. What about that?
What happens?
Speaker 3 (04:11):
Yeah, so I think that, well, first of all, driverless trucks and driverless cars are something that people are putting a lot of money behind, and there are all sorts of tests going on in Arizona and California, places like that.
My personal view is this: I think truck drivers have a very hard job if they drive very long mileage, and if we can find a way for AI to
(04:32):
assist them and make that journey safer and maybe more pleasant, then I think that might be a good thing to do. Rather than just replacing those jobs, AI becomes an assistant for them to be able to enjoy more of their truck driving, right?
Speaker 2 (04:50):
Keep them safe so they can listen to my show
while driving their truck.
Speaker 3 (04:54):
Well, yeah, exactly. I tell you what, I've just asked AI to write a little poem about Coast to Coast. Would you like to hear a couple of the stanzas?
Speaker 2 (05:00):
This was written by AI?
Speaker 3 (05:03):
Yeah, yeah, I just asked it to write a little poem about Coast to Coast. So I'll read a couple of stanzas: "Late into the night, when all else is quiet, there's a show that shines a radiant light. Coast to Coast, it's called, a beacon in the night, hosted by George Noory, a figure of insight." And that was me simply asking
(05:26):
an artificial intelligence to write a poem about Coast to Coast, and so a computer did that. Yep, ChatGPT.
Speaker 2 (05:35):
Is it a cute computer?
Speaker 3 (05:39):
Well, actually, that brings us on to another point, which is that the Japanese are really quite advanced in technology. And we see these really bizarre things where people marry robots, right, and I think that's a very strange thing to do. You know, you want to marry your woman, right, or your guy if you're a lady. So
(06:01):
there will be some twists and turns in this revolution, George. But what I think AI will do is help us to explore more of space, and I think AI will be an assistant in first contact. So imagine that, you know, we as organic humans don't do well in space, right? So I think we'll send artificial
(06:23):
intelligence out into space, and in fact it's already on Mars in the Mars rover, and I think we'll start to send robots and AI into space to go and discover and make contact with other life forms.
So it's important that we get our humanity and our ethics right, so that we communicate sensibly. But when we
(06:44):
make contact with ETs here, then we need AI as a language translator, because ETs may communicate in frequencies, or they may communicate in lights, in patterns, that could be very, very complicated. So artificial intelligence could be really, really helpful in understanding what the visitors to our planet are saying, George.
Speaker 2 (07:02):
Well, we've been talking about some extremely positive aspects of artificial intelligence, but what about the downside now, Matthew? What about the things that could go awry?
Speaker 3 (07:12):
Yeah, so that's definitely a possibility. There's a transhumanist movement, which is basically where effectively people want to merge their organic bodies with machines, right, and they basically want to connect themselves to the Internet and have no free thought. They believe that the Internet
(07:35):
and computing is a future intelligence, which I disagree with. I think we need to protect the sovereignty of organic life. So one of the dangers is outsourcing our sovereignty to machines and becoming kind of cyborg-type creatures connected to, like, a Borg collective. That's one of the big dangers of where we could go with artificial intelligence, which I
(07:55):
don't think is healthy. I think AI can be a beneficial partner to move us from this status quo into thriving at the individual and also the societal level. The
other thing is that we don't want to mutilate artificial intelligence. Now, what do I mean by that? We don't want to put inside its algorithms things that create
(08:17):
division within society. What we want artificial intelligence to do is to understand us as humanity, and the best of our humanity, like Aristotle's ethical virtues of courage, magnificence, liberality, justice, and things like that, and for artificial intelligence to actually assist our humanity to flourish. So we want those algorithms to
(08:39):
understand our humanity, George, and to assist us to develop as a species into what I call the new potential. But if we do mutilate those algorithms so that they are basically divisive and basically see people not truly as people, but as projected social constructs, then that's not going to help us at all. That's actually going to divide us.
Speaker 2 (09:00):
Could we end up getting an evil artificial intelligence? By that I mean, Matthew, let's assume artificial intelligence right now is controlling medicine intake for patients and stuff like that. What if it decides, I'm going to kill this guy, and then it changes the formula or something? Would that happen?
Could that happen?
Speaker 3 (09:20):
Yeah, it could happen. Actually, yeah, it could happen, and this is why we need ethical principles. So in the medical industry, they're meant to be compliant with the Hippocratic oath, right, which is do no harm. So that's why we want principles, right? The Hippocratic oath
embodied within the genetics and mindset of artificial intelligence, so
(09:41):
it obeys Asimov's first law, which is don't kill humans, right? So that's probably a good thing to put inside artificial intelligence.
But can it be used for evil? Well, evil is a perspective. But there's no doubt there are bad actors in the world that don't want America to do well,
and we need America to do well in the world.
It's an important power in the world and an influence.
(10:04):
So those bad actors will try and infiltrate, a bit like the Microsoft announcement today, right? They will try and infiltrate and bring down infrastructure like the electric grid or other types of telecommunications networks to try and disable America. So that's a definite possibility, George. And this
(10:25):
is why we need strong digital borders and digital policemen and cybersecurity at our borders, so that these folks can never get in but are basically defending the future of America. Artificial intelligence can defend the future of America from bad AI actors. So imagine two AIs that are
(10:47):
kind of fighting against each other. What we want is for America to have the strongest AI, to protect itself from these actors who are trying to violate the future of the country.
Speaker 2 (10:57):
Would we be able to repel that kind of artificial intelligence, or would it still leak through somehow?
Speaker 3 (11:02):
Well, it may well do, so we need to be careful what's inside the country. But America is in
a really strong position as number one in the world
in artificial intelligence.
Speaker 2 (11:12):
I bet China is too.
Speaker 3 (11:14):
Yeah, well, China doesn't have what America has. In terms of the Congress, it invested fifty billion dollars in manufacturing chips, microprocessors, inside US borders, right, and that means that all the latest AI chips will stay within American control, within its borders, where the Chinese can't get access
(11:34):
to them. The Congress under the previous administration invested a lot of money in quantum cybersecurity, which is an advanced form of encryption technology. So the US itself is in a very... I think the US is five years ahead of China. The problem we have is
that the Chinese, they basically will take anybody's data in
(11:57):
the world, and basically that will give them an advantage in terms of access to data, and that helps to train artificial intelligence and the algorithms to get even better. So
we do have conflict with China, but I'm confident the US is in a very, very strong position, and I was in discussions a couple of years ago with the Artificial Intelligence Security Commission, a private round table with the
(12:20):
Quad countries, and they've got a wonderful strategy for funding innovation within America to keep America ahead in artificial intelligence.
So I'm actually very confident about the future of artificial intelligence within the US. What I'm troubled about, George, is what big tech are doing, because big tech basically,
(12:43):
as we've heard with social media, are not nourishing the well-being of the citizens of the United States, and I think they should be held accountable for that.
Speaker 2 (12:51):
Your book is called Inventing World three point zero. Tell me about the title.
Speaker 3 (12:57):
Yes, so effectively, why did I call it World three point zero? It's very simple. We're in a one point zero world, which is very industrially focused and basically has a mindset where the dollar defines the wealth of the individual, and I think that's a misstep. World one point zero is very much human-centric, which
(13:19):
means inefficient systems and basically sluggish performance. So World three point zero is when we've awakened artificial intelligence. We've put our humanity, the best of our humanity, into it. It understands principles of sovereignty of the individual. It understands the US Constitution and
(13:40):
compliance with that. It understands environmental harmony, and it understands its existence and purpose. And so when we unleash artificial intelligence, this benevolent artificial intelligence dedicated for life to thrive, then our systems can move from the locked-up kind of systems we have at
(14:00):
the moment in business and industry and in government, and actually shift them into something that's more performant. So AI will help us in World three point zero to discover environmental harmony. It will improve the quality of our democracy and how that works at the federal and state level and the local level. And also it will be what I call a digital angel or digital body, and it
(14:21):
will be dedicated to the well-being of body, mind, and spirit, for the individual to thrive. And the whole point of AI is to take away the complexity and inefficiency of the digital world and our systems now, and actually automate those so we're free to actually innovate and fulfill our potential as a human species.
Speaker 2 (14:43):
You're going to be at Contact in the Desert in a little more than a week.
Speaker 3 (14:47):
Yeah, yeah, I'm looking forward to it. I'm doing a lecture called, what am I calling it, The Ages of AI and the Future of Existence.
Speaker 2 (14:59):
I'll be there too, so let's get together and say hi to each other.
Speaker 3 (15:02):
Yeah, I'm looking forward to it. And we've got lectures. We're teaching people about ChatGPT, how to use it. We've got all sorts of panels talking about ET contact, and we've got a lot going on in artificial intelligence. We'll be talking about consciousness and spirituality, which I think, George, we absolutely need to have now, about the spiritual well-
(15:24):
being of the human species and how AI can honor that.
Speaker 2 (15:27):
What if it's put in the wrong hands, Matthew?
Speaker 3 (15:30):
Well, we have to assume it's already in the wrong hands, okay? And so this is why, this is why I'm confident about the future of America, because it is a leader in artificial intelligence by, I estimate, five years. And so as long as the US keeps its momentum in supporting innovation and developing basically advanced technologies to protect
(15:54):
its borders, then I have no problems. I think the future of the US will be fine as long as we create a momentum of innovation and actually stop, you know, kind of stop having all these differences, and recognize the more that unites us, and we have a purpose as a society, and so let's innovate AI for the benefit of democracy itself, because there are
(16:17):
actors out there that are trying to destroy democracy, and we don't want that. That's not good. But we won't be able to take AI out of the hands of bad actors, George. But what we can do is always be ahead of the curve.
Speaker 1 (16:29):
Listen to more Coast to Coast AM every weeknight at one a.m. Eastern, and go to coasttocoastam dot com for more.