Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
A couple of weeks ago, you may remember, we discussed all of the positive things that, you know, are against the negative perception of AI. We discussed all of that, right? We made a case for it, and I'm really impressed with what we did.
But there was an interview with a guy who is considered one of
(00:22):
the godfathers of AI. That's not my name for him, but that is what he's being called. Oh, I see, they actually developed Bard, okay. So if you're familiar with Google's AI search assistant, he helped develop that.
If you are a long-time listener of the podcast, you likely know
(00:45):
that Denison has talked about how Google has developed other AI that had to be flat-out shut down, right? So this is an interesting scenario, to hear him talk about the warnings of AI and where this is going right now.
I think, as the former journalist in me, it's
(01:05):
very, very fair to be able to share this stuff and get your thoughts out of it.
So we're gonna dive into that. Also, you know, I wanna take the time, probably not to dive too much in on this episode, but I know many of you know and have seen my story of what I have going on right now, and I just wanna
(01:26):
thank you all for the support, whether it's messages, texts, comments or monetary. All of it means the absolute world to me, and it's made a huge difference.
I'm looking a little puffy right now because of the steroid
(01:47):
I've been on. I've also been eating like a mug, I'm not gonna lie, but it's all been good. So I just wanna thank you all for that. With that said, let's dive into this episode.
So again, we're gonna do something a little different here. I don't think we'll have any copyright strikes or
(02:07):
anything like that, because I am sourcing the original material from 60 Minutes. This was a good interview. It's gonna be about 13 minutes long, and I think it's worth the listen. And, of course, for my audio friends, I'll download it and put it on there as well.
But we wanted to jump in and discuss this and, I think,
(02:28):
setting the groundwork first (I think this ain't getting cock-eyed on me, there we go), I think, setting the groundwork first with what we're looking at here, what this guy has to warn us about when it comes to AI, I think that will be the way to go. So I think we should start off with that. Before we do that, let's roll the intro.
(02:50):
What's going on, everybody? I'm John, and this is The Catch Up ["The Three Ways to Support This Show"].
(03:12):
Before we get into the episode, I wanna remind you guys of the three best ways to support this show. Number one: leave us a rating and review. Wherever you're listening and wherever you're watching, it's super easy to let us know what you think about this show, and it helps us continue to grow as well. It's a huge benefit for us to know how you're reacting to the episodes, and also, each review puts us out in front of more
(03:34):
potential listeners, and so that's a big, big benefit for us as well. So anytime you can do that, that's a huge help.
Number two: subscribe. Follow us on Facebook. We're also normally on YouTube as well. Follow us on both. Subscribe. We go live every Thursday night, except for this week. We had to make some adjustments, but we are live on there.
(03:56):
And one thing, the biggest thing that we encourage with that, is for you to jump in the comments and interact with us in real time. We love nothing more than that. And number three: check out our shop, if you wanna support us monetarily. We got some really cool stuff: some shirts, some long sleeves, some hoodies. It's about to get cold out there, even though it doesn't
(04:17):
feel like it right now. You guys are gonna want some of that stuff, and the money helps go toward hosting and supporting this show, because we do pay to do it.
So, with that said, let's dive in here. And again, this is a 13-minute-long watch, but I remember watching it and thinking every part of it was integral to what
(04:38):
we're gonna talk about as well. So let's watch this together and then share our thoughts afterwards, and please, if you're watching live on the live stream, share your thoughts while we're watching this together. So yeah, let's go ahead and dive in here. I believe this says it will share the tab audio.
(05:00):
So if you don't hear anything, let me know in the comments, but I think we'll be good to go here. I'm gonna just give it away here. Here you go.
Speaker 2 (05:09):
Whether you think artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called the godfather of AI, a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible and so changed the
(05:30):
world. Hinton believes that AI will do enormous good, but tonight he has a warning. He says that AI systems may be more intelligent than we know, and there's a chance the machines could take over, which made us ask the question. The story will continue in a moment.
(05:52):
Does humanity know what it's doing?
Speaker 3 (05:58):
No. I think we're moving into a period when, for the first time ever, we may have things more intelligent than us.
Speaker 2 (06:11):
You believe they can understand? Yes. You believe they are intelligent? Yes. You believe these systems have experiences of their own and can make decisions based on those experiences?
Speaker 3 (06:25):
In the same sense as people do, yes. Are they conscious? I think they probably don't have much self-awareness at present, so in that sense I don't think they're conscious. Will they have self-awareness, consciousness? Oh yes, I think they will in time.
Speaker 2 (06:39):
And so human beings will be the second most intelligent beings on the planet?
Speaker 3 (06:46):
Yeah.
Speaker 2 (06:47):
Geoffrey Hinton told us the artificial intelligence he set in motion was an accident born of a failure. In the 1970s, at the University of Edinburgh, he dreamed of simulating a neural network on a computer, simply as a tool for what he was really studying: the human brain.
(07:09):
But back then almost no one thought software could mimic the brain. His PhD advisor told him to drop it before it ruined his career. Hinton says he failed to figure out the human mind, but the long pursuit led to an artificial version.
Speaker 3 (07:28):
It took much, much longer than I expected. It took like 50 years before it worked well, but in the end it did work well.
At what point
Speaker 2 (07:37):
did you realize that you were right about neural networks and most everyone else was wrong? I always thought I was right.
In 2019, Hinton and collaborators Yann LeCun, on the left, and Yoshua Bengio won the Turing Award, the Nobel
(07:57):
Prize of computing. To understand how their work on artificial neural networks helped machines learn to learn, let us take you to a game. Look at that. Oh my goodness. This is Google's AI lab in London, which we first showed you this past April.
(08:19):
Geoffrey Hinton wasn't involved in this soccer project, but these robots are a great example of machine learning. The thing to understand is that the robots were not programmed to play soccer. They were told to score. They had to learn how on their own.
(08:41):
In general, here's how AI does it. Hinton and his collaborators created software in layers, with each layer handling part of the problem. That's the so-called neural network. But this is the key: when, for example, the robot scores, a message is sent back down through all of the layers that says that pathway was right.
(09:04):
Likewise, when an answer is wrong, that message goes down through the network. Now correct connections get stronger, wrong connections get weaker and, by trial and error, the machine teaches itself.
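For the technically curious, the trial-and-error learning described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (a single-layer perceptron, far simpler than the networks in the story): after each trial, an error signal is sent back, strengthening the connections that helped and weakening the ones that didn't.

```python
# Minimal sketch of the idea described above, assuming a single-layer
# perceptron as a stand-in for a real neural network. After each trial,
# an error signal is sent back and the connection weights are nudged:
# helpful connections get stronger, wrong ones get weaker.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out       # the "message sent back down"
            w[0] += lr * err * x[0]  # strengthen or weaken each
            w[1] += lr * err * x[1]  # connection by trial and error
            b += lr * err
    return w, b

# Learn the logical AND function purely from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

The network is never told the rule for AND; it discovers it by having its connections corrected, which is the same learning principle the soccer robots use at vastly larger scale.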
You think these AI systems are better at learning than the
(09:24):
human mind?
Speaker 3 (09:25):
I think they may be, yes. And at present they're quite a lot smaller. So even the biggest chatbots only have about a trillion connections in them. The human brain has about a hundred trillion, and yet in the trillion connections in a chatbot, it knows far more than you do in your hundred trillion connections, which suggests it's
(09:47):
got a much better way of getting knowledge into those connections.
Speaker 2 (09:51):
A much better way of getting knowledge that isn't fully understood.
Speaker 3 (09:56):
We have a very good idea of roughly what it's doing, but as soon as it gets really complicated, we don't actually know what's going on, any more than we know what's going on in your brain.
Speaker 2 (10:06):
What do you mean, we don't know exactly how it works? It was designed by people.
Speaker 3 (10:12):
No, it wasn't. What we did was we designed the learning algorithm. That's a bit like designing the principle of evolution. But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things, but we don't really understand exactly how they do those things.
Speaker 2 (10:30):
What are the implications of these systems autonomously writing their own computer code and executing their own computer code?
Speaker 3 (10:40):
That's a serious worry, right? So one of the ways in which these systems might escape control is by writing their own computer code to modify themselves, and that's something we need to seriously worry about.
Speaker 2 (10:54):
What do you say to someone who might argue: if the systems become malevolent, just turn them off?
Speaker 3 (11:01):
They will be able to manipulate people, right? And these will be very good at convincing people, because they'll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances. They'll know all that stuff. They'll know how to do it.
Speaker 2 (11:19):
Know-how of the human kind runs in Geoffrey Hinton's family. His ancestors include mathematician George Boole, who invented the basis of computing, and George Everest, who surveyed India and got that mountain named after him. But as a boy, Hinton himself could never climb the peak of
(11:44):
expectations raised by a domineering father.
Speaker 3 (11:48):
Every morning when I went to school, he'd actually say to me as I walked down the driveway: get in there pitching, and maybe, when you're twice as old as me, you'll be half as good.
Speaker 2 (11:58):
Dad was an authority on beetles.
Speaker 3 (12:01):
He knew a lot more about beetles than he knew about people. Did you feel that as a child? A bit, yes. When he died, we went to his study at the university, and the walls were lined with boxes of papers on different kinds of beetle. And just near the door there was a slightly smaller box that simply said "not insects," and that's where he had all the
(12:24):
things about the family.
Speaker 2 (12:26):
Today, at 75, Hinton recently retired after what he calls 10 happy years at Google. Now he's professor emeritus at the University of Toronto, and, he happened to mention, he has more academic citations than his father. Some of his research led to chatbots like Google's Bard,
(12:48):
which we met last spring. Confounding, absolutely confounding.
We asked Bard to write a story from six words: "For sale, baby shoes, never worn." Holy cow. "The shoes were a gift from my wife, but we never had a baby." Bard created a deeply human tale of a man whose wife could
(13:11):
not conceive, and a stranger who accepted the shoes to heal the pain after her miscarriage. I am rarely speechless. I don't know what to make of this. Chatbots are said to be language models that just predict the next most likely word, based on probability.
Speaker 3 (13:32):
You'll hear people saying things like, they're just doing autocomplete, they're just trying to predict the next word, and they're just using statistics. Well, it's true, they're just trying to predict the next word. But if you think about it, to predict the next word you have to understand the sentences. So the idea that they're just
(13:53):
predicting the next word, so they're not intelligent, is crazy. You have to be really intelligent to predict the next word really accurately.
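As an aside, the "just predicting the next word" objective Hinton mentions can be illustrated with a deliberately tiny sketch. This hypothetical example is a bigram counter, nothing like the billion-parameter networks in the story: it just counts which word follows which in its training text and proposes the most frequent continuation.

```python
from collections import Counter, defaultdict

# A deliberately tiny illustration of "just predicting the next word":
# a bigram model counts which word follows which in its training text,
# then proposes the most frequent continuation. Real chatbots use huge
# neural networks, but the training objective is this same idea.

def build_bigram_model(text):
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(model, word):
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical toy corpus; any text works.
corpus = "the robots were told to score and the robots had to learn and the robots learned"
model = build_bigram_model(corpus)
```

Hinton's point is that at scale, doing this well forces the model to capture meaning, not just word frequencies; the toy version only captures frequencies.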
Speaker 2 (14:00):
To prove it, Hinton showed us a test he devised for ChatGPT-4, the chatbot from a company called OpenAI. It was sort of reassuring to see a Turing Award winner mistype and blame the computer.
Speaker 3 (14:16):
Oh, damn this thing. We're going to go back and start again. That's okay.
Speaker 2 (14:20):
Hinton's test was a riddle about house painting. An answer would demand reasoning and planning. This is what he typed into ChatGPT-4.
Speaker 3 (14:32):
The rooms in my house are painted white or blue or yellow, and yellow paint fades to white within a year. In two years' time, I'd like all the rooms to be white. What should I do?
Speaker 2 (14:42):
The answer began in one second. GPT-4 advised: the rooms painted in blue need to be repainted. The rooms painted in yellow don't need to be repainted, because they would fade to white before the deadline. And, oh, I didn't even think of that:
(15:03):
it warned, if you paint the yellow rooms white, there's a risk the color might be off when the yellow fades. Besides, it advised, you'd be wasting resources painting rooms that were going to fade to white anyway. You believe that ChatGPT-4 understands?
Speaker 3 (15:24):
I believe it definitely understands, yes. And in five years' time? I think in five years' time it may well be able to reason better than us.
Speaker 2 (15:33):
Meaning that, he says, is leading to AI's great risks and great benefits.
Speaker 3 (15:40):
So an obvious area where there are huge benefits is healthcare. AI is already comparable with radiologists at understanding what's going on in medical images. It's going to be very good at designing drugs. It already is designing drugs, so that's an area where it's almost entirely going to do good.
(16:00):
I like that area. The risks are what? Well, the risks are having a whole class of people who are unemployed and not valued much, because what they used to do is now done by machines.
Speaker 2 (16:16):
Other immediate risks he worries about include fake news, unintended bias in employment and policing, and autonomous battlefield robots. What is a path forward that ensures safety?
Speaker 3 (16:35):
I don't know. I can't see a path that guarantees safety. We're entering a period of great uncertainty where we're dealing with things we've never dealt with before. And normally, the first time you deal with something totally novel, you get it wrong. And we can't afford to get it wrong with these things. You can't afford to get it wrong. Why? Well, because they might take over. Take over from humanity?
(16:58):
Yes, that's a possibility. Why do you not say it will happen? If we could stop them ever wanting to, that would be great, but it's not clear we can stop them ever wanting to.
Speaker 2 (17:09):
Geoffrey Hinton told us he has no regrets, because of AI's potential for good, but he says now is the moment to run experiments to understand AI, for governments to impose regulations, and for a world treaty to ban the use of military robots.
(17:29):
He reminded us of Robert Oppenheimer who, after inventing the atomic bomb, campaigned against the hydrogen bomb. A man who changed the world and found the world beyond his control.
Speaker 3 (17:46):
It may be we look back and see this as a kind of turning point, when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did. I don't know. I think my main message is there's enormous uncertainty about what's going to happen next. These things do understand, and because they understand, we
(18:10):
need to think hard about what's going to happen next, and we just don't know.
Speaker 1 (18:18):
Yeah. So look, it kind of took to the final third of it to get to the point of what we're looking at, for sure, but I think that there were a lot of great points made within there that, again, contrast with what Denison and I were talking about a couple of weeks ago.
One of the things that is concerning to me, to hear from a
(18:44):
guy who helped develop some of this AI, is that he doesn't know fully how it's going to learn. You think about some of these things that they showed: Bard writing that story, ChatGPT taking and giving those instructions. These, to me, are awesome, super great things.
(19:06):
They're not things that are unknown to me, because Denison has helped get me involved in this very quickly, right off the bat. So I don't know how the general public perceives this stuff. I find that stuff very beneficial. I find it awesome.
I don't feel like there's a need for jaw-dropping and being speechless. I think that that's a bit of an overreaction, because these are
(19:28):
tools, and even Geoffrey Hinton mentioned this: the things that it can do to help advance medication and things like that, healthcare treatments and all that stuff. It's already been doing it. I mean, it's been doing it for years. I learned that from a separate interview I watched with Neil
(19:48):
deGrasse Tyson, actually, about AI helping with scientific research and just medical research in general.
Those are things that need to happen. If you can take things like HIV, cancer, even more recent
(20:11):
communicable diseases like COVID, and use AI to find out what the source is, to block those from continuing to spread, and then help create a compound that is both healthy and effective for people, that's invaluable.
I think the real concern is where this goes next. Because if
(20:37):
it's truly artificial intelligence, truly free-thinking artificial intelligence, right? Take yourself: you are raised by your parents, most likely, and you have this foundation of knowledge that you start out with in life, but then you go out and you have your own experiences, you have your own interactions, and these things
(21:00):
blossom into your own neural network, right? That's essentially what Hinton was saying in this interview would be the case with AI, even if it was developed by people, of course, and there are safeguards.
There are safeguards that are put in place, but ChatGPT is
(21:23):
getting hundreds of thousands of inputs a day. So is Bard, I imagine. And while, yes, this is all being marked and registered and monitored, at some point that AI is going to learn from these things and develop its own neural pathways,
(21:50):
like he was saying.
I don't know fully what that looks like. I don't even know that that's a bad thing, right? Because, depending on how it's constructed, depending on the foundation of the program, I would imagine a world where this AI takes this information and says: okay,
(22:14):
here's what people are asking, here's what they're looking for, let me find a way to give them what they need, right? Or it could be like: humans are so inconsistent, they can't lead themselves; it's time that the AI takes over and leads. Right?
You can also imagine that, in some of the capacities that he
(22:35):
mentioned, right? You could see an AI that thinks it knows how to lead America better than our Congress does, right, for example. But also, who's to say that's not theoretically already happening? Right?
Let's create a theoretical situation where a congressional
(23:01):
staff member, or even just a member of Congress, doesn't know how to vote on a subject and then asks AI, right? Not directly, "Hey, how should I vote on this?" but "Give me more information on it so I can vote to the best of my ability," right? Those types of things. In and of itself, not a problem. Not a problem to me, because what difference is that versus
(23:28):
taking hours to research in books and the internet and all that kind of stuff?
Now, some of you might say: well, who's to say what information the AI is and is not given? Fair concern. I've asked simpler questions to things like Bard (no hate on you, Bard, if you're watching) and gotten oddly specific
(23:55):
answers. In fact, one thing I would not recommend doing, and I guess it would depend on what it is, but I would not recommend shopping with AI. I tried that with Bard and I got the most basic pieces of equipment. I was looking for something for music recording, and I got the most basic pieces of equipment I could possibly ask
(24:17):
for. I was like, those are not what we're looking for. Actually, it was very entry-level, which is not a problem for most people, but anyway.
So I'm trying to explain this in a way where I still feel confident with the state of artificial intelligence, but I want to absolutely
(24:44):
validate what he's saying with his concerns and just air those out too and, of course, give you guys something to chew on, something to think on. I'd love to hear what you think about this. So, whether you're on the live stream or whether you watch this back, let me know in the comments what you think.
But yeah, I would say, personally, one of the things
(25:08):
that he spoke of that stood out to me was the military stuff. I think we all think of I, Robot when it comes to that kind of a thing. Obviously, that was more like police-state kind of stuff, but I don't like the idea of that. Honestly, it's kind of weird.
(25:30):
So let's say that we are in a war where it's just AI versus AI. Why are we doing that? I hesitate to say that, right, but the reason I do is because the heaviness of war is in the lives lost, right, and that's
(25:53):
the risk that comes up when you go into war with somebody else. I'm not a proponent of war, I'm not saying that by any means, but it almost is like, if you're putting up AI against each other, it's like, why are we fighting? You know what I mean? There are tons of different things that go on behind the
(26:14):
scenes.
I actually might be becoming a regular viewer of 60 Minutes, because I just watched something the other night talking about how FBI Director Christopher Wray says China is the biggest global threat we have ever faced, because of what they're doing with AI and what they're trying to do by manipulating our
(26:37):
elections, which they did do in 2016, yeah, with propaganda and all that kind of stuff. So, not directly, it's indirect, right, but they're making this content that manipulates people into thinking certain things about candidates that are just not true.
I mean, the amount of times, and it's so weird.
(26:59):
I actually know why this is. I'll explain it. But you don't hear this stuff reported on enough. They've hacked the Pentagon so many times. The NSA has been a target, I remember that, and some other
(27:20):
government agencies too.
The reason you don't hear about this as much, based on my former experience in the news, is because there are no visuals to go with it. Right? You're going to get static or looped video of coding going across the screen, somebody typing, you know, just some fingers going like this, and because of that it
(27:43):
doesn't get a lot of press coverage, because there's nothing to keep the viewer engaged. Now, that's a total aside conversation. I think people, I think the news, overestimate what it takes to keep people engaged. But these are big things, right? These are big things.
And with the proponents of AI, the growth of AI, to be using
(28:07):
that stuff in a way that could manipulate our country, our economy, the way we vote, that is concerning. But then you take, on top of that, the idea that this stuff is learning on its own as well. Again, very I, Robot, very dramatic, but at the end of the
(28:33):
day, who's to say that at some point it doesn't just close the users out and say, "we're taking it from here," you know? And then we all live in an artificial intelligence police state because of that, right?
So these are genuine concerns. I don't know what an AI military would look like.
(28:55):
I'm not sure. And again, I hope what I was saying there in regard to human lives is not taken the wrong way at all. I would much rather no lives be lost, period, whatsoever. But yeah, there's a lot to ponder here. Very much looking forward to
(29:16):
hearing all of your thoughts. I do know, personally, I'm still positive about the future of artificial intelligence.
The other thing I will say, too, is, as we learned, this guy helped develop Bard and has been with Google
(29:39):
since the early 2010s. The thing about it is just, I know the stories about what went on in the background with Google and what they had to do. They made an AI for internal use, and then it created its own language and it started communicating with all these computers through that own language. It taught them all, and they started doing things that Google
(30:03):
didn't approve of, and they had to completely shut down the AI and reset everything. Right? That's a big deal. That is something to be concerned about, for sure.
With what we have available to us on a large scale right now, ChatGPT, Bard, Bing's AI (which is also ChatGPT), DALL-E for
(30:26):
graphic and image creation, do we have anything to worry about? I think not. I don't think so. Those are different consumer-oriented AIs that I highly suggest you take advantage of. Right? This helped me create the podcast title.
(30:47):
I'm not going to lie to you all. The episode title, I should say. That's just the tip of the iceberg.
These things are so helpful, and they can help you be productive, right? Maybe you work a job that doesn't integrate with a system where
(31:08):
you would use AI? That's fine. There are other places in your life where you can learn and use this kind of stuff. I've had AI help me with, let's see, recording techniques; regular maintenance on my vehicle, for simple
(31:32):
questions I may have, because we all know even something as simple as changing the interior door handle can be a complete pain in the ass; medical questions, things like that. That was a big one, for sure. That probably single-handedly validated AI in my life.
(31:55):
Simple things like that. Giving us talking points for podcasts if we ever need it. Also, technically, we always need it, since this is the main theme of our show. There's a bunch of benefits. I don't want these concerns to inhibit your usage or your
(32:18):
growth, especially with such a handy tool available, you know what I mean?
I'll give you a complete fun one. So Marvel right now is in the Multiverse Saga, right? And I am
(32:39):
such a nerd about it; I will admit that fully. The reason, though: it started off because I wanted to know what the multiverse was. I researched this, all with the help of ChatGPT, and then I did listen to some podcasts with Brian Greene as well. He's awesome. I love that guy.
(33:00):
So what I find out is Marvel's multiverse theory is basically string theory in real life. I find out all the people that contributed to that and how it is a very possible thing. Even multiverse theory links to other universes right now. It's not to say they're like copies of ours;
(33:22):
string theory is more what you see in the movies. Multiverse theory would be: there's a copy of our universe, but it lacks neutrons, right? Something like that. And so then the entire chemical and physical makeup of that is all different, and what's crazy about all of it is it's all at the level of
(33:42):
quantum physics, right, which we know very little about, and that's one of the biggest things that we're working on learning as a scientific community.
I learned all of this through freaking ChatGPT. Then I go and watch, and some of the movies have not been great, some of the shows have not been great, but you have some that really
(34:03):
are, and it's made it way more entertaining for me, because I understand the scientific theories that they're basing their storytelling on, and I think that's fantastic. But do I expect most people to know about that? Of course not. Of course not. But I was able to learn it, and learn it quickly, through AI, right? You can go back to some of
(34:24):
those earlier episodes. I think one of the things I asked on an episode was: explain multiverse theory to me as if I'm a five-year-old. It did so. There are so many opportunities here. I hope that that one explains where I'm coming from.
And I will say, to agree with the concerns here: if you
(34:51):
look at how our legislature works in this country, they don't get technology. They don't. I mean, we're still trying to figure out how to legislate social media, and that ship has sailed. There are so many red flags that should have been caught early on. Of course, now we're all concerned about TikTok because
(35:13):
it's China, but Facebook and Instagram have violated our privacy equally, if not more, right? But I digress with that, and I also will say the TikTok algorithm has crashed. It is very bad now.
(35:35):
But my point is, I don't know, when it comes to this concern of mine, making sure that AI remains safe for not just us but for the entire world, I'm not sure if we'll get to that in a safe and succinct time. It's really on the developers, who, in
(36:01):
my estimation, would by no means have any ulterior motives. Of course, they're not going to want to be supplanted by AI either. But yeah, that's my hope: as this moves forward, as it develops, we can stay on top of it and we can make sure it remains safe and effective and useful for all of us.
But, as I've seen already with advancements in science,
(36:28):
medicine, and the sharing of beneficial information with the majority of the public, I'm very impressed, so I hope you guys are too. I hope that that was a beneficial video for you to watch, and I hope that my analysis added something to it. I thought that was a great interview and, yeah, always glad
(36:50):
to come on here with you guys. Thanks for rocking with me even though I'm wearing a Cowboys hat. I know that offends so many of you, and I very much look forward to next week, for that bye week to be over.
So look, three things. Number one: leave us a rating and review wherever you're listening, wherever you're watching. That really helps us grow. Number two: subscribe wherever you're listening,
(37:14):
wherever you're watching. We go live every Thursday, except for this week; we had to make some adjustments. And number three: check out our shop. It's linked wherever you're listening, wherever you're watching, and we got some really cool merch over there that I think you guys are going to dig. Yeah, so thank you guys so much.
(37:35):
Always appreciate you all jumping on here. Aaron, thanks for the comment, bro. Glad to see you, man, and I miss you. I hope you all are doing well and, yeah, always appreciate you guys. We'll be back next week, the duo of us, and I look forward to seeing you then. Thank you guys so much for
(37:56):
catching up with us, and we'll catch up with you next week.