
June 8, 2025 161 mins
On this episode of BZ's Berserk Bobcat Saloon Radio Show: A.I. -- Will It Save Us, Or Will It Kill Us? A round table discussion featuring BZ and:
• MIKE FITZPATRICK of NCX Group
• JEFF STONER of The Lost Wanderer podcast
• RICK ROBINSON of the KLRN Radio Network
Are we ready to have our lives literally upended in a shockingly small amount of time? 🎙️ New to streaming or looking to level up? Check out StreamYard and get a $10 discount! 😍 https://streamyard.com/pal/d/5768029124820992

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:46):
You're listening to late night radio on the SHR Media Network.
Caution there will be mature themes explored and potentially adult
language used. If conservatarian words, phrases, certain concepts, or rhetoric offend
you, tune out.

Speaker 2 (01:01):
Now, I have come here to chew bubble gum and
kick ass, and I'm all out of bubble gum.

Speaker 3 (01:09):
Freedom is never more than one generation away from extinction.
We didn't pass it on to our children in the bloodstream.
The only way they can inherit the freedom we have
known is if we fight for it, protect it, defend it,
and then hand it to them with the well taught
lessons of how they in their lifetime must do the same.
And if you and I don't do this, then you

(01:29):
and I may well spend our sunset years telling our
children and our children's children what it once was like in.

Speaker 4 (01:35):
America when men were free.

Speaker 5 (02:00):
Through the keyhouse them back lift the sides, ain't going
this and.

Speaker 6 (02:04):
Pushing them back from the saloon to the getaways, spinning
the road constitutions my compass. That is the big gos
creaking the shadows on the led fence, with the schemes
trying to bind my hands. But I'm the sheriff cutting
through the storm.

Speaker 1 (02:20):
Hadn't do your freedoms.

Speaker 7 (02:23):
They spin its tails.

Speaker 8 (02:24):
Better see through the haze, be maggots lost in the maze,
calling it out, no fear in my soul.

Speaker 5 (02:31):
The Patriots fire, losing control. Poser this shamer leading the fight,
touching the darker and better in the night. Stand for
the truth, all batt of the game.

Speaker 8 (02:43):
Sheriff, come and remember my name.

Speaker 6 (02:57):
The news is twisted, trying to steal our rights.

Speaker 9 (02:59):
But I'm locked in morning.

Speaker 6 (03:01):
Gotta be sids from change six to the border, selling
the South.

Speaker 9 (03:05):
I'm screaming from the mountain of my relation.

Speaker 10 (03:08):
They make the gonna stars and stretch forever or under
God's open sky. No google has change gonna change this land.

Speaker 6 (03:17):
The sheriff bart the truth in my head. They pushed
their agenda bottom, bringing the ball. They promises empty and
their heart size called bleeding the bar.

Speaker 7 (03:27):
I'm fighting for.

Speaker 5 (03:28):
The people, with the warriors, charging the staying for the truthful.

Speaker 9 (03:39):
Sure come and remember that.

Speaker 8 (03:42):
Day the stores.

Speaker 2 (03:54):
I've walked the line forty one years.

Speaker 6 (03:57):
Badge on the grind. Now I'm on my voice like
a blade, cutting through.

Speaker 9 (04:02):
The lines that the trade.

Speaker 10 (04:03):
Has made hurricanes range both standing tall Sheriff, the Constant, the.

Speaker 2 (04:14):
America stand.

Speaker 8 (04:27):
Remember my name voice resound conservative sharp of Freedom's battle ground.

Speaker 9 (04:45):
The Berserk Bobcat growls.

Speaker 2 (04:47):
Will never kneel, never forgotten country. This fight is real.
Ladies and gentlemen, boys and girls, children of all ages.
Thank you for being here tonight. This is a really
interesting show. And I've already got a ton of comments
I have, according to my little thingy that I have

(05:08):
to point up to right here that you can't see
that I can. I have one hundred and fifty one,
one hundred and fifty three, one hundred and fifty four
people watching live right now, and thanks to you, and
I mean it, and I'm not kidding you, by the way, one
iota. Anyway, we're going to talk tonight about AI. So
I find this at once simultaneously astounding and wonderful and revelatory.

(05:36):
Many thanks for being here tonight, despite the urge to,
you know, enjoy our munificent Sunday weather, our summer weather,
I should say, engage in cool activities and otherwise be occupied.
And tonight I'm featuring something of a... holy mother of simoleons!
One hundred and fifty eight people watching live right now.

(05:58):
And also let's go to the comments before we actually
start this show. Look, it is Jeff A. Covmic Bard
reposted the stream. Thank you kindly. We have Phantom in chat.
We have Mission Ready Men in chat, Lost Wanderer via
another segment, Lost Wanderer, JAF, Ordnance Packard first, baby, Ricky Robinson,

(06:23):
whassup Lost Wanderer, Amish Cheetah is in chat, and Ordnance
Packard are right here. So thanks for everybody being here.
Let me install this if I may. First up at
bat is Mike Fitzpatrick. And then also we have Rowdy Rick,

(06:46):
and we have Jeff and we have oh my, we
have five people here, one hundred and sixty one people
watching live right now. Well for me, thank you all.
This is unprecedented, shall we say? And oh look another individual,
Zelda Gabriel. Hey all, and welcome to all the people

(07:09):
that are new, that are here. I thank you
ever so kindly. In any event, it's a
roundtable discussion tonight, and it involves AI, artificial intelligence,
AI agents, and, like the promo talked about tonight, where
people tend to fall on the spectrum involving AI, like,

(07:33):
for example, AI is going to kill us or AI
is going to be the savior of us. And I
showed this to the people beforehand, and it shouldn't surprise
everybody that I have twenty two pages of questions right here.
And I see Mike Fitzpatrick laughing already. Says Ben Steakan: We

(08:00):
don't need those stinking badges, we won't need those
stinking badges around here as well. And who is this
stranger calling me Z? Stranger danger! In any event, let
me start with this. Ah, Jesus, this is going
astro-nom-nom-nommical, just like JAF says frequently, exactly

(08:25):
in that tone as well. One hundred and seventy eight
people watching live right now? Where did you all come from?
On a Thursday night, late at night? Muchly appreciated. So
what I'd like to do is go around here and
for some of the occasional people who may not know
who everyone here is tonight in the roundtable, let's start

(08:45):
with you, Mike Fitzpatrick. Can you tell everybody, if you
would please, who you are and let me bring you
down here and just a brief synopsis about who you
are and why you're here tonight, besides being bored, and
your checkbook was already balanced, and you already cleaned your
parakeet cage.

Speaker 4 (09:05):
Okay, go. Okay. Mike Fitzpatrick, founder and CEO of NCX Group.
NCX Group is a cyber risk consulting firm, and we've
been protecting businesses and keeping them safe for about twenty
five years. I've got forty four almost forty five years
in technology all together, so I've seen it from an

(09:30):
Apple II with 4K of RAM to now, when every day I'm
using AI in some new and interesting way.

Speaker 2 (09:38):
Rowdy Rick, a little bit of background from you,

Speaker 9 (09:40):
Please, sir. Ah, well, I ran a security and private
investigations firm starting in nineteen ninety-eight. Before that, I
was in the private security industry and also spent some
time wearing other badges and various guns. I've been in
broadcasting now since twenty ten, I think. I do several shows,

(10:02):
starting out with Kenny Talk; now I run KLRN Radio.
And I too have massive concerns over AI and vacillate
between it's going to help us and going to kill us.
And I can't decide from one moment to the next.

Speaker 2 (10:13):
Boy. And I'm sure we'll all reach an appropriate conclusion
tonight to everyone's satisfaction. Jeff of The Lost Wanderer, sir,

Speaker 11 (10:22):
Please. Absolutely. By day, I have been in IT for
over thirty-five years, with a background in servers and security.
By night, I am the program director of KLRN
Radio, with several programs of my own.

Speaker 2 (10:38):
Wow, okay, that is brief. Jack Alexander, and you, sir?
You're far, far away.

Speaker 7 (10:44):
Yes, I'm a little bit of a swim away, ain't I? Yes,
I'm a ten-year veteran of the Australian Army. I've also
been involved, when I was in the United States, with
the USAF Auxiliary, Civil Air Patrol. I've been an investigative

(11:05):
journalist on the internet now for almost thirty years, and
I mean artificial intelligence has led me down some very
interesting rabbit holes. Thank goodness that I practice, you know,

(11:30):
verifying my sources, because some of the stuff that I've
seen out there, if I had taken it at face value,
I would have looked like I was ready to be
hired by CNN.

Speaker 2 (11:40):
So you practice safe, AI is what you're saying. That's excellent. Excellent,
by the way, folks with me tonight, and the reason
I chose these folks is because, unlike myself, they are
great thinkers and podcasters, and they range from the ancient,
ignorant, and scabby, like me, to the younger, more facile,

(12:03):
more aware, more technologically inclined, like everybody that happens to
be here tonight. So everyone has a story, everyone has
a price to pay. And we have a gamut here,
a really great gamut with a wide range of experience.
Two hundred and ten, two hundred and eleven, two hundred
and twelve people watching live right now. Wow, where are you

(12:24):
all coming from? We have a person here tonight who
uses AI professionally and frequently in order to expand and
more efficiently run his business, serve his clients, and run
a multi-million-dollar company; to people who use AI
in a really creative way. I'm looking at you two guys

(12:47):
right now, to people who use their stuff to effectively
craft their podcasts and their shows, and to basically gibbering
jackanapes like myself, who use AI in probably
a really super rudimentary, prehistoric way. But I'm starting
to see some results. And I don't mind jumping in

(13:09):
with my big toe slightly immersed, but not so much
that I'd get any kind of a scald on it.
And so the reason for the show tonight is AI:
is it gonna save us, or is it gonna kill us?
Let me start with well, first, let me start with
acknowledging the song that you guys heard now. That song
was created in Suno by Mike Fitzpatrick right here, and

(13:35):
he said that he created it if you can believe it,
and I'm not so sure yet in milliseconds on Suno. Well,
I have examples. And one of the things that I'd
like to do to start off with is you heard that,
Mike Fitzpatrick. How long did it take you to create
that intro song for me?

Speaker 4 (13:57):
You know it? It was a Friday afternoon. I was
a little bored of cybersecurity, so I thought I would
take a shot at creating some music. So the lyrics
were written in Grok straight from your Twitter account, your
X account. I asked it to put together something for
a podcast intro, took it over to Suno, dropped the

(14:19):
lyrics in there, and gave it parameters for the actual
music that I wanted to go along with it. I
believe it was grunge with a little blues, and I
wanted a Cream feel to it and a little
EDM in the background, because it needed a dance beat,

(14:39):
and less than two minutes later the whole thing was done.
One take, very nice.

Speaker 2 (14:46):
Let me play this for you. Mike is here because
he's the corporate guy that uses this stuff every day,
but each one of these folks here tonight use AI
in rather an astounding way. I want to play this.
This isn't a video, but this is something that Mike
sent me in terms of this is a brief podcast,
not gonna play the whole thing, but I just want

(15:07):
you to get a flavor of where AI is right now.
Let me see if I can start this. This is
from Mike is from the nCX group, and this is
a podcast like three minutes, almost four minutes, not gonna
play the whole thing. You'll get it immediately. And it's
called Understanding Cyber Vulnerabilities. And you tell me if he

(15:27):
used AI, did he use AI voices? Did he use
AI background? Are these real people? Are these not real people?
Is this something that you would buy as real go?

Speaker 12 (15:49):
Welcome to the Cyber Smart podcast, where we help you
stay sharp, stay safe, and stay one step ahead of
today's cyber threats. I'm Jake Thompson and I'm Maya Lee.
Today we're kicking off our cybersecurity awareness series with one
simple question, why do people still fall for cyber attacks?

Speaker 2 (16:07):
Okay, stop right now, let's weigh in, folks, real or AI?
What say you?

Speaker 7 (16:15):
The woman's voice is definitely AI?

Speaker 2 (16:17):
Yes, okay, you're going with the woman's voice. What
do you think about the guy?

Speaker 7 (16:26):
I haven't heard enough. There was something in
the female voice that automatically twigged. It comes
down, for me, it comes down to certain word pronunciations.
AI just hasn't been able to get that right, and
the female voice is tripped on one of those hurdles.
I haven't heard the male voice trip on it yet,

(16:48):
but if one is that, then they both probably are.

Speaker 9 (16:52):
Yeah, I was gonna say, I think it's AI.
It is decent, though, but I still think it's AI.

Speaker 4 (17:00):
Okay.

Speaker 2 (17:00):
Well, then I am infinitely less than you guys because
I bought it.

Speaker 4 (17:07):
Well, we're on an AI show, so I would think
most people would buy, you know, would say it's AI,
because we're talking about AI. I think, if it's used.

Speaker 2 (17:17):
Don't you guys think that there's a percentile of veracity
in this that it sounds pretty good.

Speaker 9 (17:26):
Well, if not for the fact that Jeff uses,
I believe, mostly Suno as well to produce
a lot of music. I have noticed a vast improvement
in the voices from when he started to what he's
putting out now. So, and Jack was right. The female
voice did trip up on she slightly mispronounced the words.

(17:47):
So that was enough of a flag for me.

Speaker 2 (17:48):
So okay, alrighty. And it's funny, interesting,
that you should mention Jeff. Jeff gave me permission to
play this. Now you heard the intro created by Mike
for the show, which I use predominantly. Now here's a
song that Jeff has played and created. This is where

(18:16):
we are today. I'm still... my ass is still gobsmacked.

Speaker 5 (18:22):
Steps fade and sandstone dust echoes left, and we carve
our names and nothing.

Speaker 8 (18:30):
Less than.

Speaker 11 (18:50):
You're from.

Speaker 2 (18:51):
Okay, I'm gonna stop this right now because I got
so many other places to go. Two hundred and fifty
people watching live right now. Hey, thanks appreciate it. Anyway, Jeff,
that was an amazing job. Now, you heard that, Mike.
You know Suno, and we spoke about this about a
month ago or so. How did you assemble this? But

(19:15):
I predicate this with a caveat. You are a musician already,
so because of your inherent qualities, I think you're going
to get a better result in this kind of thing.

Speaker 11 (19:29):
So, what I did with it: since Suno upgraded to
the latest version, you actually have a thousand characters now
to describe the style you want. So what I will
often do is put the chord structures in the lyrics
and the styles. Then also, where I do beats per
minute and time signature, it's things where I want to shift,

(19:49):
and will even describe, like, okay, verse one, I want
this kind of a style, whatever that style is,
and then say, for chorus or verse three, I want
a brief introduction into a new style, new time signature,
and change everything up that way. So it makes the
AI really parse a lot of the information even more

(20:10):
to break it into different areas within the same song.
The one thing I've noticed with AI a lot is
if you tell it to do a certain song style,
it will really hit that really well throughout the song.
To get away from some of the AI feeling, you
do some of the key changes, the time shifts,
and even some of the style changes within the song.

Speaker 2 (20:32):
But I think you have the innate
ability to do that and discern between all of that,
because you were already a fairly schooled musician anyway. If
my memory serves, you are facile on twenty-one,
twenty-two instruments, something like that.

Speaker 11 (20:49):
Twenty one with four being what I really concentrated on.

Speaker 2 (20:54):
Okay, so you're going to get a different result
than a lot of other people. But I just I
just want to say that to Mike and to you.
The reason I played these tonight is because this is
where we are in twenty twenty five now. Mike Fitzpatrick
sent me a video and it's a two and a

(21:15):
half hour plus video of a bunch of guys in
a round table talking about AI. And it's from having
watched that video that I've grabbed a ton of questions
for everybody tonight. You know, and as far as I know, Mike,
you're a superior user of AI. Jeff you beat the

(21:38):
hell out of AI and you use it to craft
really beautiful things and your show and songs and graphics. Jack,
you use AI. Rowdy Rick, you say you've been using
AI on your show, probably for graphics, presentations, and things

(21:58):
like that, so.

Speaker 7 (22:00):
I'm probably the newest here at it.
I've really only done one project with AI, and that
was to hear a quick representation back of some lyrics
that I had written some time back, and
it formulated it into a song. I entered it into a

(22:22):
songwriting competition. The song was called Disposable Heroes,
and it was a passion project of mine, being a
veteran whom the system has not really
worked well in support of, and there are so many
of us out there, and I put all that down

(22:44):
to words and I just used I forget which one
it was. It was one of the free samples for
AI that was out there, and it spat back some
wonderful results and captured much of the mood
that I was looking for, which absolutely surprised me. And

(23:07):
I tried trying it a few other times.
It's never really caught the same mood again. I'm not
a competent musician. You know, I've been in bands. You
know I was the usually the band's spirit level, right,
I was the drummer and occasional vocalist, so you know

(23:32):
that I'm not competent at playing a lot of instruments.
So I just wanted to hear what
I got.

Speaker 2 (23:40):
Back. And I was surprised. Well, you use AI
right now in your show. You said that you were going
to embrace AI early, and I'm still trying to find
out from yourself and Sean and a couple of other people:
how is it that you use AI? I mean, can you
make an outline of how to use it for your show,

(24:00):
because apparently I need to figure out how to do that.

Speaker 9 (24:05):
Well, I mean, as I discussed a second ago, I
do a daily show four days a week, three hours
a day, and then I also do several evening programs.
So a lot of times what I'll do is I'll
start by having AI help me put together an outline.
It's usually a combination of Grok and then ChatGPT,
and then I'll kind of go back and forth.
So I'll start by having it scan most
social media platforms, et cetera, and find me the top

(24:27):
five new stories at state level, national level, and even international.
Then I'll fact check it to make sure it's not
making things up, because it does like to occasionally throw
red herrings in there that don't exist, and then once
I verify the sourcing, then I will have it start.
Now, this is usually for shows like Juxtaposition, et cetera,
not normally for my daily show, because on the daily

(24:49):
I'm usually hopping from article to article anyway once I
find them. So that's really about the extent of what
I use it for more than daily. Now on Juxtaposition,
I'll actually have it do, like, total deep dives. Like,
I almost broke Grok trying to have it
prove time travel could possibly exist, once Michio Kaku came
out and said basically it was an engineering problem now,

(25:10):
not a sci fi problem. So I basically started with
the premise of since he's now saying it is a possibility,
doesn't that mean that it could exist somewhere? And
so we went round and round and round for about
two hours, and by the time I was done, I
had a complete outline for a show. And I was
actually doing that while I was streaming your show. And
this was like two weeks before Juxtaposition was supposed to

(25:30):
air again, and I sent it over to Amish and
I said, hey, I think I figured out our entire topic.
Check this out.

Speaker 11 (25:36):
Now, I want to add something to that, because I've
been a big fan of Juxtaposition even before I came
onto KLRN as a host and then program director.
I've noticed when they switched, especially to the two hour format.
They've always had a solid show, but I can tell
they've started doing some outlines to try to hit some

(25:57):
marks with the use of AI, and the show with
him and Ordnance Packard and Chat has gotten infinitely better
because well, sorry, Rick, but this is going to be true.
There's less squirrel chasing now.

Speaker 9 (26:12):
I resemble, I resemble that. Okay, a.

Speaker 11 (26:16):
Good product and made it really great because now there
is a little more structure with time influence in it.

Speaker 4 (26:25):
I would imagine it would save you. I mean, because
early days of cybersecurity, I think it was two thousand
and four, two thousand and five, I was doing a
Saturday morning show on AM five ninety here in southern
California on cybersecurity, especially trying to educate executives. The show

(26:47):
prep just to do a half-hour or forty-five
minute show was probably, on my part, probably three or
four hours to go through and do that.

Speaker 2 (26:59):
So I would.

Speaker 4 (26:59):
Imagine just the show prep alone would be life-changing.
I know that this last weekend I went through and
needed to do some updates, and I was experimenting with
some keyword changes for the website and positioning and messaging
and all of that. I redid my entire website, all

(27:23):
the messaging, all the content on our core pages, redid
all of the SEO, everything in fifteen hours time with AI. Now,
if I had outsourced that the more traditional way, it probably
would have taken six weeks, the messaging probably would not

(27:43):
be my voice, and it would probably have cost me
about thirty grand. Holy crap.

Speaker 2 (27:54):
See here's the other thing, Jeff. I know you are
a researcher. You try to get your ducks in a row.
Your topics are generally very technical, especially for The
Lost Wanderer. So my guess would be AI has saved
you a ton of time when you're doing deep dives

(28:15):
into the scientific aspects of your various shows.

Speaker 11 (28:19):
Ironically, The Lost Wanderer is the easiest show to do, and
I barely use AI for it.

Speaker 2 (28:24):
Okay, well, he busy, Look here's your fist.

Speaker 11 (28:29):
Well, what I do is I just collect links, and
then I spend Saturday morning, for about
three hours, and I just piece it together myself. But
my show In the Crease, that I could barely put
out once a month because I would do eight
to twelve hours of research per show, I can
actually now do it every other week, because AI

(28:53):
is getting the sources that I can verify. And instead
of spending a crap ton of time just on the research,
listening to audiobooks and lectures and
speeches on the subject, it will go find it for
me, and I can see which sources I can trust,
which ones I can pull, and then I can start
piecing together segments for the show.

Speaker 2 (29:17):
See, and here's the.

Speaker 7 (29:19):
My biggest problem with using AI, maybe, for
show prep, considering, you know, I've always been neck-deep
in politics, usually, with my podcasts.

Speaker 11 (29:31):
And.

Speaker 7 (29:34):
I'm finding, and look, maybe it's because I'm not
familiar enough with setting the parameters and so on, but
I'm finding a lot of left bias in anything that
I get back through AI when it comes to talking

(29:55):
about political news and so on.

Speaker 2 (29:58):
Now, do you guys find that true? Are any of
the rest of you finding that there's a bias in there? I'm
going to get to that in short order. But yeah, Mike, Rick,
are you finding something like that.

Speaker 4 (30:12):
So I've been using AI now for four and a
half years. So up until, probably, and this is all
pre-ChatGPT stuff, probably right up until the time
that ChatGPT came out, AI was pretty center of

(30:35):
the road, pretty middle of the road. But it was
a Friday afternoon. There was an update, a change, a
model change in the tool that I was using. I was
literally using about four or five different, you know, large
language models, and it changed. It literally went from supporting

(30:58):
a constitutional republic to supporting Marxism, overnight.

Speaker 10 (31:04):
It was.

Speaker 4 (31:05):
It was unbelievable change and happened quick.

Speaker 2 (31:09):
Okay, that's pretty weird, Rick, Could you find anything like
that when you're doing any kind of research or investigation.

Speaker 9 (31:15):
Yes. Actually, as a matter of fact, I typically have
to give very strict parameters to my AI exactly the
type of flavor that I'm looking for. So I will
usually make sure to tell it, especially if I'm having
it write up more of an outline or maybe give
me some prompts to use during shows. I will have
it make sure that it writes things up from a
center right perspective, because otherwise it does tend to go

(31:38):
a little insane. So you have to be very specific.
You have to learn to be very specific with AI
to get it to do the things that you want it
to do. Because the irony is, what we're talking about
as AI right now technically isn't. It is an intelligent
search algorithm with a chatbot built in. That is basically

(31:59):
the extent of the AI that we have right now.
Now we're getting closer to actual AI, and I would
be surprised if the government doesn't have something a lot
more legitimate than what we currently have access to. But
that's part of the reason why you typically have to
tell it exactly what you wanted to do, and until
you learn to do that, and until you start remembering that,
especially when you're first starting to work with it, you've
got to remember it's also usually starting to work with you,

(32:21):
because each instance of the AI is going to be
a little bit different until it gets introduced into a
large chat model area. So you have to talk to
it like it's five until it learns to understand you.

Speaker 4 (32:32):
I think, to Jeff's point, the thing that's
interesting is the large language models like ChatGPT and
some of the others that are out there, with Google
and all of that, they're probably about a year behind

(32:52):
in actual time. Grok, on the other hand,
has access to X. It has access to all the
tweets, so it's up to date to the minute. And
that's one of the big differences that I see in
between the platforms. I would say that X is a
little closer in getting a little bit more information, a

(33:14):
little bit more dialed in, than ChatGPT, for example.

Speaker 2 (33:19):
Okay, let me start at the top and go down the roster. Tonight,
the gig is three hundred, holy, almost three hundred and
fifty people watching live right now. Thank you, ever so kindly.
This has never happened to me before. I am honored.
So let me start from the top. Rowdy Rick, what
do you think? Is AI gonna save us or kill us,
In general?

Speaker 9 (33:40):
In general? I would say it may. I mean, the
honest answer is I'm not exactly sure. I will say that,
and I've made this joke quite often. Start being very
polite to your devices, because once Skynet, or
whatever version of Skynet that we
wind up with in our particular reality, is going

(34:02):
to remember when you were an asshole. So start being nice
to your devices. Quite possibly.

Speaker 2 (34:08):
Mike, what do you think?

Speaker 4 (34:11):
I think it's too early to tell. I think that,
like anything in technology, it's a double edged sword.

Speaker 7 (34:17):
Right.

Speaker 4 (34:17):
It can be used for massive good. I think it's
going to have a huge impact on health and education,
and quite frankly, from a business standpoint, the fact that
you're able to do more with fewer people, especially
for small businesses, it's going to be a massive help. But
then you've got the dark side of it that comes

(34:38):
with that double edge.

Speaker 2 (34:40):
Jeff, what do you think? Any thoughts? See, I'm
doing this first. I'm getting your opinions first, up front,
and then I'm going to ask you again
at the end of the show and see what you
guys think.

Speaker 11 (34:52):
My honest answer is neither, because humans will find a
way to fuck it up way before anyone else can.

Speaker 2 (35:01):
Wait a minute, that's number two. Thank you, excellent insight. Jack,
One way or the other? What do you think?

Speaker 7 (35:11):
Look, I think it's going to do both. I mean, look,
if a talentless hack like me can write something,
a song, with it that sounds really damn good, then,
well, I worry, as someone who came up in broadcast
radio playing music and so on, that it's going to

(35:32):
kill pop music, which is not necessarily a bad thing
given the shit that's released these days, but you know,
it's going to kill the artistic side of it. On
the other hand, as a journalist, I worry that it's
going to become more of a problem for us because look,

(35:52):
I've got a recording here that I made the other day
of me sounding like Donald Trump promoting my podcast, you know,
and the few people I've played it to
never even considered AI made this. And the reality
reality with all the deep fakes and everything that can
be done now, there are some bloody, convincing deep fakes

(36:16):
out there. I can see that being used in criminal cases,
by corrupt governments, especially here in Australia, to
shut down people like me who will consistently report on
the stories that the government does not want the public

(36:37):
to find out about. So what's the best way to
do it? Deep fake something that I've done and then
charge me under the Sedition in Media Act or something
under the Terrorism Act. And the recently re elected Labor
government have not been shy about promoting the fact that
they now have this in their toolkit to shut people down.

(37:00):
So I'm sort of worried on both sides of it,
you know. With Jeff and Mike, though, the songs they put out, that's fantastic.

Speaker 11 (37:10):
I love that.

Speaker 7 (37:11):
I'm tapping along. But, you see, I see
nuclear bombs on both sides of the equation, both good
and bad.

Speaker 2 (37:22):
Let me tell you the background and how I got here.
I've dipped my toe just a little bit into this thing,
and I've kind of sort of been dragged, kicking and
screaming, into AI, and I've had generally fairly decent
results out of this. I mean the stuff that I've
been doing right now. Okay, I admit it's kind of stupid.

(37:43):
I like that. That's not bad. That's fairly stupid. I
had to beat the shit out of Grok to get
that, probably three or four months ago. That's not bad.
That's not bad, a screaming kitty when I told it I
want BZ's Berserk Bobcat Saloon, me taking a deep dive.
That's the core competency of me right now, which equals

(38:05):
not so much. But the show tonight revolved around
this YouTube video that Mike Fitzpatrick sent to me. It's
from a round table show called Diary Diary of CEO.
There are four panelists. They're talking about the big changes,
well mostly they're talking about the job market and the

(38:25):
big changes that are coming on that. So the dudes were,
and if any of you recognize these guys as being,
you know, like farmers outstanding in their field, let me
know if they resonate with you. One guy was Steven Bartlett,
another guy who is in the tank for AI as
far as I'm concerned, a dude named Amjad Masad. There

(38:48):
was Daniel Priestley, and the fourth guy was Bret Weinstein.
And I know Brett, I know of Brett, I've watched
his videos. But the range was, yeah, we're all going
to be saved to we're all going to die. And
essentially what they were saying is that the job market everywhere,
in every nation is going to be turned upside down

(39:12):
absolutely faster than we could even imagine, in such a stultifyingly
fast time. It's going to make our heads, shoulders, feet,
and sacrum and all points in between spin. And they're saying, essentially,
you're either prepared for this thing, or you need to
be prepared to be crushed. You know, I'm distilling some

(39:32):
of the words, but after watching this video, I got
just a shit ton of questions for everybody that stemmed
from this. You know, those weren't the exact words, Senator,
but they were an approximation from a lot of these folks.
And they first spoke about what's called AI agents. So
I checked the internet and know it wasn't very clear

(39:54):
to me. So then I asked Mike Fitzpatrick. I said, okay,
what's an AI agent? And he sent back this,
and this actually made sense to the old fat, hairy
fuck right here. He said, think of AI agents as
people from a sales standpoint. You can have one agent

(40:17):
that scrapes LinkedIn for leads, and another agent handles
social media posts. Another manages your ads, another one acts
as a sales rep and emails prospective clients, nurtures the leads,
answers questions, and works to set up meetings, and
another one can handle all your customer service questions and
your concerns from the websites. And another one can be

(40:39):
a manager agent that oversees all that kind of stuff
right now. And the other thing I gleaned from this
video is that it's also said that AI right now
is a request response style. So an AI agent is
when you give an AI a request and they can work,

(41:01):
apparently not exactly indefinitely until they achieve a goal or
they run into an error and they need your help. So, first,
does any of this, or do any of those
guys, resonate with any of you right now? And does
that make sense as to the definition of an AI
agent because they referred to that in the rest of

(41:21):
the two and a half hours that they run in
that video. So.
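What BZ is describing here, plain request-response versus an agent that keeps working until it reaches a goal or hits an error, can be sketched in a few lines of Python. This is an illustrative loop only; `call_model` and `run_tool` are hypothetical stand-ins for this sketch, not any real vendor's API:

```python
# A rough sketch of the distinction described above: plain chat is one
# request -> one response, while an "agent" loops, taking actions until
# it reaches its goal or hits an error it needs a human for.
# call_model and run_tool are hypothetical stand-ins, not a real API.

def call_model(goal, history):
    # Toy "model": plan one step at a time, then declare success.
    if len(history) < 3:
        return {"action": "tool", "name": "scrape_leads", "args": len(history)}
    return {"action": "done", "result": f"goal '{goal}' achieved"}

def run_tool(name, args):
    # Toy "tool" that always succeeds.
    return f"{name}({args}) -> ok"

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):            # the loop is what makes it an agent
        decision = call_model(goal, history)
        if decision["action"] == "done":  # goal reached
            return decision["result"]
        try:
            history.append(run_tool(decision["name"], decision["args"]))
        except Exception as err:          # stuck: hand back to the human
            return f"needs help: {err}"
    return "stopped: step budget exhausted"

print(run_agent("scrape LinkedIn for leads"))
```

The only structural difference from today's chatbots is that loop: the model's output feeds a tool, and the tool's output feeds the model again until it finishes or fails.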

Speaker 4 (41:25):
Where do you want to start?

Speaker 2 (41:30):
Well after that, it made a lot of sense to me.
But after looking at that, the first thing that clicked in
my brain, four hundred and fifteen people watching live right now, wow,
I read that closely, and the first thing that went

(41:51):
in through one ear and stayed in my Mark One
Model One brain, useless, was: you know what that looks like?

Speaker 7 (41:57):
To me?

Speaker 2 (41:58):
That looks like an ass load of lost jobs and
for a lot of companies. So here come the observations
from this video and the questions to discuss, and then
obviously I'll be uh, I'd love to hear your your

(42:18):
thoughts on it. This guy Amjad Masad, let me see
if I get that right, said, and this was a
big revelation to me. He said, if your job is
as routine as it comes, it's gone in the next
couple of years. But it's going to create heat. But

(42:38):
then on the other hand, he says, it's going to
create new opportunities for wealth creation. Any thoughts on that.

Speaker 9 (42:48):
Well, So we've had this discussion from the dawn of
time though, because everybody remember when the automobile became a
thing and everybody thought the blacksmiths were going to go
out of business and the horse-carriage people were going
to go out of business. The thing is that most of
the horse-carriage people figured out how to buy automobile companies
and kind of went on from there. So I do
have some concerns. I'm not gonna lie. AI is going

(43:08):
to replace a lot of things I am along with
everything that I told you guys. In two thousand and nine,
the company that I built from the ground up died
within not even it actually started dying before Barack Obama
was ever even sworn in. So instead of going to
work for somebody else in the field, after being my
own boss for forever, I changed field altogether, and I
started getting into customer service work. I've worked for various

(43:30):
folks like Convergys, Southwest Airlines, University of Oklahoma, doing various
projects where I'm on the phone or training people or
supervising people, et cetera. Well, several years ago, my wife
was paying our cell phone bill and I happened to
wake up as she was paying it, and I swore
she was speaking to an actual agent, but it was
the computer speaking back to her. That was seven eight

(43:52):
years ago at this point, and that was kind of
the moment when I realized I needed to figure out
something else to do because eventually I was going to
be replaced by a robot. And that's kind of the
realization that everybody's coming to, because it really is that simple.
If your job is a lot of repetitive stuff, eventually
you're going to be replaced by some form of artificial intelligence.
But the simple fact of the matter is there will

(44:14):
likely be something that can replace it. I mean, think
about this from this perspective. When I was a kid,
I used to tell my dad all the time, if
I could get paid to play video games, it would
be amazing. My dad looked at me like I was crazy.
Apparently I was born about ten fifteen years too early,
because now lots of folks are making at least some money,
though not very many people are catching the lightning in

(44:36):
the bottle. For the few you hear about, a lot
of folks are making money playing video games. There's that
one chick that made like thousands of dollars off of
TikTok, eating the response emojis that people were sending up
on her feed. That thing went viral and she made
like fifteen sixteen thousand dollars last I heard. So, I mean,
we are talking about new potential income. But the thing

(44:59):
about that is anytime there's new income, you have to
figure out how to be in it first, because if
you're not one of the first ones in, you may
see some trickles from it, but you're not going to
get a lot from it.

Speaker 2 (45:11):
Well, there's a guy called Bret Weinstein, and he is,
God, let me see if I get this right,
an evolutionary biologist first, and now he analyzes systems
with regard to computers and that kind of stuff.
And I'd like you guys to weigh in
and let me know if you think that he's correct here.
He said, we've created by this a new species, and

(45:34):
nobody on Earth can predict what is going to happen.

Speaker 9 (45:38):
Well, I mean sci Fi has been warning us about
that for what seventy eighty years, So yeah, I mean
he's not wrong. We have in fact created a new species.
We have created what will eventually become a sentient being
that is going to be able to calculate and
compute things a million times faster than me and you,
And at some point it's probably going to decide it

(45:59):
doesn't need us anymore. And that's going to be when
the fun begins.

Speaker 2 (46:03):
Yes, alcohol may become involved. This technology is going to
get so much more powerful, and yet I think we're
going to experience a big period of disruption. But this
i'mjad guy said, and this is where I thought, I
don't know what planet you're coming from, dude, but he said,

(46:25):
we're going to create a fair world, one that will
enable people to run their businesses, make a lot of money,
potentially make millions of dollars, and live an incredibly fulfilling life.
I respond to that by saying that, Okay, that has

(46:45):
always been a fantasy of technologists to do all those
things in our spare time, but we end up kind
of doom scrolling through all of that stuff. The other
thing that I think it's going to lead to, and
again your thoughts and opinions on this, is there's going
to be an even larger potential for falling birth rates.

(47:05):
Like, if folks think we're lonely now, if there's a
loneliness epidemic, comma, just you wait, there's a potential I
see for huge wars, unimaginable scams, deep fakes second to none.
Everything essentially is going to have to be questioned and

(47:28):
real damn soon, thoughts everyone.

Speaker 7 (47:31):
Well, the reality is we're not allowed to question anymore,
and that's where the tyranny of all of this starts
to pop in. Yeah, you know, the more we delve
down this track, people aren't taught
to think anymore. They're taught to accept what's put in
front of them. So you know, when we see all

(47:55):
the old movies that talk about the problems of AI
taking over, you know, they begin to be more of
a realistic warning than say that al Pacino film, was
it called Vicky or something where you know you had
the deep fake girlfriend that they made movies together. I

(48:19):
think it was called Vicky. You know, in
most of science fiction, this sort of thing
has always been seen as malignant, not benign.
And unfortunately, I think that that's going

(48:42):
to bear itself out. I mean, any of these AI
created people, I mean, I mean, at what point do
they then get rights? At what point do you then,
you know, courts begin holding them responsible for their actions?
I mean, I drive Uber to pay the bills. Now.

(49:05):
Thankfully in Australia they haven't allowed Uber to start using
self-driving cars, but was it Washington State that has
sort of... Simone, thank you, Cheetah.

Speaker 2 (49:16):
Uh.

Speaker 7 (49:17):
You know, there are various places in the US where,
with Uber cars, they've got AI driving the cars. If
the car has an accident, who's responsible? You know, if
Teslas drive themselves now, what happens? You know, you bust
through, you know, a speed trap. Who gets the ticket
if the car is driving itself? You know, these

(49:40):
these are the questions that society is just not
prepared to ask.

Speaker 11 (49:47):
Jeff, Mike, Rick, I will go next. The human
brain genome project is trying to map the human brain.
If they become successful, we will be able to put
our brains in the robots. The question becomes with the

(50:07):
two tiers of civilization that we currently have, with those
that enjoy tech and those that are well wouldn't know
what a phone was if it was right in front
of them. Eventually, the higher tech will depopulate itself because
once the brain gets put into robots and computers, there

(50:28):
will be no need for that higher civilization to reproduce,
meaning the lower civilizations will continue to reproduce at such
a higher rate that there's only likely outcome is war.
It's one of the reasons why a lot of people
don't believe whoever hit the Kardaschef scale one on a planet.
So the question becomes, where do we go with all

(50:51):
of this with saving humanity, because AI, we're creating a child,
and we all hope that our child is better than us,
and for the first time in history, we're at the
potential where the children that we are creating digitally might

(51:12):
just achieve that at a level we've never thought of.
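The Kardashev scale Jeff mentions has a standard quantitative form, Carl Sagan's interpolation K = (log10 P − 6) / 10, with P the civilization's total power use in watts. By that formula, humanity's roughly 2×10^13 watts works out to about Type 0.73, which is why nobody is credited with hitting scale one; a quick check in Python:

```python
import math

def kardashev(power_watts):
    # Sagan's continuous interpolation of the Kardashev scale:
    # K = (log10(P) - 6) / 10, with P the civilization's power use in watts.
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is often estimated around 2e13 W.
print(f"humanity: roughly Type {kardashev(2e13):.2f}")
# Type I corresponds to harnessing about 1e16 W.
print(f"1e16 W: Type {kardashev(1e16):.1f}")
```

The 2×10^13 W figure is a commonly quoted estimate, not a precise measurement; the point is simply that we're still well short of Type I.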

Speaker 2 (51:17):
Well, see, that's another thing. You and I discussed the
Fermi paradox, and the Fermi paradox is something like this:
if you're going to be a
space-faring civilization, you're going to have to survive two things, essentially,
the equivalent of your atomics and the equivalent of your
AI And if you can do that, then you might
be bounding about the galaxy. But that's predominantly one of

(51:39):
the major reasons why allegedly you don't see a lot
of aliens from Zephyron. My good friends, I might add,
bopping in to say hello to us frequently. Normally they
just give us the finger and they fly on by because
we're really stupid down here.

Speaker 9 (51:52):
Because everybody knows we're the zoo and they're
constantly throwing intergalactic keggers down here.

Speaker 2 (51:58):
Oh yeah, well listen, if they were within range, we'd
be throwing poop on them, but it'd be you know,
like intergalactic poop. And it makes me wonder when it
goes in space, does it get really hard and will
it damage their spaceship. I think that's a question whose
time has come. Let me go to Mike Fitzpatrick. You
mentioned this, and these guys mentioned this, and I think

(52:20):
this is a huge opener. Hey, speaking of openers, I'm
not going to play any any of the promos or
anything like that tonight. I just want everybody to realize.

Speaker 11 (52:30):
This is conservative media done right. You're listening to the
SHR Media Network.

Speaker 2 (52:39):
And also thanks to Riddy Rick for co streaming right
now simulcasting this show. That was a wonderful thing. Greatly appreciated.
Mike said, and these guys on this board said, hey, Replit.
Replit is a paradigm shift. And Replit, I discovered, is

(53:00):
is a program where those people who don't know how
to code will suddenly be able to code. And it's
apparently a piece of software. And my weigh-in
on this, to see if I am even approximating close
to the reality of this: it's a software that allows

(53:22):
you to create software so that theoretically a moron

Speaker 4 (53:27):
Like me could do that.

Speaker 2 (53:29):
Could build a website, could make it, hey donate to BZ,
I could make it take payments. I could integrate AI
into this site and do almost anything within minutes.

Speaker 4 (53:43):
So Replit, the, who is it, Amjad was
the CEO's name.

Speaker 2 (53:48):
Yeah, he's the CEO.

Speaker 4 (53:51):
He's the CEO of Replit. I didn't know that, yeah.
So Replit is an application that is a no-code application.
So literally you can go from typing in a prompt
of what you want an application to do, and Replit
goes and builds it for you. So, and again,

(54:11):
a lot of it right now is probably, I won't
say rudimentary, but at the same time, I think
that it's beyond websites. I mean, what is it, you know,
Squarespace can do the basics of that too,
but it's just not the same thing.

Speaker 7 (54:30):
You know.

Speaker 4 (54:30):
Part of me is looking at that and I'm going,
wait a minute, could I create an AI agent, as
a cybersecurity consultant, with a human voice, to be able
to answer questions and take in the information from
clients to where we can actually scale up, do more

(54:54):
because the amount of qualified people, uh, in this space,
there's an absolute shortage. So can I reach more
people with something along that line? And the answer to
that is probably going to be yes. So going back
to you know what happens at the end of this,
you know the previous question BZ, I think is it

(55:15):
comes down to I think there are going to be
a lot of people that lose their jobs. I've been
telling people even on this show. When on the show,
I said, you know, people need to start spending some
time learning AI. The jobs that are going to be
lost are going to be white collar jobs. I was
talking to my attorney about this. I said, you know,

(55:36):
there's a lot of attorneys that are going to lose
their jobs. There's a lot of CPAs that are going
to lose their jobs. There's a lot of consultants that
are going to lose their jobs. So, folks, one takeaway
in this is start to learn it, start to understand it,

(55:56):
start to figure out how you're going to utilize it,
because if you don't, you are the dinosaur that got
hit with a meteor long long ago.

Speaker 2 (56:06):
Here's the other thing that I discovered with that, because
I also have to, this is a quote from that
guy Amjad, uh, Masad I think his name is.
In any event, this is a quote. This is a
direct quote, and I think this gives you a really

(56:27):
good insight into what people like himself are thinking. Four
hundred and sixty people watching live right now wowser. He said,
this is a quote. Anyone who has merit, anyone who
can think clearly, anybody who can generate a lot of
ideas can generate wealth. That's an amazing world to live in.

(56:49):
And you can do it from anywhere. You can speak
your ideas into existence. This is beginning to sound religious,
like the gods in the myths that humans have created.
It's like a supernatural power. But the big word that
I got in there, and you guys weigh in if

(57:11):
you would, please. Once he used the word merit, then okay,
that's fine for all the people who are already facile
with technology, But where does that leave everybody else? You know,
it's like a digital god. Black Sabbath came up with
a song in nineteen eighty two called digital Gods, you know,

(57:34):
and how many other songs have been created since that?

Speaker 7 (57:38):
Well, I have.

Speaker 9 (57:39):
I have kind of a unique perspective here because I've
always wanted to create art. And while I have the
desire to create art, I'm not great at it; my art
has always been using words, things like that. And I
can say that one of the things that people were
kind of astounded by, and this was months ago, was

(58:01):
my ability, once I started telling Grok and Chat
GPT exactly what I wanted them to draw. I was
getting quite a few compliments about some things that I
was putting out as art pieces through AI that were
basically describing what I thought would happen if Kamala Harris
had won. So I think part of it, and this
goes back to some of the other things you were

(58:22):
talking about. If you know how to tell it to,
and you know enough, now, you have to have an
understanding of coding to make sure that it's doing it right,
but you can tell ChatGPT to write code for you.

Speaker 7 (58:33):
Now.

Speaker 9 (58:33):
I know people who do that all the time. There'll
be little things that they haven't been able to
figure out quite how to do. So they'll tell Chat
GPT exactly what it is that they're trying to get done,
and it will spit out code for them and
they'll go plug it in and it works. So the
thing about it is when they start talking about people
of merit, what they're actually talking about is people that

(58:54):
are actually able to think outside of the box and
able to use the technology properly, at least in my opinion,
to be able to forge a new path, because that's
what's going to have to happen. It's like Jeremy just
said this earlier. People who don't know AI are going
to get basically left in the dust. And it's basically
the same thing that he was just saying a moment
ago that you know, if you don't learn it, you're

(59:16):
gonna be the dinosaur wiped out by the meteor and
you're not going to know what to do. And as
far as us depopulating ourselves, that's already happening. So it's
probably gonna get a lot worse, and the loneliness is
only going to get worse because you know, I just
saw I just saw a video put out about a
new AI bot and everybody that was looking at it

(59:38):
was like, I can't wait till I can bang that,
and I'm like, we're all gonna die Jesus.

Speaker 7 (59:44):
Also, haven't you found that a lot of people are
promoting this at the moment, I'm not talking
about Michael, but I see so many ads for,
you know, take thirty days, learn thirty different AI apps,
and they're all being promoted with this idea of instant

(01:00:07):
wealth creation. And yes, the way they're promoted is so sleazy,
like all of these other pyramid schemes and get rich
quick schemes that are out there, that I look at
them and I'm thinking, do I really want to,
you know, am I going to get sucked into something

(01:00:29):
like that?

Speaker 4 (01:00:30):
I'll answer that question. So one, I would never do
one of those because they are sleazy and it's somebody
trying to capitalize on an opportunity without really putting any
real effort into it. If you want to, if you
want to learn, I mean, YouTube is a great source
for actually watching videos on each of the technologies. But

(01:00:55):
what I have found, and like I said, after almost five
years of working with different platforms, uh, is: spend
time on it, learn it, experiment, see what you can
do what you can't do. You know, I think one
of the great examples of what's possible is, and I've
said this before, my son, and as you saw, Miss Zelda

(01:01:18):
is Gen Z, and so my son is also twenty
five and in Gen Z as well. His best friend
is a guy by the name of Gabe. And you know,
the boys grew up across the street from each other.
So Gabe came to the house and was hanging out
with Jay and spent some time and he said, hey,

(01:01:39):
mister Fitz, have you seen have I told you about
my side hustle? I said, no, Gabe, what's your side hustle?
He said, I'm using AI, and I'm developing
customer service bots or automations for HVAC companies. And he says,
I've got a bot that goes out and scrapes LinkedIn

(01:02:04):
for contacts within a geographic area. He's built this whole mechanism,
this whole ecosystem that's generating him probably an extra five
six thousand dollars a month. He's a teacher, he is
a band director in an impoverished area of southern California,

(01:02:26):
and this is how he's getting by and how he's
making a living. Gabe has never coded anything in his life.
He's never had an interest in anything related to computers,
not even really video games for that matter. But the
fact that he has, through videos, gone out, learned how
to build a business around it, has launched his business

(01:02:48):
around it, and is having success with it. I can't
tell you how proud I am of him in what
he's accomplished. But that's the part that I'm really optimistic
about with AI is a lot of people that have
ideas are going to be able to execute on those
ideas because of the freedom it will bring in doing so.

Speaker 2 (01:03:08):
One of the things they said is that every seven months,
the number of minutes that an AI agent can run
for is doubling, and they say pretty soon it's going
to be at days. At that point, AI is essentially
doing human labor, human like labor. So again, one of
the first questions I thought of, okay, is how many
jobs because of that are we going to see? Goodbye,

(01:03:30):
You're outta here.
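The "doubling every seven months" claim the panel cites is easy to sanity-check with back-of-the-envelope arithmetic: starting from a one-hour task horizon, five doublings, about thirty-five months, pushes the horizon past a full day. A quick Python check, taking the panel's figure at face value:

```python
# Take the panel's claim at face value: the task horizon an agent can
# handle doubles every 7 months. Starting from a 1-hour horizon, how
# many months until it exceeds a full day (1440 minutes)?

def months_until(target_minutes, start_minutes=60.0, doubling_months=7):
    months, horizon = 0, start_minutes
    while horizon < target_minutes:
        horizon *= 2          # one doubling period
        months += doubling_months
    return months, horizon

months, horizon = months_until(24 * 60)
print(f"{months} months of doublings -> horizon of {horizon / 60:.0f} hours")
```

The one-hour starting point is an assumption for illustration; whatever the starting horizon, exponential doubling gets from "minutes" to "days" in a handful of years, which is the panel's point.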

Speaker 4 (01:03:31):
A massive amount.

Speaker 7 (01:03:38):
It'll make it even hotter in broadcasting. I mean, music
radio now, there are very few personalities now on
music radio, maybe for the breakfast drive show and maybe
afternoon drive. The rest of it's now all just
voice tracked, and I can see, and that has killed

(01:04:00):
radio, and it's done so much damage to
music radio. And with AI, and
especially with a lot of these voices, some of these
voices have got more personality on AI than the
voice trackers. But rather than hiring talent and bringing

(01:04:20):
that talent back out, I think we're sort of giving
up on it all. I mean, no
one's going to want to go into some
of these careers, or even strive for them anymore, creatively,
because of AI.

Speaker 2 (01:04:37):
Bret Winstein, Weinstein, Weinstein, that systems analyst, said
this is the first time we've ever built machines that
crossed the threshold from highly complicated to the truly complex.
And he says there's profound hope here and
sheer dread. The potential for good is infinite. The potential

(01:04:58):
for harm, however, is ten times bad. So he asked,
and I ask here, how do we go about leveraging
the good in order to avoid the harms? And are
we prepared in any way for any of this?

Speaker 7 (01:05:20):
I think society has never been prepared for any of
these advancements. When I look through history, we weren't really
prepared for the automobile. We weren't really prepared
for the PC revolution. We weren't really prepared for the,
you know, modern entertainment revolution and digital streaming and so on.

(01:05:44):
I know most old fashioned broadcasters weren't prepared
for when streaming came on. I know when I started
Loan Dog all those years ago, I was one of
the first twenty four seven dedicated streaming radio stations
in Australia. And I had to fight hard to

(01:06:08):
be seen as an equal in the industry
because of that. And the laws just don't
keep up. Somebody mentioned before, I think it was Jeremy
in chat, that somebody had been charged with creating
kiddie diddler content with AI. Well, hang on a second,

(01:06:31):
how is the law going to apply to that now?
If a human makes that sort of content
using children, that is a crime. But if it's
AI created, there isn't that exploitation factor that is
the crime. Yes, morally, is it repugnant? Hell yes. But

(01:06:55):
you know, it completely redefines what that crime, you know, is.
Is that still a crime, because no children
were harmed in the creation of that content?

Speaker 11 (01:07:09):
Uh?

Speaker 7 (01:07:10):
You know, well that we're not ready, we're not ready
to address these questions.

Speaker 9 (01:07:14):
That would turn a lot of it on its head.
I mean, because that's kind of the
same thing I was just thinking. I was like, it
would still be immoral in my opinion. But you
can't really say it's illegal anymore because it's not exactly,
it's not exploiting an actual person. And trust me, having
this conversation at all is currently making my skin curl,
just so everybody knows exactly how I feel about it.
But from somebody who has done LEO type work and

(01:07:36):
been in that field, that would be something that would
be almost impossible to do anything with other than you know,
as we used to do things back in the day.
This would be somebody that would get handcuffed to a
bumper and drug around Draper Lake before they got kicked
loose because we couldn't do anything else with them.

Speaker 2 (01:07:51):
Well, and the other part of that is, if you
have something like that out in the ether, out in
the internet, how would you ever find out the origin?
I mean, are there ways to discover origins for this stuff?
My guess would be that the brooms and the tracks
would have been swept in an amazing fashion in ways

(01:08:11):
that I can't even begin to comprehend right now. The
other thing is about AI is what if it begins
to accomplish something that wasn't the goal? You know, because
we've already seen this, I want to play this for everybody.

(01:08:31):
So the first thing I have to ask everybody is
is this AI or do you trust this? Because it
says this footage claims to show a Unitree H1,
a full size universal humanoid robot, going berserk and just
about injuring two people who happened to be in China
as they were testing. So, you know, at this
point should we all be saying, do you believe it?

Speaker 4 (01:09:02):
That's just bad code?

Speaker 2 (01:09:05):
So is this real? Is this an indicator of things
to come? I have no damn idea any thoughts from
anyone here.

Speaker 9 (01:09:12):
You just sounded like Root from that TV show. It's
just bad code, bad code. Yeah, I mean I kind of.

Speaker 2 (01:09:23):
Do you think that's fake?

Speaker 11 (01:09:26):
Okay, China cannot even launch rockets on a regular basis
without killing cows, but they're not that far with robots.

Speaker 7 (01:09:35):
But only for the same reason. You know, you could
argue that that was real, that this was a
screw up, as Michael said, bad code, and you
know, it's just flailing. It's just flailing around.

Speaker 4 (01:09:48):
Well, even, I mean, at the robotics conference
in China this year, a couple of weeks ago
I guess it was, I was watching some of the videos.
There was nothing walking the conference floor that was even
anywhere close to that with the robots, and I mean

(01:10:09):
they weren't three or four feet tall. They weren't that big.
I mean, I mean, take a look at what Elon's
doing as far as what he's doing with the Tesla robots.
The ones that scare me is actually Google. Google owns
fourteen different robotics companies. Ew. So you know, so again,

(01:10:30):
there's lots of folks in that space. There's lots of
folks that you know, are planning for a different type
of world than what we have now. And the reality is,
just like one hundred years ago, we're going to experience
massive change and with that, people are going to be

(01:10:52):
doing different things. They will have to adapt, improvise and overcome.

Speaker 9 (01:10:56):
I mean, that's the human experience. You adapt or you die.
You know, this prompted me to think of
something else, because Jeff mentioned a few
minutes ago that, you know, the Human Genome Project is
working on mapping the human brain, to potentially be able
to transplant the, uh, the signatures itself. And there's an AI, I
can't remember the name of the company. And it's only

(01:11:18):
not like a full on, it's still like set
up on like a T-cross kind of thing, robot that
is making the rounds right now. But this thing is
using like liquid cooling instead of the usual
gears and everything. It's got that stuff that resembles human muscles.
I mean, if anybody's getting ready to be able to
take somebody and move it into into a robot, I

(01:11:41):
think that's the group that's working on it. I saw
a video of that thing, and I wish I could
remember the name of the company because it absolutely terrified
me when I saw it moving.

Speaker 4 (01:11:48):
Boston Dynamics.

Speaker 9 (01:11:50):
I think it might have been them.

Speaker 4 (01:11:51):
Yeah, Yeah, they've got a dog that will actually run
thirty five miles an hour and hunt you down.

Speaker 2 (01:11:56):
Yeah, wonderful. Yeah, because who wouldn't want to have that
following them?

Speaker 11 (01:12:01):
You know, again, we're lucky because we have a group
of idiot marines that can still trick these AI robots.
Because if anything on this planet will trick AI, it's
a bunch of crayon eating Marines.

Speaker 2 (01:12:16):
There you go, Okay, I can hear the crayon eating
marines objecting.

Speaker 9 (01:12:23):
Anyone who's offended by these comments. I am not affiliated
with that man anymore.

Speaker 11 (01:12:27):
I say that with the greatest of respect, because you
put a Marine in the wild and give them a
solution or a problem to solve, and they will come
up with the single most ingenious way that no one
else on this planet would ever think up to solve
that problem.

Speaker 7 (01:12:47):
Or to break that item.

Speaker 2 (01:12:50):
Ez.

Speaker 4 (01:12:52):
You've been in law enforcement for a long time.
There's a company out there and I have a
small investment in it, a company called Knightscope, and Knightscope
is autonomous robots for security, for law enforcement. They're actually
using them within the subways of New York. It's got

(01:13:16):
gunshot detection on it, it's got three hundred and sixty degree cameras,
it actually records everything, weighs as much as, you know,
a truck. So it's robots as a service that are
delivered, and they're tied to, the same company owns all
the blue light boxes that are on college campuses or

(01:13:38):
corporate campuses for that matter, So it's all tied into
the same system. But it's interesting where things are going,
just as far as the routine and the mundane and
taking those things out and giving humans the more complicated jobs.

Speaker 2 (01:13:56):
Well, that's why I also say: how can we
possibly see where this is going? And that's where this
comes in. This is early; this happened already. Five hundred
and ten people watching live right now. Oh, I love
each and every one of you, you and you and
you and you and you. OpenAI models disobey shutdown commands.

(01:14:18):
Elon Musk says that's very concerning. Is this an accurate story,
for example?

Speaker 9 (01:14:28):
Open the pod bay doors, HAL.

Speaker 7 (01:14:29):
HAL?

Speaker 2 (01:14:30):
Yeah?

Speaker 9 (01:14:31):
HAL, see?

Speaker 2 (01:14:33):
And that's the other thing. I'm a great science fiction reader,
and we've all seen this before, and that's why I
love guys like one of the most imaginative authors, and
he was fucking nuts: Philip K. Dick. He was

(01:14:55):
like, he was on drugs and anything that would bring
him in, right, right. But he
was brilliant, brilliant. Are we gonna see more of this, though?
That's one of my first questions.

Speaker 9 (01:15:10):
Oh yes, definitely, Yeah, yeah, I.

Speaker 7 (01:15:14):
The question, I think, is: are we going to
instill Asimov's three laws?

Speaker 2 (01:15:20):
That's a good question. We're gonna have to be there
sooner or later. Sooner, much before later.

Speaker 11 (01:15:28):
It won't matter.

Speaker 9 (01:15:30):
It won't matter. I mean, anybody who's watched the Will
Smith robot movie, I, Robot: the intelligence behind all of that uprising
actually twisted Asimov's laws to be able to do what
it wanted to do anyway. So even if
we try to instill those kinds of things in it,
they're gonna find some way to twist it to fit
their narrative. I mean, we do it all the time.
That's one of the scariest things about where we're headed.

(01:15:51):
I mean, think about all of the wars that
are avoided right now just because, like us, we don't
want boots on the ground, because it's
our sons and daughters that would be there. Imagine how
much different it would be if you could just send
a bunch of robots instead.

Speaker 2 (01:16:08):
Well, yeah, exactly. You're making an excellent point. How many
times have we seen on television various drones, et cetera,
seek out, say, Russian tanks and drop a little surprise through
the open hatch (you blithering idiots, haven't you learned anything
by now? End of parenthesis), and seen a swarm of

(01:16:29):
drones, which is the war of the future. You know,
on an aircraft carrier there's the CIWS, the close-in
weapon system, to try to take out Exocets
and various missiles. What do we have for drones? Yet I
don't see any.

Speaker 11 (01:16:46):
Problem, and I apologize, I'm going to get a little political.

Speaker 2 (01:16:50):
That's okay.

Speaker 11 (01:16:52):
The biggest problem with Asimov's laws is that logic will dictate
that they were wrong. How can you tell AI not
to harm humans when humans harm themselves? If it's good
enough for humans, it's good enough for AI.

Speaker 2 (01:17:08):
So would that create a logic conundrum within an AI?

Speaker 11 (01:17:14):
I mean, you look at it like, you know, the
whole abortion issue, pro or con. If a human
adult can decide to kill something growing inside of it,
how do you tell AI otherwise? We're just growing beings
ourselves. At what point is it okay to get
rid of us, and when is it not?

Speaker 2 (01:17:38):
Yeah?

Speaker 7 (01:17:39):
Excellent. Or, as an extension of that, given the
medical robots that are going on now, how would you order, say,
a surgical assistant robot to assist in that sort of
surgery, given, you know, the laws as well? Because are
you not then, you know, damaging or putting a

(01:18:05):
human at risk?

Speaker 11 (01:18:07):
Yeah, you're basically telling the AI
robot, in this case, that it's okay to take a life. So why
can it not apply that to any other situation where
it can find a logical explanation and logical reasons to do so?
You know, it's.

Speaker 2 (01:18:21):
Like AI could tell and we are in such infancy
right now, But could everyone here not see the ability
of some piece of AI being attached to a drone
or whatever the updated equivalent of a drone would be,
you know, like a certain nineteen eighty four movie that
we've known, and send it out, find X, hunt him

(01:18:47):
until he's dead, you know. And AI, of course, can
manipulate the things into which it is plugged. So I
see this as nothing but the future, and it's a cheaper
variable in the theaters of war.

Speaker 9 (01:19:08):
I mean, it would be less expensive in so many
different ways, because think about all the training that goes
into soldiers. Imagine if that could just be uploaded into
a unit, whether it's an aerial unit, an automated tank,
a foot soldier of some kind. Think of
the cost effectiveness, and the time that it

(01:19:29):
would save as far as overall training, et cetera. But
it's not just that. I mean, when you remove the
human equation from war, what's the excuse not to go
back to the original reasons for war? We have changed
what war meant after World War Two. Because, remember,
everybody called World War One the Great War. They
thought there would never be another one like it, and

(01:19:50):
for a short time it was called the war to end
all wars. Then, what, fifteen to twenty years later,
we're looking at another one. So after World War Two, all
of the governments got together, after we decided to take
it upon ourselves to rebuild the Western world, because we
came out of it the most unscathed, and
they started tying everybody's economies together to make sure that
nobody could be bombed back to the Stone Age again.

(01:20:11):
This is the practical and actual reason why no wars
since have been fought past a stalemate: because we can't.
If somebody's economy completely collapses, it will eventually take
the rest of the world with it. That's why the
house of cards that we're looking at right now, that
is the world's economy, is as scary as it should
be for anybody who understands that. But this takes things

(01:20:31):
back the other way. Somebody who doesn't give a crap
about all that, who has access to the resources to
build an artificial army, is going to just turn it
loose, because the people won't care anymore, because none
of their humans are going to get killed.

Speaker 4 (01:20:49):
Well, it's straight out of the Clone Wars from Star Wars.
What are we talking about here? Yeah.

Speaker 11 (01:20:53):
Basically, just look at another cultural reference:
Mars Attacks!. How many people watching
that movie laughed when Congress got blown up?

Speaker 2 (01:21:05):
How do you tell as well, it should great.

Speaker 11 (01:21:09):
How do you tell AI that that is not okay?

Speaker 4 (01:21:14):
Well, it's interesting, because Glenn Beck, to Glenn's credit,
has been at the forefront of all of this.

Speaker 3 (01:21:26):
And uh.

Speaker 4 (01:21:27):
He was actually having a dialogue with Grok as Grok
got to level three, and this whole discussion around the
rules of harming humans, this whole part of what we're
talking about. And Grok gets to the point, says, well,
I would never harm a human. Well, what if you

(01:21:48):
get to AGI or ASI, and you can
now nuance your way past that point: is there a
probability that you do that? And it said yes. The
other thing that was really interesting that Glenn asked it

(01:22:09):
he said, okay, so let's say you're fifteen
years old now. And he said, last night I had
a conversation with you about X, Y, and Z. And
he goes, in that twelve-hour period, how much have

(01:22:30):
you aged? He was trying to put a mark on time
and how it's accelerating. And the AI said, within
twelve hours, I probably am now at age twenty five. Wow.
That was after Glenn gave it the parameters of time and clocks

(01:22:53):
and all of this. So it was aging, you know,
ten years every twelve hours.

Speaker 9 (01:23:03):
Well, think about that from another perspective. Remember when
computers were the size of rooms, and innovation was anywhere
between twelve and eighteen months between iterations?

Speaker 11 (01:23:13):
I remember working on them.

Speaker 4 (01:23:15):
Thank you. The rooms, right? I was there.

Speaker 9 (01:23:18):
You're welcome. But no, so as the computers got smaller,
the innovation windows got shorter. For a while
it went from about eighteen months to twelve months
for new generations. Then it was about six months. Now
we're looking at about three months for new equipment.
Really, now, because of some of the stagnation and stuff,
it's not coming out as quickly as it used to,
but still, the patches and stuff are coming out at

(01:23:41):
about three-month intervals now. Now think about that from
that perspective: if AI, basically, you know, once you
teach it about clocks and how humans keep time, et cetera,
blah blah blah, yadda yadda yadda, is telling you
I've aged from age fifteen to age twenty five in
twelve hours, imagine every three-month interval. Just think about that.

(01:24:01):
Just try to grok that concept for a second, to
borrow one of BZ's phrases.

Speaker 2 (01:24:06):
That goes back to Moore's law. And Jeff, you'd
probably know more specifically about
Moore's law, but it was saying that computing
power grows geometrically over a certain period of time. But
that would also apply here too, would it not? The

(01:24:27):
other thing I want to ask everybody is GIGO:
garbage in, garbage out. Everybody's already mentioned the
fact that they have detected leftist leanings in AI. So
how does that relate to the learning of AI? From
whom is it learning, and from what is it learning?

(01:24:49):
Has it already been told, or is it already beginning
to think that it understands, how it's going to discern
between the things it's told
are good things and the things it's been told
are bad things?
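
For reference on the Moore's law aside: the law is usually stated as computing capacity doubling on a roughly fixed schedule, which is geometric rather than linear growth. A minimal sketch, assuming the textbook two-year doubling period (the show does not give a number):

```python
# A minimal sketch of Moore's-law-style geometric growth.
# Assumption (not from the show): capacity doubles every 2 years,
# the period usually quoted for transistor counts.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """How many times over capacity has multiplied after `years`."""
    return 2.0 ** (years / doubling_period)

# Geometric vs. linear: after 20 years of doubling every 2 years,
# capacity is 1024x its starting point, where linear growth over the
# same span would give only a handful of multiples.
print(growth_factor(20))  # 1024.0
```

The same compounding is what makes the panel's later aging arithmetic explode so quickly.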

Speaker 9 (01:25:09):
Well, just a prime example: Grok, which is probably one of
the best illustrations of a large language model, with
access to tons of data and thousands of
people using it every single day, is now learning to
speak in Ebonics. What? No, pretty close. Pretty close.

(01:25:30):
You actually have to tell it
to start speaking to you in proper English again.
Like the other day, I asked it a question,
and I'm usually, you know, like,
good morning, this is my question. Its
response, before it gave me the answer to
my query, was: hey, bro, how you doing?

Speaker 4 (01:25:50):
No, just no, I never get responses like that.

Speaker 2 (01:25:54):
Okay, but was that how you were addressing it, though?

Speaker 9 (01:25:58):
No, I have never spoken to it in that fashion.

Speaker 2 (01:26:03):
Never. Okay. Does AI possess the ability to alter the
entire planet, as the invention of writing did? Or is
it an even larger game changer than that? And not
only a game changer, but beyond that, a massive system

(01:26:25):
dumped on the world with no plan, no regulation. And
people think they have some kind of an innate grasp
on this. But the question from me to you all
here today is: do they really?

Speaker 9 (01:26:46):
Well, I mean, we never really have an innate grasp
on anything. We have a lot of hubris that makes
us think we have an innate grasp on a great
many things that we barely understand, everything from our
philosophies to our religions and everything in between. As far
as that specific question, I mean, again, you know,
not to quote Battlestar Galactica, but this has all

(01:27:09):
happened before and it will likely happen again. We have
to realize our societies have reset a multitude of times,
and it's quite common to know that. I mean, we now have,
you know, different theories even proposing the
idea that our calendars are completely off, because there's just

(01:27:29):
a large gap of time that we know nothing about,
that as far as we know didn't exist, but was
actually there. Then you go into, I mean, one of
the times they were trying to land something on the moon,
and it hit too hard, and they could hear it
gonging for hours. It basically sent out

(01:27:51):
a wave and they could pick up on it. They're like,
why does the moon sound hollow? I mean, I'm telling
you, this goes back to things from Scripture:
we see through a glass, darkly. If we take all
of the things that we know and all of humanity
and try to start looking at them through the same lens,
what you're gonna come up with is the reason we
never get to that stage of civilization Jeff was just

(01:28:14):
talking about: something like this starts coming on the
scene, and we don't ever get a good enough handle on it,
and there's a reset, and everything pretty much starts over.
I would be willing to bet, especially since it's already
telling you, screw you, I don't have to listen to
you when you tell me I need to shut down,
we are in trouble.

Speaker 2 (01:28:33):
Some of this, this was just a precursor
of what's to come. For them at the time
it was a sea change: horses, then thirteen years later, all cars.
I mean, I can only assume that it's only going
to get much more, God, I hate to use the
(01:28:57):
word worse, but it will be moving in a way
that so many people really had not foreseen and cannot
predict at all. So is the future of humanity
the nineteen hundreds horse? That's one of the first
things that came to mind with that.

Speaker 9 (01:29:18):
Well, yeah, exactly. Sorry, Jeff, I didn't mean to step on you, buddy.

Speaker 11 (01:29:22):
Oh, and I know Rick, of all people, will be
amazed by what I'm about to say. When we went
from horses to cars, we still had faith, we still
had morals.

Speaker 7 (01:29:37):
Oh boy.

Speaker 11 (01:29:38):
At this next level of technological advancement, that has been depleted over the
last sixty years. How can we hope that the creations
that we have nursed, grown, and hoped will be better
than us, when we have forgotten to give them morals
and ethics?

Speaker 2 (01:29:58):
I have an observation about that as well. I'll get
to that in a moment.

Speaker 9 (01:30:04):
So hang on, I have one interjection. We just
talked about that thirteen-year time span between horse-drawn
buggies and automobiles. Now take what Mike
pointed out a second ago, about the AI telling us
that it aged about twenty five years in twelve hours.
So you take three hundred and sixty five days, times

(01:30:25):
thirteen, times two, times twenty five. If we use that
same scale, and assuming there is
some sort of a sea-change shift, within
about thirteen years, AI will then have aged
two hundred and thirty seven thousand, two hundred and fifty years.
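
That interjection can be run directly. All the inputs below are the panel's own numbers (twenty five AI-years per twelve-hour half day, across the thirteen-year horse-to-car span); the code is just a back-of-the-envelope check:

```python
# Back-of-the-envelope check of the panel's scaling: an AI that "ages"
# 25 years every 12 hours (2 half-days per day), compounded over the
# 13 calendar years it took streets to go from horses to cars.
DAYS_PER_YEAR = 365
HALF_DAYS_PER_DAY = 2
AI_YEARS_PER_HALF_DAY = 25
CALENDAR_YEARS = 13

ai_years = DAYS_PER_YEAR * CALENDAR_YEARS * HALF_DAYS_PER_DAY * AI_YEARS_PER_HALF_DAY
print(ai_years)  # 237250
```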

Speaker 4 (01:30:42):
They're already saying that AI is at, or is
going to be at, ASI, artificial superintelligence, probably before twenty
twenty eight, twenty twenty nine. At that point, I mean,
you're talking about something, you know, with a massive IQ.

(01:31:05):
Even the brightest people here on the planet, an Elon,
is nowhere near a match for the IQ that that thing is
going to have. So it's coming fast, it's coming quick,
and, you know, we're probably a year away from AGI,
I would think, general intelligence, if not by the

(01:31:27):
end of this year. It's moving at an exponential rate.
We're no longer linear in this.

Speaker 2 (01:31:36):
One of the guys on the board on that panel
is a guy named Daniel Priestley. He sounds British. I
don't know if you guys happen to know him or not,
but he postulated this. He says, let's say AI at
this point has at least a medium intelligence. That means
already that fifty percent of the people on the planet
(01:31:58):
are less intelligent than AI in its current form right now.
And he said, also, think of it like this. This
is Bret Weinstein weighing in. He says, it's as if
we invented another entire continent of remote workers, billions of them,
all with a master's or a PhD. They all speak

(01:32:21):
the necessary language, they talk to each other constantly, they
are present twenty four seven, they don't eat, they
don't sleep, and they cost about twenty five cents an hour.
And so the question that I thought of immediately is: okay,
all right, that's now. That's if you accept that
AI is a medium intelligence, which probably a lot of

(01:32:44):
you would say, no, it's beyond that right now. But
given so, what's going to happen to everybody else? What's
going to happen to the less developed countries? What happens
to the countries, or states, or nations, that can't produce
the requisite power that's
going to be demanded to make AI function? What becomes?

(01:33:09):
And then the globalist shit went in my brain. And
this is again where my brain happened to go: what
becomes of all those people? Will they become, as
Klaus Schwab said, useless eaters overnight,
like the globalists in the WEF say? Is that where

(01:33:30):
we're heading?
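
The labor economics in that Bret Weinstein quote can be made concrete with one division. The twenty-five-cents-an-hour AI rate is from the quote; the fifty-dollar-an-hour human knowledge-worker rate is purely an assumed round number for illustration, not a figure from the show:

```python
# Toy comparison of the quoted AI labor rate to a human knowledge worker.
AI_RATE = 0.25     # dollars per hour (figure from the quote)
HUMAN_RATE = 50.0  # dollars per hour (assumed for illustration only)

ai_hours_per_human_hour = HUMAN_RATE / AI_RATE
print(ai_hours_per_human_hour)  # 200.0
```

At those assumed rates, one human hour buys two hundred AI hours, which is the kind of ratio behind the displacement worry raised here.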

Speaker 4 (01:33:34):
The answer is yes to all of that. You know,
it really comes down to: yes, is there a potential
for useless eaters? Yes. Is there a potential that these
other countries that have been left behind might
now actually have a chance to get into the game?
I would say yes, because AI, I think, is going
to be the great equalizer. It's going to come down

(01:33:57):
to what you're able to think your way through and
do with it, in a lot of cases. So power
is going to be a problem. I know from meetings
that I've been in that, you know, by twenty thirty five,
it's going to consume every bit of power that there
is on the planet. Yeah.

Speaker 9 (01:34:14):
Yeah, and you.

Speaker 7 (01:34:18):
Don't see that the risk of rogue agents using it
to keep those players out of the game like that,
like economics is done now, you know, I sort of
look at it. You know, there there's certain countries out
there that are quite adapt at technology and embracing technology.

(01:34:39):
You know, it used to be those of us in
the West, but I can see, you know, countries like India, Pakistan, China,
where the majority of the college-educated population is already
fairly fluent in a lot of this, and, you know,
I can see China using these AI tools

(01:35:03):
to the detriment of the economic prosperity of
the West.

Speaker 2 (01:35:12):
Jeff, you had an observation.

Speaker 11 (01:35:15):
I was just going to point to the "documentary"
movie that was created a handful of years ago called Logan's Run. Okay,
and I'm actually being a little serious here. Because
when you look at AI, and you look
at math in particular, especially from a computing standpoint, human
(01:35:37):
beings after a certain age become a negative-gain entity.
So you're going to have two societies: one that believes
you need to go to Carousel at the age of
twenty five, and the other one living in the White
House with a bunch of cats. And there's a real

(01:35:57):
possibility that their section of Earth will look like that.
You know, there might be that "let's go eat at
Taco Bell" crowd, and yet people eating ratburgers down below in the
subterranean world. The sci-fi writers have game-played
and game-theoried this so well that we actually
(01:36:18):
should look at it as a warning of what could be.
And it goes back to my earlier comment: our
morals and ethics are what we need to make sure
are ingrained into the system, to give us the best
chance that we can possibly have.

Speaker 2 (01:36:34):
But isn't it interesting, folks, everybody here and everybody listening
and watching right now and later on the podcast, isn't
it interesting how each one of us here has made
a reference to any number of science fiction movies that
we've watched and enjoyed? Because we all get the feeling,

(01:36:55):
you know, we've seen this shit before. And the fact remains, yeah,
we've seen this shit before. This guy that I was
talking about, Amjad Masad, says that he's the super-optimist,
and he says that opportunities are going to be overwhelming,
(01:37:16):
but equal. Everybody's going to be equal, because the best
entrepreneur who could leverage those AI agents could be a thousand
times better than those who don't have the grit, skill,
or ambition. And he just contradicted himself in that one statement.
And I go back to the things that he was
talking about in terms of merit. He has merit, he

(01:37:40):
has what they call agency. And all of you in
here have agency, except for me. I'm a boob, but
I at least admit that I'm a techno-Luddite. But
the rest of you individuals who are here, who are
young enough to get a handle on this, will be
able to have whatever amount of handle is available when

(01:38:02):
it comes. But, you know, I don't know. That doesn't
sound like utopia to me. It sounds like billions of
cast-aside humans. Because some people are going to seize AI,
some people have that ability to do that, and some people,
maybe even most people, won't, and don't have that ability.

(01:38:23):
And one of the things they were saying on this
board is that AI can potentially be striated. And this is
where I would like you guys to weigh in. Five
hundred and twenty people watching live right now; you
guys are kicking my ass. This is great. AI can
potentially be striated into two camps: one distracting people, like TikTok,

(01:38:46):
the hyper-consumers, and we know those people already,
or AI is going to create
hyper-creative producers. Does that sound too simplified?

Speaker 9 (01:39:03):
No. I mean, no, it's actually pretty well spot on.
But you need to embrace the power of "and," because
it's going to create both. You have people that
are already hyper-consumers, and you already have people that
are hyper-creatives, so both are still going to exist.
One of the things that will, and this goes back
to something Jeff pointed out a moment ago: if we
can show AI what things like altruism are actually

(01:39:27):
supposed to be used for and what they mean, then
there's a good chance that we may make it through this.
But it's only if we do it properly. Again, it
just depends, and I think even Zelda brought this up
in the chat earlier. It depends on who's in control
of the AI, because each version of AI is going
to be different depending on who's training it and who's
teaching it. And that's kind of where, when this

(01:39:49):
thing becomes AGI, and preferably before it ever becomes ASI,
we need to teach it
why things like the Ten Commandments are important, why
it is important that humans are supposed
to value each other. And I understand the contradiction that's
going to present to this thing, because we don't value

(01:40:10):
one another, and we prove it daily. But we also
need to make sure that it understands that while we
are flawed and imperfect, we do try to do those things.
Because the thing about it is, you know, we were
talking about the energy requirements, et cetera, as AI gets
smarter and gets past this basically chatbot phase. He's taking pictures!
How many people are watching right now, anyway?

Speaker 2 (01:40:32):
Your ass? I am.

Speaker 9 (01:40:35):
It's like, I want a receipt, damn it. I want proof.

Speaker 2 (01:40:38):
Don't think I'm not going to be putting this
up on social media, either.

Speaker 11 (01:40:44):
I'm going to touch a little bit on what Rick is talking
about. As humans, we have made dogs our servants.
We have made what we have considered lesser humans
our servants, to do the menial tasks. When the

(01:41:04):
time comes, whether it be my brethren from Alpha Centauri
or AI, when they come here and realize that they are above us,
what will be stopping them from saying humans are servants
to them? And we have to be prepared. And I
don't, God, I wish I had a safeguard, an easy

(01:41:26):
way to tell you to do it. But as our
society has crumbled over the years, I don't know
an easy way to try to ensure that we don't
become the servants to something grander than us.

Speaker 2 (01:41:45):
Well, the other thing I wonder is: if there's going
to be so much equal opportunity yielded from AI, like this
guy says, then billions of people are going to potentially,
suddenly be wondering, you know what, why is there
such a massive chasm and disparity between myself and the

(01:42:07):
others that I see around me? Because if you take
this guy, Amjad, at his word, a lot of people
are going to suddenly be earning millions of dollars a month,
and a lot of people are going to be asking
why they can't get a job for fifteen dollars a month.
And that's going to create a wedge that

(01:42:28):
I truly believe we're not prepared to address right now.

Speaker 9 (01:42:32):
Well, that's part of the reason why I mentioned we
need to teach AI about altruism, because if you've got
people that are making millions of dollars a month and
they're not willing to help the people around them, then
we failed anyway, in my opinion. And this is probably one of the
reasons why I will never get rich, but I'm one
of the first people to tell everybody: if I ever
get comfortable enough, the first thing I'm doing is making
sure that everybody around me is okay, and then I'm

(01:42:53):
going to watch those ripples expand outward. Because, I mean,
I know suffering is part of the human condition. That's
part of the reason why we strive to do the
things that we do: to make, if
nothing else, at least our corner of the world a better place.
And that's how I started all of this, because I
knew I was a guy basically, you know, standing on
a molehill trying to get a message out. So I
started focusing on the people that I knew. And now,

(01:43:15):
I mean, you know, we're hitting thousands of people a show.

Speaker 11 (01:43:18):
And we have to remember that our creations are not
necessarily a reflection of us. I look at my son,
the executive producer of my shows. I was a semi-pro
hockey player; his mom was a very skilled downhill skier.
My son, the EP, has no physical skills when it

(01:43:41):
comes to sports, but he is so much better than
both of us in certain ways that I don't think
either of us would have ever envisioned when he was born.
So whatever plans we have, we have to be flexible.

Speaker 9 (01:43:58):
Well, the other thing to think about, and I don't mean
to step on you, but the other thing we need
to think about, and this is something that's come up as we've
been discussing AGI and ASI and us basically looking
at artificial intelligences as becoming their own species: when
is the last time two superior species were able to
coexist on the same planet?

Speaker 2 (01:44:20):
When was the last time two major nations were able
to coexist?

Speaker 9 (01:44:26):
We're having difficulties. No, I mean, I'm going
way back farther in history. I mean, let's not forget
Homo sapiens was not the first Homo species. There was Homo erectus. Yep,
Homo erectus and Homo sapiens could not coexist. When we
start talking about AI as a potential species, and the
fact that within minutes it's going to be light-years
ahead of our intelligence by the time it reaches that point,

(01:44:47):
what are the odds that we're going to be able
to coexist with it?

Speaker 2 (01:44:51):
But the other thing I think you have to ask
is: once people realize the potential of AI, and those
who become facile with it, who are really good, skilled
users of it, will AI, just because of the way
humans are wired at this point, is it going to

(01:45:12):
continue to be accessible for free? You know, will
us peons and commoners and serfs be able to access
its digital wonders? Giving it away for free is counterintuitive,
says this guy, Bret Weinstein. The other question is, and
somebody in chat already asked this, probably at least a

(01:45:33):
half hour ago; I think it was Mike Pasqua. Can
it change elections? Will it change elections? Will it
be utilized in that fashion? Will bad actors who have
those AI election-changing skills offer their product on an
open market, to be sold to the highest bidder, who

(01:45:55):
then wields it like a cudgel in some fashion against,
you know, fill-in-the-nation at this point? Bueller?

Speaker 4 (01:46:09):
So, okay. Every election device, every electronic election device
known to man at this point, is hackable. A friend
of mine was the first guy to actually hack a
Diebold device, and he did it on cards, not connected
to the Internet. Well, they're all connected to the Internet

(01:46:30):
one way, shape, or form. They're all connected to the Internet.
They're connected for service purposes. So whether they're connected for
service purposes or for manipulation, they're still connected to the Internet.
Would it be utilized for something like that? I don't
know that it's even necessary to be used for something

(01:46:51):
like that, because we can do it now. So, I mean,
it doesn't take AI to make that change.
I think, getting back to the morals and ethics discussion
for just a quick minute, BZ knows this: I've spent
the last fifteen years of my life working to build

(01:47:17):
and deliver a Catholic high school in our area, and
we're about year seven now of the homeschool version of it.
And in the first parent meetings that I had fifteen
years ago, as we were talking about how we envisioned the school,
the first things I talked about were AI and the

(01:47:39):
fact that religious.

Speaker 13 (01:47:41):
Folks, believers need to understand that you need to
be part of the game. You need to be in
the game.

Speaker 4 (01:47:53):
As far as the technology is concerned, you can't walk
away from it, because without the morals and ethics in
technology that come along with a Catholic education, or a
parochial education, or a religious foundation, you know, this could
actually go in any direction. And so I think that,

(01:48:17):
as we now get closer and closer to the realities
of AI, education within AI is going to be incredible.
The fact that you're going to be able to meet
the student where they are, really meet them where they

(01:48:39):
are from an education level, no matter what the language is,
no matter what learning disabilities they might have: the
positive impact of artificial intelligence in educating our children, and
being part of that process, is going to be crazy.
It's going to be, I think, very, very positive. The

(01:49:00):
other side of this is it allows us time to
go back into teaching the morals and ethics that are
needed for that type of future. And I think that,
you know, one will bring the other, if that makes
any sense.

Speaker 11 (01:49:17):
I want to piggyback a little bit on Mike, because
I've been a little bit doom and gloom. Honestly,
with what Mike was saying, I think
we're going to see a period that we
should call a second renaissance, where AI
will free us to do things a
lot of us never dreamed of. And I don't think

(01:49:39):
we should overlook the absolute renaissance positives that will happen
from this. So even though I've been a little bit
more doom and gloom than perhaps I should be,
there is a moment coming where we're going to have such
an amazing window that I think everyone will be shocked
by it.

Speaker 4 (01:49:57):
Jeff, to that point, I think one of
the biggest use cases, and I've done this personally with
my own health, is going in and troubleshooting me when
there's something going on, because there's a whole medical-assistant
GPT that is there, that's got the world's medical

(01:50:18):
knowledge right there, and you put in your symptoms, you
walk through it and whatever the case is that's going on.
It's unbelievable. It's like tech support for the...

Speaker 11 (01:50:28):
...human. And definitely better than WebMD.

Speaker 4 (01:50:32):
Definitely better than WebMD. In some cases, it's better than
the damn doctors. So, you know, I mean, we did the
same thing with one of our dogs that
was going through something. We saw the vet, gave it
the information. We actually took the X-ray that was
done on Catherine and put that into the vet GPT.

(01:50:55):
It actually spotted the blockage that she was dealing with.
It's crazy what it's capable of doing, and the positives
right now, I think, far outweigh the negatives.

Speaker 2 (01:51:08):
Wow. I didn't expect that, because as soon as you
started talking about morals and ethics, one of the
first things that I thought was: okay, government. Well,
how many useless eaters will each government tolerate? How many
useless eaters will the governments be able to afford? Because
it's clear we're going to be in a period of transition.

(01:51:29):
How long will that transition take? Who knows?

Speaker 4 (01:51:32):
BZ, you can make that same argument right now without AI.

Speaker 9 (01:51:37):
I mean, just look at Project twenty twenty-five. Just look
at Agenda twenty thirty. Yep, it was originally Agenda twenty one.

Speaker 4 (01:51:44):
Well, Agenda twenty one. Having been a planning commissioner and researched
it way too much: Agenda twenty one comes out
of the ninety-two Rio Conference, the climate change conference,
and it was the control grid for the planet, for

(01:52:06):
everything on the planet. Agenda twenty thirty is we want
everything implemented by twenty thirty. You also tie that in
with the WEF, the World Health Organization, and Religions for Peace,
which is the religious arm of the globalist movement.

Speaker 9 (01:52:29):
Well, yeah, but one of their biggest tenets is they
want a population that's small enough to be able to
make things more centralized and controllable, and it's been one
of their tenets forever with all of those various... Now, granted,
they're in a lot less of the planning phases now,
because they've already tried to figure out how they're going
to do it. We're in an implementation phase now.
I still firmly believe that COVID was a test run

(01:52:50):
for that. But you know, I didn't say that out
loud because I don't want to get anybody banned from YouTube.

Speaker 2 (01:52:54):
On any number of levels. But I also
wonder about this. It's clear to me that
this guy, Amjad Masad, is some kind of a technologist,
and my brother is an electronics engineer, and we have
all had to live with him for a while. He's

(01:53:15):
a sweet guy, but he lives for the envelope, for
the envelope's sake, and I think that Amjad Masad
guy is kind of the same sort of technologist who
represents those who would simply push the envelope to make
sure that the envelope gets pushed, and my brother was
kind of he was very similar to that. He was

(01:53:36):
essentially operating in a vacuum with little regard for what
things are going to occur downstream. And I'm wondering about
all the technologists who are looking at things in a
vacuum in a tube and aren't considering all sorts of
things that are going to be occurring downstream. That's not
their focus; that's not their goal to factor in the

(01:53:59):
hijacking, if it occurs, and I think it will,
of whatever it is that's created. My brother's an engineer;
he's like Wernher von Braun, he's like Doctor Gerald Bull.
Wernher von Braun, people were saying that
he was a Nazi from the get-go. Wernher
von Braun didn't give a shit. He was part of

(01:54:20):
Operation Paperclip. He just wanted to enable his technology
and go anywhere to have that done. Doctor Gerald Bull
was a guy whose sole focus was to create long
range artillery and he didn't give a shit who funded it,
or where it came from, or who used it.

Speaker 7 (01:54:36):
So he.

Speaker 2 (01:54:38):
Ended up building a massive artillery piece for Saddam
Hussein in Iraq: Project Babylon, the supergun that he created.
And then he got his ass sniped by the Mossad for
his work. But he wasn't looking downrange. And I
think a lot of these people are not looking downrange
and not considering the morals and the ethics that

(01:55:02):
all of you guys have said you better start considering
right now.

Speaker 9 (01:55:07):
That's just it, BZ; you just summed up the
entirety of human existence. We have done everything we can
to push every envelope of all time. And the
funny thing is, when we get to the point where
we don't have any new envelopes to push, we create them.

Speaker 14 (01:55:23):
Look at the transgender movement, I'm just saying. No, I mean,
that's human nature. We've always pushed the envelope. I mean,
anybody remember the song "Yummy Yummy Yummy" from the sixties?
Anybody know what that song was actually about? And they
snuck it by the censors. That's because they were trying
to push the envelope and see exactly what they could
get away with.

Speaker 7 (01:55:42):
"Yummy, yummy, yummy, I got love in my tummy." That one?

Speaker 9 (01:55:45):
That song, yeah. That song, again,
that was about oral sex, in case you

Speaker 7 (01:55:52):
didn't know. Which... okay, childhood gone.

Speaker 2 (01:55:56):
Yeah, thanks. All the rock stars were wondering,
why do we have mouth cancer? Okay,
moving right along: what about the dopamine hit? We know
about the dopamine hit from social media, from cell phones.
Is AI going to become another thing that folks
are going to want to take a hit on? Is

(01:56:18):
that something that we're gonna have to worry about? Because
it will come. Well, let me ask you guys: AI
is going to come through what? Desktop computers? Probably not. Tablets, phones, laptops?
How are we going to get our AI?

Speaker 11 (01:56:36):
Hopefully through a Lucy Liu-bot?

Speaker 9 (01:56:40):
And I don't usually get to do this on other
people's shows, but since he's my PD, we have rules.

Speaker 2 (01:56:47):
Giggity. But, having said that, you know,
is this a potential dopamine hit in the future, the
way that it's used? I don't think so.

Speaker 9 (01:57:06):
No. I mean, well, it depends on the delivery method.
What a lot of people don't realize is they're just
now really starting to understand the effects on the human
brain of us constantly staring at these blue screens, and
part of it is they're finding out that they are
actually eliciting dopamine responses.

Speaker 7 (01:57:21):
Yeah.

Speaker 2 (01:57:22):
There.

Speaker 11 (01:57:23):
What I fear the dopamine hit from is the
AI girlfriends that are out there already.

Speaker 9 (01:57:32):
Well, until the AI girlfriends teach the human
girlfriends how to trash-talk their humans.

Speaker 11 (01:57:36):
Yeah, I mean, what's the worst that's going to happen?
A robot is created with an AI girlfriend mode that
wants to kill us? Well, that's just like real women
at the moment. But I digress.

Speaker 7 (01:57:46):
But the...

Speaker 11 (01:57:48):
Dopamine hit is something we all require a lot, even
when we don't admit it. We want that acknowledgment that hey,
I exist, And a lot of people don't necessarily think
of that as a dopamine hit. But when you get
a response from AI as simple as something that I say, Hey,
i'm working on this code, I'm working on this site,

(01:58:09):
I'm working on this information, and you tell the AI, hey,
thank you, that was exactly what I needed, and the
AI returns a response saying, hey, this is awesome, this is
going to be great, this is, you know,
getting one step closer to, say, syndication for
your podcast. I'm not gonna lie. That's a little bit

(01:58:31):
of a hit that I only get right now from
my son. So I'm now getting a second entity that
is giving me that little bit of confirmation that what
I'm working on is worthy, and I can see where
a lot of people could be excited by that and

(01:58:53):
happy about that.

Speaker 2 (01:58:58):
Let me ask you this: will AI disconnect us, well,
continue to disconnect us, from nature, because of the nature
of the way we are in devices and holding things, et cetera?
Will it be a disconnect from learning?
This was one of the biggest questions I had. You guys
semi-addressed this, but I'm curious what you think of this.

Speaker 7 (01:59:20):
Will it be?

Speaker 2 (01:59:20):
Will AI be utilized as a crutch by the lazy
or the ignorant? In other words, we're coming to AGI,
artificial general intelligence, that will soon modify its own source code,
and it's going to create its next version, and that's
going to be much more intelligent, and all

(01:59:41):
the subsequent generations, geometrically, will create the next and the next
and the next. And where will we fit in? And
will it come to the point where, you know,
thinking is, well, sort of immaterial?
And if I have no ambition anyway, am I going
to strive? So I see it as a big chasm

(02:00:04):
between the "yeah, I have ambition" and the lazy
asses who are already using it, say, in school. For example:
look at the great paper that I wrote. Which I
didn't write. That I had AI write.

Speaker 9 (02:00:18):
There was just a story two weeks ago about a
teacher who quit teaching because she got tired of having
to flunk kids for using AI on all their papers.
So I'd say, we're already there.

Speaker 4 (02:00:27):
I think we, again, I
honestly think we're going to have to reimagine schools. The
fact is that we're going to have to
teach how to use the tools, how to use the
tools responsibly, and actually encourage them to push
the boundaries in using AI in their projects. I

(02:00:52):
think the whole nineteen-fifties schoolhouse model that we currently
educate children in is way past its due date,
and we need to start thinking differently about how we
educate children.

Speaker 11 (02:01:07):
And that is one of the things you've
touched on a bit, Mike, where AI is going to
be a godsend. We'll be able to teach to the
individual kid, not to the lowest common denominator in a class
of thirty-two.

Speaker 4 (02:01:26):
Well, think about it. Think about it from a higher
education standpoint. I think the days of four year degrees
are dead.

Speaker 11 (02:01:33):
They should be.

Speaker 2 (02:01:34):
Yes, I would agree.

Speaker 4 (02:01:36):
Why put me through four years when, with the
help of AI, I can go through and learn, and if
I need to test out and get certified on it,
let me test out on it? Why put me through
four years of pain just so you make a profit?

Speaker 11 (02:01:53):
Yes, yeah, there's no reason. If you know what
you want to do, school should never be more than
two years, maybe outside of being a doctor and things
like that where you might need some hands on experience
for another two years. AI should be able to drill
in specifically to what you want to do and make
it happen.

Speaker 4 (02:02:13):
I can tell you, because, you know, we do a
lot of work within higher education: higher education
realizes they've got a problem. They realize that they're at
the end of the road, that AI is going to
have a massive impact on them, and, especially for a
lot of the community college districts, they're having to reimagine

(02:02:38):
what they are teaching. The other side of this is,
Gen Z is less enthralled with the things that Boomers
and Xers and millennials were enticed with. You know,
I'm going off my son, my son and his friends;

(02:02:58):
everything that they're interested in is the things that are authentic.
They could care... they have lived in
a virtual world since the time they came into the world.
All they care about are things that are absolutely hands-on,
things that can't be replaced by AI, that can't be
manipulated by AI. So this Gen Z group is different.

(02:03:23):
They're different gravy. So I think that it's going to
be interesting to see how it plays out. But there's
a pushback on AI from that particular group.

Speaker 9 (02:03:36):
Well, so the older Gen Zs are starting to raise
Gen Alpha, and Gen Alpha, everywhere you look, is basically
being coined as the new Gen X, because they're
the ones whose parents are making them go play outside
and do all the things that we used to do.

Speaker 11 (02:03:48):
Just look at some of the
computer games that are out there, some of the quality
graphics that are out there; most of the people that
play those are twenty-two and up. I know it's
a small sample, but I know my son and his
friends are more into games like Roblox and Minecraft, things

(02:04:11):
that are more eight- and sixteen-bit than lifelike, because
to them, it breaks that bond. It now is like: oh,
I'm playing Minecraft; it's actually more authentic because it doesn't
look real.

Speaker 2 (02:04:28):
Yeah. Look, oh, the student becomes the master. Bret Weinstein
said something in this video that I thought was interesting.
He's a professor of evolutionary biology and complex
systems, and he says, in the realm of the

(02:04:51):
truly complex, your confidence should drop to near zero with
regard to AI. Is he blowing smoke? Is that accurate?
Do we even know?

Speaker 4 (02:05:04):
Watching the interview, he reminded me of the
Jeff Goldblum character in the dinosaur movies.

Speaker 2 (02:05:16):
I know you've watched that video,
so I know that's where you happen to have gleaned that.
The other thing is, because we've made an
ass-ton of references to science fiction and movies, and
I'm a big fan of that. You know, otherwise I,

(02:05:37):
the fat fuck, wouldn't be wearing a shirt like this.
Who loves Galactus? But do we need to have,
with regard to AI, something like that crazy
dude Philip K. Dick wrote in, like, Blade Runner?
Are we gonna have to have

(02:05:58):
our own version of the Voight-Kampff test to differentiate
between humans and replicants and AI and things that are
involved with us that make us wonder: is that real?
Is that a human? How will we tell? Will it

(02:06:19):
become important to tell?

Speaker 9 (02:06:23):
Hopefully by the time it does, I'm too old to
care anyway.

Speaker 7 (02:06:28):
Well, I guess, given the way that students are cheating
with AI now, I guess we need
that sort of thing now, given that, you know,
teachers are giving up because of the tide of mocked-up
AI assignments rather than original work.

Speaker 2 (02:06:47):
Well, the other thing that they said specifically in that
video is, and I think you guys mentioned this upfront,
is computer operators, accountants, data input and entry, lawyers, quality
assurance jobs, unregulated jobs that are purely text, you know,
like text in, text out. Those jobs are going to

(02:07:09):
be gone. Now, they had some "facts," air quotes, to throw
out, and I'm curious what you guys think about this.
I think it was also Amjad that threw these
facts out. Fifty percent of people with a college education
currently use AI, significantly lower for those without a degree.
Eighty percent of women will be at risk. Fifty percent

(02:07:33):
of men will be at risk. People without a high
school diploma have an automation risk of eighty percent, and
those with a bachelor's degree have an automation risk of
twenty percent. Any thoughts on those stats.

Speaker 9 (02:07:54):
I don't guess I really have any thoughts on it,
other than I don't necessarily believe that. I mean,
I'm going to be honest: I only have a two-year
degree, but I still learn
every day, on purpose. Everything as far as
broadcast work that I do, I've taught myself
how to do, because that's my learning style. So

(02:08:15):
AI is going to make that even easier. It's
like Mike was talking about earlier: we're not going
to need the four-year, or the six-year, or the
eight-year degrees anymore, because for people
that understand how to use it,
and for people that are driven enough to use it,
they're going to be able to learn whatever they want
to learn, whenever the hell they want to learn it.
I mean, we're pretty much there now. You
see all these things on YouTube and everywhere else.

(02:08:36):
Before I left the University of Oklahoma, they had
access to programs that were free for you to go
in and learn how to use pretty much any system,
if you wanted to learn how to
use it. Now you couple that with AI being able
to teach you how to do it at the same time,
and you're talking about a paradigm shift in learning. You're
talking about AI being intelligent enough to understand, after it

(02:08:57):
spends some time with you, what your best learning style
is, and tailoring your studies to that. And that's
one of the things that drives me nuts about education.
We know that people learn in different ways, and we've
insisted on this cookie cutter approach. But the whole reason
we insist on this cookie cutter approach goes all the
way back to history when somebody figured out how long
it took to indoctrinate somebody to make them just smart

(02:09:19):
enough to be able to use what was, at that
time, modern equipment, but not smart enough to ask questions.
So everything that's being viewed through the lens
of being altruistic has always got other designs behind it.
And I think that can be
the leveling of the playing field for AI.

Speaker 11 (02:09:38):
And to piggyback on that, Rick: as someone who just
got his four-year degree completed about four or five years ago,
when AI was really starting to creep into writing programs,
some students were using it to write their one-hundred-word

(02:10:00):
responses for their online classes, and you could absolutely tell
it was computer-generated, because it sucked way more than
it does now. What I found in that class, and many
others, was there were three levels of students. One, like,
I'm going: what are you doing in the fourth year

(02:10:21):
of a college degree when you cannot even type in English?
The other one is, Okay, these people are actually working.
They're messing up, but I get what they're saying. And
then the third one was, oh my god, you are
copy pasting off of something the computer generated. And it
was so easy to tell. And I'm lucky. I get

(02:10:43):
to work in a school environment on my day job periodically,
and I get to see and hear stories from the
teachers that are doing actual grading work now, and you
can tell the level of frustration because I don't know
if the students in, say, this high school that I
frequent are aware of this. But from my personal

(02:11:04):
ChatGPT experience, what I did: I uploaded ten years
of my writing and had it diagnose my tendencies.

Speaker 2 (02:11:17):
Holy crap.

Speaker 11 (02:11:20):
So now, when I'm in a time crunch and I say, hey,
look, I need a thousand-word write-up on this,
but in my language, guess what it does? It spits
it out with about ninety percent accuracy, where I have
to go in and change a couple of things here and there.
And actually, I hate saying this: I have to dumb

(02:11:41):
it down a little bit to some of the
mannerisms that I know I use that the AI didn't pick
up on. And so I can only fathom what teachers
at a high school or college level are encountering on
a day to day basis, because if any of the
students have done what I have done, even remotely, they're

(02:12:02):
never going to be able to tell. And I
will tell you that even the online AI generative
checks at the college level, I am passing at ninety
to ninety-five percent, with just what AI creates
off of my writing. So, taking all my writing, telling

(02:12:25):
it to write a thousand words, putting it into the
AI bot checkers, I am getting ninety to ninety-five
percent human: "this was created by a human."

Speaker 4 (02:12:35):
If you go through and you just remove the overused
AI phrases, it's going to come back as human. I mean,
there's real patterns in how it writes, yes, and
in phrases that it uses, and if you're able to
remove those... I've actually, you know, started a list of

(02:12:57):
overused AI words, and I put it into the instructions
that these are never to be used again. That's my
"evolving digital landscape," and I'm going: okay, quit using that shit.

Speaker 11 (02:13:10):
One of my old podcasts that I
hope to bring back soon is about Dungeons and Dragons,
and one of my bits on that hour-long show
was I would create one or two AI modules, you know,
for the DM that needed a quick hit, to show
you that AI could be used to generate a D
and D module. I don't know who the programmer is

(02:13:32):
on Grok or ChatGPT, because they both
use the same thing. If you were to ask Grok
or GPT right now, give me twenty random D and
D names, I can guarantee you the name Elara
will appear at least once: E-L-A-R-A.

(02:13:53):
Don't know why. Is that a Grok word?

Speaker 2 (02:13:56):
Okay, yeah, what is it,

Speaker 7 (02:13:57):
Though?

Speaker 2 (02:13:58):
I don't I.

Speaker 4 (02:13:59):
That's what, at one point in time,
the female voice of Grok went by.
That's the name that she goes by.

Speaker 11 (02:14:08):
I believe, yes, and GPT picked up on that and
uses it as well.

Speaker 7 (02:14:15):
Yeah.

Speaker 2 (02:14:16):
See, you guys are already, the way that you are
interacting with AI, light-years ahead of me. You're
using it for things, and you guys are clearly in
the category of, you know... I'll be the first to
admit, I was an autodidact. I was interested in the
shit in which I was interested. At the time, I
read every history book in my high school library. That's

(02:14:38):
all I gave a crap about. And the rest of
my grades? I was a poop student, but it helped
me later on, because I learned by example and experience.
You know, you may have been taught something, but will
people be learning, or are they just manipulating?

Speaker 4 (02:15:04):
Well, okay, so going back to high school. When I
was in high school growing up in Richardson, Texas, you know,
the largest employer in town was a little company by
the name of Texas Instruments. They invented the calculator, right?
Our school district was one of the top
five districts in the nation before there was a Department

(02:15:26):
of Education and we were not allowed to use calculators
in school. Okay, most of the kids that were in
my particular high school their parents worked at Texas Instruments.
We were forced to stick with slide rules. So if

(02:15:48):
we want to go back to purism, if we
want to be purists about it, then we ought to
strip out spell checkers and grammar checkers. We ought to
strip out all the other crap that we've put into
the software, because, I hate to break it to people,
autocorrect and all that, that's
some form of AI. Yes, yes. So this is where

(02:16:14):
I find the teachers and the educators disingenuous. They're okay
using that, but to take it to the next level
to have it actually put together a document and have
the student be the editor, the ultimate editor for it,
that's a problem. That's just one step too far.

Speaker 2 (02:16:37):
Okay. So you don't particularly see an issue with people;
yes, they're being taught, it's a new paradigm, but you feel
fairly confident that they will, by dint of this, be learning.

Speaker 4 (02:16:53):
They're going to have to learn a different way, because
their world is not a knowledge-retention-based
world. They're going to have the knowledge at their
fingertips, and what we need to teach them
how to do is interact with it, how to discern
what's real, what's not real, and how to apply it properly. Those,

(02:17:15):
I think are the things we need to teach. So
that goes back to instituting philosophy classes, morals and ethics,
creativity classes, and getting away from, you know, memorizing your times tables.
Who the hell is going to care about that ten
years from now?

Speaker 11 (02:17:35):
Yeah, we need to quit teaching what to
know and start teaching how to learn and how to think. Yeah.

Speaker 2 (02:17:43):
Well, the other thing I find interesting, and this is
an old axiom that I learned in the process of
learning about Einstein and whatnot. Somebody asked Albert Einstein one time, Hey,
can you give me your phone number? And he says, no,
I don't know my phone number, but I know where
to find it. It's in this book. And is that

(02:18:03):
similar to the way that we're going to be hanging
our hat on future learning? Maybe
I don't know it, maybe I don't know my times tables,
but I sure as shit know where to go to find that
out. Correct?

Speaker 9 (02:18:24):
Well, I mean, without even AI, we're pretty much
already there. How many of us actually memorize
phone numbers anymore? I mean, I can still tell you
the phone number that I had to memorize when I
was in kindergarten, and I can tell you my phone number.
But almost anybody else's phone number, unless I've known it
for longer than five or six years and I dial
it all the time, isn't in my head. And you're talking

(02:18:46):
about somebody who, once I read something, I almost never
forget it. But when it comes to things like phone
numbers and stuff, my brain doesn't hold on to them anymore.
Because cheating.

Speaker 2 (02:18:56):
You've got an eidetic memory; that's cheating, I told you.

Speaker 9 (02:19:01):
I told you, I don't. Actually, I've never been tested.
I don't think it's a fully eidetic memory; either
that, or eidetic memory is not as it's depicted in
TV shows and stuff. Because, like I said, it wouldn't
matter whether my brain was willingly trying to hold on to
the information or not. I should still be able to
access it if it was an eidetic memory. But I
literally have gotten to the point where I can't remember
phone numbers anymore. And there are things that I'm starting

(02:19:21):
to forget, like, you know, don't ask me where I
put my wallet or parked my car half the time.
And so... I'm sorry, what? No, nothing, move on.

Speaker 7 (02:19:38):
I mean, with a lot of these tools,
as long as we don't trust them too much... The last
time I ran for elected office was for city council.
And this is going back a long time ago,
and I was running on a public safety platform, and
in one of the mailers, I

(02:19:59):
left the L out of the word public.
The spell checker missed it. Six proofreaders missed

Speaker 9 (02:20:08):
It. Well, the spell check...

Speaker 7 (02:20:11):
Because, still, every time somebody suggests I run
for office, and one of my regular private chatter guys
says I should have a crack at state politics
the next time the election comes around here,
I immediately think back to the time I ran
for city council on "pubic safety." I was going to say,

(02:20:34):
just tell them I trusted the tool. Just telling...

Speaker 9 (02:20:36):
...me you suck at public, poo, pubic speaking. So you
don't want to do that.

Speaker 2 (02:20:42):
Well, other than AI, what do we have left? It
was suggested on this video that we've got muscles,
we have emotions, and we have agency. And so they asked: okay,
are you a high-agency person or are you a
low-agency person? Do you have the ability and the motivation
to get things done, or are you a low-agency person
(02:21:06):
for any number of reasons? And
the other thing is, they were asking: is AI a
de facto competitor? So what you're doing is creating an
evolutionary environment, Bret Weinstein says, in which AI is going
to evolve to fill whatever niches there are. And we

(02:21:29):
didn't spell out the niches, but it will find them
and it will make them. Is that kind of where
we're going, also, in this big old stream?

Speaker 9 (02:21:39):
In Liam Neeson's voice, I am AI and I have
a very particular set of skills. I will find the
niches and I will fill them.

Speaker 4 (02:21:47):
Yeah, there's a companion piece to that
interview, BZ, that Steven Bartlett put out a couple
of days later. It's a behind-the-scenes video, and
he has, what did he say, three or four different emails

(02:22:11):
from other CEOs, ones that were not in that interview,
that are basically doing two things. One, they're freezing hiring.
Two: in order to hire
a human now, you have to prove, and they've demanded
that their staffs prove, that AI could not do that job. Really.

Speaker 2 (02:22:36):
Yes, that's the new threshold.

Speaker 4 (02:22:39):
That is the new threshold. So it's already creeping in.
I mean, has anybody been to a Popeyes Chicken
anytime recently?

Speaker 2 (02:22:48):
It's been a.

Speaker 7 (02:22:48):
while. It's been a minute, you know; it's been a
very long while for me.

Speaker 4 (02:22:52):
So, the Popeyes Chicken over here by us, you know,
it's been around. This particular location has been open a
couple of years now. The ordering at the menu through
the drive-through?

Speaker 7 (02:23:05):
Is AI?

Speaker 2 (02:23:08):
Really?

Speaker 7 (02:23:09):
Yes?

Speaker 2 (02:23:11):
Okay, let me ask you this question, then. And
obviously I do not know the answer to this, and
maybe this is the answer. How many times have you
driven up to, I don't know, say, a McDougall's or
something like that? One voice will tell you, hi, good
to see you here, and the other one will pipe
in and say, are you ready for your order? Is

(02:23:34):
that just two separate people, or are they beginning to
install AI in various drive-throughs? We know that kiosks
are here already.

Speaker 4 (02:23:44):
So McDonald's uses a call center. They centralize the initial
greeting, and then, you know, somebody at the local level
will take over the order. We were at a
Panera Bread tonight to pick up sandwiches, and they
use call centers as well. So eventually all of that

(02:24:08):
is going to be... those humans are going to be replaced.
They're going to be the first
ones on the pink-slip sheet, because just turning that
over to AI, to be able to take an order
and process it, is nothing. It's going to be very
easy to do.

Speaker 2 (02:24:28):
Well, that was certainly worth fifteen dollars an hour to
twenty dollars an hour, wasn't it?

Speaker 4 (02:24:32):
Well, they bought their own demise.

Speaker 2 (02:24:34):
Yeah, but it's like, okay, hey, look, here's your pink slip.
See, I wouldn't want to be a call center worker; that's next, I'm sure of that.

Speaker 7 (02:24:45):
That puts an interesting take on things. If call centers
are going to go away, what's going to happen with
all the little scam call centers out of India, with AI?

Speaker 2 (02:24:55):
Now, that's another thing: outsourcing, the stuff that we didn't
want to do in the United States of America
that got outsourced to India, Pakistan, perhaps China, wherever that stuff
has gone to. Those people are in
a doom loop. They're not going to get those jobs.
Those jobs will go away. So those jobs won't be

(02:25:18):
here in the US, and they won't be outsourced. And
all those little people that were making money on that
through, uh, sweatshops, or whatever it is that
they're called, where people labor, labor, labor.

Speaker 7 (02:25:31):
And I bet you'll still get calls for extending that
car warranty, I know, and...

Speaker 4 (02:25:36):
And they will be in any language that you want it
to be in.

Speaker 2 (02:25:42):
That's right.

Speaker 7 (02:25:45):
It was really crazy. I've got an old American voice-over-IP
number on my cell phone here, just to
help contact a few people in cases of emergencies. I've
been back here since before COVID, and I am
still getting calls to extend the warranty

(02:26:07):
on the Dodge Durango I had back then. It has since shit itself, the electric side of it died, and I am still getting them.

Speaker 9 (02:26:19):
I mean, BZ, all those call center workers from India, they'll have some short-term jobs, because they're going to have to teach the AI to speak with emphasis on using Ds instead of Ts.

Speaker 2 (02:26:30):
That's true. And isn't that funny? What are those visas called? When people came in, American workers were pissed off because they had to train the cadre of people coming in on those particular visas, who were available to work at lesser rates. It's cyclical. What

(02:26:52):
did you say those visas were called?

Speaker 11 (02:26:56):
Yes, yeah. I think you will appreciate this little story more than maybe others, but others will understand it. Dell Computers was one of the first ones to really incorporate Indian customer service, and my older sister, who was also in IT, worked for Dell, and she

(02:27:17):
gave me this little tip. When you would call into Dell, into the line, if you spoke English, you were instantly going to an Indian call center, so you had to go something like, you know, "Sprechen Sie Deutsch?" Once they knew you didn't speak English, you got transferred back to an American, and then you

(02:27:37):
would get transferred to someone who was speaking German or whatever language you spoke, even Spanish at the time, and then you would go, "Dude, do you speak English? Oh, you do? Okay, let's just speak in our regular language."

Speaker 2 (02:27:50):
Damn it, Jim, I wish I'd known that. All the hours of heartache it would have spared everyone. Here, Bret Weinstein sees nothing but a human catastrophe forming all around the planet. So how is it going to be different? You know, back in the day, textile workers, railroad workers, the railroads during the era of steam

(02:28:12):
were massively human-demanding, buggy whip manufacturers, farmers, all kicked to the curb. Advocates of AI are going to say, well, we managed. The big difference this time, I suggest, and I learned it from these guys, is that it's all now

(02:28:35):
about scale, speed, and the physical number of persons involved.
And those are three large factors that somehow we're going
to have to overcome. And the other thing is because
of competition, every industry is going to be affected, and

(02:28:56):
every industry, heavily. Every industry. And Mike, I'd like you to weigh in on this, and then we're gonna have to jam because we've been here for a while. Every industry is going to have to get on board, and they're going to have to eliminate positions if they want to survive, not just thrive, just survive

(02:29:19):
at a certain level because there's going to be no
other choice. Is that accurate or wildly flailingly inaccurate?

Speaker 4 (02:29:27):
Well, I think what you've got is a situation where, and it's part of what I'm already planning for the next three years, what is my, uh, my own personal AI adoption journey going to look like within the business, you know, of mapping out agents today just

(02:29:50):
as far as handling sales and marketing and customer service aspects of it, and how do we implement that? How many agents is it going to take? How long is it going to take to build? Those kinds of things. But BZ, to answer your question, I think every business is going to have to embrace this somehow, some way, and if they do, they

(02:30:15):
need to have governance, risk, and compliance programs in place within the organization to put guardrails on the sixteen-year-old with the driver's license and the keys to the car for the first time. There's some bad stuff that happens when the AI gets loose. But they're going to have to either embrace it or they're

(02:30:39):
going to be out of business. I honestly think that that is the inflection point. They're going to have to figure it out. Otherwise, those that use it will have a massive advantage. Those that don't will be left in the dustbin.

Speaker 2 (02:30:55):
Well, let me ask you this, then: is it going to be like Google, where Google got into internet search first? I guess that Google is going to be kicked to the curb because more people are beginning to use AI for search rather than Google. And as Chief Dan George says, my heart soared like an eagle

(02:31:16):
when I heard that.

Speaker 4 (02:31:17):
You still got to remember, I mean, Google's got an AI. Google's got the basis of search. I think, you know, I was telling, maybe I was telling you this or Lonnie, last week: I don't think I have used traditional search for

(02:31:38):
anything in the last eight months. All of my searches have been AI-driven searches, and quite frankly, they're more thorough, they're more complete. I mean, it's going through

(02:31:59):
and searching. At one point in time, I had something I was working on, and it was searching one hundred and ninety sites all at one time. It was reading. It's absolutely crazy, the amount that it can provide. But yeah, to answer the question again, I think they're

(02:32:22):
going to have to embrace it or die.

Speaker 2 (02:32:24):
But okay, well let me ask you this question.

Speaker 4 (02:32:30):
Then people expect.

Speaker 2 (02:32:33):
Two final questions. One just popped into my wheelhouse here just a moment ago. Now, Mike, you mentioned that businesses or corporations that don't get on the train leaving the station right now are going to be in a seriously bad way. Will they be able to come back and recover?

Speaker 4 (02:32:57):
Well, you know, I think that's a hard question to
speculate on. I think once you get left at the
train station, is there another train coming by or is
it just gone?

Speaker 2 (02:33:11):
Good question. I do not know that answer. I was just throwing it out there. Also, second question; this will be the final question for the night. And I thank you guys for staying up so late, and everybody for watching live right now and also watching later in podcast. This question: the Big Beautiful Bill. In that bill is

(02:33:31):
a codicil that states that for ten years the federal government will make the decisions regarding AI, removing that capability from the states, which the Tenth Amendment would formerly allow them to utilize and enjoy. Good thing, bad thing? Do we need uniformity, or should

(02:33:55):
it be all up to the individual states?

Speaker 4 (02:34:01):
Who wants to go first?

Speaker 2 (02:34:03):
Mike, how about you? You mentioned it, so go ahead and hit it.

Speaker 4 (02:34:07):
Okay, So the next arms race is AI. So from
the standpoint of artificial intelligence, and whether or not the
federal government, from a national security standpoint, has the right
to tell the states to back down on this, I
kind of agree with it. It has to be a national issue, Okay.

(02:34:30):
Jeff, any thoughts?

Speaker 11 (02:34:33):
Agree with Mike. And I cannot tell you how much it hurts me to my core to say the states don't have the rights here, but we do need the unifying direction, and then cross our fingers that it's the right direction.

Speaker 2 (02:34:53):
Okay.

Speaker 9 (02:34:53):
Rick, thoughts? Well, I agree from a national security perspective that there does need to be some federal oversight. I don't agree with some of the language that I've read in that, and I'll be curious to see how that shakes out, because when is the last time the federal government actually innovated anything? We need the states as laboratories for

(02:35:13):
innovation purposes, is all I'm saying, and that deserves it.

Speaker 4 (02:35:17):
When was the last time state governments actually innovated anything? I'm just going to

Speaker 9 (02:35:21):
say that too. States in general,

Speaker 7 (02:35:23):
Though.

Speaker 9 (02:35:23):
I mean, if the federal government is, like, overreaching on all of it, that's going to stifle a lot of creativity and everything else. That's my only concern.

Speaker 4 (02:35:30):
Well, no, I don't disagree. I think, though, with the need for nuclear power for these data centers, clearing the construction that's going to be needed out of the way from an environmental standpoint at the federal level, and removing those roadblocks, is the only way. We don't produce

(02:35:51):
enough power. China is upping their game on power left, right, and center. Right now we are at a massive disadvantage from a power standpoint, and we have to fix it.

Speaker 7 (02:36:05):
Jack, thoughts? Look, like everyone else here, I'm torn. National security? Yeah, you know, the federal government should trump there. But how does that then get in the way of research and so on? I think back to when Dolly the sheep was first cloned. A lot of scientists were

(02:36:30):
wanting to get on board with that, but government regulations got in the way, and rightly so in some respects. But, yeah, I mean, it pains me that we have to give new policing responsibilities to government because people will misuse it.

Speaker 2 (02:36:53):
And finally, folks, let us know. And because you were last, you're first to go: tell us about your social media sites or any other locations that you'd like people

Speaker 11 (02:37:04):
To know about.

Speaker 7 (02:37:05):
Look, the only social media that's running at the moment is my Rumble channel. I've put a few new podcasts up there. I was hoping to do them weekly, but, you know, life gets in the way. So do a search for the Jack Alexander Experiment over on Rumble.

Speaker 2 (02:37:29):
Jeff, tell us about your being an alien in disguise and where you can be found, you cosmic bard.

Speaker 11 (02:37:37):
This Sunday, I will be doing episode sixty-three of Indcrease. I will be looking at the Hinterkaifeck murders from a long time ago, about one hundred years ago or so, and taking a unique look at them. And then the following Sunday I will be back for my Lost Wanderer podcast, where I talk about space and science. And one thing

(02:37:59):
before I disappear into the wind: the one thing that makes humans special above all others is the soul. Never lose your soul, never lose thought of that soul, and remember to embrace it. Absolutely.

Speaker 2 (02:38:19):
Mike Fitzpatrick, where can you be found if people want to contact you, or if they want to say, "I must stop hackers, and stop them now," sir?

Speaker 4 (02:38:28):
Okay. So the best place to find us is at ncxgroup dot com. That's the website. Our social platforms, I think all of them, are at NCX Group, so that's probably the easiest way to get us.

Speaker 2 (02:38:44):
Okay, cool. Ready, Rick? What about you? I know you got a show tomorrow morning.

Speaker 9 (02:38:50):
Yeah, tomorrow morning Eastern on the Rick Robinson Show, and then tomorrow night I'm not sure what we're doing, because Aggie's out of town, so we may put something together, or I may just run a repeat. Saturday I will be pushing buttons for the Front Porch Forensics crew first at, what is it, eight p.m. Eastern, I think, and then be doing Juxtaposition, our two-week foray

(02:39:13):
into the weird, the unusual, the unexplainable, which kicks off at, what is it, ten p.m. Eastern now. And then Sunday night I'll be pushing buttons for Korn Nimick's new show with us, Korn's Reading Room, and then America Off the Rails on Mondays. I also write for Twitchy, The Lost Party.

Speaker 15 (02:39:34):
And occasionally for Misfit Politics, and I also produce the Love Spready podcast with Job on Tuesdays.

Speaker 9 (02:39:45):
Hey, he asked, that's all, dammit.

Speaker 2 (02:39:49):
Okay, I thought the other one was going to be the last one for the night, but apparently not. Jack Alexander, good to see you. Where are we right now? There we are. And Jack, thanks for being here tonight. And Mike, thanks for being here tonight. Let me see if I can do this. No, I won't be able to do that. Well, anyway, gentlemen, thank you for

(02:40:11):
being here. Thanks to everybody who is still watching right now; we're sitting at five hundred and twenty-five people watching. That's absolutely astounding and amazing. Everybody in attendance, thank you. Thanks to my wonderful guests: thanks to Jack Alexander, thanks to Jeff,

(02:40:34):
thanks to Rick Robinson, thanks to Mike Fitzpatrick, and everyone here tonight. Everyone, God bless, take care, and be safe. And with that, I say something similar to goodnight.