
November 17, 2023 63 mins

What if we told you that your favorite Beatles melody could be revived by Artificial Intelligence? That's right! In this stimulating discussion, we delve into the fascinating world of AI, its implications, and how it impacts various facets of life. From President Biden's executive order on AI safety to the use of AI in creating a brand-new Beatles song, we've got an intriguing blend of tech talk and music chatter that you won't want to miss.

We don't stop at the intersection of AI and music; we also explore its potential in redesigning the judiciary system and the necessary safeguards against bias. With the advent of AI in forensic analysis, crime forecasting, and predictive policing, the conversation takes an interesting turn. We also discuss the commitments of 15 leading companies to promote safe AI development. We promise, you're in for an insightful discussion!

Finally, we examine the repercussions of the executive order and its influence on the legal sector and workforce. Could AI be the affordable court-appointed attorney of the future? What measures do we need to protect the workforce in this AI-driven world? We also contemplate the international implications and the necessity for a global consensus on AI safety and security. Wrapping up, we steer the conversation back to music, discussing the Beatles' new song and the potential of AI in creating meaningful art. So, are you ready to join us on this exciting journey through the universe of AI?

Support the show

Let's get into it!

Follow us!

Email us: TheCatchupCast@Gmail.com


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
So, yeah, we're glad to have you back and we got a good discussion lined up for this one. But you know what I want to do before? I made some updates to the store while you were gone. Oh, we got all the products and I just want people to know about them, because I don't want them to miss the opportunity for

(00:22):
these good, clean products. You know what I'm saying? We made a sale last night.

Speaker 3 (00:29):
Oh man, that's what I'm talking about.
That's what I'm saying.

Speaker 1 (00:33):
We got the "We'll catch up with you next time" T-shirt.

Speaker 3 (00:36):
Mm, hmm, that's a dope one right there.
I love it.

Speaker 1 (00:39):
Yeah, and, by the way, available in multiple colors, not just what you see here. Multiple, multiple colors.

Speaker 3 (00:47):
And.

Speaker 1 (00:47):
I want to emphasize that everything here is unisex, hence why you see homegirl here rocking the "Join the Discussion" unisex T-shirt. Mm-hmm. Not influenced at all by the Supreme T-shirts. And this "Join the Discussion" unisex hoodie, which is probably going to be a scope. And here, after much ado, is the

(01:10):
"Let's go ahead and get into it" T-shirt.
Oh man, right here. So those are really the new additions. We got the hat, got the Catch Up shirt, the hoodie, the beanie and the long sleeve. Now, here's the thing about the long sleeve and these phone cases. Now, we still have these, if you're interested.

(01:32):
We got cartoon us right here. Cartoon netgator, if you're wanting to protect your breath. The thing about these: it shows the stock. Now, when I was going through our inventory on Printful, which is who we work with for this, it said all of these were out of

(01:55):
stock. But you can see here it shows that they're in stock. So I need to verify this. If any of you try to buy these things and it doesn't work out, please let me know.
I know. Some of it with the phone cases is probably just because these are dated phone cases now and they need to be updated. But for the mug, specifically, and this long sleeve T-shirt, if

(02:17):
you have any trouble, please let us know and we'll get that fixed. But, yeah, these are the new new right here: the "We'll catch up with you next time", the "Let's go ahead and get into it" (my personal favorite) and these "Join the Discussion" shirts. So then, hoodie and hoodie. Yes, thank you.

(02:39):
So we want you guys to check all that out. It's linked wherever you're listening and wherever you're watching, thanks to a great link we have called Linktree. It takes you everywhere that you would need for our show. So everywhere and anywhere; everywhere and anywhere. So I wanted to point that out off the bat.
But next I want to get into our topics, man, so let's tell

(03:04):
people what we're talking about. Yeah, number one: continuing with our focus on AI and the difference that it makes in our world already. Something that... this month may mark a year of consistent conversation about AI.

(03:24):
Yeah... yeah, I mean, it was not too long after this, if not this exact time, when ChatGPT came out a year ago. Just kind of wow. But you know, we've wanted to focus on it for two reasons: one, we're very interested in it, and two, I know for me

(03:48):
this has been a huge benefit.
We hope for you as well: getting this information syndicated and broken down, rather than hearing the horror stories you hear on the news or the concerns of, you know, people who don't understand this type of technology. It's really been beneficial for us. We hope it's beneficial for you as well.

(04:10):
And with that said, the president did something. I know Denison wasn't here for this one, but we shared a former Google developer's concerns about the growth of AI. Again, not necessarily like Bard or a ChatGPT type of thing, but

(04:33):
more foundational AI that could run an entire organization. His main concern, I remember him saying, is that it could build neural pathways as it learns through different inputs that would prevent other people, prevent even the developers, from knowing what its new thought processes were, or

(04:57):
being able to manage it or shut it down. And so these are genuine concerns, absolutely, and I think that they're fair to have, especially when you hear developers being the ones that are concerned. Exactly, yes. And so this is really a follow-up to that: President Biden issuing an executive order to protect the government from AI.

(05:18):
You love to hear it, but also you would love to hear "protect the public." Yeah, exactly, that'd be great. But yeah, so we're going to break that down: what that means, what that looks like, those protections and such and so forth. And then, number two, these guys right here.

(05:41):
Can we shift it? Come on, there we go. Almost, a little bit.

Speaker 2 (05:44):
Those guys right there, the Beatles. Maybe you've heard of them.

Speaker 1 (05:48):
They released a new song today and I like it a lot. I really do. And on top of that, it's something they started to work on in the 90s, but they couldn't finish because of how the demo they received, from something John Lennon had

(06:09):
written before he was killed, was mixed together. Well, AI has actually helped separate those two audio inputs, if you will (which was really just one piano and vocal, into one mic) and has helped separate them into two different audio

(06:33):
channels, which is mind-blowing. So we're going to talk about that a whole lot more, man, so let's go ahead and get into it.
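The de-mixing described above was reportedly done with a machine-learning system developed by Peter Jackson's team for the Get Back documentary. As a purely illustrative toy, not the actual system, and with hand-picked frequency bins standing in for a learned mask (the bin numbers and tone frequencies below are made up for the example), the core idea of pulling two sources out of one mono mix by masking a frequency-domain representation can be sketched like this:

```python
import cmath
import math

def dft(x):
    # Naive discrete Fourier transform; fine for a tiny toy signal.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spec):
    # Inverse DFT; returns the real part of each reconstructed sample.
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

n = 256
# Toy "vocal" at bin 10 and "piano" at bin 40, mixed into one mono track.
vocal = [math.sin(2 * math.pi * 10 * t / n) for t in range(n)]
piano = [0.5 * math.sin(2 * math.pi * 40 * t / n) for t in range(n)]
mix = [v + p for v, p in zip(vocal, piano)]

spectrum = dft(mix)
# Binary masks: keep only the bins belonging to each source
# (each real sinusoid also has a mirror bin at n - k).
vocal_est = idft([c if k in (10, n - 10) else 0 for k, c in enumerate(spectrum)])
piano_est = idft([c if k in (40, n - 40) else 0 for k, c in enumerate(spectrum)])

# Each masked reconstruction matches its stem to floating-point precision.
err_vocal = max(abs(a - b) for a, b in zip(vocal_est, vocal))
err_piano = max(abs(a - b) for a, b in zip(piano_est, piano))
```

A real de-mixer replaces the hand-chosen bins with a soft mask predicted by a trained model over a spectrogram, which is what lets it separate a voice and a piano whose frequencies overlap in time.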
What's going on, everybody? I'm John and I'm Denison. This is The Catch Up.
All right, before we get into our topic, I want to remind you

(07:06):
guys of the three (that's right, there's three of them) best ways to support this show. Number one... you could have made three out of that, to be honest. Number one: leave us a review.

(07:28):
Wherever you're listening, wherever you're watching, if you're on YouTube, if you're on Facebook, if you're on Apple Podcasts or Spotify, there's a way to leave a review. It helps us out no matter what. It helps us know what we're doing well, it helps us know what we can improve on, but it also tells the algorithm to put us in front of more people, and that really helps as well. What is up, Darian, notification gang. Good to see you, man.

(07:48):
Number two, the second best way to support it. Oh, there you go. See, there you go. Let's go, right there. Follow us on the live stream. We are live streaming on Facebook and on YouTube. We do it every Thursday night and it allows us to have a

(08:09):
direct conversation with you guys about whatever it is we're talking about. That's the biggest thing for us: hearing your input in real time. And even if you can't jump on the live stream with us, we do want to hear your input regardless. Please let us know what your thoughts are. We're very thankful to be getting hundreds of plays a week over on the live stream, and we want to hear your thoughts.

(08:32):
We want to know what your thoughts are on the topics as we discuss them. So please let us know, whether you're live with us or not. And then, number three, there they are, three of them. So, circling back to what we led off with here: if you want to support the show monetarily, we got some really cool merch.

(08:53):
We want you to check it out. It's linked wherever you're listening, wherever you're watching, and it gets directly to you. Like I said, we had an order last night. Things already shipped. Man, it's on the way out there, it's coming. And they be printing it down in, like, Venezuela, I think, and it's already on the way.

(09:18):
I don't remember which one this one was, but anyway. So it comes quick, and we got some good merch and we think you guys will like it. So please check it out.
Yeah, so, with that said, let's go ahead and jump into our topic here. So I'll actually start this off by asking you a question, so,

(09:41):
not having the details laid out yet, which I will: for Joe Biden to issue an executive order to protect from possible AI risks, what would you say you would hope to see those protections include?

Speaker 3 (10:02):
I think one of the biggest things would be deepfakes, right. I think that's probably one of the biggest things that needs to be out there to protect not just the government but also, you know, United States citizens. Because, as AI gets more and more advanced and is, like you kind of already talked

(10:25):
about, or hinted about with one of the topics that we're going to talk about, you know, it's able to do amazing things that, you know, us as people aren't able to do, or aren't able to do in the same amount of time. Deepfakes are one of those, right? To be able

(10:48):
to create not just video doubles but audio doubles of someone, and then having them be able to, like, you know, say or do crazy things, is incredibly concerning, especially when it comes to elections, right, especially when it comes to, like, how the

(11:12):
government operates and works and stuff like that. Like, I think that is probably one of the biggest things that I would want in there.
Another big thing that I would like to see in that type of order would be maybe the restriction of how AI models

(11:37):
are used. I don't think that AI is far enough along to, you know, fully automate certain things. But I think, like when it comes to stuff with the defense of our country, I think I would want to

(11:57):
make sure that there's someone who is in the seat, like the driver's seat of that. Like I wouldn't want to have that whole thing automated. Sure, that could be a possible thing, but I wouldn't want that, because I think that would be a very, very big thing that can cause problems, right? Because there

(12:19):
are certain things that logic isn't going to correctly solve.

Speaker 1 (12:26):
Yeah, well, so you have... you know, I think I brought this up a few times, but you have certain movies, like Eagle Eye, that were way ahead of their time. Right? But what that essentially was, was an artificial intelligence that was protecting internal and external

(12:48):
security for our country. But again, it developed its own neural pathways and its very own approach to doing that. You know, it was not good, and you know that movie came out in, like, 2008 or 2009 and didn't resonate, but now it absolutely would, right? Yeah, because of those things, and to your point, what
you're saying to autism. I just thought it would be way ahead of these kinds of actions, like social services. Oh, and it's completely down on this. You know how, so you can actually understand the situation and know its functions when you're really at risk. That your friends, that they can imagine these things. I think that that is so.
I'm still a fan of Stephen Colbert. I was watching The Late Show last night, and they talked about this executive order and then used AI to have a fake

(13:34):
conversation with a fake Joe Biden. And there were some parts where it was very obvious it was fake, but there were other parts where, out of context, it could have been sold as being him, and that was just for a comedic bit. And so, if you took that from a malicious intent, it is a concern and it's something that

(13:57):
needs to be addressed, without a doubt.

Speaker 3 (13:59):
Yeah, exactly, yeah. I mean, it's a terrifying thought process that the information that you receive is not good, right, and making sure that we develop technologies to be able to detect that kind of stuff is vital.

(14:20):
I think, especially since we're going into an election year, it would be very vital that stuff like that doesn't happen. Yeah.

Speaker 1 (14:33):
I agree. Well, let's dive in here. So this is being called a landmark executive order. So, off the top: AI safety and security. New standards for the development and deployment of AI to ensure safety and security. This is all high-level stuff and of course we can jump in, but

(14:53):
I just bullet-pointed this out. So I think, for that, it's just to make sure, maybe even address those concerns of what that developer from Google was saying, making sure that this is manageable, to ensure all that.

(15:16):
So, again, if you need me to dive in further on any of these, let me know. Yeah, but developers of high-risk AI systems are going to share safety test results and critical information with the US government. So that is what that would look like and include, and more. And the National Institute of Standards and Technology is

(15:38):
going to develop rigorous standards for testing AI systems before public release. I'm not against that stuff at all. Fortunately, and I think this goes back to something I mentioned on that episode a couple weeks back, but you and I have talked about it multiple times too, is that if we left this up

(15:58):
to Congress, if we left this up to the House and the Senate, it would never happen. Yeah.

Speaker 3 (16:05):
Yeah, it would take too long.

Speaker 1 (16:07):
Yeah, it would. So the executive order is the way to go, and I don't think there's anything against it. In fact, now the House and the Senate are both debating bills that would go along with this, so it took this executive order. I think that that's a good move and a solid way to go, because

(16:30):
we've seen what has happened with social media without that.

Speaker 3 (16:34):
Yeah, exactly. Without those guardrails. That's all we're trying to get, right? You're just trying to get those guardrails in place, because it matters. It matters and it makes a huge difference. We don't want it to be the Wild Wild West.

Speaker 1 (16:51):
You don't. And so... largely because that movie is not that great once you get older.
Speaker 3 (17:00):
Hey man, hey man, Loveless was doing his stuff, man. Loveless was trying, bro. Exactly, he was going for it. Half spider?

Speaker 1 (17:08):
No, but no. So here... okay, here's my thing, right? So these protections will inherently inhibit the development of AI, right? Yeah, yeah, it will. Because... and with the good intention of safety first,

(17:33):
obviously. But you know, ChatGPT, right now, at the bottom, it normally tells you when its latest update is. I don't know, by the way, I'm not trying to claim I know this, and I don't know if this is because of this executive order. But now what it says is "ChatGPT can make mistakes. Verify

(17:55):
important information." You know what I mean? That's new. Normally it just tells you when the latest update was. Yeah, yeah, exactly. So that is interesting.

Speaker 3 (18:06):
More of that warning.

Speaker 1 (18:08):
Yeah. So you have a very long thread with ChatGPT that has a lot of different information in it, and what I remember is that thread used to maintain its memory the whole time. But now, if you log out or if you refresh the page, it will

(18:29):
only go back as far as the last thing that you had interacted with in that thread. Even if that thread is, you know, a thousand messages long, if you refresh it, it will only go back to the last time you refreshed.

Speaker 3 (18:47):
Oh, that's such a limiting factor.

Speaker 1 (18:50):
It is, and I think it does it because of these types of stipulations. Now, yeah, I can see it. So it doesn't carry too much personal information and all that kind of stuff, right?

Speaker 3 (19:03):
I can see it. And it's interesting, because it does make me wonder, like, what these types of rulings will be doing to what we talked about a few weeks back, with the AI that Facebook is using, right, or is developing, with these personalities, right, with these celebrity personalities. Because, you know,

(19:26):
if you're in their case, right, if you're trying to create an AI personality off of an existing person, you'd want to make sure that those conversations flow and continue to have context within that, and I wonder if this will hamper some of that, right?

Speaker 1 (19:46):
Yeah, yeah, I don't know. I would agree, though, that it probably will, and that's okay. I don't think... you know, for the average person, do they need more than what ChatGPT or Bard offer? You know, I don't think so.

(20:06):
But also, you know, AI (which I learned through an interview I watched with Neil deGrasse Tyson) has been integral in developing medication to where it is now, as far back as 2015. And it is largely why (could you imagine if they shared

(20:28):
this when people were freaking out about the mRNA COVID vaccine?) it is largely why there's an mRNA COVID vaccine and why it was able to be developed so quickly. You know, because mRNA treatment was initially developed for cancer vaccines and stuff like that, and so they

(20:48):
were able to adapt it to fit COVID. And, yeah, I think that would have sent people over the hill.
Yeah.
Which, I'll double-check my sourcing on that, but that is what I remember: it had a role, and it continues to have a role, in a lot of medical advancements and treatments, as it should. And,

(21:11):
you see, these things with overall scientific developments as well. I have no problem with that. But I do agree with you, Denison, and again we have a lot more to jump into here too, so I'll start flying through it, but I agree with you that there need to be things that protect us from AI integration into national security, government

(21:32):
leadership, police, and our own private information as well, and the media, so that we're not being misled down a road that AI has projected for us. I mean, I can see that.
And again, that's not to say that's necessarily a bad thing.

(21:53):
Because take, for example: I watched an interview the other day, also on 60 Minutes, with the security leaders of five different countries, including ours, which was FBI Director Christopher Wray. He said that China is the biggest national security

(22:15):
concern that we have ever faced, and it's because of their use of AI and things like that. Which was interesting of him to say. I'm not here to argue that point one way or the other, but it just shows... you know, back in 2016, they were trying to manipulate our election through social media manipulation, right? And so if

(22:38):
there was a way to block that and protect us from it, so we develop our own opinions rather than those of rogue nations, I think that's fantastic. Yeah. So I'm sorry. No, you're all right.

Speaker 3 (22:54):
I think that's very true, and I think that's very important. That's the reason why I do enjoy the... I think these protections are so, so very important. And I also feel that we can benefit from a little bit of a slowing

(23:19):
down of the development of AI. I think there's a lot of really, really good benefits in that slowdown.

Speaker 1 (23:31):
Yeah, just make sure it's all done safely, is where you're coming from.

Speaker 3 (23:35):
Yeah, safely, as well as... I feel like we can refine things a little bit better, right? Because I think, right now, how Silicon Valley has done it in the past, in general, is like, anytime there's a new product or breakthrough or whatever like that, they try to push it out to the market as quickly as possible.

(23:56):
It's the whole "push it out and fix all the broken parts later, just get it out." And so I feel like, with the new advancements and changes and

(24:16):
features, it's just been in that "get it out" moment, and I feel like it is causing us more issues than we should have with these types of tools, especially because of how powerful they are. I think they just need to now get to a point where they come

(24:37):
out correctly, right, more mature, and I think this bill has the opportunity... or not this bill, but this executive order has the opportunity to help that.

Speaker 1 (24:49):
Sure. I think we know, too, it's not going to be perfect. The interesting side to the aspect of this is: an AI developer could develop an AI that misleads this group that is meant to oversee this kind of stuff. Yeah, that's true. But that is true with almost any industry,

(25:10):
so I don't think it deserves a higher point of concern than anything else does.

Speaker 3 (25:15):
You know what I mean?

Speaker 1 (25:18):
Well, let's go into privacy protection here. So this emphasizes the need for bipartisan data privacy legislation, which I think is probably what's going through the House and the Senate right now; accelerate the development of privacy-preserving techniques using AI; and evaluate and strengthen (sorry) privacy

(25:43):
guidelines for federal agencies regarding AI risks. So it's more of an on-the-back-end kind of thing, to make sure that everything's in place for both protection and then any sort of legal issues that come from that. Advancement of equity and civil rights: part of a broader

(26:05):
objective to ensure AI advances equity and civil rights, which I think we can probably take a second to touch on, that one. Yeah. This is something I saw that was very quick. Again, this would actually align with an example of the fear-mongering I see on a lot of mainstream media, but I

(26:28):
do remember it being a concern. So it is interesting that's in this executive order: that AI analyzes how America is more white-privileged, you know what I mean, and then would continue to perpetuate that, rather than

(26:50):
giving free opportunity, or open opportunities, to all. You know what I'm saying? Yeah, so I think that's probably where that comes from.

Speaker 3 (27:00):
Yeah, I mean, I would imagine so. I mean, as well as just in general, like AI, I think there is still kind of a bias in certain ways, especially because, I mean, in general, right, any works that we do are going to come from a more biased

(27:22):
standpoint. But I will say that this has the opportunity to kind of help take it outside of that, and open it up to be a little bit broader, a little bit more like it's trying

(27:42):
to achieve... more equitable to different communities, other than just one community or whatever like that. Sure. We've talked about it a couple of times, where there have been plenty of issues when it comes to facial recognition, right? AI that is being used in facial recognition cameras.

(28:08):
It has been proven that a lot of facial recognition softwares struggle with identifying (and in this particular study it was) black faces as opposed to white faces.

(28:30):
It was able to distinguish the difference between white faces a lot more than it did for black faces. And so I think this also can help that, right: create better policies, better writing of the code, as well as just kind of opening it up for more opportunities for people of color to be able to

(28:56):
really participate more in the development than I think that they have had beforehand. As well as, the information that those AIs push out, like you said, is far more equitable to everyone, than just one

(29:20):
specific group, because it's kind of catering towards the perceived majority or whatever.

Speaker 1 (29:31):
Right. Well, yeah, I completely agree with everything you said. I was able to find some more information on this particular one. So, first three things here: provide clear guidance to landlords, federal benefit programs and federal contractors to keep AI algorithms from being used to exacerbate discrimination. Still broad, but I think you can probably see

(29:55):
what the intention is there. I don't know how the benefits programs would use AI, but the others... it's so broad, there's so many different ways.

Speaker 3 (30:08):
Yeah, I know, for when it comes to landlords and stuff like that, there was a relatively recent scandal a few months back about, like, RealPage, the property management software company, using an AI to change how rent prices were set.

(30:33):
That was one of the problems: it was making it to where there was less competition. But then they also found, in that same program, that it would be less favorable to minority individuals than it would be to, say, white individuals when it

(30:57):
came to pricing and all sorts of other stuff like that, as well as approvals and stuff. So I can really see it there too.

Speaker 1 (31:05):
Yeah, no, that's a good catch. I'm glad you added that. It also says here: address algorithmic discrimination (which is what you were really getting into there) through training, technical assistance and coordination between the Department of Justice and federal civil rights offices; best practices for investigating and prosecuting

(31:27):
civil rights violations related to AI. So that's interesting, having that on the back end too, to address that properly. And then: ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime

(31:51):
forecasting and predictive policing, and forensic analysis.
Now we're getting into one of John's favorite AI-oriented movies, which is not really AI. If I remember right, they were like these three aliens who could see the future or something like that. But if we were to

(32:13):
transition this from 2002 to now, it would be AI. And that is Minority Report, which is a great film. Well, I find it interesting how this is written by the White House. It's written in a way so that it doesn't strike fear into people, because I don't really know how AI would benefit

(32:33):
sentencing, parole, probation, pretrial release and detention, because we already have judges. Actually, when it comes to the judiciary system, yes, you have your liberal judges, you have your conservative judges, but what they do is they base it off the precedent of years of

(32:57):
what came before them. You know what I mean? And so it's not... of course, we don't always agree with what the decisions are, but they're not necessarily just up there freestyling whatever their final decision is. You know what I mean?

Speaker 3 (33:11):
Yeah, yeah. But I think what this is really going towards is that future, right, where there's a possibility that AI just becomes far more involved in the judicial system than we would expect, right? So maybe, right, for us,

(33:33):
we feel that AI is going to be a little bit more equitable, or a little bit less biased, when it comes to decision making, or at least have it as a second opinion. Like, the judge rules this... or, before the judge rules it, they

(33:57):
put in all the information of the case and what happened and stuff like that, and then the AI spits out a ruling of what it would say. And then the judge can say, okay, well, based on what the AI says and other stuff... I can see a future where they're like, okay, well, I'm just gonna go with what the AI said, because that seems to be the most fair deal. Something

(34:20):
like that. And so I could see the reason why you would wanna have protections on these types of things.

Speaker 1 (34:25):
Yeah, you know, it would be interesting if there was an AI analysis, either alongside or in place of a jury. You know? Interesting, yeah, that'd be an interesting thing. I imagine it wouldn't be in place of, but maybe in addition to. It'd be really interesting to be able to synthesize all of

(34:48):
the information that you received during the trial. What would that look like? Some of this stuff, which is very good... but I think we're years out from that happening. But it's good that we're ahead of this. You know, it's not reactionary for once; it's precautionary.

Speaker 3 (35:08):
Yeah, well, I mean, I will even say that sometimes, in some ways, especially with how fast things have been moving lately, honestly, this is probably right on time. Exactly. Well, that's a great point, because, yeah, that code that they're writing right now is probably already in the process of being written. Yeah, being worked on.

Speaker 1 (35:29):
Yeah, no, that's a great point. I mean, that's probably why it made it in the freaking executive order. You know what I mean? Yeah. And where the Minority Report relation comes in is the fact that it builds up to this: risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis. Now, forensic analysis absolutely would benefit from AI,

(35:53):
without a doubt. Oh yeah, oh man. I mean, if you could go have an analysis of a crime scene and be able to have trajectories of where bullets came from and what gun they were tied to and all that kind of stuff... The crime forecasting and predictive policing, that's something that needs protection right off the bat, because then

(36:15):
you do waver into potential, you know, racial issues, to say the least. But to further, maybe better, explain where my mindset is: in Minority Report, the idea was these aliens were able to predict what crimes would happen before they happened, and

(36:38):
so people were getting arrested for stuff they didn't even do. You know what I mean? Now, I think that that's a dystopian future that maybe we would never reach at an institutional level. But making sure the protections are in place that we never do? I'm very much on board with that. Yeah, I am as well. I am as well.

Speaker 3 (37:02):
I think it's just super important.
Yeah, I agree.
So okay, I'm gonna clip throughthese next few here.

Speaker 1 (37:10):
Innovation and competition. Innovation and competition in the AI domain, with voluntary commitments from 15 leading companies to drive safe and trustworthy AI development. That's great. I don't think you really have to incentivize any competition in this industry right now, but to incentivize it on a safe level, with 15 companies that I'm sure include Google and Microsoft,

(37:38):
that's a great thing. National and economic security: addressing AI threats to national security, economic security, and public health and safety. Development of a national security memorandum, to get the best results possible: a national security memorandum to guide the military and intelligence community in safe, ethical and effective AI use.

(38:00):
That's a big one I think we should look at real quick.
Okay, so let's see hereNational Security Council and
White House Chief of Staff aredeveloping this memorandum.
This document will ensure theUnited States military and
intelligence community use AIsafely, ethically and

(38:22):
effectively in their missions and will direct actions to counter adversaries' military use of AI.
So here's the thing.
That's very broad, right, that's not specific.
AI in the military, I would go as far and bold as to say that's probably been used for like 15 years.
You know what I mean?
Mm-hmm, you know. Because.

(38:44):
Why do you say that, john?
Well, I say it because we spendthree, yeah, over three times
more on our military than Chinadoes.
Mm-hmm.
You know what I'm saying.

Speaker 3 (39:00):
Yeah, I mean even than any other nation, honestly.

Speaker 1 (39:03):
Well, they're the second closest, you know, and everyone else pales in comparison to what we spend on our military.
So there are a number of things that we don't even know that we have.
It just wouldn't surprise me if AI was part of that.
But also, with all that said and all that secrecy, I don't know how you police that for ethical and safe use of AI in

(39:27):
the military.

Speaker 3 (39:28):
Yeah, yeah, I mean, hey, our Congress is already having a hard time getting facts about different spending on stuff like that the DOD does. Absolutely.
So, yeah, it'll be interesting to see how this works.

Speaker 1 (39:47):
Yeah, yeah, I mean, there's a number of things, I mean, even just as well, I don't need to go down that path. Just the stuff that I've seen, our military having a presence around Israel and Gaza right now.
You just look at that and you're like we have nothing to worry about.
You know, from a security, from our own safety standpoint.

(40:09):
Darian chiming in on the comments here.
Good to have you on the comments, man, thank you.
He says, in regard to the AI and the judicial system, I can see it causing a rift because AI would end up taking the roles away from the human paralegals.
Could be a good thing, but it could go the other way too. And of course, I don't disagree whatsoever.

(40:30):
I do think that taking the lawyers out of the courtroom in place of AI, I kind of feel like that would be one of the last things that would happen. Not to say it wouldn't happen.
What I mean is I don't think that'd be the first thing that would change, you know what I mean.

(40:52):
That's just my opinion.
But with that said, yeah, I don't know. I think right now, yeah, if that were to happen, I'd be like, oh no, I don't want to be argued for or against by a robot.
You know what I'm saying.
But is it working alongside those things?
Is it being replaced by them?

(41:13):
I don't know.
What do you think, man?

Speaker 3 (41:17):
Honestly, I could see it.
I could see it.
I could see a world, and this may be, I think this is where Darian's also going, where I could see a world where you always have a right to an attorney, right. But who are you to say that your attorney has to be a human?

(41:41):
Yeah, I mean, I could easily see it being where the court-appointed attorney is an AI and then you have to pay good money for a real person, or where it could be vice versa, you know, you could.

Speaker 1 (41:59):
Yeah, I actually really... On the first approach you took to it, though, I actually really like that idea. That idea that, you know, if you are in the court and you can't afford your own attorney, it is their job to give you one.
But what if that was an AI instead?

(42:21):
I think actually that would equalize the field.
You know? Yeah, it could, it could, right.
Well, again, not now, but maybe five years, ten years from now.
Yeah, I kind of like that idea.
But, Darian, with all that said, you know, I know we're kind of making our arguments here, and thankfully that's why

(42:43):
this executive order is so good and timely: to protect people from misuse of this kind of stuff, if it does get to that level, you know.
But my initial opinion is that AI in legal cases would just be

(43:03):
used to analyze evidence.

Speaker 3 (43:05):
You know, but I mean, that is a job of a paralegal.

Speaker 1 (43:11):
That's true.
That's a good point.

Speaker 3 (43:15):
That's like one of their biggest things.

Speaker 1 (43:17):
Yeah, yep, yep, that's a good point.
Look, we're on the same page then, so that's a good point.
I think, of course, it will start out with working alongside, and that's the biggest thing, and we both have said this: we don't want to see jobs get stripped from people, especially on the mass level that they could with AI.

(43:38):
We don't want to see that.
If anything, we want to see work alongside, which I think is the position of many pro-AI people: you work with it, you're not replaced by it.
Yeah. So hopefully that can be maintained, because of course we know

(44:01):
these are made by businesses that are looking to make money, and you don't make more money than if you replace an entire human with your AI, you know, or at least in certain situations, probably.
Yeah, okay. Let's see, I got two more here.
Consumer and worker protection: establishing standards to

(44:27):
protect against AI-enabled fraud and deception and standing up for consumers and workers affected by AI.
Let me double-check this one here.
So here's what this says from the White House's website: AI is changing America's jobs and workplaces, offering both the promise of improved productivity but also the increased dangers

(44:50):
of workplace surveillance, that's a good point too, bias and job displacement.
To mitigate these risks: support workers' ability to bargain collectively, and workforce training and development accessible to all

(45:10):
.
The President directs the following actions: develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement, labor standards, workplace equity, health and safety, and data collection.
These principles and best practices will benefit workers by providing guidance to prevent

(45:33):
employers from under-compensating workers, which is something we talked about before, evaluating job applications unfairly, or impinging on workers' abilities to organize.
That is actually kind of the main thing, and I got another bullet point to read off here, but that's kind of the main thing I'm gathering from this: it's pro-union. Yes. And produce a

(45:59):
report on AI's potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.
So that's, that's all good.
Now I think that stuff needs to happen, but maybe it doesn't have to be the United Auto Workers union.
Maybe it doesn't have to be this union, blah, blah, blah.

(46:21):
Maybe you have one national union that anyone at any organization can be a part of, mm-hmm, that protects from this stuff, right.
Yeah, maybe it's not even called a union. Maybe it's just a federally mandated protection.

Speaker 3 (46:39):
Yeah, yeah, I could see that.

Speaker 1 (46:42):
Yeah, because, again, I think, you know, you look at it from outside looking in: you have AI, you already have robots that make cars, right.
Yeah, so you complete the entire production with AI, right?
Mm-hmm.
Why wouldn't you do that?
Of course you could.

(47:02):
You pay for it one time and you don't have salaried workers in there.
I mean, financially it would make sense.
So there has to be protection against that.

Speaker 3 (47:13):
Yeah, exactly, exactly, yeah, yeah.
It just makes too much sense for most businesses.

Speaker 1 (47:18):
Yeah, absolutely.
So the last one here: international leadership. Advancing American leadership globally in AI safety and security, setting an example for the private sector and governments worldwide.
Obviously just a generic thing to say, to be at the forefront of this for the world, mm-hmm, but all interesting stuff there, man, and I think that there's a lot of ground

(47:42):
that is covered there.
I think there will be a lot of ground that has to be developed further. But, and I think it explains something we mentioned earlier as to why the House and the Senate are still working on this, for more detailed bills, I would imagine. Yeah, but, but I

(48:04):
think this sets good groundwork.
What are your thoughts on all this?

Speaker 3 (48:07):
No, I think, I think you're right.
I think this is amazing groundwork for everyone to go forward, and I think it gives us a better opportunity, opportunity to, pardon me, for transparency story.

Speaker 1 (48:24):
It's coming at some point.

Speaker 3 (48:28):
But it's... I think it gives us a better opportunity for transparency in AI, because I think one of the biggest things right now is that most, if I'm not mistaken, well, I shouldn't say most, some of the AI tools, like OpenAI's ChatGPT, do a pretty decent job of being relatively open and

(48:49):
honest about, like, their movements and their updates and stuff like that.
But I think this bill, or not bill, gosh darn, executive order opens up the door for far more transparency, as well as, you know, that pump-the-brakes moment, right, of just

(49:12):
being able to create a far better environment for everybody, right. Because I'm sure, yeah, sure, technically, most of these were focused in on government entities, but...
A double dose.

(49:33):
Oh yeah, sorry, I do, I do apologize. But um, but yeah, I think it just gives a far better opportunity for.

Speaker 1 (49:44):
Yeah, no, I agree. Some of it, as we, as we saw, is already underway to be detailed more, like that memorandum for national security, our legal system, all that kind of stuff, the job stuff.
I mean, we live in a free country, and so it's not something you can literally just executive-order out.

(50:06):
But yeah, to pave the way for that legislation to come down, I think, is a great idea, and I think it's something that, nationally, you know, has to be the focus right now.
It's probably not something you're gonna hear on the campaign trail between now and roughly a year from

(50:27):
now, but you know you have things like so in which, not right now.
My three big things have, coincidentally, been the focus of the campaigns, which has been great.
It's been our economic development, it's been our lack of funding for police, it's been the homeless community growing,

(50:49):
not having the resources available to them, and so that's been the focus, and that's been great to hear.
This is the stuff that needs to be focused on at a national level, and it's the stuff we need to hear, you know what I mean.
Yeah.
So I mean, I am glad to hear that and, yeah, I think we had a

(51:11):
good discussion on this.
I do want to mention, because we went longer on this than we expected we would, so I'll make this a quick mention.
So, talking about the Beatles, I'm not going to play it on here, because we'll definitely get the live stream banned, but they have a new song out.
What did you say?

Speaker 3 (51:29):
I was just saying we'll get copyrighted.

Speaker 1 (51:31):
Get copyrighted, man. Copyright.
So it's called Now and Then, and it's very cool, and the reason I want to bring it up is because of how AI brought this thing to life. And for people that are concerned about AI, they're also concerned about how it will impact art and music, right, those types of

(51:57):
things. And so I think this is important to share because it shows how it can be done with minimal to no negative impact.
So back in the 90s, the Beatles released two new songs based

(52:18):
off of demos that John Lennon had, that he had written before he was killed, and they finished them up and they're okay. But they didn't have available to them what they have now, and this one is genuinely a very good song.
So they took what John had done, which was he basically set one mic

(52:38):
over a piano, an upright piano, also angling toward his face, so it's getting the piano and his voice at the same time.
What's amazing about what I'm going to tell you guys is the crossover frequencies on piano and vocals obviously depend on

(53:02):
where you're playing, but where he was playing, pretty much the same.
So early 2022, Peter Jackson directed, I believe even produced, the three-piece documentary of the Beatles recording the Let It Be album, and I didn't know this at the

(53:23):
time, but he used AI to be able to take everything that was being captured in the room mics of the studio, because not everything was mic'd at the time for recording, but they were just sitting there writing, and it was able to capture all of it, split it out differently and then mix it, and yeah, mix it

(53:44):
properly.
And so Paul McCartney hears about this, and they're able to do the same thing with this demo from John Lennon.
They split his vocals and the piano separately, so now it's two separate tracks, and when you hear it, you'll probably be

(54:06):
amazed by how well it did it.
I mean, it's very impressive.
And so then Ringo was able to go in and play drums, Paul McCartney played bass, he had some guitar, I think, and he did vocals and harmonies, and then they even left some guitar parts that George Harrison had done in the 90s, before he passed away

(54:29):
in '01.
It's very cool, and I really, you know, I have this poster behind me, I know, right there, but I wouldn't say I've been directly impacted by Beatles music, as far as my writing and stuff, for a while.
So I say all this to you guys not as a fan, even though I am

(54:53):
one, but not as this mega T-Swift type of fan.
Right, I say it just out of respect for the craft and what they pulled off, and it really is a beautiful song.
And I think what's really cool too, man, is they changed the industry, they changed music back in 1964, right.

(55:18):
And, almost 60 years later, they're still doing things, as they have done sporadically, off and on, throughout all that time.
They're still doing things that are changing the music industry or leaving a lasting impact.

Speaker 3 (55:34):
You know, exactly. It's very mind-blowing, it's mind-blowing.

Speaker 1 (55:38):
Yeah, it's very cool, and I don't know if you have anything else you'd like to add to that.
You definitely can.
But I think a lot of people's concerns are like, you know, oh, I'm going to hear a new so-and-so song, but it's not really going to be them, it's going to be AI pretending to be them, right. And I think what's great about it is the Beatles

(56:02):
have just shown how AI can help make good music, but also make it human at the same time.

Speaker 3 (56:07):
Yes, exactly. It opens more doors than what people feel like it's closing.

Speaker 1 (56:15):
Yeah, yeah, exactly, exactly. And I realize not every musician out there is, you know, they're not in their 80s and trying to finish a demo they worked on 30 years ago.
But think about, I know you and I both love this track on

(56:36):
Drake's Scorpion album, Don't Matter to Me. He took a demo that was unfinished by Michael Jackson.
Imagine if they had that technology and used that technology. And there's several, several, if I remember right, there are several dozen unfinished Michael Jackson tracks that they just couldn't split the files apart to make it

(56:57):
a finished track.

Speaker 3 (56:59):
Stuff like that, you could, you know. Yeah, you really could, and it would be pretty amazing.

Speaker 1 (57:04):
Yeah, yeah.
So that kind of stuff is exciting.
I think it's very cool.
I just wanted to mention it real quick.
So, yeah, man, I think we had a good discussion on this.
Always great to have people in the comments.
Darian, thank you for chiming in, and thanks on the notification gang as well.
Remember, and do remember:

(57:26):
We want you to leave a rating and review wherever you're listening, wherever you're watching.
That is the quickest, easiest way to help us continue to grow.
Number two: if you're listening to this on audio, jump on the Facebook or YouTube live stream with us and let us know your thoughts in real time, or on rewatch as well.
We're live every Thursday night.

(57:47):
And number three: check out, check out our shop. It's linked wherever you're listening or wherever you're watching.
Right now we got good, clean Catchup podcast merch.
So, with that said, we're going to bump out.
But I want to bump out with this new track, dude. Not the

(58:10):
Beatles track, all right.
A new podcast intro slash outro.
I may shorten the intro.
I wanted to get your take on it.
We'll talk about it after we jump off.
Okay, but this is not something I did, just to be clear.
This is something that we hired somebody to work on, and it is

(58:34):
great.
So please let me know if you can't hear it. Here we go.
You hear that? All right, good.
Now I'm leaving the intro.

(58:56):
This is just so...
Another piece of... Uh-huh.

(59:50):
Mm-hmm.

(01:00:13):
Yeah, right back to our theme at the end.
I love it, honestly.
You guys, you guys don't know this, but I have a full stereo setup here.
I hadn't actually listened to it through that until right now.
Hope that sounded half as good for you guys as it did for us.

(01:00:38):
Yeah, that's very cool.
I actually don't think I'mgonna change anything with that.

Speaker 3 (01:00:43):
What do you... yeah?
No, I don't think so either.
I think it's really, really good. Perfect, really.

Speaker 1 (01:00:52):
Yeah, I mean, we can adjust our timing as we go, because we have a very, very rhythmic intro that we do, because we've had the same intro for so long.
Now we know exactly how long we have to intro ourselves, but we can change it as needed because, man, it's just such a vibe.
The guy did say he was gonna give me other drum beats that

(01:01:16):
would come in, like different options.
He hasn't done that yet, but I don't know if he needs to.
I love that groove.

Speaker 3 (01:01:22):
I agree.
Yes, I really, I really do.

Speaker 1 (01:01:24):
Yeah, he did a really good job. Sweet. So, Darian, like three thumbs up, let's go, man.
Thank you.
Thank you, bro.
Yeah, we're glad you like it, man.
And it's still, it's not a done deal.
So if you guys are listening or watching this and you have other thoughts, let us know your thoughts. But I think it's a

(01:01:45):
cool one, I like it.
So, yeah, I dig it, man.
Yeah.
So, with all that said, thank you guys so much for listening, thank you for watching, and we'll catch up with you next week.
Oh you.