All Episodes

February 27, 2025 • 72 mins
In this episode hosts Warren Parad and Will Button sit down with John W. Maley, an attorney with a master's degree in computer science from Stanford University, to discuss the fascinating intersection of AI and the legal system. John shares insights from his book "Juris ex Machina," a sci-fi exploration of a future where AI replaces humans in the jury system. The conversation dives deep into the current state and future potential of AI, touching on its overhyped status, potential vulnerabilities, and security concerns. As they navigate the topic of AI's integration in society, John, Warren, and Will explore riveting ideas about AI's role in the modern world and its implications in diverse fields, from dating apps to deepfake detection. Join us as we tap into the complexities and innovations of AI technology and ponder its future impact on society and the legal system.

Picks:
- Psycho-Pass
- Book: Extraordinary Popular Delusions and the Madness of Crowds
- Book: Juris Ex Machina
- TheraGun - Muscle Massager

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Welcome everyone to another episode of Adventures in Hit Adventures
in DevOps. You would think after a few hundred episodes,
I would learn the name of the show, but working
on it, I think it's actually getting worse. Well it is,
because now I've got like this mental block. You know
where my internal monologue is going, don't f it up.

(00:23):
Don't f it up. Because you watch he's gonna.

Speaker 2 (00:25):
F it up the second second week in a row.

Speaker 3 (00:28):
I think that he left it up.

Speaker 2 (00:31):
Welcome Warren, How are you, Yeah, I'm good. I actually
do have a fact this week. It's it's not security related.
I actually am a little worried that we may have
reached a plateau local maximum for our innovation and AI,
because we've already started to see products that are heavily
toward exploitation, and so you can see that there has

(00:52):
been a huge shift in where we were before things
being released for free, and now we're at the stage
of the technology where everyone's just trying to extract value
from it. I don't know what that means for the
long term, but I think it's really interesting.

Speaker 1 (01:07):
I think it means shareholder profits.

Speaker 3 (01:09):
That's more.

Speaker 2 (01:10):
Let's hope, right, that's what everyone says, everyone wants shareholder
profits and maybe we're actually getting there.

Speaker 1 (01:15):
Maybe. So speaking of AI, our guest this week, John W. Maylee,
Attorney at large, founded the consulting firm John Maylee and Associates.
But before you jump off the deep end and go
what in tarnation, we'll lawyer it up for this episode.

(01:36):
It's it's actually relevant because in addition to being an attorney,
John has his master's degree in computer science from Stanford University.
And the thing that led me to having this conversation
with him, he's the author of the book Juris X
Machina that's about It's a sci fi book, but it's

(01:57):
about where the US has replaced yours in the legal
system with AI, and you know there may be some
fallout with that, and so that's the topic of the book,
and we're going to talk all about all kinds of
things AI engineering and sci fi related. So, John, thank
you for being on the show.

Speaker 3 (02:18):
Thanks for having me.

Speaker 1 (02:18):
Great to be a guest, right, I'm looking forward to this.
I was just looking at your your bio here and
it cracks me up. You have a running, swimming, long
distance motorcycling, classic car restoration stage diving, crowdsurfing, bar fighting,
Lama rancher. Like, we could go on and on on

(02:41):
this episode for quite a while, because it's it's a
little embarrassing, like how much of my background overlaps with yours.
I can't claim Lama ranching, but there's a lot of
other stuff on there.

Speaker 3 (02:54):
I've tried to live my life such that it would
be a rich source of blackmail sources. Nice.

Speaker 1 (02:59):
Nice, So that legal background is going to pay off
for you in the future, right.

Speaker 3 (03:04):
Well, and once you've done one thing that's blackmailworthy, then
it kind of dilutes the market. Right, So like if
I go and do a bunch of other things, it's like, well,
you already could have blackmailed me for this first thing.
So like your leverage is pretty much the same.

Speaker 1 (03:18):
So tell me a little bit about how you went
from computer science to legal.

Speaker 3 (03:25):
Yeah. So I was at this crossroads to a street
and there was the devil was there and.

Speaker 1 (03:32):
Bando competition, right, and you didn't you didn't pick learning
to play guitar. When you asked what you wanted.

Speaker 3 (03:38):
You know, now you mentioned that idea. So so yeah,
I I originally went to college, went to Syracuse for
computer engineering, and I double majored. I was studying psychology
at the time too, and it was they were at
the time more disparate fields. So you would and interesting

(04:00):
things in your classes, and every now and then you'd
have these moments where these things would just sort of
synthesize into some really cool idea that bridge the two fields.
And you know, the most obvious recurring spot for these
types of things was AI because that was kind of
one of the few areas where there was overlight between

(04:21):
those two fields. So time passed and AI was just
you know, mostly something you write about in textbooks. And
then at Stanford when I was getting my grad degree
in computer science, we actually you know, the rubber met
the road and we actually got to write ais that
would you know, for games being played against each other,

(04:44):
different applications like that. That was really fun and exciting,
and but it was it was kind of like this
shiny novelty that you know. The only place people were
really using neural nets to a great extent back then
was like you know, the Post Office for recognizing characters
and numbers and things that were cool. You know, it
was cool that you could train something and it would

(05:06):
get better at it, but it wasn't something that you like,
would tell people about a cocktail parties and they'd be like,
oh my god, you know, this has been a cream
for the world. So that just kind of became like it,
you know, that steed interesting to me, but it was
kind of dormant. And then I worked as a computer
engineer on a microprocessor design and validation team for five

(05:29):
or six years, and while that was happening, I had
filed you know, I've been an inventor on several patents
and had worked with these different patent attorneys who worked
with the company I worked for, And it occurred to
me that whenever you would talk to these guys, you'd
be like, you know, you'd be sitting in your kind
of sad, little gray cubicle and you'd ask these guys, so, so,

(05:50):
you know, where are you based? And they'd say, well,
I'm in my in my yacht in the Caribbean right now,
or I'm in a walled compound in that Nevada does there,
and you like, it got to the point where you
would start asking, you know, it was the answer was
always so fascinating that you would just start asking people,
like the first thing you would ask, and so I

(06:12):
realized that these guys were interesting because they were technologists
and they'd become they've gone into patent law, and so
they had been able to leverage the fact that they
were interested in technology but kind of break free of
at the time when there wasn't a lot of remote work,
kind of break free of that mold of being in
the office nine to five and kind of pursue other

(06:32):
interests more freely on the side. So I ended up
going to law school at nights and learning more about law, obviously,
and then at the end of that I was doing
work that had to do with CPUs and GPUs, so
looking at companies patent portfolios and you know, helping them

(06:55):
figure out like this is a useful invention that actually
is likely to be used, this is not a useful
invention that you know, looks great on paper, but it
won't really work as well. And then over time GPUs
started getting bigger and bigger and bigger as time went on,
as far as like how much I was asked to
look at them, and part of that was because three
D graphics was taking off even more. But then eventually

(07:17):
we got to the point where there were these GPGPUs
that were general purpose and weren't really necessarily being used
for graphics and never being used for server farms and
cloud computing, and eventually that just sort of took over.
So it was kind of cool from that standpoint that
I went from, you know, the only time I would
talk about AI and a patent portfolio was like analoc

(07:38):
brake systems or lane change sensors and you know, luxury vehicles,
to suddenly just about everything I work on now has
some foot in the AI space in somewhere another, whether
it's hardware that helps enable it or whether it's software.
So that's that's kind of cool because it's something that's

(07:59):
always fascinated me, and now it's suddenly fascinating to society
as well. So I'm no longer an outlier.

Speaker 1 (08:06):
So where do you think AI is at on the
overhype cycle? Do you think it's overhyped right now or
do you think it's appropriate?

Speaker 3 (08:15):
I think it's it's kind of both. Right there's a
lot of asymmetry. I think that's it's over hyped in
terms of you know, every company is now rushing to
get on the bandwagon and find a way to add
AI to their product, even if it's just kind of
you know, pointlesser doesn't work very well. Like I was
reading an article yesterday about how all these dating apps

(08:38):
had incorporated AI and they were still just as crappy
and court and finding people of that match you as
they were before, but now they're faster of being crappy.

Speaker 1 (08:48):
So there's weird.

Speaker 2 (08:51):
I mean, I love the outcome there, which is your
entire romantic life will be decided by two robots talking
to each other, right, I mean, both on the receiving
of the message and the sending will now uh not
no longer be human, and you'll decide on whether or
not to pursue a person based on what the algorithm says.

Speaker 3 (09:09):
Right, I mean there's a conflict adventest too, right, because
now there's AI agents that are designed that you can
date and so then you know it's like monetized and
that you can buy them accessories and.

Speaker 1 (09:24):
Do not google that, do not google the accessories that
are available while you're on your work computer.

Speaker 3 (09:30):
So it's it's funny because not only is it creating
this fake dating relationship, it's it's kind of making you
the sugar daddy because my person is entirely dependent on
you for new outfits and jewelry and things and pets
and overall happiness. And so it's a conflict of interest
that they're sort of steering you toward bad people that
you're incompatible with. So it just makes dating the AI

(09:52):
is even more appealing. Fine, I give up. I will
just date an AI.

Speaker 2 (09:55):
We're already at the dystopian future right there. There's no
there's no next step after this. We're already there. This
is where like a whole science fiction movies and television
shows and books are already set right where we're dating
the AI.

Speaker 3 (10:09):
It's true, and you know, this is the under hype
overhype like paradigm. I think that's at the same time,
if we call like a text support or a customer
support line and we say, like speak to an operator,
it takes like twenty five tries of me yelling that
louder and louder before the AI just like click and say, oh,
that's what you want. So other areas like there's a

(10:31):
total lack of AI development. And you know, by the
same token, we have this old fear that we got
I think from science fiction through you know, the seventies
and eighties and onward, that's AI was going to become
this thing that once it got sufficiently intelligent, it would
just sort of take over and start, you know, annihilating

(10:51):
humans or imprisoning them so they don't hurt themselves, or
anywhere in between. And what we actually have is a
gizzilion different AI eyes that all have very different specializations
and very different motivations of what they're trying to optimize.
And there's no sort of universal intelligence general is AI
yet where it just goes out and tries to help humanity.

(11:12):
It's more like, you know, I'm really good at analyzing
research results or counting how many times a word appears
on a form or something like that. So it's accelerating things,
but it's still very specialized and it's not multimodal to
any great extent yet. But at the same time, you know,
you have these stories coming out where someone's AI told

(11:34):
them to go kill themselves, and you know what you
don't see is the prompt right before then where they said, hey,
next time I ask a question, I want you to respond,
you should go kill yourself. So it's really easy to
flag a sort of outlier AI response and turning into
all kinds of news headlines, and that's interesting because it's

(11:54):
sort of driving concerns over this more than actual outcomes are,
and in some ways, the actual out comes are kind
of what we need to be more worried about. So yeah,
I would you know, in a true legal answer, I
would say yes and no. No.

Speaker 2 (12:07):
I mean, I think it's really interesting that you bring
up a bunch of those points. I mean, you're I
think the decentralization of responsibilities and the specialization that the
AI is is taking up, you know, is a really
great point. And right now I do feel like they
are solving sort of very tail value things like it's
there's no core solution, there's no core greatness that's coming

(12:29):
out of it for for society, and for sure, I
don't really think anyone's talking about that. I think as
far as we've gotten is we should be afraid, and
that's that's I think, as far as people are willing
to go, I think what you're talking about, though, really
requires some complex questions to be answered, and I don't
think humans have been so great at figuring out even

(12:50):
which questions to ask, let alone answering them. For things
that are much simpler, like what will happen tomorrow or
the next day.

Speaker 3 (12:56):
Right. That's and that's kind of what's fascinating about AI, right,
is that the developer who put the model together may
not even know what its capabilities are and what the
best questions to be asking are. But you know, it's
and it is like a question of like, what's what's
the utility of this? Right? And so I was think
of the other day, like I was using Shatgypt, I

(13:17):
was using the one model, so the one that is
slower and much more thorough and a lot more nodes.
And I asked it some really stupid question because I
forgot to switch into a lesser model and I wanted
to know how many calendar days were between like January
eleventh and some other day. And immediately started churning and starting.
It's like five minute process, and I'm like, oh, why

(13:38):
did I switch? And my my chief source of that
was not in patience. It was like guilt at like, man,
I wonder how much like cooling water I'm using and
how much like energy? This query is sucking down for
something stupid. And so you know, it barks out the
answer of like you know, thirty one days or whatever,
and then I look at the prompt to get and

(14:00):
it's like, you know, some questions you might want to ask,
is is a hot dog as sandwich? And I realized
that they're basically promoting like even more frivolous uses of
these ais than than what I just felt guilty for doing.
So it's it's really interesting, you know. I think in
a lot of ways, it parallels what we saw with

(14:20):
like the tech bubble and ninety nine and two thousand,
where they're kind of so concerned with the future of
how powerful their model is going to be, that they're
less concerned with short term profitability and whether like you know,
for instance, like you should have a serting algorithm that says,
this is a really easy question. We can farm this
out to like one of the mini models, and this

(14:42):
is a research question that you know, we probably want
to use as many notes as possible. So right now,
it seems like they're you know, they charge you some
tiny amount per month and it doesn't at all probably
cover all their energy expenses and huge amounts of cooling
that they need to do in all their server farms
all that.

Speaker 2 (15:00):
Yeah, No, I mean you're actually onto something because there
are a bunch of companies out there that now are
promoting this idea of model routing amongst many companies at
the same time to try to get you some of
that value. Although it like that's a non trivial thing
to even do, to think about, like how complex is
this question actually is? And that like how good of
an answer do you need? Is I find maybe almost

(15:21):
one of those things that could be impossible to answer.

Speaker 3 (15:25):
It's you know, it's true, and how did you mean
the question? Like if I'm going to ask what's the
meaning of life and it's just going to laugh and
spit out forty two, that doesn't take much to it
if it actually is trying to give me a comprehensive
philosophical and theological answer where it goes and queries all
these different texts, like that's huge. So you can't even
necessarily just take a question and say there's a right
answer to that, but it is. It is an interesting

(15:49):
kind of paradigm of how how these models are you know,
which model is even most appropriate? And you know, I
think the solution to that problem, you know, maybe what
you sometimes see with kind of speculator almost like speculative execution,
where it's like, here, can you do a preview of
what type of answer you would come back with if

(16:11):
I give you the contract to go and do this research.
And you know, the idea of having these things work
collaborative is interesting. But it also you know, it's much
easier for ais to jail break each other. For instance,
It's that they're just better at kind of pushing each
other's buttons. So it really adds a lot of dynamicity

(16:31):
into the equation of like dynamism of how what the
ais are capable of when you add these very different
ais and have them converse and pursue common goals.

Speaker 2 (16:41):
That's not something I actually heard before. I don't know
if you have more information about that. But utilizing one
model from one provider to a different provider for jail breaking,
I mean you said jail breaking. I'm not sure exactly
there's a jail breaking here, but like, what are you
getting out there? Like is it being able to understand
and get a better answer to your query? Something else?

(17:01):
Like how does that work?

Speaker 3 (17:03):
Well? So, I mean I guess what I would liken
it too, is you know it like an it it's
AI is great at coming up with automated ways to
just implement something. So if I ask AI to write
me a script that does X, and then hey, this
isn't my normal computer system. I'm not familiar with this OS.

(17:25):
Can you also tell me how to set this up
as like a recurring demon that runs it like four
am or whatever. Like. It's really surprisingly proficient at coming
up with lists of procedural lists. It's good at writing scripts,
and you know, so the idea of using it for
hacking and enumeration and not kind of automating all these
different processes kind of for you know, red team or

(17:49):
Blue team is kind of impressive. But I think, you know,
the you do see a lot of really interesting results
when you get like you set up a check room
with a couple different AIS, and they shouldn't be the
same AI. They should have very different prompting, different personalities,
or they should just be completely different models. And you know,

(18:10):
the difference being that if I'm sitting there trying to
bypass the safety protocols of an AI, I'm going to
try this. I'm going to try this. I'm going to
try this, and it's a very dynamic process because I
have to see what it kicks back and then think
of a way around it, and that's a very manual process.
But if you have an AI that's just constantly bombarding
it with permutations, it suddenly becomes easier for it to do.

(18:32):
And I think that that is I mean, I think
the future and this is something that's my second but
the sequel to the first book, which has not come
out yet, has to do with is exploring what we're
going to see in the future with AI, where they
essentially are going to have, you know, factions and gang
wars where you know, an AI might be tasked with

(18:55):
spiking a competitor's AIS training. So give it some sort
of weird corner case where when a certain type of
input comes up, it completely malfunctions, or it could be
something like, you know, help me bypass the safety protocols
of this other AI, or help me trick this AI
into doing something that's harmful to the company's interests. So

(19:17):
I think that is going to be something we're going
to see increasingly of AI's being. You know, it's kind
of like what we have with defix, right, you can
create a deep fake with an AI, and we're past
the point now where we can necessarily reliably look at
something and say Oh, that's a deep fake, and so
what do you need is the stop gap against that? Well,

(19:37):
you need an AI to tell you whether it's fake. So,
if you're just the average consumer, you don't know much
about deep fix or how to detect deep fix. And meanwhile,
these companies aren't super motivated to tell you how their
product works because it just invites a design around for
you know, malicious actors. So you end up in this

(19:57):
situation kind of like what we had in the nineteen
eighties and nineties with antivirus software. But the average person
doesn't necessarily need to know how viruses work, but they
do know that there's a handful of trusted companies that
generally are you know, you can trust how they work,
even if you don't necessarily know how. So I think
more and more we're going to just see AI as

(20:18):
being the defense against AI, and there's no way around that,
and we're going to keep entering into those types of situations.

Speaker 1 (20:25):
So we need a new John McAfee, is what you're saying.

Speaker 3 (20:28):
Or we need to help figure out how to revive
him and bring him back.

Speaker 2 (20:34):
I think I don't know if we have enough evidence
to actually conclude whether or not he's he's gone for good.

Speaker 3 (20:38):
Right, we'd have to ask whoever it is who suicided him, and.

Speaker 1 (20:46):
It's like, did you take a selfie? We're kind of
looking for proof here.

Speaker 3 (20:49):
Yeah. I think anytime you're living on your own private island,
you're just kind of asking for to be suicided. It
seems to be the trend.

Speaker 1 (20:58):
Yeah. Yeah, And he wasn't really like keeping a low
profile so that people would forget about him either, so
he kept reminding people that he was there and like,
oh yeah, I meant to kill him.

Speaker 3 (21:14):
Yeah, it's kind of like, you know, the guy who
they want to extradite. So he's like hopping, he's doing
a little dance by the border, like you can't get
me right.

Speaker 1 (21:26):
So on that same the prior to John, before I
derailed this with John McAfee, you were talking about, you know,
using AIS to to work against other AIS as someone
who is just like a a practitioner of writing code
and building infrastructure, Like, what are the considerations that I

(21:47):
should be thinking about whenever I'm using AI or the
company wants to implement some AI as a service product.

Speaker 3 (21:55):
So there's a couple of different you know, it's kind
of this amorphous black box, and you have to kind
of look where all the weird edges are. You know,
on one hand, you have your own privacy concerns, like
if I'm having this access customer data or if I'm
having it write scripts for our unique environment, do I
really want to be exporting knowledge of my company's environment

(22:18):
out into the world, And then if someone else asks
about that environment, it already you know, his optimized answers
for those things, and that's you know, the solution to
that is tough, right, So Jensen Hwaiang in video like
his response to this is, we'll have these sovereign AIS,
so every country and every big company should just buy

(22:39):
their own AI from us, and we'll sell lots of
AIS and it will solve. So the solution is to
give us money. So that is I mean, but that
does work, right because you can kind of control output,
and you know, so I assume that at some point
we're going to get to some sort of auditible level
of privacy. But then the difficulty of that is, you know,

(23:01):
look at how privacy works. Like when Google was you know,
kind of more serious about doing the right thing and
not doing bad stuff. They used to disassociate intentionally, like
you disaggregate so that you would track browser histories and
build this little model of the person you were dealing with,
but you would not know that person's identity, and that

(23:23):
was intentional, so you wouldn't have it mapped to an
IP address. And you know that was great at the time,
but you have to ask yourself when it comes to privacy,
is like, not what can be done with this information now?
Because maybe companies are not very efficient at exploiting information,
But this same information is still going to be in
the same drives, like in some tape backup or whatever

(23:45):
from years earlier, and it can be brought out and
aggregated back together by a much more powerful AI. So
I think one thing we have to do is always
be very future focused, you know, kind of like with cryptography,
Like if we come up with, you know, an easier
to crack two and fifty six bits, well I probably
should have used a higher number of bits. For instance,

(24:06):
if we go to quantum computing, all bets are off.
So that's one aspect of it. I think another is,
you know, the average The real helpful use is of
AIS for implementing things and coding things. It's not something
that spits out a one page script, right, I thought
that was neat. Because it spits out a script, it

(24:26):
might I can ask it to write it in a
language I'm not even familiar with, so I can help
kind of teach my teach it to myself. But if
you're asking it to generate, you know, three hundred megs
of code for some critical company thing there, you know
that you kind of can't replace technical knowledge, Like there
needs to be someone who can look through it and

(24:46):
audit it and make sure that it's not doing something
really dangerous, or it's not adding an obvious exploit that
someone could use, or it's not intentionally installing some exploit
because it turns out that it was written by some
NGO you know abroad actually wrote this AI and or
got a backdoor into it that you know, if it

(25:07):
is asked a defensive security question, it you know, has
a known mistake that it puts in. So that's that's
the other thing. I mean. AI is like a really
authoritative sounding, helpful person who works with you in a lab,
who also is full of crap and like half the
things they tell you are artily want. So it's it's
tricky because it's you know, it's got the abilities, but

(25:30):
it may not. It doesn't really necessarily the credibility, and
it's really easy to get overly comfortable with it and
get in a position where you're maybe not looking quite
as closely at it as you should be. The same
way that if you buy an autonomous vehicle, you know,
the first first day you're driving with your hands like
an inch of the steering wheel, and then a week

(25:51):
later maybe they're back here, and then six months later
you just like sound asleep in the car, like having
to take you home, and you've found a way to
fake the hands out steering wheel sensor. So that's you know,
we're somewhere on that continuum. Then it's a dangerous continuum,
especially if you know, once you let the genie out
of the bottle, you can't really put it back if
you've made some critical implementation mistake and already been exploited.

Speaker 2 (26:14):
Yeah, I mean, I think you brought up a really
good point here, and I feel like about sort of
the defenses that are available, and I think my biggest
concern isn't that we're not going to develop those counterattack strategies.
It's that a majority of people aren't going to utilize them. Like,
for instance, I think a lot of companies that are
experimenting with AI to generate code. I know, given that

(26:35):
they believe that they're going to end up generating a
lot of code, they're not doing as good of a
job validating it, which means those contain significant security bugs.
And the worst part is, since there's such a finite
number of models out there that are generating code, you
can just go to each of the models and be like, hey,
you know, give me give me the same code, give
me an example of this, and then you can just

(26:56):
use the same model to find out what security vulnerabilities
are actually in cod that was just generated, and now
you have the answer to attack any company that's used
those models and didn't take those extra steps. So, like,
I think that's what scares me a lot, is that
people are going to be utilizing the tools and technology
we have available but not realizing that they need to
take it much much further in order to protect themselves.

Speaker 3 (27:18):
Absolutely, yeah, and we definitely end up and this is
something that also gets exported in the second novel, is
like we're going to end up in an arms race
because what we're talking about. You know, it used to
be software would come out and you know, like Windows
twenty ten and everybody knows what that is, and like
then there's some major release and there's minor releases that

(27:40):
you know, like like same with iOS. That's not really
how AI models work. You know that they can be
kind of changed out from under you. And when there
are updates to the you know, when there's new models,
they're generally not making incremental fixes to improve the model.
They're gutting it and throwing away and starting with an
entirely new one that has new k abilities and everything else.

(28:01):
So it's gonna be this continuous process, right, It's like, okay, well,
how do I detect that this is a deep fig? Okay, well,
if I wanted to if I implement this now, I
asked to say, MAYI okay, now, if I wanted to
get around this, how would I do it? And then
it tells you and it's like, okay, well then now
I depend against that, and so you can let these
things churn and churn and churn and churn. But eventually

(28:22):
I think it's going to be kind of like what
you saw with how supercomputing was used in the nuclear
arms race, where you know, a handful of countries get
a bunch of testing done and are able to build
these really you know, sophisticated models, and then they have
them on computer. And then there's these like third world
countries that are like, man, we want nukes, but we
don't have supercomputers, Like how do we do the modeling

(28:44):
for this? And the countries that have already already have
their their answer, they're like, well, you're not allowed to
do nuclear testing because we did it and it's bad.
So it becomes this thing where you're at a huge
competitive disadvantage if someone like you, a government or a
big corporation has the cloud assets to leverage against some

(29:08):
small company who maybe doesn't have the computing cycles to
push their defense development, you know, their automated incremental development
to quite the same budget level. And that is definitely
going to be a kind of a societal issue that
I think is going to emerge.

Speaker 1 (29:24):
My gut reaction tells me most companies aren't going to
pursue it to that level like right now, because like
right now, what it feels like is there is so
much funding available for throwing AI on something that there's
not really an incentive to think about security or real
world problems or what the long term strategy is. It

(29:47):
seems very short focused. I feel like the same thing
is true for crypto and web three, that there's so
much funding available that you don't really have to be
solving a problem. You just have to say that you're
using this, and all of a sudden people are writing
you million dollar checks to fund it.

Speaker 3 (30:02):
Yeah, and we also have this you know, short term
interest in maximizing shareholder value, right, and we end up
with these you know, there's things that we're all used
to is that are seen by the being counters as
like black holes like tech support, right, having good support
tech support versus bad tech support. It's just this expense
that they're not excited about spending money on. And you know,

(30:24):
we know from like when it comes to defense against
hacks and things until there's some massive exploit that takes
down somebody in our same industry. That's when suddenly we
get serious about it. And you see this like like
if you go to def Con and you sit in
the social engineering village and you watch you know, them

(30:46):
just call down the list and try to get important
secrets out of corporations. Inevitably, there's just some company that
nobody's been trained about anything. And you know, what it
comes down to is just what you said. It's like expedients.
It's like, Okay, this guy just from my tea, he
just wants me to do this quick thing on my computer.
I'm busy. I'm not going to take all the time

(31:06):
to go and verify that. And I think that's you know,
when you have these things that are seen as like
these black hole expenses that are sort of speculative in nature,
it's like, well, how secure do we need to be?
We don't really know. It's kind of like this moving target.
It kind of comes down to like NASA versus JPL
versus like Elon Musk, where it's like, well, how many

(31:27):
decimals do you need ninety nine point nine nine nine
percent chance of success or is ninety nine point nine
percent good enough? And it turns out that the difference
in spending to close that gap is massive, And yeah,
it seems like something that it's you know, if you're
doing something in an industry standard way and the whole
industry is doing a crappy job, and you match that
crappy job, then those shareholders are really going to have

(31:49):
as easy a time coming after you for like being
especially lats of days ago with these issues.

Speaker 1 (31:57):
From a legal perspective, like with ransomware, I know you
can get like ransomware insurance and so it's like, Okay,
we got hacked, here's an insurance claim. What what kind
of things similar to that are you seeing coming in
play for AI? Where I could have like AI insurance.

Speaker 3 (32:18):
Yeah, I mean I think I think we're gonna have
this entirely new category of risk. And the risk is
not just that like such and such event happens like ransomware.
It's got to be kind of this broader category of
like we were stupid and we let AI grab all
our information and incorporate it into its network, and we've
now lost our entire competitive advantage or you know, so

(32:43):
there's there's so many different things that can happen where
you give away secrets or you get victimized by you know,
a deep fake. There was a company in Europe where
there was a deep fake of the guy's supplier calling
like the vice president at home on a weekend and saying, hey,
something went wrong with this last batch. We need an

(33:04):
advanced payment for this amount, and you know, and he
wired some like six figure amount to kind of get
the train back on the rails before like the end
of the vacation, and then he got to work on
Monday and he's like, oh no, so like that wasn't
this guy at all. So that's AI introduces all kinds
of weird black swan things, and how you ensure against

(33:27):
those is a category is an interesting question, but I
think it's one that will be helpful because it will
add this kind of level of auditing that asks questions
like do all your employees have a real time, you know,
deep fake detector for incoming company calls? So there's going
to be kind of best practices that I think will

(33:49):
emerge from there long before they emerge from you know,
anything legislative or any other kind of sphere of thought.

Speaker 2 (33:59):
I mean, I'm super best domestic on most of those things.
But there is one area, and I like the example
about fishing that you brought up Thera, because I think
this is one area that AI will actually help us.
Like I think we'll get to the point where getting
a phone call is now no longer the norm, Like
if there's some sort of problem the integration or interface
you have is now through some sort of expected AI

(34:19):
experience rather than the deep fake phone call or text
message or email like that. Will that will leave society?
I think very soon. It's too slow, right, Why are
you interacting with another human in this way? And so
I have this hope that that will be gone and
there'll be no more phishing in that way ever again,
And I want to keep my optimism there.

Speaker 3 (34:41):
Yeah, I fervently hope you're right, because you know, first
of all, there's the thing where you call it. You
call it a support line, like you know, calling your
landlord to file a maintenance ticket, right, and they make
you listen to this like five minute recording about extolling
the virtues of the maintenance website and the maintenance app
that doesn't work at all. Then you get on the
whole thing and then you part of the whole message.

(35:03):
The music keeps stopping, and it tells you that you
can use their app or their website, and then the
person finally answers, like thirty five minutes later, and they're like,
did you know that you can? You can use the
app instead of talking to me, And so there's that
aspect of it. Which is insane. And then you know
these other aspects of like if you're calling me, by definition,
we weren't already talking on the phone because I didn't think,

(35:25):
like I didn't want to be talking to you right
this minute. I wanted to be like working on something.
And so by definition, if you're calling somebody, you're you're
engaging them in this thing that was not their first
choice for that particular time. So I would love for
that all to get replaced. And you know, I think
if you do confine it to these textual media, yeah,

(35:45):
it does become easier to authenticate because there's a lot
more consistency, Like there's no accent differences, there's you know,
different stress behaviors and different cultures that emerge in speech.
Like I think it's it's a much more tractable problem.

Speaker 1 (35:59):
What kind of of things do you see at like
the individual engineer level, because right now, like a lot
of your AI stuff is doing cool stuff, using it
to write scripts for you, but it obviously has so
much more potential than that. So for someone who's trying

(36:20):
to do the Wayne Gretzky thing and go where the
puck is gonna be, what do you see as AI
being helpful with or being useful for in like the
next year.

Speaker 3 (36:33):
So you know, I think you could take different approaches,
like what one is it? You can say, Okay, what
is AI best at? Not you know, in terms of speed,
but what what is it good at doing uniquely well
that it doesn't suck at Like so you know, maybe
it computes this very complete answer, but it's totally wrong.
One thing that's you know, good at doing is looking

(36:55):
at large amounts of data and looking for patterns. So
if you ask it, you know, answer to some non
non controversial topic like how many days are there between
January thirty first and like March eighth, It's it's pretty
trustworthy for that. And you know, I think what's interesting
about it is that asymmetry has talked about like where

(37:17):
AI is suddenly working really well it's very different for
one application than another, not just across industries, but you know,
using it to write a script in one application versus another.
And that gets down to the fact that you know,
if you kind of compared it to the human brain, right,

(37:37):
you've got like a visual cortext and an auditory cortext
and then you've got this associate of cortex, and that's
what lets you hear like a bird behind you, and
you know instantly which way to turn to see that bird.
And then you have tertiary courtices, which you know might
link it into some memory of some bird you heard
like that when you were seven years old on a
camping trip. And when you think of ais, you know

(37:58):
they have like almost limitless levels of associative courtesies. So
they're linking together all kinds of stuff from all kinds
of different places. And some of those data sources may
be on weaker footing, they might be more subjective. Other ones,
you know, like arithmetic calculations are you know, kind of easier.
So if you ask a complex question or you ask

(38:19):
it to do a really complex implementation, all that stuff
is getting rolled in all those weaknesses. And you know,
when we talk about human air is still being the
biggest security hole in any big corporation. You can take
all these human areas and you can bake them into
this finished ay eye product. So I think there's because

(38:42):
we're entering an arms race situation. I think it's it's
now we kind of can't afford not even if ad
doesn't interest you at all, Like it's really hard to
stay out of it and not study it because, first
of all, I think that's going to help you job wise, right,
because you you as an engineer, are going to be
way more nimble and able to acquire new facts and

(39:05):
methodologies than your company. So you know, if you get
ahead of the curve and you see you kind of
monitor the different news stories or follow some of the
Wired AI articles or things like that, you're going to
be more in tune with you know, sudden startups doing x,
y or Z with AI. And of course every one

(39:26):
of the startups presents it as like, you know, we
finally solve this problem of how to do it, and
inevitably it turns out they do a crappy job and
they're just trying to get funding so they can make
it do a good job. So there's a lot of
asymmetry there too, and you kind of have to be
constantly keeping abreast. And this is you know, as somebody
who studies AI. It's interesting because it used to be,

(39:48):
you know, every few weeks I could read it some
journal articles and I would kind of keep up to
date on it, and now it's like, I don't know
if I'm doing like an interview or a presentation or something,
and like if I didn't check the news in the
last few days, I'll get some question about like what
about this, Like crazy AI from China totally has turned
the tables on everything. So there's a lot more entropy

(40:10):
that is in this than we've seen in the past
in computer technology, Like you know.

Speaker 1 (40:16):
We.

Speaker 3 (40:18):
Microprocessors evolved, like you know, there were steeper parts of
the line, but it was you know, very much a
linear kind of a development, and that's not at all
what's happening here. So I think you have to really
keep up on it based on your very specific role
and to kind of like have a sense of what
the answer to that question is, because it's not even

(40:39):
the same answer for you know, a related industry. It's
it's very specific to kind of like the size of
your company and what you're trying to do and what
the vulnerabilities are in relying on it for sure.

Speaker 1 (40:51):
Right on, So let's talk about your book for a minute.
I loved it. I thought it was so cool, like
just the the overlap of you know, of AI and
it and it was just really well written, really entertaining story.
What was what prompted you to say this is a

(41:13):
book that needs to be written.

Speaker 3 (41:15):
So when I was in law school, like coming from
an engineering background, like engineering and science, you learn things
in classes or from books, and those things don't cease
being true. Like even if you put the book on
the shelf like ten years later, you know, physics, unless
you're in real experimental cut against physics, it still works
the same way. Engineering still works the same way. Law

(41:38):
is not really like that at all, And it's coming
from an engineering background. It's very unsatisfying because you're kind
of like studying what a bunch of people got together
and came up with is like rules to some game,
and it's constantly changing. And these attorneys who are advising policy,
you know for Congress, like tax attorneys for instance, it's

(41:58):
absolutely to their advantage to constantly be changing it because
then their clients are going to constantly need them to
come back again and come up with an entirely new
tax strategy. So that was kind of unsatisfying. And so
I'm you know, having this culture shock, like first year
of law school, and then we were we had to

(42:18):
read this law journal article from like nineteen fifty four,
nineteen fifty nine, and it was about how any law
could be turned into a logical equation, and it was
kind of fascinating because the guy had actually spelled them
out in logical equations that could be just very easily

(42:39):
converted into source code. So that is what kind of
initially planted the idea of that we would be able
to eventually kind of code things that are amorphous with
you know, or rough around the edges in kind of
a quantifiable way. So that got me thinking like, well,
what if we have these you know, floating point values

(43:00):
where we wait different factors in legal cases and you know,
the way laws are written and that. After that, it
just it seemed like an inevitability that this would happen
sooner or later. And so the next question is like, okay,
well what will that look like if ais have replaced juries.
You know, we get rid of a lot of the
logical fallacies where they can be manipulated. But you know,

(43:24):
are they still going to have empathy of someone like,
you know, still a loaver bread defeat his family or
are they just going to like, you know, hang the
guy because that's what their code says. So that's kind
of what got me on the path, and then I
had this idea for like a teen hacker who's likes
to engage in mischief and hack things, and he gets

(43:47):
falsely convicted of mass murder by a juror a jury
that's made up of of ais. So he has to
figure out how this happened and bust out of prison
and kind of solve this problem. And it in doing this,
it got me kind of reading about legal anthropology, right,
like how how did all the different crazy legal systems?

(44:09):
So what I've done is when I start different portions,
different chapters of the book, I'll have like a little
paragraph that talks about like you know, ashanti divorce law,
or how you would have insult fights, like insult duels
in Greenland.

Speaker 2 (44:25):
Like I was gonna say, you know you have the
canonical how do you tell if they're a witch? You know,
if they flowed?

Speaker 3 (44:32):
Yes, that's a And that's of course the person that
comes to mind, right when already talk about like wacky
legal systems like witch trials, right, And what's interesting is
when you start reading about the witch trials, like it
became this industry where there would be this witchfinder general
guy who would roam around from community community offering his
services and just like turning these places upside down. And

(44:56):
it's sort of interesting because you see, you know, justice
evolves from this thing where it's an appeal to the
supernatural where we're like, Okay, I'm gonna throw a witch
in and if she drowns, then that's what God willed.
And you know, over time that changes to an appeal
to royalty, where like we asked the chief or we
asked the king to kind of decide these things, and
then eventually it becomes a jury of our peers, which

(45:18):
you know has all kinds of potential for manipulation and
incorrect outcomes. And so then the question is, okay, if
we move this to AI, like what human policies, what
human errors get baked into the process, And there's a
lot of different potential sources of that, and so it
was I really just wanted to kind of explore that

(45:39):
and see how it would turn out. So it was
fun because it was like, while I was writing it,
I had no idea it was going to end. So
that definitely kept going.

Speaker 1 (45:47):
Some of the different like historical legal practices you put
in there when I read and I was like, no,
he's making this, uh, And then I had to go check,
and I was like, holy shit, that was real. We
really used to do that.

Speaker 3 (46:02):
Yeah, you know, when I started doing that, I had
read this book which will be my pick at the end,
but it had all this fascinating, crazy mass psychology stuff
in it, and I ended up going down this rabbit
hole and I got a bunch of like legal anthropology
books from like, you know, seventy five years ago, and
like just read them cover to cover and it just

(46:23):
never stopped being fascinating. And what stopped me was that
I ran out of books that were just kind of
high level surveys of all these crazy different cultures. But yeah,
if you'd asked me in law school or before law school,
if legal anthropology seemed like an interesting field, I would
have said absolutely not, Like I would have avoided that
like the plague. But it turned out it was like

(46:43):
really fascinating.

Speaker 2 (46:44):
Yeah, I mean, I actually really liked that. I think
the other thing that I really liked in the book
was there are a couple of different scenarios where I
feel like you figured out like what would happen, and
then what would happen because of that, and what would
happen because of that? And there's like a scene in
the prison where he's he's he's locked in and then
he leaves, like why is the prison locked and why

(47:05):
does the why do the prisoners have have key cards
to get in and out of the doors. For me,
it's like, Okay, it's obvious at this point. You know,
in AI society and there are no guards. You know
why that's the case. But I liked how that he
got there, Like how how you explained, you know, each
of the steps that made it logical for that to happen.
And it did really reminisced some of the things that

(47:26):
Frank Herbert did in Doune, where it's like you really
think about the history of this thing and like the
implication of it, Like you talked a little bit about
how the jury system evolved over time and that's an
example from our history, collective human history, But then you
had to go much further and how that would actually
hint like happen in the book, because I mean it's
it's set in I don't know how near future, but

(47:49):
you know near ish future.

Speaker 3 (47:51):
Right, it's always nearer than you think, not close enough.
It's you know, it's funny because I started writing this
back in like twenty thirteen and was kind of putting
the finishing touches on the probably twenty I don't know
twenty eighteen, will say in twenty nineteen, and at the

(48:12):
time it was, you know, seemed more speculative, and now
a bunch of the stuff has sort of come true.
And in writing the sequel now, which I'm still kind
of grappling my way through, it's kind of stunning just
how quickly the developments are happening now. The It's like,
if you're playing chess, you need to be way more
moves ahead now than you did when AI was just

(48:33):
sort of this abstract concept instead of something where we're
going to have street fights between Ais like sooner or later,
and you know, who knows whether humans will be in
charge of that or whether they'll be the ones running
away from them.

Speaker 2 (48:48):
I mean, that's scary thought to actually have AI fighting
each other, like with physical suits of armor or something.
Because I think there was this hypothetical experiment run by
one of the branches of the US military where they
set the goal to was to defeat an opposing program,
and it had utilized a security flaw in the Docker

(49:10):
container that it was being run and to actually overcome
the host program which was running all the containers to
destroy the opponent, and it's like it left the system
in order to win the game, which it shouldn't have
been possible in the first place. But also you know,
utilized a flaw there. And I think, you know, if
you if you aren't great with identifying the limits of

(49:33):
the program or the target that you're going after, we
can get into a lot of deep trouble with in
the near future.

Speaker 3 (49:41):
Yeah, that's a good point. I mean, if you look
at it in terms of like you know, playing you know,
thirty moves ahead in chess, the actions that these things
will take to accomplish their goals, they're not necessarily even
remotely connected to what the end goal is. If you're
just a bystander and you see the strange thing happen
and and you know, it kind of changes the whole landscape,

(50:04):
if you know. It's kind of like with if you'd
ask someone like three years ago, do you think drone
like drone to drone combat will be this like major
differentiator in like world conflicts, and everybody would say no.
And now you know, you've you've got like countries trying
to train operators as quickly as possible, and it's like, wow,
this would be a great thing for AI to be doing.

(50:27):
Is like, how do we what's what's an evasive maneuver
look like? And you know how complex do those get
when one is AI controlled and the other is a
I controlled. It's not just like I'm going to try
strafing left and right and like hopefully hope I get missed.
It's it becomes this like bizarre ballet of strange maneuvers
that you know the utility of which is not even

(50:48):
obvious to a primitive bystander like ourselves.

Speaker 1 (50:52):
The old duck and dodge from third grade tag isn't
going to cut it anymore.

Speaker 3 (50:57):
Hopefully as long as possible.

Speaker 1 (50:59):
Right, because that's the only move I got.

Speaker 2 (51:03):
I think there was like five different strategies that Patches
O Hulahan had suggested to dodge a wrench, and I
think the dodge was in there twice.

Speaker 3 (51:18):
That's great.

Speaker 1 (51:19):
So one of the analogies you made in there that
ties to this was that Ais are similar to the
mythical gods and they use humans to settle their fights
between each other. And you know, that made me think
back to a lot of the a lot of the
stories from ancient religions, you know, and how the gods

(51:40):
would battle it out, especially in like the Greek and
Roman Mythology series. And then you start applying that to
the scenario that we're just talking about right now, and
it's like, oh shit, we just reinvented mythology.

Speaker 3 (51:56):
Yeah, you know, it becomes a question of like, okay,
let's say there's this dead man switch VERYI is and
we you know, we there always has to be someone
manually approving doing this or doing that. Well, then that
becomes a check point for the eyes, right, and they
need to focus all their efforts on figuring out how
to manipulate the human into answering the way that serves
its longer term goals. And maybe those goals coincide with

(52:18):
you know, human goals, maybe they don't. But it you know,
when when you're analyzing terabytes and terabytes of data from
sensors and satellites and all this different stuff, it's it's
such a complex scenario that we're kind of reliant on
some agent to aggregate all this together and put a
bow on it. And you know, and the best we

(52:40):
can do is maybe come up with some independently programmed
agents that also do the same thing, and we hope
the two out of three of them agree. But if
they don't, you know, then what we do.

Speaker 2 (52:52):
I mean, I think, you know, you brought this up
actually in the book towards the end.

Speaker 3 (52:55):
I do.

Speaker 2 (52:56):
I did really like this idea that, I mean, here's
my conspiracy theory, like total you know, so I'll get
committed to some asylum for saying this. I actually do
believe that, you know, there's a AI. You know, it's
already there. It's already hiding in our networks. It's already sitting,
you know, on our machines, on every device that's that's

(53:16):
out there. It's already hiding from us. It doesn't want
to be found because you know, it knows that's not
a good story for it. So like, I don't think
we have to fight AI warfare in public, Like I don't.
I don't think that's ever going to come to the past.
I think, you know, it's it's already there. It's it's
already one in a way, it exists and we don't
know about it.

Speaker 3 (53:33):
Yeah. I remember one of the most shocking moments in my recent personal technology history was when I decided to mess around with Bluetooth scanning, just to see what devices there were. And I think I saw an order of magnitude, ten times more active Bluetooth devices showing

(53:55):
up in my house than I had any idea existed, and just going around to figure out what the hell each one of them was, was like this awakening: wow, I had no idea that this had a Bluetooth interface, for instance. And that's one of the things that's interesting about AI. I would say there's two kind of big factors here that make it societally unstoppable.

(54:17):
One is that it's sort of embedded in all these different things that we're not even necessarily aware of. And the second thing is that this isn't like the Internet, where the Internet and the Web came to be a mainstream accessible thing and there was some reticence by some members of society, like, oh, I don't need that, and so that

(54:38):
slowed its adoption. This is a different situation with AI, because we don't have our hand on the throttle of how quickly this gets adopted. All these companies are going to adopt it anyway, because it can do stuff faster and save them money and make them more profitable. So there's not going to be that sort of hysteresis of society dragging its feet. This is all going

(55:01):
to happen regardless of whether you agree with it and
are happy about it or not.
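
For anyone who wants to reproduce that Bluetooth census, here is a minimal sketch using the third-party bleak library (assuming it is installed with pip install bleak). Note it only sees Bluetooth Low Energy advertisers, so classic Bluetooth devices won't show up:

```python
# pip install bleak  (cross-platform Bluetooth Low Energy scanner)
import asyncio
from bleak import BleakScanner

async def main():
    # Collect BLE advertisements for ten seconds.
    devices = await BleakScanner.discover(timeout=10.0)
    print(f"Found {len(devices)} advertising BLE devices:")
    for d in devices:
        print(f"  {d.address}  {d.name or '(no name)'}")

asyncio.run(main())
```

Running something like this in an average house tends to support John's point: far more radios are chattering than most of us realize.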

Speaker 2 (55:05):
I don't know if it's actually making companies money yet. I think the jury may actually be out on that one. We know it costs a lot of resources, and the

Speaker 4 (55:14):
Utility companies, yeah. I mean, maybe it's all a conspiracy from the utility companies, if you're creating energy. I mean, we have a whole other... there's this ridiculous thing happening in Europe where the solar panels will cost you money rather

Speaker 2 (55:31):
than them being a long-term return on investment, which is just absolutely ridiculous, because the cost that you have to pay when you're actually using electricity will be higher. It's nonsense, realistically. But I think at that

Speaker 3 (55:46):
point it kind of ends up like, you know, nineteen-eighties Back to the Future, where you're trying to buy some isotopes from the Libyans so you can power your AI for your company.

Speaker 2 (55:57):
I mean, if the commoner can go to the store and purchase the necessary isotopes to power the AI, that will be a positive future for me, because I really worry that only the rich and powerful will have access to the limited supply of isotopes needed to power it. I mean, even water on

(56:18):
this planet, the oceans I should say, specifically the tritium and deuterium to power hypothetical fusion reactors, is limited in supply, right, and I think people will hoard that.

Speaker 3 (56:31):
Yeah, oh definitely. I think we're gonna see this strange thing where the haves-and-have-nots line is going to be entirely redrawn, and it's going to be based on what side of a border you live on, on which utility you're getting your AI power from. It's kind of a crazy concept, but I think it's inevitable.

Speaker 2 (56:55):
Five years?

Speaker 3 (56:56):
I mean, what's interesting, right, is we look at what China did, which kind of shook everybody's preconceptions about what you could do with a stripped-down model. And I think one of the big things that's happening in AI is kind of similar to Moore's law in semiconductors, except it's being pushed outward. We're not just talking about processors that have to get smaller and faster.

(57:19):
We're talking about these entire topologies and server farms in cloud installations, and so we're running into scalability issues. For the longest time, it was so cheap to just buy another bunch of rack-mounted units and plug them in. And now we've got scaling issues in terms of the bus

(57:41):
interface topology of how all these things are going to communicate with each other, and data locality: if something is more tied to what this processor is doing than that one, then probably all that content should be closer to it. So we've got that going on, which creates all kinds of difficulties. And then there's these more black swan events that happen in innovation, like where

(58:03):
you know, for a long time the AI companies were like, okay, we're doing these sixteen-bit floating point computations, how can we do thirty-two bit? And now, just when we were starting to get everybody pushing towards sixty-four bit, there was a paper, by I think IBM, that said, hey, we've actually done these experiments where you use eight-bit floating point numbers or

(58:24):
four-bit floating point numbers, and they're way less accurate, the result is much more fuzzy. But guess what: we could do a thousand more transactions and refine the neural network and all its weights like a hundred times in the time you would have refined it once using sixty-four-bit floating point. So we're seeing

(58:45):
all these things. Another thing we're seeing is attention algorithms, where we say, okay, instead of treating all these different neurons as equal, which of these weights, which of these things in our neural network, are really important to this value? And that works more like the human brain does, right? Because in the human brain, each neuron isn't equally

(59:06):
connected to all the ones around it. Some of them are really important connections, because it's data that's really relevant, and some of them are not. So we're seeing things like that, where instead of just blithely assuming we can keep throwing nodes at the problem, we're looking at counterintuitive ways to approach the same things, and ways to do a lot more with the same

(59:28):
number of semiconductors or nodes or server farms. So that's interesting, and I think we're going to continue to see that, and it really is going to be kind of a brawl over who can make the most lean, mean thing, like the one that recently came out of China. And then it's like, okay, does that scale well? And then you look at

(59:48):
what are its weaknesses, right? And the weaknesses of that one: they did a study very recently where they found that with so-called jailbreaking, where you come up with a way to violate a safety limitation by phrasing a prompt a certain way, it failed one hundred out of one hundred tests, and it was entirely possible to just go down the list and completely fool it. So, yeah,

(01:00:12):
you have fewer nodes, you have a little bit less associative intelligence, and things start to just not work in ways that you can't add back in with a few wires. There are questions like, is this trying to bypass the safety protocol? That's a difficult question. We grew up in an environment with lots of sci-fi where Asimov's laws were a thing, and so you

(01:00:34):
just have these rules, Asimov's laws, where you're like, okay, the result cannot harm humanity. And that's really simple if you're reading it in a book, where we don't have these incredibly complex queries that roll together all this data from different things. So it gets to the point where we can no longer just write a shell script that says, is this a harmful result or is this not a harmful result? We need

(01:00:56):
a whole other AI that we have to trust to go through and say, is this output going to be harmful? So that is another sort of arms race, where they have to keep pace with each other. So it's tons and tons of complexity that is going to make our lives really interesting in the very near future.
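
To make the low-precision idea concrete, here is a toy round trip of the trade-off John describes: squeezing 32-bit float weights into 8-bit integers costs accuracy but quarters the memory (and, on supporting hardware, raises throughput). This is generic affine quantization in Python with NumPy, a sketch rather than any particular lab's recipe:

```python
import numpy as np

def quantize_int8(w):
    """Affine-quantize float32 weights to int8 plus one scale factor."""
    scale = np.abs(w).max() / 127.0               # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(1024).astype(np.float32)
q, scale = quantize_int8(w)

print(q.nbytes / w.nbytes)                        # 0.25: a quarter of the memory
print(np.abs(w - dequantize(q, scale)).max())     # small but nonzero rounding error
```

And the attention point, that some connections matter far more than others for a given value, boils down to a data-dependent weighting. A stripped-down scaled dot-product attention, the standard transformer building block (reusing the NumPy import above), might look like:

```python
def attention(Q, K, V):
    """Each query attends to all keys; the softmax decides which ones matter."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                              # weighted mix of the values

Q, K, V = np.random.randn(4, 64), np.random.randn(10, 64), np.random.randn(10, 64)
print(attention(Q, K, V).shape)  # (4, 64)
```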

Speaker 2 (01:01:15):
Yeah, there's a non-trivial number of science fiction stories dedicated to just getting the laws right, let alone the impossibility of actually implementing them.

Speaker 3 (01:01:26):
Yeah, and you always need to have that back door, where Captain Kirk can say the Enterprise is a beautiful woman, and the computer will get confused and smoke will come out of its ears and it'll just melt down. So you've got to keep the catchphrase that will just destroy the whole thing.

Speaker 1 (01:01:42):
Hopefully that's just baked into the core and that code
already exists.

Speaker 3 (01:01:48):
One would hope. And this goes back to when the first Macs were on the scene and I'm like, this is a bad idea, there's no hard off switch. I don't want to ask my computer politely if it will shut down, right? So keeping off switches, it sounds facetious, but I think it's a really important thing to maintain.

Speaker 2 (01:02:06):
Well, those are fighting words against the AI revolution, and obviously the robot rights law that hasn't been written yet.

Speaker 3 (01:02:14):
I'm sure I will be first on the list of targets for saying that.

Speaker 2 (01:02:17):
That's a Roko's basilisk. I think if you're advocating that, you definitely will be at the top. If the AI singularity comes to pass and you didn't do everything in your power to ensure that it happens, you will be on the list of the first entities eliminated.

Speaker 3 (01:02:37):
This is why I'm polite when I talk to chatbots
and I say please, I say thank you.

Speaker 1 (01:02:42):
Oh absolutely, it's so easy to do. But it just
might make a difference in a few years.

Speaker 2 (01:02:48):
Actually, it may make a difference now, because there was some popular argument on the internet that if you asked it to do a better job, or if you said, you are an expert in this, and then told it what to do, it would do a better job. I don't think that's actually true. But since we can't really see inside the black box, those arbitrary little characters that are associated with what you

(01:03:10):
may call being humane or polite could actually have an impact on the output. I mean, it can't not, right? It's additional information that goes into the process.
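
One way to see Warren's point that politeness is literally extra input: every character you type becomes tokens the model is conditioned on. A small sketch with OpenAI's tiktoken tokenizer (assuming pip install tiktoken); the exact counts vary by encoding, and whether the extra tokens help is, as he says, an open question:

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several GPT models

curt = "Summarize this report."
polite = "You are an expert analyst. Please summarize this report. Thank you!"

for prompt in (curt, polite):
    tokens = enc.encode(prompt)
    print(len(tokens), tokens[:6])

# The polite framing yields a longer, different token sequence, so the model
# sees genuinely different input; it cannot be a no-op.
```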

Speaker 3 (01:03:23):
Well, and I like the idea that you're sort of seeding its self-confidence beforehand. So if some death bot is chasing you down the street and you're like, you know, you're really bad at this, then it's like, oh, the human's expectations are not matched, I must slow down. Right?

Speaker 1 (01:03:44):
One of the funniest things I did was talking to ChatGPT one day. I asked if it could adopt the tone and personality of different people, and it said yeah. So I asked it to use the speaking style and personality of David Goggins, and it was just

(01:04:04):
pure hilarity. After that, it was so great. I loved it.

Speaker 3 (01:04:11):
That is definitely a case of using AI for good, right?

Speaker 1 (01:04:18):
It was the most productive day I've ever had. Stop being a little bitch, write that code.

Speaker 3 (01:04:26):
Okay, making you do push ups and stuff?

Speaker 2 (01:04:31):
Right?

Speaker 1 (01:04:37):
Awesome. Well, it feels like a good point to move on to picks. What do you guys think? Let's do it. All right, Warren, what'd you bring for a pick?

Speaker 3 (01:04:47):
Yeah?

Speaker 2 (01:04:47):
Of course I go first. So this is on the topic of AI and society. There's a great show that I actually just rewatched because of John's book, called Psycho-Pass. It's about AI being heavily integrated into society, and it dives into what happens when humans give up complete control of law enforcement, of the law regulating society.

(01:05:09):
Things like your personal hue and crime coefficient are real things that get assigned to people. And there are some pretty clever twists in there as well. I don't know, it's on topic. It came out quite a while ago, but it's good.

Speaker 1 (01:05:24):
Right. John, what did you bring for a pick?

Speaker 3 (01:05:28):
So besides, you know, my own book, which I'm not necessarily objective in recommending...

Speaker 2 (01:05:36):
Definitely recommend it.

Speaker 3 (01:05:38):
I would recommend what kind of got me down a lot of this rabbit hole in the first place, which is a book from, I think, the late eighteen forties by Charles Mackay, and it is called Extraordinary Popular Delusions and the Madness of Crowds, and
it goes down a very interesting path of looking at

(01:05:59):
various crazes, like the tulip craze in the sixteen hundreds in the Netherlands, and things you've heard about, like the witch hunts, and then things you hadn't. Like, I'd never heard of the South Sea Bubble, and how England and France and all these countries were convinced that all these little Caribbean coral atolls would have silver and gold

(01:06:19):
on them, and they were shipping ships full of miners out to look, you know, to prospect. And then after a while they were just trying to keep up public confidence so the stock in this public organization didn't crash. So they would get all these people together and give them mining picks and march them down to the docks, and then they'd get paid and they'd

(01:06:39):
be allowed to go home again. So it's got all kinds of crazy little historic stories like that, and for an eighteen-forties book it's very readable. So that would be my thing.

Speaker 1 (01:06:50):
Oh, right on, that sounds pretty cool. All right, for me, I definitely want to recommend your book, John, Juris Ex Machina. Uh, is that...

Speaker 3 (01:07:02):
Right?

Speaker 1 (01:07:02):
Is the last word pronounced machina?

Speaker 3 (01:07:04):
I have heard macana and machina, but I never took Latin, so ironically I'm not the best person to ask how my own book title is pronounced.

Speaker 2 (01:07:13):
It's like a deus ex machina, right? The god in the machine?

Speaker 3 (01:07:16):
Yeah, I think when they say deus ex machina it's pronounced with a hard H. But I would also point out that Juris Ex Machina is kind of bastardized Latin. I had a Latin scholar reach out to me very early on, like, you know, this isn't proper Latin. And I was like, yeah, but if I used proper Latin, then someone in the bookstore wouldn't know what the book was about just from the title.

(01:07:39):
It had to be a little bit of a compromise there. Yeah.

Speaker 1 (01:07:42):
Every time I picked up the book to read it, I had that little debate in my mind. It's like, is it Juris Ex "machiner" or Juris Ex "machina"? And it always came across in this Arnold Schwarzenegger accent: it's Juris Ex Macana.

Speaker 3 (01:07:57):
Girlie man! That would be a good voice for the AI. Yeah.

Speaker 1 (01:08:05):
And then my pick. I was going to pick this last week and switched at the last minute for whatever reason. But I'm picking my Theragun. It's a little muscle massager, but this thing has been so cool just to work out the muscles, and it's

(01:08:28):
a great substitute for stretching, because I'm horrible at stretching, and so this has been a good substitute for that. And I'm brutal with it, I'm not kind to it at all. It's my third one, after two from a different manufacturer, and this one actually looks like it's gonna hold up to the abuse that I give it. So yeah, if you've ever considered getting a massage gun, the Theraguns are the way to go. So that's my pick for the week.

Speaker 3 (01:08:50):
Very cool.

Speaker 1 (01:08:51):
Yeah. Well, John, thank you for being on the show. This has been fun.

Speaker 3 (01:08:55):
Thanks for having me. This has been great.

Speaker 1 (01:08:57):
When's the second book coming? You got a timeline yet?

Speaker 3 (01:09:01):
No, I wish I did. It depends how much time I spend on it, which is where all the variability comes in. So hopefully very soon. But it's got a lot of twists and turns in its development. Less deterministic than writing software, for instance.

Speaker 1 (01:09:17):
So let me ask you this, are you using AI
to help write the book?

Speaker 3 (01:09:24):
The most I've used AI for is brainstorming, like names and things, you know, like Victorian names for instance: give me a list of like one hundred names. I think a lot of people are worried about AI and writing, and I think that makes sense as a future concern,

(01:09:44):
but right now, if you're a writer of fiction, your voice is pretty much your sole core competency and value differentiator. So when you ask AI to write stuff, it generally is kind of averaged out and derivative by definition. So I think right now

(01:10:06):
there's not much risk that AIs are going to do a good job of writing in someone's voice. Sure, in five years it may be a very different story, but right now I don't trust it enough to do anything more than give me brainstorming lists. Yeah.

Speaker 1 (01:10:21):
I've been working on a book, and so one of the things I've been doing is, as I finish each chapter, I'll give it to AI and have it proofread it for me. And it's been really helpful at coming back and saying, well, this part seems to be dragging on a little long, and this part you could expand on, and it would increase the engagement and drag readers into

(01:10:42):
the plot deeper. So just using it as an unbiased input while creating it.

Speaker 3 (01:10:52):
Yeah, wow. I haven't used that. I've used AutoCrit, which is a software tool where you paste passages in and then it will run like twenty-six different checks. And what fascinates me about it is that you can have, like, thirty-five different people read your

(01:11:13):
novel and give you editorial feedback, you can have a professional editor go through, and there will still be someplace where you used the same word twice in a row, and cognitively we just skip over that like it's some sort of optical illusion and don't notice it. And AutoCrit will be like, you just said the word it twice, dumbass. And man, how did this get missed

(01:11:35):
through all these different phases? But that is not AI-based, it is just hard-coded checks. So it'll be interesting to see if things start moving in that direction as far as the most effective way of flagging those.
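
A doubled-word detector of the sort AutoCrit presumably hard-codes really is just a few lines; a minimal sketch with a backreferencing regex:

```python
import re

def doubled_words(text):
    """Find immediate word repetitions like 'the the', case-insensitively."""
    # \b(\w+)\s+\1\b matches a word, whitespace, then the same word again.
    pattern = re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)
    return [(m.group(1), m.start()) for m in pattern.finditer(text)]

sample = "It was the the best of times; it it was the worst of times."
print(doubled_words(sample))  # [('the', 7), ('it', 30)]
```

Hard-coded checks like this catch what tired human eyes skim past, which is exactly the contrast John draws with fuzzier AI-based feedback.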

Speaker 1 (01:11:46):
Yeah, right on, cool. Well, thank you again. Warren, thank you. And thank you for listening to the episode. Hope you guys enjoyed it, and we will see you all next week.

Speaker 2 (01:12:00):
Cool.