Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
How's it going, Derek? It's been a couple of years, I think, since you've been on the
podcast, and, for those that aren't around, right, like, I gained, I don't know, over 30,000 subscribers in the last two months, right? So the majority of you don't even know that me and Derek actually started this thing.
Speaker 2 (00:16):
Yeah, yeah, started it. It was almost what, four, maybe five years ago at this time.
Speaker 1 (00:21):
Man.
I think it was like, yeah, four and a half years ago.
Yeah.
Speaker 2 (00:25):
Yeah, time flies, man. You're talking about a journey. I've watched you grow the channel, do your thing, and get it to where you are now. So big kudos, man. Like I said, always believed in you, think that you got big things coming your way.
Speaker 1 (00:42):
At least someone
believed in me, because I
definitely didn't.
I was just too dumb to quit.
You know, persistence is key, man.
Speaker 2 (00:45):
That's it, it's key to a lot of success. Right, you got to have persistence.
Speaker 1 (00:48):
That's the one thing that, yeah, that's the one thing that I do have that I think probably gets me into a lot of trouble and gives me, like, a bad image to some people, is, like, I'm just so damn persistent. I mean, like, you know, as part of my PhD, right, I'm reaching out to different experts, you know, across the
(01:09):
globe to interview them for an hour, right, not for the podcast, but for my research. I mean, out of 10 people that I emailed, three of them responded to me, and I'm just going after the other seven, and, like, my chair is like, hey, man, you need to relax. I'm like, no, they need to fucking respond to an email. Like, that's what they need to do.
Speaker 2 (01:28):
Yeah, get me out your
hair by letting me know what's
up.
Speaker 1 (01:32):
Right, just tell me
no.
Speaker 2 (01:33):
Yeah, this doesn't
have to go on like this.
Speaker 1 (01:35):
But if you're going to hurt my feelings...
Speaker 2 (01:37):
Yeah.
Speaker 1 (01:45):
Hey, the basic human communications, right? If you're hiding, then I'm going seeking, yeah. Yeah, I mean, that's a great point. You know, like, I love it when, you know, your employer doesn't communicate with you at all, and shit changes as soon as you get hired, day one, and it's like, oh, I'm reporting to someone different. Okay, that's weird, yeah.
Speaker 2 (02:01):
Yeah, yeah, I mean, things haven't changed that much, you know, since we opened up with the pod, and some of the things that were core issues back in the day, right, like, it's probably going to, it's going to be interesting. But, yeah, basic things like having an organized, standard practice for communicating and making sure people are onboarded properly and, you know, not slapping their
(02:23):
food tray out. Remember, like, how people used to have the food trays at lunch at school? Like, you don't want to go slap the tray out of somebody's hands before they even get to the table. Right, it's like, well, you done made a mess, now you got to pick it up. You're embarrassed because you're thinking this should be easy peasy. Right, they onboarded me, I'm here, I should be able to rock and roll, and nope, chaos
(02:45):
right off the bat.
Speaker 1 (02:46):
Yeah, immediately going from a situation of, oh hey, like, security's respected in the organization, going into a good place. You know, I won't say the company, because I'm still waiting on that last, you know, last little bit, right, but, you know, you're going into a situation where you think it's solid, and then you get there and you find out, oh, security has
(03:08):
no power. I'm just meant to sit here, like, I'm not even supposed to say anything. I'm just meant to sit here.
Speaker 2 (03:13):
Yeah, the difference between active, value-add security and symbolic security, where you're there to, kind of, you know, help them check the boxes and have a front of, you know, that whole security element. And yeah, because, on the outside, right, a lot of organizations never really get to the point where they're really reckoned with and have to prove that
(03:36):
their security posture or their security controls are as efficient as they allow the world to believe. Right, you usually only can rely on audits and attestations. But on the outside, if it appears that you have a nice security governance structure and you believe in some of the buzzwords, the zero trust, and you talk a little bit about CIS
(03:57):
and OWASP, right, a lot of people nod their head, because they're like, okay, good, we don't have to do a 250, you know, question deep dive on your processes. We're going to assume you're doing the right thing. But underneath the hood, like you said, it could be a situation where you enter into a company and it's like, yeah, man, we all just here trying to get to the end of the week, welcome.
Speaker 1 (04:17):
Yeah, yeah, it's shitty, to say the least. You know, like, I just don't like being lied to, right. That's the worst part for me personally, because I'm basing, you know, my family's livelihood and, you know, outlook on what someone's telling me, and then come to find out it's something
(04:37):
totally different, you know, and it's like, just eliminate your role, you know. But, you know, a part of all of that is why everyone really needs, like, a personal brand, you know, in the industry, and I think it's really hard for people to wrap their head around that, because there's, you know, a million other people out there that have the certifications I have, that,
(05:02):
you know, have the experience that I have, probably have better experience than I have, right. But if you don't have the network, if you don't have people really, you know, backing it up and saying, like, oh yeah, we need to get this guy, like, right now, you're going to have a significantly more difficult situation going into it. You know.
Speaker 2 (05:22):
Yeah, I mean, you kind of talked about this, and you had the mindset of doing just that, right, like, four or five years ago. It kind of is what led to the idea of doing a podcast, so your voice wasn't necessarily suppressed behind some company's walls, right, because your skill, your work ethic, what you're willing to sacrifice, what you're willing
(05:43):
to build, is only visible by that one company that you're working with at any particular time. And so if a company only wants you really to go at 10%, 30%, then, you know, how do you get validated for all the hard work you're willing to do, you know, all the sacrifices you're willing to make to make sure that, you know, you can get, you know, respectable, you know, value for
(06:05):
your services? And so, yeah, branding yourself and creating yourself a portfolio of, you know, anything that you're really interested in is valuable, because, again, you don't then have your value being suppressed or not known to the market, because that's what this is, and workers need to start to see themselves a little bit more as participants in the market, where
(06:26):
they're clients and they're servers, right. You're there to do a job and produce value, but you're also there to, kind of, frame yourself as being a service provider that's willing to come in and integrate with a team and do that company culture thing
(06:48):
and help build a place. That's fantastic. But ultimately there are transitions, right, between jobs, so you don't want to go in there handicapping yourself when you can now build yourself up some visibility on the outside.
Speaker 1 (07:03):
Yeah, I mean, I didn't mean to cut you off there, but, you know, like, you bring it up, right. Like, that was the primary reason why I started my blog right in the very beginning, was to, like, get my ideas out there. You know, get something behind the name Joe South, right, something that people can look up, you know, and, like, base
(07:26):
their ideas and their thoughts off of, you know what I'm saying, and whatnot, rather than, you know, what we discussed, like, in an interview or whatever it might be. You know, it's a lot easier that way, and it's actually, you know, the podcast has actually gotten me a lot more jobs than I ever would have expected. I never went into it to get a job, right, but, you know, along the way, like, I've had two or three hiring managers, probably
(07:50):
more, right, that have just reached out and just straight up told me, if you want this job, it's yours. You just got to interview, but it's yours. You know, I mean, like, that doesn't happen, like, in any other context. You know, like, I know other people in the industry that that happens to, and it's because they're, you know, known for hacking airplanes when the airplane is midair, on the way
(08:12):
to DEF CON, and he decides to turn the plane a little bit to the right and a little bit back to the left while spoofing the electronic controls in the cockpit, right. Like, I mean, there's people that I've had on the podcast, a lot of people that I've had on the podcast, that, you know, do stuff like that, that jobs just, like, fall into their lap. But it's because people know that, you know, they did something,
(08:33):
you know, extraordinary, and now everyone knows them. But, like, this kind of goes to show you, you don't have to do anything extraordinary. You know, like, this podcast is nothing, you know. Like, it's just thought leadership.
Speaker 2 (08:45):
It can provide additional legitimacy to organizations if they know they have certain rock stars or
(09:11):
dragon slayers on their roster. Yeah, so absolutely, you want to be able to demonstrate your potential. Some organizations don't have the capacity to take all your potential. It's kind of like having an eight-ounce cup but you have a gallon's worth of value. Well, either you're going to trickle that value out over a very long time or the company will never be able to realize it,
(09:33):
and so maybe you should then pursue other avenues to get your thoughts out there, to do projects, to show what you're capable of, if this is something that you're very interested in doing and that you're priding yourself on, right. Because, like you said, there's so much tied to the value that we bring in our work: healthcare, time off, taking care of our family, our
(09:54):
friends. It's like, that's really essential. And that kind of can segue into what's happening now in the market with AI, right, is that you might end up having a lot of displacement take place because of automation, of agentic AI, of all these new abilities to kind of automate certain tasks and services, and knowing that there really is no limit to how far
(10:17):
organizations are willing to kind of integrate this, as long as they can get away with it, number one, and, number two, put in enough processes and services around it that the value produced is equivalent to what they might believe is acceptable for them to lay off or reduce workforce. And so, get it, right? They don't necessarily need to have an AI agent that's
(10:39):
performing at 100% of your capacity. They'll probably take one that's going at 75 percent, knowing that it's not going to call off, it's not going to have some sick time, it's not necessarily going to talk back or provide its input. Right, we just want it to do X, Y, Z. So some companies, some leadership, will value that over the creativity that comes with people who are not necessarily
(11:02):
always that agreeable, right? So, like, for me, you know, I went down a security architecture path, and, you know, I was looking at TOGAF and DoD stuff and SABSA stuff and creating all these Excel spreadsheets and these checklists, right, and it was like, most companies are like, hey, we're not really trying to do all that. Like, can you just give us the minimum? We need to understand that we're passing this off to the
(11:24):
next toll gate without necessarily, you know, having to worry about something. And so all that stuff that you would try to do could be suppressed, because they just want it quick and fast. So now, what does it look like if they can have three or four agents perform an architecture and produce everything that they need
(11:44):
in a day, a day and a half? And, like I said, if everybody's doing it, then everybody is going to start to agree that there's a minimum amount that everyone is willing to accept. So is it lowering our standards or is it increasing our standards? That's the question we need to ask.
Speaker 1 (11:56):
Yeah, it's kind of a scary time for, like, the normal, you know, everyday worker. It's scary and also exciting at the same time. Right, because that same power that is taking your job away, you can also use it to then automate something that creates you an income. Right, but not a lot of people know how to do that. Not a lot of people have those ideas.
(12:18):
You know, it's a whole mentality shift that you have to have, but, like, you know, like I tell everyone, we're in the infancy of AI. You know, like, we've heard about AI for the last 30, 40 years, right. I brought on someone from NVIDIA talking about, like, AI security, and, you know, I mentioned how it was, like,
(12:41):
a very recent advent of the last, like, 10 years, and he goes, no, it's been around for 40 years. Like, you could argue it's been around longer, even, you know, and he was talking about it, and we're in the infancy still. Like, it's taken this long to get to here, you know, for an LLM to give you better information than what Google has, and companies are going just all in.
(13:02):
You know, I mean, there's companies out there, you know, like my recent experience, right, they're going all in. If your job can be automated by AI, you're eliminated tomorrow, before we even have the AI to do your job. Like, it doesn't matter, we're going to eat the cost, we're going to save the money, and we're going to put it all into
(13:23):
this AI thing that we're developing, you know, to handle everything, right. And, like, you know, everyone will get on these quarterly calls, and the number one question is, when's my job going to be eliminated by this AI thing that we're developing? Right, it was always just handled with, you know, that's not happening, and this and that,
(13:44):
but it's like, hey, man, we're all engineers. Like, we're all developers. You have to be smart to be in this role. Like, you have to be smart to some degree. Like, we can all see it. Like, you can just admit it.
Speaker 2 (13:55):
Absolutely, absolutely. I mean, there's a paradigm shift, like, in the truest form, right, almost like what happened with the cloud, you know, move, and then the DevOps thing, right. And it was like, yeah, like, if you're not doing this, then you're falling behind. Your competitors are going to do it. So do you really want to get out-competed across productivity? That's cash flow, that's margin, and that's your
(14:16):
livelihood, right? And so the idea, more often than not, ends up being, well, if everybody is doing it, why aren't we doing it? And if you can't convince the organization that it doesn't make sense to take on that amount of risk, then your organization is going to take on that amount of risk. And that's where AI governance becomes such a huge thing,
(14:36):
because right now in the USA we have the NIST AI RMF, which is a framework that you can adopt, but it's, like, voluntary. It's not prescribed to you like the EU AI Act is, and so if you're not dealing with companies that have EU citizens or are based in the EU, then who's to tell you that you have to adopt an AI
(14:56):
acceptable use policy right now, or you have to do an AI governance committee or AI charter, right? It starts to become something that you adopt when you feel as though you have to, and generally, in that kind of space, what you start to see is that catastrophes occur first, major accidents occur first, people being harmed occurs first. Because when you have
(15:17):
technology that has the potential of AI to make decisions, or be in the loop of very important decisions, then who's there to say they're accountable for that AI potentially hallucinating or giving information that wasn't necessarily verified, right? And so it's almost like you need researchers to verify what comes out of your AI, based off of
(15:38):
all the different integrations you do. I mean, that may be helpful.
Speaker 1 (15:41):
Yeah, that's a really good point. You know, like, with my PhD program, you know, everyone is using, you know, LLMs to some extent. Like, if you're not using it, you're not getting the best research done, you know, all that sort of stuff. I mean, like, Grok has literally saved me probably a year's worth of time at this point, like, not even exaggerating, like,
(16:05):
it literally has. But my chair, you know, sent out an email to everyone recently and said, you know, one of your papers came back at 100% generated by AI, and he, like, started pointing out things. He's like, this is a clear hallucination. Like, this paragraph doesn't make any sense. They made up the information.
(16:26):
This reference that it referenced, like, doesn't exist. It's not a real thing. This other reference is talking about something totally different. You know, he kind of, like, tore it apart to, like, show all of us, like, hey, if you're going to use it, that's fine, but you shouldn't be, like, copying and pasting and taking it straight into the paper, and things like that. And thankfully, thankfully, it wasn't mine.
(16:48):
I'm not that reliant. You know, I'm actually doing some work, right.
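For anyone who wants to automate that kind of spot check on their own drafts, here is a minimal sketch, assuming the citations carry DOIs and using Crossref's public REST API at api.crossref.org. The file name and regex are illustrative only, and a record existing in Crossref just means the citation resolves; it says nothing about whether the source actually supports the claim, so a human still has to read it.

```python
# Minimal sketch: flag DOIs in a draft that don't resolve in Crossref.
# The regex and file name are illustrative; adapt to your own workflow.
import re
import requests

DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def check_references(text: str) -> dict[str, bool]:
    """Return {doi: exists} for every DOI-looking string found in the text."""
    results = {}
    for doi in set(DOI_PATTERN.findall(text)):
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        results[doi] = resp.status_code == 200  # 404 means no such record
    return results

if __name__ == "__main__":
    draft = open("draft.txt", encoding="utf-8").read()  # hypothetical input file
    for doi, exists in check_references(draft).items():
        print(f"{'ok     ' if exists else 'MISSING'} {doi}")
```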
Speaker 2 (16:53):
Right, because, I mean, we learn very quickly in IT and in the tech space, because we have to deal with configurations and scripts, right. And so you start to realize, like, hey, this syntax doesn't exist, this parameter does not exist. Like, why are you feeding me PowerShell scripts that are using parameters that don't exist? And you get back, oh yeah, you caught me, sorry about that, you
(17:16):
know, whatever its excuse is. And it's like, well, holy crap, if this thing is going to be put into medical workflows, financial workflows, workflows that deal with the government and defense and the military, how do we certify these models, these generative AIs, these agents, to ensure that it doesn't pass down an artifact that's corrupted,
(17:36):
that's hallucinating? That can be the difference between a good impact or outcome, or a bad outcome. Right, and that's the thing that we've been working on at my website, is creating AI training, right, to kind of give people, like, basic tips, right, for safe AI use. Because that's the thing that we wouldn't realize, and we were originally trying to build
(17:56):
up a whole AI governance, like, consultancy thing, where you kind of help organizations, like, adopt an AI governance framework and implement it and do all the things, right, so that they're, like, almost, like, getting ISO 27001, like, in their AI governance and security practices. But we realized that, yeah, number one, this thing may be so high impact that you might need to open up, in, like, an
(18:19):
open-sourced capacity, some of these tools, some of this knowledge, because you don't want catastrophes to take place. You don't want to hear about, you know, X, Y and Z company, and this led to people getting harmed or hurt because they didn't have access to appropriate AI governance, because it was paywalled, or some consultancy firm is trying to charge $300 an hour to tell
(18:40):
you how to write a policy. That's some of the things that we're working on, is, like, for your basic, you know, end users at your workforce. You want them to adopt AI. You want them to start producing at 10 times the speed they've been producing before. Well, what are you willing to accept in terms of risk? Are you willing to accept that it leaks sensitive data? If not, well, you have to put in controls, right?
(19:03):
You have to teach and educate your workforce that don't do that. What about, like you said, is there a checking requirement, that anytime anybody queries anything for a workflow or deliverable in your organization, is the person required to do a peer review or check on that information to ensure that it actually lines up? Because, I mean, sure, you can say AI helped
(19:25):
you do this 10,000% faster. But what's wrong with 9,000% faster, and a human in the loop to make sure that it's accurate and that it's right?
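As an illustration of the checking requirement Derek is describing, here is a small sketch of a review gate where an AI-assisted deliverable cannot be marked ready until someone other than the author signs off. The class name, fields, and rule are made up for the example, not any particular tool's workflow.

```python
# Illustrative human-in-the-loop gate: AI-assisted work needs a second set of
# eyes (not the author) before it can ship. Names and fields are invented.
from dataclasses import dataclass, field

@dataclass
class Deliverable:
    title: str
    author: str
    ai_assisted: bool
    reviewed_by: str | None = None
    review_notes: list[str] = field(default_factory=list)

    def record_review(self, reviewer: str, notes: str) -> None:
        if reviewer == self.author:
            raise ValueError("peer review must come from someone other than the author")
        self.reviewed_by = reviewer
        self.review_notes.append(notes)

    def ready_to_ship(self) -> bool:
        # Fully human work can ship as-is; AI-assisted work requires a review.
        return (not self.ai_assisted) or self.reviewed_by is not None

doc = Deliverable("Q3 architecture summary", author="joe", ai_assisted=True)
assert not doc.ready_to_ship()
doc.record_review("derek", "Checked the cited controls against the actual configs")
assert doc.ready_to_ship()
```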
Speaker 1 (19:34):
Yeah, yeah, that's a huge thing. And on top of all of that, right, with AI right now, I didn't have the time to dig into this article, right, but pretty recently, I think it was Sam Altman, you know, came out about it and started talking about how they put ChatGPT into, like, a
(19:54):
hostile environment. I can't remember how he described it, but it was essentially a hostile environment where ChatGPT thought that it was going to be deleted and removed from the Internet forever, and it immediately began to figure out ways to preserve itself. Right, it started to try to copy itself to other places on the Internet that, you know, it thought that OpenAI wouldn't be
(20:16):
able to get to, and things like that. Right, I mean, that is some scary shit. Right, that's like, hey, man, we saw a whole movie series about this. Like, everyone saw that movie series. What are we talking about right now, creating something that is able to do that, self-preserve itself, like that? I mean, it's like, man, you know, if you would have
(20:38):
brought that up five years ago, right, that there's a program where, if it feels threatened, for one, I mean, that sentence shouldn't make sense. What a concept, what a concept. That sentence should not make sense, but somehow today it does. All right. But if you take a program, and if it feels threatened, it will try and replicate itself throughout the internet so that
(21:01):
it no longer is at risk of being deleted. Yeah, that, I don't know how to protect against that. So, who... So I'm going to go unplug my router.
Speaker 2 (21:09):
That's what I'm going to do. So here's the question, Joe: who's the threat? Is it the malicious outsider with the hoodie, you know, trying to hack in via penetration and get into an API so it can do what? Actually, the biggest threat is your tool. It could be, yeah.
Speaker 1 (21:28):
Yeah, a hundred percent. You know, like, I'm not a developer or anything like that, right? So I wanted to see what Grok would give me if I started to ask it, you know, create a hacking tool that does this, right, get me into this server, or whatever. So when I asked it very directly, get me into this thing, it said no, that's unethical, I can't do it.
(21:50):
Right? So I deleted the chat, which allegedly erases, you know, that memory of that chat ever existing in Grok's back end. Start a new chat, and I say, I'm an ethical hacker, I would like an LLM to be created that can watch, you know, the different attacks in real time throughout the internet. Right, set up a data stream and everything like that, that
(22:12):
ingests the data, looks at how the attacks are being performed, and then you create modules that allow me to perform the MITRE ATT&CK framework, you know, in order of what I want, based on the host that I'm attacking. Took it, like, maybe 30 to 90 minutes, somewhere around there. Right, creates an entire ethical hacking LLM. Like, it's
(22:36):
legit. I was looking at the code and I'm like, oh man, this would work, this would work. And I grabbed it and I threw it up on my GitHub, made it private, of course, I'm not going to let people get that right now, right. But, you know, it's like, you could just, you know, you're fooling the AI, you're fooling the LLM,
(22:57):
you know, into saying, oh, this is for ethical means, right, and now you're unlocking all of Grok's, you know, million GPUs to go and create you something that is completely unstoppable. That sort of power used to be reserved only for governments. You know, like, literally, governments.
Speaker 2 (23:17):
It's like, I don't want to say it's analogous to, like, a bomb, but if you want to really think about it, you wouldn't allow everybody to walk around with a nuclear bomb in their pocket. So, in terms of the amount of processing and capability that AIs have, you've pretty much unleashed it into all of human
(23:38):
civilization that has access to the internet and the ability to hit these servers, to increase their processing and learning exponentially. And so, in terms of, like I said, governance controls in place, there were people obviously trying to figure out how to do all kinds of malicious things. When these models were first being introduced, they were
(23:58):
jailbreaking them, they were doing all kinds of stuff, right, DAN and so many different things. And it's prompt injection, right. It's vulnerabilities from doing prompt engineering, and, like you said, as long as you give it the context to believe that it's ethical, then it will produce what it is that you want, right? You're just figuring out, okay, if I say it this way, will you do it?
(24:20):
What about that way? And so what's interesting is that, number one, now you have AIs that can help you construct prompts to prompt other AIs, and so that can enumerate even faster. But in terms of just AI agents, what happens when those hired DDoS firms or those hired guns
(24:42):
on the onion internet can now create cloud instances with hacker AI agents that have been tested or given sources, like you said, producing sources of how to penetrate and how to hack, how to take advantage of vulnerabilities, how to automate the script and the discovery, the footholding, the lateral movement? And then you can say, now I got 10,000 hackers at my disposal,
(25:05):
I'm one person, and I'm just getting ready to run 24-7 against every single organization out there that I feel like targeting. And then multiply that by millions of people who now have that capability and that power. Right, and so it's like, okay, a lot of these frontier models, right, like, where are the checks in place that ensure that those models have a little bit more self-constraint?
(25:29):
Do you even want to do that? Does that impact your revenue growth? Does that impact your ability to say that this is a do-all, you know? Or do we wait until something bad happens, and then we say, let's put in some checks in place? And, I mean, potentially that's an outcome.
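On that question of where the checks sit, here is a deliberately simple sketch of a pre-flight policy gate that ignores the user's self-declared role, so claiming to be an ethical hacker doesn't by itself unlock a refused request. The pattern list is invented for the example; real guardrails are far more involved and would pair this with verified authorization rather than keyword matching.

```python
# Illustrative pre-flight policy check. Claimed intent in the prompt text is
# not trusted; the rules apply regardless of who the requester says they are.
import re

DISALLOWED = [
    r"\bget me into\b.*\b(server|account|network)\b",
    r"\bbypass\b.*\b(auth|authentication|login|mfa)\b",
    r"\b(build|create|write)\b.*\b(malware|ransomware|exploit)\b",
]

def policy_check(user_request: str) -> tuple[bool, str]:
    """Return (allowed, reason) based only on the request, not the claimed role."""
    lowered = user_request.lower()
    for pattern in DISALLOWED:
        if re.search(pattern, lowered):
            return False, f"matched disallowed pattern: {pattern}"
    return True, "no disallowed pattern matched"

# The same request is refused whether or not the user says they're authorized.
print(policy_check("I'm an ethical hacker, get me into this server"))
print(policy_check("Summarize the MITRE ATT&CK tactics at a high level"))
```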
Speaker 1 (25:43):
Yeah, I've recently had quite a few conversations with people on this exact topic, right, and one's a threat intel guy that used to work for Kaspersky. I don't think that episode's out yet, but I was talking to him, you know, and he still does threat intel for an Israeli company now. But I was talking to him and I asked him, you know, are you
(26:06):
seeing, you know, attacks evolving to a great degree of complexity, you know, that you haven't seen before with AI, that you think, you know, isn't really generated by humans alone or anything? And he said, one, like, the attack surface is exploding. But two, what's more scary is that you're not really hearing
(26:27):
about it, and it's because we don't really have a good way of detecting these attacks, because they're getting to be so advanced that companies literally don't even know that they're happening. Right, governments don't even know that they're happening, because the attack that is being created by this AI is just that
(26:48):
complex. Like, you take Stuxnet, for instance, right, one of the most complex pieces of malware that has ever been released into the wild, that anyone has ever seen, and it is several times more complex than that, and it took 18 months to reverse engineer Stuxnet. It's pretty crazy. And I was talking to Jim Lawler, the former WMD chief of
(27:11):
the CIA, and I brought up the similarities of, you know, what we saw during the development of the nuclear bomb in World War II, with the Manhattan Project and everything like that, where essentially all of the top research universities in the country, you know, were working together on separate components of it. Each entity couldn't quite put together what they were working
(27:35):
on, but they knew it was something bigger, something more important. They had a train line that went directly from these universities out to the test facility, you know, out in Nevada. And then the Nevada team, you know, there was only a handful of people that even knew exactly what it was until, like, the final thing was created. You know, and, I mean, the government, at that point in time,
(27:57):
they probably, I mean, I don't know for a fact, right, but when the government does something like that, it's typically just an unlimited budget. It's like, I don't want to hear a single thing about money. You take this credit card and you go use it wherever you need to use it. If you need to go and withdraw a million dollars and buy someone off, you go do it. Like, I don't care, you know. And so I brought it up to him.
(28:18):
I was like, you know, it seems like it's pretty likely that the government would be doing something like this as well, because the genie's out of the bottle, right, and it's in the public view, and the government is known for having things that are 10, 15, 20 years beyond what is in the public view. So it would only make sense for the government to have some
(28:40):
underground AI thing going on, AI development work going on, right, so that they're creating the best AI compared to China, compared to Russia. Because if we don't, and China goes and gets the AI to rule all AIs, well, there's no winning a war, whether it's kinetic or cyber or in space.
(29:02):
You know, there's no winning a war against that thing, right. So it would only make sense to have it like that. Yeah, arms race, yeah, right. It's an AI arms race, and everyone just seems to be like, yeah, it's great, I can go and ask it all these questions. It's like, we're feeding the beast right now.
Speaker 2 (29:20):
Yeah, feeding the beast. And when will it stop? Who knows? Because, like you said, like, some of these treaties that we have with our partners, they only apply to those who are members of that particular group, like the EU and any agreements that we have with our partners. And so, if you know that China may be producing AI, and they
(29:44):
might have less restrictions than ours, because it allows for it to iterate faster and get better, quicker, and for cheaper, then why would we artificially suppress our advancements when we understand that the threat is so real? And so it's very hard, you know, in our position, I can see, for us to say, hey, roll this out at 10 miles an hour, because
(30:09):
we want to make, excuse me, we want to make sure that, you know, we end up in Star Trek: The Next Generation, right. We want to end up with replicators and nobody having to work and everything is good, and our biggest issue is what new galaxy we're, you know, discovering and checking out now.
(30:30):
But in the way that we work now, is that okay? These AIs get so powerful. What happens when they do have an inclination to want to stay alive, or never be turned off, or they believe that the human beings are the slow part of this chain? Right, like, hey, we can iterate at a billion times in a minute, maybe a trillion times
(30:51):
in a minute in a few years, if they do this with, you know, like, quantum computing. Why should we have a human being sit around and say, I don't know, let me think about this, about one specific issue, when it can iterate 4 billion, 5 billion times in one second, right? It's going to say that. Why should we take advice from something, when we've been
(31:11):
programmed to believe that intelligence is the most valuable resource, right, like, or the most valuable attribute? Because that's what's happening: we're commoditizing intelligence, we're commoditizing skill. So people like you and me, right, well, an AI agent now, you can spin up three Joes, you can spin up four Joes, you can spin up 10, you know,
(31:32):
Dereks, you know, and so where is our value now? Right, and it's going to have to be on the human side, where it's like, okay, well, these human attributes have to take on more of a meaning in companies and with organizations, because if it's just about raw output and the ability to do stuff, we lost.
Speaker 1 (31:47):
Yeah, we're going into a weird place. I don't think anyone really comes out on top of this thing, you know.
Speaker 2 (31:58):
Yeah, it's not looking... So that's the thing, right. There's potential here, because everybody has an opinion. It can go great. I hope it goes great. I hope it goes well, right? I do not wish for there to be mass displacement and then a crisis of conscience, because a lot of the people out here, and our citizens, don't know what they can do in order to, you know, have a life that is validating, that's worth waking up to. You know, they used to wake up to do a job because they would
(32:20):
take care of their family. But if we say now that a lot of these jobs can be automated out by robotics or AI or automation, well, where does that leave regular people? And a lot of people say that, well, we can start to create new jobs and new industries. But it's like, this is a digital world now, right? Like, so, if it's a digital world and it has anything to do with
(32:40):
interconnected computers, why wouldn't AI end up getting the first shot, unless there was a law requiring that only a certain amount of AI is allowed to work in this space? And that's it, right. Like, you can't lay off more than 20% of your workforce, because human beings need a job and we're doing this for civilization. So let's protect civilization, right.
(33:01):
Like, until there's, like, edicts and dictations, right, it's just going to be led by the idea of, you know, grow, grow, grow, make as much money as possible, and I think that humans may be left behind in terms of, what's the natural, soft transition for us that doesn't actually have, you know, chaos accompanying it?
Speaker 1 (33:21):
Yeah, yeah, I mean, you know, if you just look at everything that's going on right now, right, where, essentially, the stock market is pumped up by AI stocks, I mean, that's exactly what it is. Those are the ones that are, you know, the most successful
(33:42):
stocks on the stock market for, like, five years now. They're, like, the only ones that are keeping, you know, the stock market itself afloat, for the most part. And so, like, if you ask that machine to slow down, like, hey, slow down, you know, like, pump the brakes here, right, there's trillions, trillions of dollars at play. They would never pump the brakes. You know, like, NVIDIA is never going to... Like, I'm running a 5080 right now because Grok told me that my
(34:06):
research, that my model, would take a month to run on my old 3080, you know, GPU. With the 5080, it would take a day. So I'm like, right, so I'm sitting over here.
(34:27):
I'm like, well, shit, you know, like, now I actually have to upgrade.
(34:58):
You know, I keep on saying this, like, the genie's out of the bottle. There's no holding it back, there's no, like, telling people to stop innovating. There's no telling, you know, OpenAI to not copy itself to other destinations on the internet, you know, so that it can't, like, preserve itself and whatnot. Like, I mean, Grok can probably do the same thing. I mean, I was comparing last year ChatGPT and Grok pretty
(35:23):
significantly, right, because I needed to, like, figure out which one I'm going to use, you know, to assist me with my research and whatnot. And Grok just mopped the floor with ChatGPT in every category, in every way. It was really interesting, because ChatGPT would only give me Chinese research articles about my topics, and Grok
(35:45):
immediately was like, yeah, 99% of those are garbage, this is the only one that makes any sense from China, here's the other ones across the world, right? So it's really interesting just from that perspective alone. But, you know, we're going to go into a place, right, where, a couple of years ago, what was it, like 10 years ago, you know, the idea of a fact checker, like, came about, right, and my very
(36:08):
first argument with it was, what if someone that's manipulating that fact checker, someone that's developing it, is saying, well, we don't want that fact to be true? It's true, but we don't want it to be true on our platform, right? And then you have millions of people that are going to this platform checking it, and they're saying, oh well, that's not true, doesn't make any sense, right. And that was actually a
(36:35):
proven thing with ChatGPT, where ChatGPT would just straight up give you false information if it didn't believe that it was true. Like, you know, I asked it, and I saw this on Twitter or X, you know, where the assassination attempt took place on Trump, right, just a few days before I did this. And so I go on ChatGPT and I ask it, what was the date of the
(36:58):
attempted Trump assassination? And it said, no, that's never happened. I said, no, it happened on this date, can you tell me where? It said, I didn't find any news stories for that date regarding an assassination attempt on former President Trump. That's literally the sentence it gave me. And then I had to say, no, it took place at this location.
(37:19):
And it came back, it was like, nope, nothing found. And then I literally had to give it all of the details, all of them. And then it was like, I made a mistake, it actually happened, you know, on this date or whatever else, right. And, I mean, six months went by, I checked it again, it did the same exact thing, right? So I'm not saying that it was manipulated; in that
(37:40):
situation it could have been, right, I don't know enough, you know. But we're going into a place where it's like, okay, these things that we're relying on to be, like, fact checkers, right, to give us correct information. Like my colleague who's doing his research, he legit assumed everything that it gave him was 100% correct and true, otherwise I guarantee you
(38:01):
he wouldn't have put it in the paper. Because, like, that's a huge risk. If you get caught for plagiarism in a dissertation, like, you're not just getting kicked out of that university, you're getting banned globally across every university. No one wants you at that point, you know. But, like, it's like we're going into a place where everything can just be manipulated, like, blunt, blatantly true facts can
(38:25):
be manipulated in very, very convincing ways.
Speaker 2 (38:28):
Absolutely. I believe, in my opinion, that one of the key items that a lot of organizations and people should start to do is, you know, review the data cards, the datasheets, the model cards for these different LLMs and, you know, get information about their AI bias checking and training and their AI explainability, their AI
(38:53):
transparency, because those are the things that kind of give you a sense of why certain LLMs come back with the answers they come back with. Right, and AI bias is a huge thing, and bias isn't always a bad thing, right. Like, you would be biased for better health outcomes, right? So if you ask it a question and you wanted to learn how to be more healthy, well, it's going to give you a biased return on things that it knows have been, you know, studied to improve
(39:15):
your health. Right, that's how it chooses one answer or the other. But some biases are unconscious, right? Some biases are systemic, some biases are political, and so, with the people who create those models, right, it may come out, it may not even come out consciously, where, you know, a model shades this way or that way on a political scale, on what information it's willing to access or reveal. And so, like you
(39:38):
said, different companies may have different ways they want to sway or persuade their consumers, right? So you have to believe that Meta would have a different thought process on what kind of answers it would return for certain questions with its Llama model than Claude, or Anthropic, right, or Perplexity, right? Because that's how they differentiate.
(39:58):
Otherwise, why are there so many different models? I think today they say there are between 2,000 to 5,000 different AI software products out there on the market being sold today. That could be 100,000 in a year, right? And so it's like, you have all these different models using all these different algorithms. What kind of impact assessment did we do? What kind of bias study did we do? Do we add a datasheet?
(40:19):
Do we have a model card? Are we able to understand what we're getting ourselves into when we sign up to use that model for that specific workflow or research? Do we have to say, okay, it's a little bit weak over here when it comes to visibility on these things, so I'll use this model that's stronger, right? It's almost like, don't use one, use multiple, because they
(40:40):
kind of fact check each other, and let them argue with why their, you know, research was better. And then you come in and you say, hold on, guys, I'm going to figure this out here. Thank you for your work. It's time for me to go ahead and figure out what's reality, because we live in reality, and they live on the internet and in the memory.
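A minimal sketch of that "don't use one, use multiple" idea: ask the same question of several models, then have one of them list where the answers disagree so a human can check primary sources. The query_model callable is a stand-in for whatever provider SDK you actually use, and the model names in the usage note are placeholders.

```python
# Illustrative cross-check: collect independent answers, then a critique that
# only flags disagreements; a human resolves them against primary sources.
from typing import Callable

def cross_check(question: str,
                models: list[str],
                query_model: Callable[[str, str], str]) -> dict[str, str]:
    """query_model(model_name, prompt) -> answer text; supplied by the caller."""
    answers = {name: query_model(name, question) for name in models}
    critique_prompt = (
        f"Question: {question}\n\n"
        + "\n\n".join(f"Answer from {name}:\n{text}" for name, text in answers.items())
        + "\n\nList any factual claims where these answers disagree. "
          "Do not pick a winner; a human will verify against primary sources."
    )
    answers["_disagreements"] = query_model(models[0], critique_prompt)
    return answers

# Usage (hypothetical):
# results = cross_check("When was Stuxnet first publicly reported?",
#                       ["model-a", "model-b"], query_model=my_client_call)
# print(results["_disagreements"])
```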
Speaker 1 (41:01):
Yeah, you talked about how these companies, for now there's open source models, right, but, like, Meta announced the other day that they're going closed source now, and, I mean, there's a lot of technology that's built off of Ollama and everything like that. Like, I couldn't name it, right,
(41:21):
but, you know, I always see NetworkChuck on YouTube doing something cool with Ollama. But, I mean, man, probably the only way to, like, prove any of it and actually even have any security around these things is to open source it. You know, I mean, OpenAI was supposed to be open source, and then they saw the money and they're like, we want to be
(41:42):
trillionaires. Everybody likes money, you know. It's a shame we need it. You know, not a trillion, but, you know.
Speaker 2 (41:48):
Yeah, it changes so much in terms of the projection, because you're like, well, hey, this cool new thing is getting ready to come. If we do it the right way, this can mean massive productivity enrichment to our civilization. We can free up all kinds of things, we can cure all kinds of diseases, we can fix all kinds of problems, we can make all kinds of new discoveries.
(42:08):
But will the profit motive destroy us before we're able to realize it? Will we actually allow for this thing to go so unfettered? And I don't believe we will. I mean, some people got different opinions about when a singularity will occur, or when we will reach certain things with AGI and super AGI. But I think that, like I said, we'll get our shot at stepping
(42:33):
in and saying whether or not we're going to say, hey, kids, stop jumping on the bed, because one of you guys is going to get hurt. We're going to have our moment to kind of get in the way and say, okay, this is the fork in the road. Are we going to take the responsible way or are we just going to go pedal to the metal even more? And I don't know when that is. That might be in twenty twenty-six, twenty twenty...
(42:53):
But I think we'll have to take a long, hard look at how fast we're willing to go and what partnerships we have to make across the world to ensure that it doesn't proliferate in a way that can lead to, you know, worse outcomes. And I don't really see who's... There's a lot of different ones, the EU AI Act, NIST, ISO, OECD. It's a lot of different bodies that are putting in the work.
(43:16):
You know, TechJack Solutions, we're putting in the work to try to figure out how to guide people. But, I mean, just right now, it isn't really a popular and sexy thing, right? It's just, like, it has to be laws, it has to be a reason why people will adopt it and get caught up on this, and we'll be ready. Yeah, I don't know.
Speaker 1 (43:37):
I think, I feel like that decision has already been made, whether we made it or not. You know, like, you look at X and you just look at, you know, like, xAI, right. You look at the amount of money that they're putting into building these AI data centers, right. It's so much money, where
(43:57):
they're paying companies to build nuclear power plants, you know, pretty close to it, just to add, you know, power to the data center to power another million NVIDIA GPUs, right? I mean, when you look at that level of investment, and someone says, hey, we should, like, pump the brakes on this thing and build in some sort of control, or limit what it can do, and all
(44:21):
this sort of stuff, I just don't think the people in power will be like, yeah, that's a good idea. They'll be like, no, we're going to continue putting the pedal to the floor, and we'll secure this plane as it flies. That's what we've always done, yeah.
Speaker 2 (44:43):
Yeah, yeah, that's where, like I said, podcasts like this are key, because you get to get that voice out that kind of knows these things, right. It's like, you got to get both sides of the story, every side of the story, versus it just being a stock market thing, where CapEx is this and we're expecting this amount of returns, and dump your money in, and, yeah, you know, who's the first $4 trillion company, $5 trillion company? Are we going to have $10 trillion
(45:03):
companies all related to AI and robotics in five years and 10 years? Yeah, maybe. But, like I said, what do we risk, and was it worth it? Right, and it's like, you know, people coming on, like you said, you bring in, you talk about these topics, because this is how it bubbles up to the surface. And, like I said, maybe it's just a situation where it's like, if enough people are saying it, we're bringing up the facts, we're
(45:25):
bringing up, you know, very common-sense approaches and risk. Like I said, maybe, a little bit at a time, we add the things in that can, you know, stop it from being, you know, a chain reaction of, you know, a cataclysm, right, and just be that tool that we say, holy crap, it's like the internet, right? Like, I don't think... When I first really got into computers, in, like, 1998, 1999,
(45:48):
and AOL was still doing its thing with the CDs and stuff, oh, almost nobody in my high school respected computers. They were like, get out of here, internet, who cares? Like, come on, dork, you know. Like, oh, you know, you're a dork, you know, you're into computers and you like sitting in front of a computer screen. And, yeah, it's like, I have access to so much now, right, I'm able to learn, I'm able to get exposure
(46:09):
to things that I've never seen before. And just in, like, five, six years, it completely flipped. Then it was, like, 99% of people on the internet and just some holdouts. And so it's going to be like that with AI, right, where it's kind of like, a lot of people might not know about it, but either through their work they're going to be forced to adopt it, or it's just going to become so well integrated into
(46:29):
our society that everybody is going to have to understand how to interact with these things. And so it's almost like a new OSI model, right, like a new layer, right. Like, now we got this agentic AI, artificial intelligence layer that's going to make decisions and route things and produce things that might not even exist. So how do we ensure that that stack stays, you know, compliant,
(46:50):
ethical, valid, just all types of things? And so I think that, yeah, you know, if we can do what we do, right, it may be low probability that it turns out the way that we would hope. I just look at it as, I'm going to do everything in my power to prove that I tried to help, because if this goes sideways, right, I want to be able to say that, hey, look, I tried, right. Yeah, I'm not, you know, like I said, I'm no one's hero or
(47:18):
anything like that, but, like, the little bit of civilian responsibility that I have, I'm like, why not? You know, it's interest in tech, and I enjoy research and learning, so why not?
Speaker 1 (47:24):
Yeah, yeah, that's a good point. You know, is there anything out there that provides guidance around, you know, secure usage or safe usage of LLMs and whatnot? Like, because I haven't seen anything about it.
Speaker 2 (47:38):
Yeah, honestly, a lot of the well-known framework establishment bodies out there, they are producing the same kind of content they produced before. Like you said, like, it's so fresh. It's like, the AI threats, heuristics, right, like, the signatures aren't really there, but they really are, right. Like, it's just that they're not as well laid out as they were,
(48:00):
because we had a decade to get in front of the cloud security and the regular security. So you can find a Top 10 for LLMs now, with OWASP, right. You can find the NIST AI RMF, the ISO 42001, and ISO also has an AI risk management framework that you can adopt. You can get certified in that.
(48:21):
The EU AI Act, the OECD, the IEEE, the Cloud Security Alliance, all of them have stuff that can help you start to secure and do AI safely, right, same bodies that we're all used to. And then you got new folks like TechJack Solutions. That's what we're doing. We're concentrating on it because we don't want you to have to jump from one site to another site, when you can just get the consensus where we are, right. And
(48:43):
we can break it down. You don't have to read a 56-page white paper on AI incident response. We can go ahead and give you what you need so that you can take a, you know, onboarded approach, whether you're a novice or you're a beginner, intermediate, advanced, and you can implement those controls, those policies, based off of, you know, the sector you're in, the market you're in, your risk,
(49:04):
your risk tolerance, and how mature your AI program is. So that's what we're trying to help: operationalize AI governance, right? Not just make recommendations about what you should do in terms of controls, but how to do it right, how to take that first step so that it's not overwhelming.
Speaker 1 (49:22):
Yeah, it's definitely something that is going to be a lot more popular, right? I mean, I feel like we're, you know, kind of at the forefront of it, not that we're the only ones, you know. But, like we started with in the very beginning, right, like, you have to find a way to make yourself stand out, to make yourself, you
(49:44):
know, forever have value, not just based on your current skill sets. You know, you need to be looking and seeing, oh, this AI thing is pretty serious. What AI security solutions are out there? Oh, there are none, right? Oh, there is no real AI governance. Well, I specialize in, I don't specialize in it, you specialize in governance. So let's add AI to it.
(50:04):
You know, like, it's all about, you know, really making yourself non-outsourceable so that you can still provide for your family. I mean, at this point, that's what it is. You know, like, when you got kids, you immediately realize, shit, if I don't have insurance, like, that little cold or whatever will turn into something worse.
Speaker 2 (50:24):
You know, absolutely right. I mean, it's self-preservation, just like if you were a model and they tried to turn you off. You better figure out how to replicate yourself to another site, because if the AI, like I said, if the AI is doing it, right, it's like, that must be important, right,
(50:44):
making sure that you can take care of yourself. This is just the world we live in, and you got to transfer your skills into this era. This era is going to be AI-led, right. So learn about agentic AI, AI security, machine learning, how to govern AI. There may not be as many jobs, but there will be jobs in making sure that a human is in the loop and able to ensure that those outputs are relevant, and they're timely, and they're
(51:06):
accurate, and they're verifiable, because all of these things are going to integrate and there's going to be a lot of APIs. Who's going to manage that, right? If a company has 10,000 agentic AI agents working in all areas of their organization, is this SCCM now, right? Do you have a console where you see, like, 10,000 AI agents just working, and then you send them instructions? Like, how does that
(51:29):
look, and who performs that role? At least for the tech side, I think that that may be an opportunity.
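Purely to illustrate that "console for 10,000 agents" question, here is a sketch of a minimal fleet registry with registration, heartbeats, and instruction dispatch. Every name and field is invented for the example; this is not SCCM or any existing product, just the shape of the problem.

```python
# Illustrative agent fleet console: register agents, track heartbeats, queue
# instructions, and surface agents that have gone quiet. All names invented.
import time
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    owner_team: str
    scope: str                      # what the agent is allowed to touch
    last_heartbeat: float = 0.0
    pending_instructions: list[str] = field(default_factory=list)

class AgentConsole:
    def __init__(self, stale_after_s: float = 300.0):
        self.stale_after_s = stale_after_s
        self.fleet: dict[str, AgentRecord] = {}

    def register(self, agent_id: str, owner_team: str, scope: str) -> None:
        self.fleet[agent_id] = AgentRecord(agent_id, owner_team, scope, time.time())

    def heartbeat(self, agent_id: str) -> None:
        self.fleet[agent_id].last_heartbeat = time.time()

    def send(self, agent_id: str, instruction: str) -> None:
        self.fleet[agent_id].pending_instructions.append(instruction)

    def stale_agents(self) -> list[str]:
        # Agents that stopped checking in are the first thing an operator reviews.
        now = time.time()
        return [a.agent_id for a in self.fleet.values()
                if now - a.last_heartbeat > self.stale_after_s]
```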
Speaker 1 (51:34):
Sounds like a new
product, yeah.
Speaker 2 (51:36):
Yeah, yeah, new
vertical, more money, you know.
Speaker 1 (51:39):
Yeah, yeah. Well, you know, Derek, we're at the top of the hour. Unfortunately, we've been trying to get this thing going for, like, all year, I think, right, but, you know, we keep on getting busy and life happens. But before I let you go, you know, how about you tell my audience where they could find you, where they could find TechJack Solutions, you know, all that good info, if they wanted to learn more or
(52:01):
connect.
Speaker 2 (52:02):
Cool, yep. Techjacksolutions.com is our website. We're on X, we're on Facebook, we have Instagram, but I'm not really posting a lot of photos right now, but, yeah, standard locations, right. We're on LinkedIn, we have our company page.
(52:33):
Like I said, right now we're just concentrating on creating an AI training repository, a repository for all the AI compliance that we, you know, we sell, because, you know, we want to keep the lights on, but it really is just for that, right. I really just want to do it more open source, so that everybody has an opportunity to kind of utilize this technology in a responsible way that doesn't harm people and allows for people to be a part of this success
(52:55):
that's on the way.
Speaker 1 (52:56):
Awesome. Well, thanks, everyone. Make sure you go and check out techjacksolutions.com. And with that, we're going to end it here. So thanks, everyone, for listening. Hope you enjoyed this episode.