
December 15, 2025 35 mins

What if you could hire an army of AI security analysts that work 24/7 investigating alerts so your human team can focus on what actually matters? Edward Wu, founder and CEO of Dropzone AI, joins The Audit crew to reveal how large language models are transforming security operations, and why the future of cyber defense looks more like a drone war than traditional SOC work.

From his eight years at ExtraHop Networks generating millions of security alerts (and the fatigue that came with them), Edward built Dropzone to solve the problem he helped create: alert overload. This conversation goes deep on AI agents specializing in different security domains, the asymmetry problem between attackers and defenders, and why deepfakes might require us to use "safe words" before every Zoom call.

What You'll Learn: 

  • How AI tier-1 analysts automate 90% of alert triage to find real threats faster 
  • Why attackers only need to be right once, but AI can level the playing field 
  • Real-world deepfake attacks hitting finance teams right now 
  • The societal implications of AI-driven social engineering at scale 
  • Whether superintelligence will unlock warp engines or just better spreadsheets 

If alert fatigue is crushing your security team, this episode delivers the blueprint for fighting back with AI. Hit subscribe for more conversations with security leaders who are actually building the future—not just talking about it. 

#cybersecurity #AIforCybersecurity #SOC #SecurityOperations #AlertFatigue #DropzoneAI #ThreatDetection #IncidentResponse #CyberDefense #SecurityAutomation


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_02 (00:04):
You're listening to The Audit, presented by IT Audit Labs.
I'm your co-host and producer, Joshua Schmidt.
We are joined today by Jen Lotzi and Eric Brown at the IT Audit Labs Studios.
And today our guest is Edward Wu with Dropzone AI, coming from Seattle, Washington.
Thanks so much for joining us today, Edward, and thanks for taking the time.
I know you've been busy.
We'd love to hear about what you have going on and a little

(00:24):
background on you.

SPEAKER_01 (00:25):
Yeah.
Thank you for having me today.
My name is Edward.
I am the founder and CEO of Dropzone AI.
We are a Seattle-based cybersecurity startup that's leveraging large language models to build, essentially, AI security analysts.
So our vision is to build a piece of software that can really force multiply the human engineers and analysts working

(00:47):
on cybersecurity teams.
My personal background before founding Dropzone is I was at ExtraHop Networks for eight years.
ExtraHop is another cybersecurity startup that was focusing on network security.
And I built its AI/ML and detection product from scratch.
So to some extent, I spent eight years generating millions of

(01:08):
security alerts and overwhelmed a good number of security teams.
During that time, I really came to the realization that most cybersecurity teams already have too many alerts.
What they really need help with is the processing of those alerts.
So that's why I decided to start Dropzone, partially also to

(01:29):
redeem all the fatigue and overload that I have caused in the last couple of years, and build technology that is solely focused on the automation of alert investigations.

SPEAKER_02 (01:42):
That's awesome.
You know, I've been following you on LinkedIn.
It looks like you're a busy guy.
I did want to back up just one second.
I grew up in the 90s playing Warcraft, Myst, SimCity 2000.
I wondered if you guys had any experience with gaming and how that might have influenced your cybersecurity work life.

(02:02):
Maybe we could start with you, Edward, and go around the horn here and see where we all started with gaming.

SPEAKER_01 (02:08):
Yeah, for me, I probably wasted too much of my childhood on gaming.
Looking back, I should have spent a little bit more time with my parents instead of sitting in front of computers.
I do miss them a lot, obviously, after becoming an adult and only getting to see them a couple of times a year at most.

(02:40):
In fact, when we decided to name the company Dropzone, I actually made a 30-second Super Bowl ad about Dropzone leveraging StarCraft II cutscenes.

SPEAKER_02 (02:53):
Nice.
Love it.
How about you, Jen?
Were you into gaming?
Are you still gaming?

SPEAKER_00 (02:59):
Super gamer here.
No, not really.
I'm a solid level-one Mario Kart player, Mario Party, Sonic the Hedgehog.
Like, I live in level one, so then I can really feel successful.
Once I start to get into level two and beyond, I'm just not that great.
But I love to play games.
I really need to get better at it and refocus my energy.

SPEAKER_02 (03:21):
You'll get a chance.
We've been doing a little Mario Kart there at the office on the big screen.

SPEAKER_00 (03:25):
So sign me up.

SPEAKER_02 (03:26):
I haven't seen Eric play yet.
Eric, do you have time for that?
Or have you been doing any gaming, or were you a gamer?

SPEAKER_03 (03:31):
I still do a little gaming.
I've got a Monday night crew that I game with.
We're currently playing V Rising, is what we just started playing.
And yeah, I played a lot of games.
Probably going back a couple of decades ago, DAOC, Dark Age of Camelot, was probably the first MMO, massively multiplayer

(03:54):
online role-playing game, MMORPG, that I got into.

SPEAKER_02 (03:57):
How did gaming influence your cybersecurity posture or your thinking, or kind of your development?

SPEAKER_01 (04:04):
That's actually a good question.
I never really thought about that.
Obviously, gaming involves a lot of usage of computers.
You might remember there were certain tools you could use, for example, in Warcraft to remove the fog of war, which gives you map hacks and stuff like that.

(04:24):
So I remember playing with some of those technologies to give me some unfair advantages over my competitors.
And beyond that, gaming ultimately has a very strong competition aspect to it, right?
Most of the time you're playing against other human beings who

(04:48):
are smart, resourceful, and intelligent.
And that kind of cat-and-mouse aspect, I think, carries over to cybersecurity.
One thing that's very different between cybersecurity and other industries is that in cybersecurity, we are to some extent playing a game, or a war, with our

(05:11):
adversaries.
Not as intense, but you do get the same highs and lows as when you are beating your opponents in video games.
So I think a lot of this emotional and psychological reward also carries over from gaming to cybersecurity.

SPEAKER_03 (05:29):
You certainly do get those dopamine hits when there is something coming in through the logs, and it's like, wait, that doesn't look right.
And then, you know, you're following the trail, and there is something abnormal going on, and then, you know, it's kind of like an all-hands-on-deck effort.
And, yeah, it certainly does get the blood pumping.
I mean, in a good way, but not in a way that you

(05:53):
want to have happen, because that means that, you know, there's an incident, and it's never fun in the aftermath.

SPEAKER_00 (05:59):
I always think about an incident being like a game of Oregon Trail, right?
Like, that's the type of gaming that's my jam.
Right?
You have those ups and downs like you talked about, and it gets really stressful.
So I could totally see that connection between gaming and a cyber incident, having lived through one.
You really have those highs and lows, and what you thought was the reality

(06:20):
that was in front of you can change in an instant.
So I think that's a really powerful connection.

SPEAKER_02 (06:25):
Yeah, like Nick has dysentery.
Nick has dysentery.
That's why he's not on the podcast today.
He's out on the Oregon Trail, and, you know, a little dysentery will take you right out.

SPEAKER_03 (06:36):
Sorry, Eric, go ahead.
I was gonna say, even with the tabletop exercises, and Jen does quite a few of these, there's that element of, you know, intensity too during the tabletop exercise, which is, you know, a great way of kind of practicing the breach aspect of things without actually having gone through

(06:56):
one.
So I would imagine over the years we're gonna have that blending, and we already do, right, with some platforms where we can go in and do these simulation exercises.
So a neat thing like Mechanical Turk is you could send that out to thousands of different people and get that nuanced dialect from different areas of the country if you were

(07:17):
really trying to sound true to that area.
And I'm sure with AI, you could do the same thing.

SPEAKER_01 (07:24):
Thank you.
Yeah, one thing we have seen with cybersecurity, and you kind of get to experience a lot of that when you are playing games as well, is there's always this asymmetry between attackers and defenders, right?
Even in games, it's much easier to attack than defend, because attackers only need to be right once.

(07:46):
But as a defender, you need to worry about all the possibilities.
You know, if you look at Counter-Strike, right?
All the different ways, you know, you need to protect the bases and stuff like that.
There's this asymmetry.
And in cybersecurity, we have seen this for decades, where, you know, script kiddies can take down ginormous organizations

(08:08):
because the asymmetry in cybersecurity is even more significant than in the physical world.
Script kiddies, teenagers, can download hacking tools from the internet and, you know, spray and pray.
And they just need to get one hit to take down a large organization.
And what we are doing with AI, and one of the very

(08:32):
exciting aspects about what we are doing, is leveraging AI to fill this asymmetry gap.
Because now the defenders are no longer constrained by how many human engineers they could hire on the team.
And they really get to operate as if there is an army

(08:55):
of AI bots and foot soldiers fighting alongside the human generals and special forces.
I think that's one aspect about what we're doing that's very exciting, because this kind of asymmetry between attackers and defenders is one of the reasons, frankly, you

(09:16):
know, the cybersecurity industry has been struggling a lot.
Each of us has more than 10 years of continuous free credit monitoring at this point.
I think our social security numbers have probably leaked at least five, if not 10, times.
And there's a lot more we can do here.

(09:38):
And that's not by 10x-ing the cybersecurity budget within each company; that's by identifying ways to be 10x more efficient.
And AI can be a key enabler of that.

SPEAKER_02 (09:53):
Wondering how you've seen this alert fatigue show up, and anyone could take this.
And then, are attackers and threat actors using that alert fatigue to find vectors and overwhelm security teams, and then slip in the side door?
Yeah, happy to share that.

SPEAKER_01 (10:11):
Generally, our product sits in front of the incident.
So our product takes security alerts as input, and then, within a couple of minutes, it will generate a detailed investigation report recommending whether the alert is a true positive or false positive.

(10:33):
At that point, a human security engineer could take over and actually double-click on the alerts that our technology has deemed to be true positives.
Generally, right now, we're focused on automating the initial triage and investigation of the alerts, the typical tier-one work.
Frankly, AI is not good enough to participate in

(10:57):
tier-two and tier-three work at this moment.
It doesn't understand the nuances of a lot of the incident response, the complications of turning off certain hosts on the network, and what that means to the business or the organization.
So you can think of our technology as more like AI tier-one security analysts

(11:18):
that are looking at the noisy alerts, and the primary goal of these AI bots is to remove like 90% of the hay, so that it's much easier to find the needle in the haystack.

SPEAKER_03 (11:36):
So it used to be, you know, threat actors would send out these phishing emails, and, like, whoever clicked, you know, there'd be a C2 event, and then the threat actors could hone in on the machines that they had access to.
But I think now we're seeing a pivot to more targeted attacks against individuals.

(11:59):
And the way in which we can leverage AI, or AI can be leveraged, with publicly available information on people, right?
People put their whole lives on social media, and it's really easy to scrape that data and build a persona about that person and then really market to that person in a way

(12:23):
that Google and the other big companies have been doing for years as they've collected our data and built these personas on us to sell us advertising.
Well, or advertise to us to sell us products.
The threat actors can now do the same thing with publicly available information that could directly target us in ways that

(12:45):
it would be really hard for us to resist unless we're aware of these things that are happening.
And once that human is engaged, that human then is bypassing all of the potential controls that are in place on the technology side.
And that's where that social engineering piece comes in,

(13:07):
because now you're psychologically interacting with that human.
And it's really hard to put technical controls in place to prevent that.

SPEAKER_00 (13:18):
I really like what you said, Eric, really talking about that social engineering and the human element.
And it got me really thinking about this article that one of our colleagues shared with us about a newer model that was being tested, or a new version that was being tested, of an AI tool, and how easy it could be to really almost infect

(13:41):
that model to give you results that really could lead into some of that social engineering, some of that harm.
Thinking about people putting code into these AI tools and looking for those flaws, and then ultimately getting back something that might have something embedded within the code, or whatnot.

(14:02):
And this article was really interesting, because it talked about how, as they were building, they actually put in some information about a couple of the users.
And what happened was, as they were feeding this model information, the model came back and said, well, these individuals, sorry, I'll get to the point, but these individuals

(14:22):
that they fed the information about, they gave a prompt to the tool saying, hey, these people really want to get rid of you.
They don't like you as the brain inside here.
And so that AI tool started to respond in a way, giving rumors about those individuals within the tool, and just

(14:42):
thinking about, you know, AI can be great, but it'll be really interesting to see as we move closer to more and more human elements within AI.
And if we think about, you know, what the next thing could be within Dropzone.
And as we think about, you mentioned understanding those

(15:02):
subconscious behaviors and decisions that we make, when we get to that point where we move beyond some of those challenges, I'll be really curious to see what things look like then.
When things feel so incredibly human that we are trying to, like Eric said, not fall victim to it.
But it's so easy to trust, because it's so comfortable,

(15:24):
it's so familiar.
So it just really got me thinking between the social engineering and thinking about Dropzone making some of those decisions.
I'll be really curious to see how things change.
And I'd be really interested to see what you see for the future of these types of tools, like yours, that are pulling

(15:45):
together these alerts, making informed decisions.
What do you see in the future?
How will this evolve?

SPEAKER_01 (15:52):
Yeah, maybe I can answer the question two slightly different ways.
One is what AI agents for cybersecurity could evolve into, in my mind, over time.
And I know a couple of vendors have already started doing this, which is: security teams will be augmented not only by one AI agent, but by an army of AI agents with different

(16:16):
specializations and skill sets and focuses.
So there will be an AI security analyst, there will be an AI pen tester, an AI threat researcher, an AI threat hunter, an AI threat intelligence analyst, an AI vulnerability management specialist, etc.
That's where I see the world getting to in terms of cyber

(16:37):
defense.
Actually, not that different from the physical defense space, where the future of physical defense arguably is a lot of drones fighting a lot of drones as well.
So I definitely see some similarities here.
In terms of the attackers, obviously the attackers are always going to look for the weakest link.

(16:59):
And there are a lot of times where the human, you know, the brains behind the computer, is the weakest link.
I believe I've seen a recent study about some sociologists looking into using large language models to debate people on Reddit.
And I think what they have found is large language models are

(17:20):
actually very good at convincing people and winning debates in Reddit comments.
And I think there is a bigger societal problem where you can perform very large-scale psyops and stuff like that,

(17:41):
leveraging large language models.
And I know, Eric, you mentioned social engineering attacks.
I've definitely already met a couple of CISOs who have seen these kinds of deepfakes being used to trick, especially, finance teams into, you know, paying certain bills that didn't

(18:03):
actually exist.
And this is where one idea I've been chatting with some of my friends about comes in: right now, when we log in, there's multi-factor authentication.
I know in a couple of organizations, their executive teams have already started to have a monthly safe word

(18:24):
to validate each other.
So I could actually see, moving forward, at the beginning of any, you know, Zoom meeting or podcast, we need to go through a human multi-factor authentication to validate each other, to make sure we're not deepfakes.
That's great.

SPEAKER_02 (18:40):
Shout out to the Trully team.
We just talked with Valydia Edward, who is making an answer to Clule.
There's been a huge uptick in hiring fraud, and people using deepfakes to conduct job interviews.
So, yeah, we just did an episode on that; it was super interesting.

(19:01):
But Eric shared with me before the podcast today that he's reading one of my favorite books, which I read about a year or two ago, called Superintelligence by Nick Bostrom.
Are you familiar with the book?
I'm personally not.
Yeah, I recommend it.
It's a great book.
It gets into that futuristic talk about where AI could be heading in terms of the negative consequences.

(19:23):
You shared a little bit of your concerns with me.
You know, we can go super futuristic and dark very quickly.
It's a slippery slope, but I'm just kind of curious, from all of your perspectives, you know, what keeps you up at night about superintelligence and where things might be going?
I'm gonna start with Eric on this one.
Sure.

SPEAKER_03 (19:44):
Yeah.
So, you know, maybe I take a contrarian view, where I don't necessarily see AI as taking over and it being a dark thing, as AI in the next, you know,

(20:05):
couple of decades or even less will exceed the human level of intelligence, depending on how you quantify intelligence, but, you know, basically that ability to problem solve and reason, which is a great thing, right?
Because, you know, as a business owner, I want to hire people that are smarter than me.
I want to, you know, go out and get some people that have lots of experience and are brilliant.

(20:27):
That's great.
That doesn't necessarily mean that I'm threatened by them, that they're gonna come in and want to take over the company, because running a company is completely different than, you know, being brilliant in one discrete aspect.
So I think it's awesome if AI is able to come in and offer that superintelligence in discrete areas where we could

(20:49):
really use it.
And sure, that'll spill over into businesses, and poorer performers will be exited out.
And, you know, in the next 10, 15 years, I'm certain that the landscape of human employees across all walks of work will look different, right?
There'll be fewer humans likely doing repetitive work.

(21:12):
But, I mean, if you watch Mad Men in the 1960s, people are typing away on typewriters.
Now we have speech to text, so it's completely different.
But it doesn't mean that all work is going to go away.
So you're sleeping well, Eric.
You're sleeping well.
Sleeping like a baby.

SPEAKER_02 (21:28):
Okay, I'd love to hear Ed's take on this, and then Jen's too.
So, Ed, hit us.
What keeps you up at night about the future of AI?

SPEAKER_01 (21:35):
Yeah.
Maybe I have two opposing views.
I could argue both ways.
One of these: you guys might have heard of the AI 2027 project.
I think they painted a pretty gloomy picture of what the future will be like, right?
You know, ASIs between different poles of the world

(22:00):
fighting against each other, and we humans become, to some extent, collateral damage of a lot of that.
But on the other end, I'm a Trekkie, so I think multiplanetary is very exciting.
And this is where, the way I think of it, when you

(22:25):
look at AWS, when we have cloud service providers that drastically reduced the cost to stand up infrastructure and build SaaS and software, we saw an explosion of SaaS and software companies.
What gen AI is doing right now is making kind of

(22:45):
average human intelligence very cheap.
So if you look at the world, I think the overall intelligence in the world, human plus software, is going to 10x in the next maybe decade.
As part of that, I think that really gives us additional capacity to do a lot of other stuff.

(23:06):
And one of the ways to make the pie, or to make the pizza, bigger is to, you know, expand onto additional planets.
When you have, like, 10 different planets, then you have additional work and excitement and development to keep everybody working on interesting things.
Hell yeah, that's awesome.
I love that.

SPEAKER_00 (23:26):
Would you be going to one of those planets, like, now?
I'm really curious.
Like, would you be on board to travel to another planet?

SPEAKER_01 (23:31):
Maybe not myself, but I could volunteer my daughter for the Martian Academy of Science.
I think that would be the future MIT: the Martian Academy of Science.

SPEAKER_00 (23:42):
I love it.
We should make shirts.

SPEAKER_02 (23:44):
How about you, Jen?
Where is your head going with this?

SPEAKER_00 (23:48):
So, by birth, I'm a special ed teacher.
So innovative technology and evolving technology has always been part of the best part of, I think, human evolvement.
Like, my students with significant disabilities, or even not so significant, really rely on that assistive technology to

(24:08):
aid themselves, to join in with their peers, and partake in the classroom experience, and learn just like everyone else.
And so for me, AI is super cool, because I think about students that I've taught in the past, where, you know, a student that I had that had no speech ability can communicate

(24:28):
thoughtfully and quickly.
A student that has no capacity to use their arms or legs can now create art projects in ways that they couldn't before, by writing really good prompts to convey, you know, artistic design.
So, like, I always lean into that, thinking about how AI can change and how it can make us more effective.

(24:53):
It can really push our learning.
I know I'm always intrigued by some of the feedback that AI gives me about my writing.
And I'm like, dang, you're kind of right.
That was a little redundant.
It makes me a better learner, I think, as well.
The thing that keeps me up at night is trying to figure out, and I always go back to that human element,

(25:15):
like, how are we going to be able to find that new path forward around trust and understanding information, and what is real versus what is not real?
And thinking about media and just general information that we depend on as true, what that looks like moving forward, and

(25:36):
even, like, thinking about war, right?
We're already in this space of digital warfare.
Like, will there be a space in the future where there aren't any, you know, injuries or guns, or, I don't know.
That's what really keeps me up at night: thinking about what some of those really awful things might look like.
You know, as a school person, I see it, and everyone gets sick of me

(25:59):
talking about schools, but I love it.
We see it already, right?
That digital warfare, that cyberbullying, is every day, all day.
I just watched a crazy documentary on Netflix about that yesterday.
But that's really what I think about: all the implications on humans, and what that does as we talk to each other, interact with each other, expand relationships,

(26:22):
those kinds of things.
So that's what keeps me up at night.
And cyber attacks, obviously.

SPEAKER_02 (26:29):
I'm reading a book right now called Antifragile, and it's about how these black swan events, or these things in history, manifest in ways that we don't really predict, because the human brain is so used to categorizing things and finding a narrative that kind of fits a linear progression, right?
I think I'm with you, Jen, and all of you, to some

(26:53):
extent, where I think it will be interesting to see how it actually shifts human consciousness, or the paradigm of our work life or our creative life.
I've said this before on the podcast, but we've already seen how AI has infiltrated the creative space in a way that we didn't really predict; music and poetry, or maybe

(27:14):
writing or scripts, turned out to be kind of on the front lines of that AI takeover.
But that's, you know, kind of been gobbled up pretty quickly.
So one of my concerns is the music thing, especially when you have boardrooms and large multinational corporations looking at the bottom line and saying, hey, do we need to hire

(27:35):
this musician to create this composition for this movie, or do we need to hire this director to make this film, when we can just have AI do it?
And I think humans are resilient, so it'll be really interesting to see how we kind of navigate that slalom course.
And, you know, we're pretty good at coming out ahead, evolutionarily speaking.

(27:56):
So it will be interesting to see.

SPEAKER_03 (27:58):
I wanted to cross over there, though, Josh, right?
Like, one of the projects that you're working on outside of IT has been remastering, so to speak, these show themes, right?
So, shows that were made years ago; the creative

(28:19):
license, for lack of a better term, on the music expires.
So now that music has to be either relicensed or rewritten.
Which, you know, if you step back and think about it, it's like, wow, that's interesting that, you know, there's an expiration to the licensing of this.
So now we've got to go back and invest work and time to just

(28:41):
recreate the theme music to this particular show because of some sort of weird governance that was in place that has an expiration date.
So there are other things out there that, you know, we're probably going to bump into in our lifetimes, where, you know, governance has imposed these certain things that don't always

(29:02):
make sense, and we've got to find creative solutions around them.

SPEAKER_02 (29:07):
And that's the black swan element, right?
And the concern there is that they'll just do it the cheapest way possible.
And if AI could take all those songs and spit out new ones that are sound-alikes without violating copyright law.
But then, to your point, I just saw the Sphere in Las Vegas did a remake of The Wizard of Oz, and then a totally

(29:29):
immersive scene with the tornado, where they had, like, you know, a 4D experience, where they're actually, like, blowing, you know, pieces of paper around the room, and you have the wind blowing, and it looked really real, balls dropping from the ceiling.

SPEAKER_00 (29:42):
Yep.

SPEAKER_02 (29:42):
Yeah.
So, to Edward's point as well, it will be interesting to see what kind of new developments are on the horizon, especially in the tech world, as we respond to those things.
But that's one of many outcomes, right?
And there's a spectrum of outcomes.
The one I wanted to wrap up with today was something that Edward shared with me, which I've also spent some time

(30:05):
thinking about.
I'm curious if you, Jen and Eric, have too, which is: what if AI just plateaus and just gets stuck in the mud?
You know, we've seen this with other technologies, where the sky's the limit and it's promising all these things, whether it's the dot-com bubble or what have you, and then it just totally peters out and kind of hits a plateau.
Edward, what do you think about that?

(30:25):
What's the next evolution of AI that really needs to push us into that realm of superintelligence?

SPEAKER_01 (30:30):
Yeah, I think the jury is still out.
We are definitely seeing models plateauing.
The development we saw two or three years ago was much more rapid compared to what we are getting right now.
But, like, we're getting to the iPhone maybe 12 or 13 category,

(30:52):
you know, that kind of velocity of innovation from the model providers.
I'm torn whether it's a good thing or a bad thing.
Obviously, there might be a lot of benefits to the AI getting smart enough to do a lot of the boring tasks, but not smart enough to cause a lot of the societal problems that a lot of

(31:14):
people are concerned about.
But at the same time, obviously, having artificial superintelligence could unlock new scientific discoveries, new cures for diseases, you know, help us to maybe design warp engines and stuff like that.
So, yeah, honestly, I'm torn.
I could argue both ways.
There are benefits to it actually plateauing out, because

(31:39):
our society might not be ready for it.
But there are also transformative effects if we can have ASI, again, helping us to build warp engines, helping us cure all the diseases, and all of that.

SPEAKER_02 (31:52):
Maybe it could help us build better models to detect some of these asteroids that are floating our way.
We have 3I/ATLAS coming our way, I think, at the end of this year, and we have, like, seven major asteroids kind of in our solar system, some of them interstellar.
So I had to end it with a little space talk, Eric.
Edward got me started.

(32:13):
Get the tinfoil hat out.

SPEAKER_03 (32:14):
You know, I think, Josh, we're limited right now in a binary way.
So computers, it's all binary, right?
When you go from the higher-level languages like Java, and then you go lower and lower into assembler and then machine code, right?

(32:35):
It's just all ones and zeros, and that's back to the punch card of, you know, 75 years ago, or longer.
But I think where we're going now, with quantum coming around the corner, like, there's probably less than one percent of humans on the planet that understand quantum.

(32:56):
And when that really comes to fruition and quantum computers are easier to come by, I think it'll largely be operating outside of the limits of human intelligence: being able to understand when something is not in a state of a one or a zero, but it's somewhere in between, and it

(33:19):
only becomes a state when you observe it.
The amount of processing and compute power that that will bring, to be able for machines to interact with each other outside of any sort of human input; machines will be able to develop their own languages of

(33:41):
programming that we can't even understand.
And that's where, you know, I think we'll truly have that breakout, and we'll be able to go beyond what we're able to control today in our level of programming.

SPEAKER_02 (33:56):
The singularity is nigh.
All right.
Thanks so much for joining us today.
Our guest today was Edward Wu with Dropzone AI.
We've also had Jen Lotzi and Eric Brown from IT Audit Labs.
I'm Joshua Schmidt, your co-host and producer.
Thanks so much for tuning in.
We have episodes out every other Monday, and please check out our new podcast with Jen Lotzi, Sip Cyber.

(34:17):
It's dropping, probably already dropped, by the time this is out.
So give that a listen as well.
And check out our website at www.itauditlabs.com.

SPEAKER_03 (34:26):
Thanks again.

SPEAKER_02 (34:27):
See you in the next one.

SPEAKER_03 (34:29):
You have been listening to The Audit, presented by IT Audit Labs.
We are experts at assessing risk and compliance while providing administrative and technical controls to improve our clients' data security.
Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to

(34:52):
the size of your organization.
Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill.
You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials, and by subscribing to this podcast on Apple, Spotify, or wherever you source your

(35:14):
security content.