
December 26, 2024 44 mins


The episode reflects on the intertwined impacts of AI, privacy, and cybersecurity throughout 2024. We explore trends around ransomware, consumer trust in data privacy, and what to expect in 2025 as technologies evolve and regulations advance.

• Discussion of AI's ambiguous role in cybersecurity 
• Rise of ransomware incidents and implications for critical sectors 
• Notable trends in consumer preferences regarding data privacy 
• Legislative developments in data protection regulations 
• Predictions for 2025 focused on quantum computing and cybersecurity strategies



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
All righty, then. Ladies and gentlemen, welcome back to another episode of Privacy, Please. I'm Cameron Ivey, alongside my friend and co-host and host, Mr. Cool, Mr. Gabe Gumbs. I got upgraded. I got upgraded. I got upgraded.

Speaker 2 (00:24):
And the crowd does not actually go wild. It's dead silent over here. It's dead freaking silent over there. There's no crowd. If there were, they'd be throwing tomatoes. They'd be like, boo, boo, boo, this guy, you suck, we want the frog. Points to anyone who gets that reference. It's a Muppets reference.

Speaker 1 (00:45):
Speaking of the Muppets, did you see the... uh, it's almost kind of sad that that contract ended with the Muppets, so they're kind of up in the air, I think. Um, it was either with NBC or whoever they were with for the longest time. And yeah, but good news for Muppets fans: they are removing

(01:09):
the Aerosmith Rock 'n' Roller Coaster and putting the Muppets in its place, because Aerosmith isn't relevant. But apparently their contract at Disney still includes the Muppets. Fascinating, fascinating.

Speaker 2 (01:22):
Bonus news for Muppets fans: the election is over, so all of your favorite Muppets have either won or lost. But either way.

Speaker 1 (01:34):
the only winners are the, the two old guys, Um, I
forget their names.

Speaker 2 (01:38):
Muppet one and Muppet two.

Speaker 1 (01:43):
No, don't they have names? Or... no, I don't know. The two, the two that are always in the balcony talking crap. Oh yeah, yeah, yeah.

Speaker 2 (01:51):
Points to Cam, those are the ones. Those are the ones that scream, you suck, we want the frog. I forget their names, but yes, the two old dudes in the balcony.

Speaker 1 (02:01):
You suck, we want... I wish... having them follow along with your life would be pretty fun. I'd be okay with that. I'd be down with that, yeah, for sure, those two grumpy Muppets following me.

Speaker 2 (02:14):
It almost feels like your life is like that when you work in cybersecurity and privacy anyway. Like those two Muppets are just following you around all year round. Yep, just telling you how much you suck and how much they want the frog instead.

Speaker 1 (02:26):
Gabe, that's your internal voice, telling you that you're not good enough yet.

Speaker 2 (02:34):
So you keep going. Saying the quiet part out loud, yeah.

Speaker 1 (02:38):
Speaking of... I mean, this is kind of off topic, but I don't know. I think, um, there's been such a weird thing about bashing men this year, if I'm not mistaken.

Speaker 2 (02:53):
And you know what? That needs to come to an end. It felt like that, yeah, they were being attacked. Yeah, it definitely felt like that. I tell you, though, sometimes I'm skeptical, like, who's driving these narratives, right? True. Because although I agree with it, I read it and I'm like, but who told you? Where'd you get this?

(03:14):
Because none of us told you that. I mean, we were all faking it, but we didn't say this is true. I know, the men. We didn't say. We didn't say.

Speaker 1 (03:21):
But the weird thing is, I saw a lot of music too, a lot of independent artists making songs about men's health and mental health. Yeah, that's been a decent trend.

Speaker 2 (03:33):
I've seen similar. I think that's a good trend. It is, yeah. Technically on topic, I mean, it's our yearly wrap-up show and we were going to talk trends of '24. So all of these are very on topic.

Speaker 1 (03:47):
That's true, it's very true. Well, men's health is important, mental health as well, and for women and all human beings, health, indeed, just all around.

Speaker 2 (03:56):
Yeah, finding that balance is key, for sure. Your data health, your data health is important. The health of your data is very important.

Speaker 1 (04:04):
Don't trust every doctor, you know what I mean? No, no, some of those doctors aren't even healthy themselves, if you know what I'm saying.

Speaker 2 (04:10):
Isn't that the worst? You see a doctor smoking a cigarette and you're like, ugh.

Speaker 1 (04:14):
It's like a trainer that tries to tell you, you know, how to eat and how to train, when they're not even fit themselves. Which is silly. Which is silly.

Speaker 2 (04:27):
No, never. Um, never trust a skinny chef is what Daddy used to say. Papa always said, never trust a skinny chef.

Speaker 1 (04:33):
Wise words. Yeah, the man was filled with wisdom. Well, yeah, I mean, in reflection on 2024 as a whole... man, it's been a year. There's been a lot going on.

Speaker 2 (04:47):
You know what the number one thing was, and I think you'll agree with this. It feels like it's died down a little bit, but maybe not. But flipping AI. AI dominated '24 in every way possible. GPT was everywhere, like COVID. GPT was just like the infection that spread.

(05:08):
It was the COVID of 2024.

Speaker 1 (05:11):
It was the COVID of 2024.

Speaker 2 (05:12):
GPT was the COVID of 2024. That's the title of this episode: GPT was the COVID of 2024. And in a lot of ways, it did a lot of harm. It did a lot of harm the way COVID did harm. We saw an increase in cybersecurity attacks that were very much driven by AI, which is to say, the bad guys learned how to use AI to make themselves better.

(05:34):
Everything from generative AI to write better phishing emails, to leveraging AI to write better malware code. We see it everywhere in the bad-guy landscape. Oh yeah, yeah.

Speaker 1 (05:50):
I knew this was obviously going to be a topic that we're going to talk about, but let me just say one thing about ChatGPT.

Speaker 2 (05:56):
You just gave away to the listeners that we don't
prep together ahead of time.
Who preps?
Who preps?
Just you guys.

Speaker 1 (06:04):
Boring. Yeah, what do you want us to do, read off some sheets or something? Exactly. That's not real. That's not what we're here for. What's real is this: ChatGPT. I do not like what they have changed with it recently, where it's trying to do too much. Oh yeah. Have you noticed that? Where you try to give it a simple task, to

(06:25):
help with organizing something, and it just does way too much, and you're like, no, what is this? This isn't even what I asked for.

Speaker 2 (06:34):
Whenever I do use generative ai, I've gotten very
strict about like like myprompts have all kinds of
details like and do not do itwhile standing on one foot
facing east.
Wait, that's not how you work ona day on a standing desk right,
like when you ask it a task,you're right, though like it's

(06:55):
you've got to be, like, so much more specific, and I think that's because it's actually gotten better. Like, when it was dumber, it couldn't do more, right? So it was like, ah, I've done all I can and I've stopped. But now that it's getting better and better, it has this inverse property of, like, maybe doing more than you really need, or required, or asked.

Speaker 1 (07:13):
Well, that's the only reason why I was saying that. I agree with you, I think it's gotten better, but I think it's also gotten... it's almost, you tell it something, and then you have to reiterate the next round. After that come the results. You have to say, no, I want something a little more simplistic. Like, this is too much. So I'm having to go in and tell it, like,

(07:35):
no, make this dumber for me, please. This sounds so robotic. I need a little bit more human, even though you're not a human.

Speaker 2 (07:43):
It's also gotten ridiculously good at coding. I've used it to putz around with some very small projects of my own. How do you trust it? How do you check it? So, that's a great question. The short answer, at least for myself, is: in small parts that I can actually validate, as opposed to trying to do
(08:03):
big monolithic things.
Um, but I have a friend who's, I swear to you, he's generating probably 70% of his code right now with just generative AI. To be fair, he's doing a lot of prototyping work, which is probably what tools like generative AI are best at, right? Like, I don't think you want to use it to build real production
(08:24):
software. But yeah, you know, he's prototyping so much faster and cranking out tons of code with generative AI. It's impressive to watch, it really is.

Speaker 1 (08:37):
So is that going to... I mean, is that a threat? Is it going to be taking jobs from coders?

Speaker 2 (08:45):
No, because actual coders still have to take prototypes and make them into things that work and scale. So, no. If anything, I'm of the opinion that it might increase the number of coders. If we gain the ability to rapidly prototype more solutions, we may find ourselves needing more people to build those

(09:06):
solutions. So I actually don't think it will take jobs away, from that perspective. And because it's gotten so good at, for example, the things I just described, we're also going to see the bad guys get a lot better at generating malware, and all that's going to do is create more demand on the defender side of the fence. AI is definitely creating

(09:28):
an arms race, and the people in the middle sitting back selling CPUs to power this, well, they had a really good year. If you look at 2024 stocks: if you think the AI companies did well, look at the chip manufacturers. Oh yeah. Every ChatGPT article...

(09:49):
they were just like that meme with the black guy standing behind a tree rubbing his hands like this. That was all the chip guys, just like, yeah. Oh man. Well, you know how the old adage goes.

Speaker 1 (10:04):
I don't know what it is, but good versus evil. There's always got to be an evil if there's a good, right?

Speaker 2 (10:12):
A bit of a nihilist view, but I get it. Yeah, I don't disagree. I don't disagree at all. Existence is... it's good, yeah.

Speaker 1 (10:21):
We have to have that.

Speaker 2 (10:22):
Yeah, that balance has to exist somewhere somehow.

Speaker 1 (10:26):
Yeah, because how else are we going to get challenged, or grow, or improve things?

Speaker 2 (10:34):
I'd be okay if ransomware never existed any longer, I mean, if it disappeared tomorrow, although I'm heavily invested in solving the problem. And I think, as we get to talking about trends of '25: I think we will see the problem grow, but I think we're going to see more
(10:59):
solution providers actually step up to the table and try to tackle ransomware in a way that isn't just a cash grab for them. I'm not saying I think the numbers are going to go down in '25. I actually think the numbers may start going down by 2030. But I think the next several years it'll grow. But I also think we'll see a lot of people really fighting the good fight in a way that isn't, like I said, just a cash grab. Yeah.

Speaker 1 (11:20):
We'll stay on this topic. So, when it comes to ransomware in 2024 and the whole thing with AI, how has that changed the game? In terms of, like, did that make ransomware rise this past year? And, I mean, let's talk about it from a vulnerability aspect and
(11:41):
also from the tool side of things, if you have that.

Speaker 2 (11:45):
That's a good place to press on. So, we saw the numbers rise, for sure. I don't know if we can attribute the numbers just to AI. I think we could probably attribute them to the fact that it continues to be difficult for organizations to protect themselves against it, and it continues to also be difficult for organizations to recover after they've been hit by it,
(12:15):
and so that has definitely driven a larger rise in the tactics being used. Ransomware attackers realized they don't just have to steal your data, they just have to keep you from your data. And if they can successfully do that... there are a lot more ways for them to keep you from your data than there are for them to just get your data. And so that gave rise to it.

(12:37):
What AI did do for ransom attacks in particular: it enhanced a lot of the tools they were using. It really made it a lot easier for them to, again, do things like generate phishing campaigns that were grammatically accurate and able to bypass security email gateways and things of that nature.
We saw entire new malware kits show up on the black market that were all AI-powered, right? Like, we saw a lot of that. So what we saw was AI contributing to the productivity of attackers. We definitely saw that, which is unfortunate.

(13:25):
I don't think we saw as many productivity gains on the defender side of the fence. Although a number of cybersecurity solutions were introduced to the market that all touted AI capabilities, I don't think I saw any that tipped the scales, where it's like, oh wow, okay, thanks to AI in this security product, we've now gotten much better at blank problem.

(13:46):
I don't know anyone that can fill in that blank.

Speaker 1 (13:53):
So, in your mind and your thoughts going into 2025, I'm going to just do this now, for this last episode of the year, if it is: we're going to hone in on balance. How do you balance the new use of AI inside cybersecurity?

Speaker 2 (14:15):
How do we find that balance going into 2025, in your opinion? I think we point those new tools at the problem, as opposed to trying to simply shove and embed them into the solutions. That's not meant to be an overly general statement, so let me bring it down a little bit more. I think we have to start using AI smarter and better.

(14:37):
We have to start using it to help solve the actual problems we're trying to solve, for example, differentiating someone posing as someone else versus that real person, right? As opposed to simply shoving AI into the box so that we can sell more of the thing in the box and go, hey guys, look, version five, now with new AI, and the price has gone up.
(14:59):
There's been a lot of cash grab. There just has been. A lot of quiche. Quiche, a lot of quiche, a lot of people grabbing at the quiche. I like the... cache. Yeah, okay, I like my cash and cache. That's how I take it.

Speaker 1 (15:14):
So, here are just some facts, just to kind of look over, that I pulled up: ransomware attacks increased by 150% this year. That's a wild number. 150%, mostly for critical infrastructure and healthcare. Yeah, which makes sense.

Speaker 2 (15:33):
That makes a lot of sense. I think we'll see more of that. We saw a lot of healthcare attacks. I think we'll see a whole lot more of that.

Speaker 1 (15:39):
I don't know how true this is, but maybe you can fact-check it. It says that AI-driven cybersecurity solutions can reduce threat detection time by up to 90%. I can see the efficiency of using AI for that, but I don't know how true that is. That number seems a little high.

Speaker 2 (15:56):
I think that number should be qualified. It can definitely increase threat detection for a lot of tactics. There are specific tactics that become easier to find.

Speaker 1 (16:10):
Can you give an example of what that actually means?

Speaker 2 (16:14):
Yeah, it's easier, leveraging AI, to detect some network attacks. Okay. Especially some of the reconnaissance-level attacks, right? Like the ones that are simply prodding and poking and looking for things, because the

(16:37):
probabilistic nature of them is relatively defined, so you can... like, AI is good for that. AI getting better at, say, detecting phishing attacks? I still don't think we've seen that yet, and there's a little bit of... I don't know how that's going to work, either.
Right. Yeah, you feed it more phishing emails to train it. But those same detection engines are being used by the
(17:03):
bad guys as they generate new phishing emails, to see if they can bypass them. Right. I think there are some places where trying to leverage AI, in my opinion, is just not going to yield the results we actually want. Using AI in the privacy space and/or the security space to identify sensitive information, for example? Yeah, I think we may

(17:27):
continue to see those kinds of things getting much better.
Yeah.

Speaker 1 (17:31):
Anything else on the security side to reflect on? Maybe things around zero trust?

Speaker 2 (17:38):
Zero trust saw a healthy amount of adoption in 2024. I don't know if 2024 was the year it went from just marketing buzzwords to real technologies that implemented zero trust, I'll use the phrasing, by the letter of the law, so to speak. You know, NIST literally does have a series of bullet points
(18:00):
that highlight, you know, what zero-trust systems should look like and entail, and how they should behave. And yeah, I feel like we've gotten better across the board at adopting zero trust, even if it's just the very first steps of: okay, as an organization, we're looking to move in that direction. Definitely a great year for it.

(18:22):
Versus the years prior, when it was still just a lot of talking and a lot of education. I feel like we're tipping past that. I'll agree with that.
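To make that "by the letter of the law" point concrete: the NIST framing is that every request gets evaluated on its own merits, with nothing trusted by default and network location never part of the decision. A minimal sketch of that posture, with hypothetical field names (this is an illustration, not an actual NIST API):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """One request for one resource; hypothetical fields for illustration."""
    user_authenticated: bool       # identity verified for this request
    device_healthy: bool           # device posture checked for this request
    resource: str                  # what is being asked for
    allowed_resources: frozenset   # least-privilege grant for this identity

def authorize(req: AccessRequest) -> bool:
    """Zero-trust style check: no request is trusted by default, and
    network location never appears anywhere in the decision."""
    return (req.user_authenticated
            and req.device_healthy
            and req.resource in req.allowed_resources)
```

The point of the sketch is what is absent: there is no "inside the corporate network, so allow it" branch, which is the shift the hosts describe organizations starting to make.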

Speaker 1 (18:32):
Nice. Let's move into privacy developments this last year. Talk a little bit about, let's see, the role of privacy-enhancing technologies, otherwise known as PETs,
(18:54):
around modern data analysis. Yeah, um, I don't know, like, differential privacy and homomorphic encryption were really big topics, at least that I remember from this year. Yeah, yeah, um, a lot of tech, a lot of major tech companies

(19:19):
were adopting.
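For listeners curious what the differential privacy mentioned above actually looks like: the core trick is adding calibrated random noise to an aggregate answer, so the result is useful in bulk but no single person's presence can be inferred from it. A minimal sketch of the basic Laplace mechanism (illustrative only, with made-up numbers, not something from the episode):

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, epsilon: float) -> float:
    """Differentially private count. A counting query has sensitivity 1
    (adding or removing one person changes the count by at most 1), so
    the noise scale is 1/epsilon: smaller epsilon, more privacy, more noise."""
    return len(records) + laplace_noise(1.0 / epsilon)
```

Homomorphic encryption, the other PET named above, takes a different route to the same goal: computing on data while it stays encrypted, rather than noising the output.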

Speaker 2 (19:19):
Um, I don't know, what are your thoughts on that? I think I saw three trends that really stood out in 2024, and one of them, I think, is a very important one which may have helped the others: a significant expansion in the number of chief privacy officer roles. Those were either people that were taking on dual roles, or that role itself was created as an individual role, but
(19:42):
ultimately, we saw someone in a leadership position become responsible for data privacy, and that, I think, was huge. We saw that across the board in a lot of areas. We saw a hell of a lot of scrutiny of AI's impact on privacy. Speaking of AI, there were a lot, a lot of concerns

(20:04):
around the impact that AI does have on privacy, across the board. It had been so rapidly adopted, without necessarily balancing the needs that come along with privacy, that we probably don't even know the full extent to which some of those concerns are valid and others are less worrying.

(20:27):
We saw that. And then, was it last week's episode? I think you covered it, maybe it was the week prior to that: we saw a significant rise in legislative developments.

Speaker 1 (20:37):
Yes.

Speaker 2 (20:37):
Just tons and tons of new legislation, not just in the US but across the entire world. But hell, in the US specifically, since we don't have a national law that governs privacy (data privacy, that is), we saw so many at the state level.

Speaker 1 (20:56):
State level, yeah. We've got eight coming in 2025 that are already going into effect, most of them in January, some in July, a little bit later. GDPR, this is a fact: GDPR inspired over 100 countries to implement or enhance their data protection laws, which is crazy powerful.

Speaker 2 (21:16):
That's ridiculous, yeah.

Speaker 1 (21:18):
A survey found that 70% of consumers are more likely to trust companies that prioritize data privacy. I know that was huge this year in consumer trust. That was a big thing, I know, which will continue. Like, how do companies and tech companies continue to innovate with consumer privacy as a priority?

(21:39):
Yeah, keeping it in mind. I think, going into 2025, that's still going to be one of the number one things that companies are going to be thinking about when innovating and, you know... it's cool to see, because you can see that's the growth with consumers in general: oh, this company actually respects my privacy.

(22:01):
Now, I'm not saying that Apple doesn't, but they do a good job with this, to use an example. When they do their privacy commercials, that's an easy way for someone to see it and go, oh, okay, these guys are respecting our privacy.

(22:21):
I like that. I don't know if that's just a ploy or whatever, but it's smart for companies to find innovative ways to show that they care about their customers and their actual privacy and their rights. Yeah. But it's also going that way. Like, let's move on to regulation, like you were just
(22:43):
talking about, a lot of regulatory stuff happening. When it comes to, let's see, AI ethics boards influencing the development and deployment of AI technologies.
I mean, just going back, I remember, I forget the name, but there was one guy in office that used AI to write one of their new bills, and it got passed.

Speaker 2 (23:10):
Do you remember that?

Speaker 1 (23:10):
I forget the guy's name. It was kind of a story. I do remember it; I do not remember the name either. But it was funny to me, because he was like, work smarter, not harder. But it's also kind of like, so you had AI write an actual bill, and it already passed? But I think it was probably taken out of context. Like, I'm sure, I'm sure that they...

(23:31):
I think it was more so that it was structured. Um, but anyway, it's just things like that. There were over 50 countries that established AI ethics boards to guide technology use. That's interesting.
Yeah. Data localization laws are in place in over 30
(23:52):
countries, which is pretty huge in terms of affecting global data flow. And this is interesting: the fragmented internet could cost the global economy up to $1 trillion annually.

Speaker 2 (24:09):
I don't know what that one means.
Yeah, I don't either.

Speaker 1 (24:11):
I'm going to be honest, yeah, I'm not sure what that means. If any of our listeners know what that means... These are just some facts that I've pulled.

Speaker 2 (24:20):
I'm looking forward to a more fragmented and less centralized internet. I'm hoping to see things like Mastodon take off more, and less consolidation of social platforms; there's a small number of social platforms owned by a small number of entities.
It's interesting, too, right? Like, most people don't think about
(24:44):
the decentralized nature of email and the fact that I can email you and we don't have to use the same provider, right? Like, we don't all have to. Right, right. But if you and I want to communicate via, I'll just pick one, Twitter, then we all have to be on Twitter together. Right? Like, it's not decentralized. I am hoping to
(25:07):
see a rise in decentralized platforms. We definitely saw some: with some of the fallout we saw with, like, Reddit this year, there were some decentralized platforms that stood up. Well, they didn't stand up, they were already there, but they started taking on more popularity. That was another significant trend in 2024: a lot of decentralized technology platforms.
.

Speaker 1 (25:33):
Yeah, that is. That was one of the... I can't believe we didn't think about that. Well, you obviously just did. But that should be interesting going into '25, because, I mean, talking about proprietary, centralized platforms like Threads, Twitter. Yep. Talk about an annoying thing: if you use
Yep, talk about an annoyingthing when you, if you use

(25:56):
instagram and threads pops upall the time.
It's just.
I don't know if you've everseen what's on threads.

Speaker 2 (26:02):
I have not. I do think it's worse than Twitter. Yeah. I'm on none of those. I don't like it.

Speaker 1 (26:09):
I don't like the way that all of that's going, with how people are using social media. Things like Threads, people are just posting absurdity, yeah, to get engagement and reactions, and it's so troubling and distracting that, I don't know...

(26:29):
I mean, I'm glad that Florida just passed a bill, I think it's either 14- or 16-year-olds aren't allowed to have social media accounts without a parent, or something like that, which is great.

Speaker 2 (26:41):
I think it should be a little.
I think it should be 18.

Speaker 1 (26:45):
I was going to say.

Speaker 2 (26:46):
I don't know how one enforces that, but, well... I don't know. I mean, look, if 16-year-olds can get their hands on cigarettes, booze, and pornography, I think they'll get their hands on the internet. All right, it's just... it's good. It's good legislation for legislation's sake.

Speaker 1 (27:00):
I'm just not really sure where that goes. I know, I'm hoping... you're right, though. But cigarettes have always been a problem. Now it's vape pens. Yeah, there it is.

Speaker 2 (27:10):
That's what the kids are on now. Yeah, so see, they'll get their hands on those.

Speaker 1 (27:13):
Those things are even more addicting, I think. Yeah, probably. They're more dangerous, because you can do them anywhere and you're just continuously doing them, all the time.

Speaker 2 (27:22):
That's true.

Speaker 1 (27:23):
The effect of what they're going to be when you get older... no, I mean, I don't want to get into that yet, thinking about my son when he gets a little older. No, forget about it. Slap that thing right out of his hands. There it is.

Speaker 2 (27:36):
There it is. Bluesky. We're going to talk specifics too, right? We saw a lot of adoption of Bluesky. A lot of people fled Twitter and headed over to Bluesky, which is a decentralized alternative to Twitter. Did you ever look into it yourself? I mean, I know of it. I've used it.

(27:57):
I am not on social like that. So, beyond my technology curiosity, the short answer is no.

Speaker 1 (28:05):
It looks just like Twitter.

Speaker 2 (28:07):
Yeah.

Speaker 1 (28:08):
I've never used it.
I've never even heard of this.
Where am I?
Am I living under a rock, Gabe?

Speaker 2 (28:13):
You know, that's the thing. I think a lot of people haven't. Really, the decentralized technology platforms are still mostly the domain of techies. I think Bluesky might be the first one to kind of make its way from purely tech user

(28:33):
adoption to regular user adoption. You know, but while we're throwing some out there, there's also Mastodon, right? So, Mastodon, yeah, we see a lot more adoption of that, and, you know, I would highly suggest folks go check out these decentralized platforms. There are versions even of YouTube, right? Like, those are worth checking out.

Speaker 1 (28:56):
Yeah, actually, you're spot on here, Gabe. It says that decentralized identity solutions are expected to grow by 40%.

Speaker 2 (29:03):
Hey, there you go. And 2025... look, the soothsayer is not supposed to be proven right until about 12 months from now, but I think, I mean, anyone who's tuned into the salty soothsayer before knows that he likes to predict things that almost feel like duh moments, and that one

(29:25):
feels like a duh, right? Like, we'll definitely see more of that. We'll definitely see more folks...

Speaker 1 (29:29):
Privacy, yeah. And I agree. Privacy by design is apparently becoming a legal requirement in several jurisdictions. Nice. 85% of consumers are willing to pay more for products that guarantee data privacy. We talked about that a little bit.

Speaker 2 (29:47):
I think those people should not do that. I understand why they want to do that and why they're willing to do it. I think they should vote with their wallets in the opposite way, and simply refuse to spend money on products that don't preserve their privacy, not spend more on products that will. Wait, wait. Say that again.
Say that again.

(30:07):
Simply do not purchase productsthat don't respect their
privacy versus spending more forproducts that do respect their
privacy.
Yeah, like just don't spendmoney on the ones that don't
Right.
That's the way to drive theeconomic.

Speaker 1 (30:20):
That's a good... yeah, that was almost a little confusing for me.

Speaker 2 (30:24):
I was like, wait, so you don't want people going for that? Well, that's the thing. Like, I don't want to incentivize that you can only get privacy if you can afford it. Yeah, which isn't fair. That shouldn't be the way. That should not be the way.

Speaker 1 (30:44):
Which, by the way, this is way out of left field: malls and things like that need to be turned into homeless shelters, and places for, like, schools or things for kids that don't have them. They should turn those places into that, because, let's be honest, malls suck. It seems like

(31:05):
We don't need them anymore.

Speaker 2 (31:06):
They seem like they've been dying. So, I've seen a couple do that. I have, too. Yeah, I've seen a few. I just wish more would do it.

Speaker 1 (31:14):
Yeah, use that so we don't have, you know, people that need help out on the streets. You know, like, put them somewhere and try to, like, help. I know that there's people out there trying to do that, but that's not what the government wants, let's be honest. And if you're listening, calling you out, there it is: they want you to be addicted to prescription medication or
They want you to be addicted toprescription medication or

(31:36):
drugs because they want tocontrol.
Anyways, we're getting toopolitical.

Speaker 2 (31:40):
That's it. Just don't... don't @ us. Don't @ us, we're just talking. Yeah, it's okay. We don't get paid. Yeah, you don't have to listen.

Speaker 1 (31:47):
I'm sorry, I'm not sorry. Anyway, um, all right. So, security... well, actually, yeah, no, we touched on that. So let's talk a little bit about 2025 predictions. What have you got?

Speaker 2 (32:03):
So the soothsayer does again, known for more of those, uh, duh moments, but a big duh: look, we're going to see a lot more AI in '25. Yeah, if you're sick of AI, now's a good time to just turn off your computers, because we're going to see a lot more of it in '25. Speaking of...

Speaker 1 (32:23):
Have you seen the video with the robot, where they're teaching it how to run downhill?

Speaker 2 (32:28):
Oh god, are these the Boston Dynamics robots? I'm too afraid to watch those videos, Gabe. I'm telling you, pure nightmare fuel. It's pure...

Speaker 1 (32:37):
Can you imagine? Can you imagine... I don't know, you just go to shoot a piece of trash, you miss the trash can, and you get shot by one of these things for littering? Yeah.

Speaker 2 (32:50):
I'm probably going to struggle with using the three
seashells.
I'm not ready.
I'm not ready.

Speaker 1 (32:58):
Alright, so AI. Definitely, we're going to see a lot more AI.

Speaker 2 (33:00):
We're going to see a lot more AI in cybersecurity, on both sides of the fence.
We're going to see the bad guys tool up, really tool the hell up, in 2025.
Like, we're going to see a lot more tooling up in '25.
And we will, I think, see a lot more useful AI in cybersecurity products in '25.
Also, I think a lot of the missed expectations of the

(33:28):
AI hype-driven products will level out, and we'll see actual innovation that drives change.

Speaker 1 (33:37):
Yeah, to your point, they're saying that 60% of organizations plan to increase their cybersecurity training budgets in 2025.
Which is interesting.
Yeah, 60%.

Speaker 2 (33:54):
I'll give a big, bold prediction. There was another trend this year that will continue next year, but I have a bolder prediction.
So, we saw a lot of investments in quantum computing, and we saw some decent results, some decent movement in it.
According to many, we're probably still, I don't know, five to ten years out,

(34:18):
easy, from having stable quantum computing.
But my bold prediction is, I think we're actually closer.
I think in '25 we're gonna get a lot closer to that goal.

Speaker 1 (34:32):
I think we'll shorten the timeline to stable quantum from ten years to like five years, and we'll see that starting next year.
What's the expected timeline? Roughly five to ten years, yeah.
Yeah, I mean, I think it was 2030, right? Yeah, give or take.
Yeah, quantum computing is expected to

(34:53):
break current encryption methods by 2030.

Speaker 2 (34:57):
I'm inclined to agree with that, but let me go a little deeper.
About that 2030 number: I think we're going to see so many advancements in quantum computing in '25 that, by the end of '25, we're going to revise that statement. I think we may see this happen by '27 or '28, not 2030.

(35:17):
I think '25 is going to be a breakthrough year for a lot of quantum computing problems that we will have learned how to solve.

Speaker 1 (35:27):
Okay, that's exciting, right?
What does that actually mean at a high level?

Speaker 2 (35:36):
Well, shit.
What it means is that one of the things we should be doing in '25, collectively, is preparing ourselves.
There should be a lot more quantum preparation, post-quantum preparation, for encryption being broken.
I don't know that that's a trend we'll see in '25, unfortunately; my intuition isn't betting on it.
But it certainly should be a trend in '25: more folks should be paying attention to the quantum horizon

(36:00):
and making plans for it.
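One concrete form that post-quantum preparation can take is crypto-agility: instead of hard-wiring a single algorithm into your code, callers select it by name from a registry, so swapping in a post-quantum replacement later is a configuration change rather than a rewrite. Here is a minimal sketch of that pattern; the registry, the algorithm names, and the use of stdlib HMAC are illustrative stand-ins, not a real post-quantum implementation:

```python
import hashlib
import hmac

# Algorithm registry: callers pick by name, so migrating to a
# post-quantum primitive later is a one-line change here, not a
# hunt through every call site. The names are hypothetical.
HASHES = {
    "current": hashlib.sha256,
    # "pq-ready": a_post_quantum_primitive,  # future drop-in
}

def sign(key: bytes, message: bytes, alg: str = "current") -> str:
    """Tag a message with HMAC using the configured hash algorithm."""
    return hmac.new(key, message, HASHES[alg]).hexdigest()

def verify(key: bytes, message: bytes, tag: str, alg: str = "current") -> bool:
    # compare_digest does a constant-time comparison of the tags
    return hmac.compare_digest(sign(key, message, alg), tag)

tag = sign(b"secret-key", b"hello")
print(verify(b"secret-key", b"hello", tag))     # True
print(verify(b"secret-key", b"tampered", tag))  # False
```

The point is the indirection, not the primitives: inventorying where your code pins an algorithm today, and putting a seam there, is the kind of planning being described.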

Speaker 1 (36:23):
You know, a typical CISO usually draws out a three-to-five-year strategic roadmap for their business.
Going back to your point about AI, maybe you know more about this, but "explainable AI helps reduce false positives in threat detection by 50%" is a prediction for 2025.
I'm not sure I understand that one. I'm not 100% familiar with

(36:47):
the phrasing "explainable AI."

Speaker 2 (36:52):
I take that to mean AI that is capable of distilling information, like maybe looking at large amounts of log data and actually giving you a human answer to what an individual did.

Speaker 1 (37:06):
Yeah, apparently it's a thing, so it's abbreviated
XAI is a subfield of machinelearning that helps people
understand how AI models arriveat their decisions.

Speaker 2 (37:18):
Well, I think that's going to be a very useful thing. AI being the black box that it is for many isn't good, right? Like, knowing how your new cybersecurity AI tool detected a threat, and maybe why it missed one, is very important.

Speaker 1 (37:33):
Yeah, I want to know, like, if it's X, B, Y, Z, right? Up, down, left, right. What's the finishing move, right?

Speaker 2 (37:40):
right.

Speaker 1 (37:41):
Yeah, that's Cameron's analogy right there.

Speaker 2 (37:44):
There it is. Up, up, down, down, left, right, left, right. Get over here.
There it is.

Speaker 1 (37:50):
Are we too old now?
Finish it.
That's still relevant, right? To somebody. To somebody, yeah.
I imagine kids aren't listening to this. I mean, maybe they are, I don't know.
If you're out there, hello to you too.

Speaker 2 (38:08):
Are you under 16?
Do not go on social media.

Speaker 1 (38:12):
Avoid it. Well, any other predictions we haven't gone over from the soothsayer?
So what do we

Speaker 2 (38:20):
predict? We predict that ransomware... it's too easy of a prediction.
Yes, it will continue to rise, but I think the real thing we'll see is the impact AI has on both the problem and the solution.
I think that's what we'll see, a lot more of that.
I actually think we may start getting better at it by the time we get through '25, such that it'll be time to pay attention

(38:42):
to new problems, such as getting ready for a post-quantum world.
I think the time is now. I think that event horizon is upon us.
Generally speaking, I think what we'll also see a lot of is on the business side of things.
I think we're going to see a lot of cybersecurity

(39:03):
transactional activity, and definitely a lot of privacy tech transactional activity.
There are a lot of organizations solving similar or related problems, and we're overdue for a bit of consolidation in those markets.
So I think we'll see a lot of that in '25 as well. Hopefully that spurs on more innovation.
Hopefully that spurs on moreinnovation.

(39:26):
But generally, I think my outlook for '25 is actually somewhat steady.
I don't think we're going to see anything drastically different.
I think '25 is going to be a year of significant incremental progress versus big breakout things.

Speaker 1 (39:45):
Yeah, I don't think we're gonna see any kind of surprises.
Maybe on the regulatory side, but maybe not.
Yeah, maybe that's just a can that'll continue to get kicked down the road, like it always kind of is, when it comes to a global or, um, state...
Speaker 2 (40:06):
Or federal. Yeah, yeah, like a federal type of thing.
Well, look, the new administration is likely to take control of all branches of government, so the opportunity is there to do it.
Whether or not the will is there, we'll see, but I think the opportunity to do it will definitely be on the table.

Speaker 1 (40:24):
So they say that, let's see, companies that prioritize privacy and security see a 25% increase in customer loyalty.
That seems realistic.

Speaker 2 (40:39):
That would get me to be loyal. I could do that.

Speaker 1 (40:46):
Businesses that adapt quickly to regulatory changes are 50% more likely to succeed in the long term.
That's kind of vague, but it's going to be interesting in the world of tech, privacy, security, and regulation.

(41:07):
I'm excited to see what happens.

Speaker 2 (41:12):
Ready to ring it in.
Ready to ring it in.

Speaker 1 (41:15):
This has been a fun episode, if it's our last one of the year.
Thank you, everyone, for listening to our nonsense and some of our good stuff here.
I think this was a great conversation, Gabe.
It's always a good conversation with you, Cam.
I can't believe we're going into season six, man. Season six.

Speaker 2 (41:34):
We promised some new stuff this year, which we did
unveil.
We did more live shows thisyear.
That was new.
We unveiled a couple things, and so '25 for us also has some more new things coming.
We're going to have a whole bunch of new guests.
We're probably looking at expanding the Privacy Please team a little bit, which is exciting.

(41:54):
We've been growing our listener base. Thank you all for tuning in.
It's starting to require a little bit more work than Cam and I can do alone, so thank you all for your support.

Speaker 1 (42:06):
Thank you all very much for tuning in.
Yeah, continue that, and tell your friends, if you have friends.
If you don't, we'll be your friends. We'll be your friends.
Um, but yeah, reach out if you guys want to see something more, something different,

(42:27):
or stay silent if we're doing the right thing, I guess.
Thank you for the support, Gabe.
Appreciate you, man.

Speaker 2 (42:34):
Appreciate you. Absolutely.
Another year in the books for Privacy, Please.

Speaker 1 (42:40):
Another year, and I can't wait to see what happens next year.
All right, y'all, take it easy. Stay sleazy.