Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:06):
Next Curve.
Leonard Lee (00:10):
Hey everyone.
Welcome back to Next Curve's Rethink webcast series on security and trust, where we talk about the hot topics in the world of privacy and trust that matter.
And today we're going to be talking about a lot of stuff.
We have a lot on our plate.
I mean, it's just the beginning of the year.
(00:31):
And we're out of the gates with just, I would have to say, just pure insanity.
There's just so much to cover.
But of course, as always, I'm joined by Debbie Reynolds, the Data Diva of Debbie Reynolds Consulting LLC.
How are you, Data Diva?
Debbie Reynolds (00:51):
I'm well, thank you, Leonard.
I'm happy to be here.
This is so much fun.
Leonard Lee (00:56):
Yeah, and I love your scarf as always.
It's just amazing, the many varieties that you have.
And they just make you look fantastic.
Debbie Reynolds (01:05):
This one is from Spain.
Leonard Lee (01:08):
Really?
Oh, wow.
Wow.
So this is like an international, a global collection.
Debbie Reynolds (01:14):
It is.
It is.
Yeah.
I have a thing.
Yep.
Yep.
Leonard Lee (01:17):
So cool.
Yeah, so, we have a lot of things that we're going to cover.
But before we start here, remember everyone to like, share, and comment on this episode, and remember to subscribe to the Next Curve Rethink podcast at www.next-curve.com, as well as on YouTube.
(01:39):
We're also on Buzzsprout, so you can subscribe there.
Just look us up, Next Curve Rethink, then subscribe, listen, and you'll get your constant diet of the insights on both tech and industry that matter.
And so with that, Debbie, let's get started here.
(02:00):
A lot of crazy stuff.
The first thing I want to talk about is Apple, because for the longest time, I think we've supported them in their mandate and their mission to put privacy first, right?
Trust and security.
And we would argue that they've been the gold standard in the industry in terms of driving that mission
(02:25):
for their customers and bringing that value through their products, their solutions, and their services.
And yes, they've been challenged in that mission, but I think there's always been a sense of merit in terms of how they executed.
Although now I'm starting to see some, let's call them,
(02:45):
issues with the sustainment of that mission.
And in particular, we saw Apple sidelining its Apple Intelligence summary feature because the BBC complained that the Apple Intelligence summarization feature was producing inaccurate summaries.
(03:07):
So, what do you think?
Debbie Reynolds (03:10):
Yeah, well, unlike here, in Europe they have laws about the accuracy of news.
So for them it's a big deal.
And we know that these things can make mistakes if there isn't someone very vigilant watching them.
We know that they can go off the rails and do different things.
And so I think pulling back the feature makes a lot of sense.
(03:33):
Maybe re-figuring how you do that, obviously.
And you know this; you work with summarization tools, and they can be wrong in bad ways, especially in news, where this is something people are really relying on for accuracy.
And so to me, this is just an example of why you either need to use different use cases for this stuff or really
(03:55):
have more humans involved in that process.
Leonard Lee (04:01):
Right?
Or maybe even not using it at all.
Quite honestly, I thought that Apple would have better judgment and be a bit more responsible.
And in fact, I think this is a perfect example of irresponsible AI, where you're putting not only the user but society at risk of bad information or inaccurate information
(04:23):
for the sake of putting out a product.
And, you know, quite frankly, I'm pretty disappointed by this.
When we think about other features that they're introducing in their operating systems across the board, things like notification summaries and email summaries, I found that, quite honestly, and I'll have to be very honest here,
(04:45):
Apple Intelligence doesn't do things better than other, quote unquote, tech companies out there that put out similar types of features.
And I think this is where these companies really need to tread very carefully.
And quite frankly, I thought Apple knew better, but apparently they don't.
And maybe they should check themselves before they wreck
(05:08):
themselves.
Right?
Debbie Reynolds (05:12):
Well, yeah, I agree.
My thought about the way that Apple is trying to do Apple Intelligence is that they wanted it to be sort of a safe space for people who weren't comfortable going out into the wild and playing with, using, all these other tools.
For that, I think it's fine.
But for those of us who are maybe looking for something a
(05:33):
bit different, more robust, that gives you more flexibility, it may not cut it.
But maybe that wasn't the goal in the first place.
I thought it was just kind of, let's socialize AI for people so that they don't feel like they're missing out, even though it may not be the best there is out there.
Leonard Lee (05:53):
Yeah, and I think it does highlight a problem, as a lot of the conversation around consumer AI has gravitated, or transitioned, or is being pivoted toward agentic AI while we have these types of reliability issues, right?
(06:15):
These quality issues.
With these agentic AI frameworks that propose to personalize things for you, how much utility will they actually provide?
And I think, again, especially when we start to get into agentic automation, these guys have to be really
(06:37):
careful.
And definitely you and I are going to keep a close eye on this dynamic, or this new pivot, and how it's playing out.
But I think we're seeing early evidence of how these on-device models are actually quite limited, and you do really have to be very careful.
I would say the feature that I think has proven somewhat
(07:01):
useful, even though I don't use it very often, is the Image Playground, where I put together a Christmas card for last year.
That came out okay, but then, what you're doing is you're prompting, you're feeding that context window, or at least it's feeding itself, photos of your family members and yourself, and creating cartoonish type images that you can then use in tools like Adobe
(07:25):
Photoshop or, what have you, PowerPoint to put together some content.
But that's kind of safe and harmless.
I thought that's where Apple was going, but I think they might want to tread cautiously, because they had to back this stuff out.
So,
Debbie Reynolds (07:43):
I think that's true.
One of the things I don't like about the way that AI works, especially on, like, devices, is
mm-hmm
Debbie Reynolds (07:51):
personalization, right?
The way the algorithm works, it infers things about you, and it assumes certain things that you would want or you would like, right?
So an example, even though this wasn't Apple Intelligence: the feature that I absolutely hated on Apple, and I love all the features, was the Memories one, where they take random things from
(08:14):
your photo album and say, hey, remember this, remember when your dad had cancer and you took a video of him in the hospital.
It's like, you don't know what I care about or what I want to see, right?
So I think if they're going this route, definitely tread lightly, because you may make people very upset about the things that you're assuming or inferring, or trying to lead them towards in
(08:38):
trying to be helpful.
Leonard Lee (08:40):
Right.
Yeah, it may trigger a bad day, but yeah, totally, that's something you don't want to dwell on that morning, right?
Debbie Reynolds (08:49):
Yeah.
It's like, yeah, your husband cheated on you; let's look at that text 10 times.
Like, I don't really want to see that, you know? So yeah,
Leonard Lee (08:57):
Well, okay, sticking with Apple.
Here's another thing that I saw that didn't quite make me exactly happy: apparently in iOS, and I assume the other device OSes, whether it's macOS or iPadOS, at least with the iOS 18.3
(09:22):
release, a lot of the AI functions are going to be default opted in.
So it's all turned on, and then you have to actually proactively go in and turn that stuff off.
And I thought Apple might learn from the Microsoft Recall
(09:45):
incident.
And I think this is like a Data Diva no-no, right?
Can I qualify it as such?
Right.
This is not what we do.
Debbie Reynolds (09:56):
I was disappointed in this as well.
I have been an Apple lover since the first Mac came out, and obviously I take a critical eye at this, and I love most of the stuff that they do.
I was very disappointed in this because this is the same company that, when they did App Tracking Transparency, opted everybody out of advertising tracking, and then we had to
(10:17):
opt in, and we loved that, right?
And so obviously they know how to do this.
But the fact is that they chose not to do it here, and not everybody wants to use AI; not everybody feels like that's important, right?
So, you know, I probably don't use a fraction of the
(10:37):
features that I have on my phone now, even before you get to AI, because I'm just a simple person, right?
Maybe I don't need all the bells and whistles.
And most people who are using their phones probably aren't using them to that extent anyway.
To create something that makes them have to stop what they're doing and try to dig down into the settings and turn things off, especially this thing that they had done recently with Siri, where they
(11:00):
had opted people in for Siri to share certain information with apps that you have on your phone, or something like that, and you have to actually go into each app and disable it.
And I'm like, I know that they know better than to do this.
And so, yeah, I mean, it's not customer first.
It's not privacy first.
(11:20):
My thing is that it wasn't really communicated very well.
And I feel like what was communicated was, here are all these cool ways that we're going to protect your data and secure it, so it's private.
And it's like, but that's not the point.
The point is I want control over my choices and not to have you
(11:41):
opt me into things that I don't want,
Debbie Reynolds (11:44):
I can opt in myself.
Leonard Lee (11:46):
I think we're in total agreement here.
And of course, Apple, if you would like to come on and explain yourself at any time, we're more than happy to speak with you, if you'd like to come on.
Yes, because I think these are the things that can really damage a brand's trust, consumer faith and trust in a brand.
And I think once you've compromised that trust, it's
(12:08):
really difficult to earn back.
And I have to be honest, I'm really starting to scratch my head about Apple, and that's not a good thing for Apple.
I can always switch to something else, right?
It's just, hopefully the folks at Apple think a little bit more intensely about what they're doing here and subscribe to our recommendations on what privacy-first
(12:31):
strategies are and how they can benefit your brand.
So why don't we move on to LinkedIn?
I know that this is a particular moment, if I can put it that way, that sort of triggered you, if not very much triggered you, and it's that interesting notice that we got about the AI
(12:54):
training on our information and engagement on LinkedIn.
And of course, LinkedIn is owned by Microsoft.
So, what's the deal, Data Diva?
Debbie Reynolds (13:09):
Very concerning.
Also, this is another one that people are, like, super upset about, because someone had found, just going through the settings, that they were opted in to this AI thing that you have to opt out of.
And so the concerning thing is, okay, especially in the U.S.: if they opted us in to this thing, that means, let's
(13:31):
say, if you were on LinkedIn for 20 years, they took 20 years of your stuff and put it in AI.
And then in the U.S., depending on what state you're in, if you opt out, they don't necessarily have to erase any of that data that they've already gotten, right?
They may only have to say, well, going forward we won't do
(13:52):
this, or we'll delete some of your data up to a certain point.
But really, there's actually a lawsuit that was filed in California about this particular thing.
And so we'll see.
Part of that is trying to figure out why they did it and what they're doing with that data.
And so if it doesn't settle, I think we're going to get a lot more details about
(14:14):
what's happening or what has happened with people's data with this.
So people are very unhappy about that, as was I.
It was funny, because when the person sent me the message, I'm like, I have recently gone through the settings, as I do periodically, and I noticed there were a lot more
(14:37):
choices than there were before, and I was like, what's all this stuff?
I had gone through it recently and this is one that I didn't see, and I see almost all this stuff.
So if someone like me, who follows this very closely, can miss something like that, what would a regular user, a regular person who doesn't have the time or the headspace to do this, do?
Leonard Lee (14:55):
Yeah, and again, I think we run into this problem where they opt you in by default, and you're participating without consent, with assumed consent on the part of the platform or service provider.
And I think this is really where we start to have issues, also from a brand perspective.
And again, we're just really
(15:17):
surprised that Apple is adopting this approach; apparently it is.
Not providing consent compromises trust.
If you're going to be using, or intending to use, our information and our engagement on a platform to train your models or for any other
(15:37):
purpose, consent, I mean, that's common courtesy, right?
It's not like you're paying us to train your products and build your products, right?
We should be compensated.
I think that's the mindset that consumers should more broadly and increasingly have.
And I think they will eventually; I think that movement is growing, because we have privacy advocates like
(16:00):
you and I who are making people aware of what's going on, the nature of privacy and how it's being compromised in many regards for corporate purposes.
But I think that consent mechanism has to be clear, has to be apparent, and should be
(16:24):
accessible, and sometimes it's not, right?
So.
Debbie Reynolds (16:26):
I guess I'll go broader.
I'm going to say agency, right?
So we want control.
The control is what we want, right, or whatever vernacular you want to use to express what that means.
Yeah, more control over how our data is handled.
And so maybe I decide, okay, I'm fine with that; okay, I'll opt in or whatever.
(16:47):
But I think the farther these companies get away from being able to articulate how something benefits me as a user, the more suspicious it makes me of them doing stuff like this, because, like, I don't care about any of that.
I don't want any of that.
So why would you opt me into that?
(17:08):
Like, how does that benefit me?
It doesn't.
Leonard Lee (17:10):
Right, and it also highlights the issue with, the challenges with, these models regarding the right to be forgotten, right?
That becomes a really significant issue, especially as we look at these privacy regulations probably getting more entrenched
(17:30):
in the future.
And so I think these could very easily become headwinds for a lot of these technology companies, quote unquote, tech companies, that are trying to cater to global markets.
Debbie Reynolds (17:42):
Yeah, well, the right to be forgotten is different than deletion.
The right to be forgotten has a broader scope, and we don't have that in the U.S.
We don't have a right to be forgotten, and even the deletion rights that are articulated in our laws, depending on what state you live in, may only give you the right to delete stuff for
(18:02):
maybe a year, maybe two years.
But beyond that, let's say you are a customer of AT&T for 20 years, right?
You can't say, forget me for 20 years.
They'll forget you for two years, somewhat; we'll delete some of your stuff, but not all of it.
That's the reason why this LinkedIn lawsuit is going to be very interesting: because if it goes forward, they'll have to really
(18:24):
explain what deletion means, what opting out means, how long they keep stuff, what they do with the stuff that they took, and what did they take.
You know, that's what everybody's concerned about.
Leonard Lee (18:37):
Yeah, exactly.
So let's move on to Meta.
I guess the big news item has been that Meta AI can now use your Facebook and Instagram data to personalize its responses.
What are your thoughts there?
Debbie Reynolds (18:54):
Yeah.
When you think of personalization, that should indicate to you that there's some privacy issue there somewhere, right?
So personalization just means we're going to take more of what we know about you, and then we're going to try to use it in a way that gives us the right to have it, but then give you something in return, whatever that is.
(19:15):
Right.
But as usual, a lot of these data exchanges are very asymmetrical, right?
So it's like, okay, well, you like this color lipstick, or you like to eat at this restaurant.
But then that data is packaged up and sold to someone else a thousand times over, right?
So all you got was a recommendation,
(19:36):
and what they got was more data to sell about you.
Yeah, it's always concerning, right?
So to me, the value exchange needs to be a little bit more... I don't think it'll ever be even, but it needs to be more compelling.
I don't see a compelling argument for doing this personalization, especially if people don't want it.
(19:57):
So it looks like they're going to just turn it on and just say, hey, I'm personalizing all this stuff, especially with a company like Meta, where they have different properties.
And people interact with these different apps because they want them for different reasons.
Let's say someone has Facebook and WhatsApp.
They probably don't want anything they're doing on
(20:19):
WhatsApp to ever go over into Facebook.
You know what I'm saying?
Yeah, that needs to be a consideration as well.
Leonard Lee (20:27):
Well, yeah, and I think there is a question of how they designed their personalization engine around these large language models, or these personalization AI models, right?
I think that's a real big question, and how does it secure
(20:48):
your privacy?
How does it isolate your, I don't know if you call it data; it's embeddings, basically.
How does it protect those embeddings?
Because that's private information, if these models know about you, but there hasn't been enough isolation of the data and the
(21:10):
embeddings to, you know, institute privacy for those insights, those preferences.
I don't know how these things work.
I don't know if they've published it, but I think they need to.
Oh, absolutely.
And I haven't seen anything.
So anyone from Meta, if you're willing to share, I'm happy to
(21:34):
bring you on board and have you tell us how you are protecting people's privacy and instituting privacy-first principles in the implementation of your products and services.
So, yeah, I mean, it's a big question; me, I would turn this stuff off.
Yeah, I mean, I would use things that are on device
(21:57):
before I would turn any of this stuff on.
And I think there are architectures that would allow you to get personalization benefits without necessarily having to give up your, you know, having this memory, if you will.
And, yeah, so there's this lingering
(22:18):
question of, well, how is it protected?
Right?
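For readers who want a concrete picture of the isolation Leonard is asking about, here is a minimal, hypothetical sketch of one way personalization embeddings could be kept partitioned per user, so one account's vectors can never be retrieved for another. This is an illustration of the idea, not a description of Meta's actual design, which, as noted in the conversation, does not appear to be published.

```python
import numpy as np

class PerUserEmbeddingStore:
    """Toy per-user vector store: embeddings are partitioned by user_id,
    so a lookup can only ever search the requesting user's own data."""

    def __init__(self):
        self._stores = {}  # user_id -> list of (vector, metadata)

    def add(self, user_id: str, vector: np.ndarray, metadata: dict):
        # Write path is scoped to the owning user; nothing is pooled globally.
        self._stores.setdefault(user_id, []).append((vector, metadata))

    def query(self, user_id: str, vector: np.ndarray, top_k: int = 3):
        # Read path never crosses user partitions.
        items = self._stores.get(user_id, [])
        scored = sorted(
            items,
            key=lambda item: float(np.dot(item[0], vector)),
            reverse=True,
        )
        return [meta for _, meta in scored[:top_k]]

# Usage: preferences learned from one account stay with that account.
store = PerUserEmbeddingStore()
store.add("debbie", np.array([0.9, 0.1]), {"pref": "scarves from Spain"})
print(store.query("debbie", np.array([1.0, 0.0])))   # -> [{'pref': 'scarves from Spain'}]
print(store.query("leonard", np.array([1.0, 0.0])))  # -> [] (no cross-user leakage)
```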
Debbie Reynolds (22:21):
Well, I'll give you an example.
Something happened to me this week, and this is a Gemini thing.
So I was invited to a meeting, right?
I was not the host of the meeting; I'm just an attendee.
And then this Gemini thing popped up, and I guess it was on their system.
So you press the button and they say, oh yeah, I'm recording, or whatever.
(22:41):
So I'm fine with the transcript.
I guess they were like, oh well, Gemini is going to do this transcript and this summary or whatever.
And the way the meeting was, I did my speaking and then I left the meeting, right?
So later, I get a transcript of the meeting, and it has a summary of what I said, but then it has stuff that happened in
(23:02):
the meeting where I was not there.
Leonard Lee (23:06):
Oh, geez.
Debbie Reynolds (23:07):
And I thought, who did this?
Like, this is so bad.
This is so bad.
A lot of people in corporate America do meetings like that, where someone drops off and then maybe they talk about something else.
Yeah, now you've created this huge loophole, and so that's what concerns me when people say they want to personalize or whatever.
They're not really thinking about who should know what, you
(23:29):
know; it's almost like spycraft, right?
Need to know and all that other type of stuff.
I don't see that being folded into a lot of these innovations.
They're just like, oh, this is this cool thing, and we should share everything.
It's like, no, that's not okay at all.
Leonard Lee (23:47):
Security is a really, really challenging issue with these things.
And what I mean is things like RBAC, being able to control access all the way down to the embedding level, right?
I mean, whether it's RAG'd or it's a standalone model that you're interfacing with, these things are very, very difficult to do.
(24:08):
And so architecture, how you designed it, where data is stored, ingested, and embedded in all these different possible elements of a, quote unquote, AI platform application: these are things that really need to be asked.
I don't hear anyone asking these questions.
They just assume that these things work.
I think if you're an enterprise out there
(24:31):
and you're a CIO, make sure you ask these questions, and that you are very clear on what your company is using in terms of external services, what you're actually bringing into the enterprise, and what enterprise-grade security and confidentiality
(24:56):
protection looks like, because I'm telling you right now, the fact that these things are not apparent and clear at the moment is one of the things that are really stymieing enterprise adoption of generative AI tools and applications.
Debbie Reynolds (25:11):
And not everyone needs to see everything, right?
So there's a problem there.
This actually came up with Recall, where they were like, well, let's just index everything on the computer regardless of the user, right?
So you're going to take all the different user profiles and scramble everything together, so that whoever has access can see it, but not
(25:34):
everyone who's using that computer should see everything that everybody else sees, right?
So to me, this is a theme that I'm seeing: you're basically creating an unauthorized access issue that could be a harmful safety issue, like in a personal situation, or it could be a confidentiality issue
Leonard Lee (25:55):
in a corporate setting.
Right.
Exactly.
Debbie Reynolds (25:57):
Almost.
I mean, I remember when companies were trying to do enterprise search, and it was bonkers.
It was totally crazy because it was so bad.
It's like, how did that get in there?
It's like, because you didn't secure your data right.
That's why; that's how.
So we still have those problems, and what we're doing is bringing in more complexity and more
(26:18):
technology that's bringing up things.
So maybe what was hiding in plain sight is hidden no longer, but we don't really have the security around that, or the barriers that need to be there, for people to use it or be confident in it.
Leonard Lee (26:34):
Yeah, and we're not just managing content anymore, documents and information.
This is, like, knowledge, right?
There's this layer that synthesizes a corpus of knowledge that is informed by content, by information, by data, and now you have to figure out how do we security-trim this
(26:56):
thing so that it only shows, and it only acts like it knows, or it refuses to respond to certain folks, based on access controls that have been defined, right?
And that whole exercise is very, very complicated, and I haven't really seen anybody out there, quite honestly, that's cracked the code for that.
And this is a huge gap at the moment.
(27:17):
And again, anyone out there who claims that they have a solution, please reach out.
We would love to talk to you.
So we'll put that challenge out there for folks.
You know, I think this is going to be a regular thing we'll do now.
It's like, if you have a solution and we didn't know about it, give us a jingle, and we'll see if you actually have something real.
You and I, we like to keep it real, right?
(27:39):
So that's what we do.
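As an aside, here is a minimal sketch of what the "security trimming" Leonard describes could look like in a RAG pipeline: retrieved chunks carry access-control labels attached at ingestion, and they are filtered against the requesting user's entitlements before anything reaches the model. It illustrates the idea under simple assumptions; it is not anyone's shipping product, and a real system would rank candidates by embedding similarity rather than the naive string match used here.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    allowed_groups: set  # ACL attached at ingestion time

@dataclass
class User:
    name: str
    groups: set = field(default_factory=set)

def security_trimmed_retrieve(query: str, index: list[Chunk], user: User) -> list[str]:
    """Return only the chunks this user is entitled to see.
    The point is the ACL filter, not the ranking."""
    candidates = [c for c in index if query.lower() in c.text.lower()]
    return [c.text for c in candidates if c.allowed_groups & user.groups]

# Usage: the model only ever sees what the asking user is cleared for.
index = [
    Chunk("Q3 reorganization plan draft", {"hr-leadership"}),
    Chunk("Q3 all-hands agenda", {"all-employees"}),
]
analyst = User("analyst", {"all-employees"})
print(security_trimmed_retrieve("Q3", index, analyst))  # only the all-hands agenda
```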
Let's move on to the next thing, which is, oh my gosh, the news item that has shaken the world: DeepSeek, right?
It's an AI lab actually out of Hangzhou
(27:59):
in the Zhejiang province of China that has released a model that is on par with anything in the Western world.
It arguably can outperform even, what is it, GPT-4o, and it does this at a very, very,
(28:22):
very competitive economic profile, meaning the cost of training was one tenth of what it took to train comparable Llama models by Meta, and also its inference costs are ridiculously low.
And so now the AI world is grappling with the advent of
(28:46):
this model.
But then also, I think people are going back to the basic question, or at least maybe ignoring the basic question, of what's the difference between a service and the model, because there's a lot of talk about open source and how open source, blah, blah, blah.
I think that's a topic for a different discussion;
(29:10):
that has more to do with regulatory and export control type stuff, and then some of these open-versus-closed parties wanting to claim bragging rights.
But then there is the question of security
and privacy, because we now have TikTok.
That issue continues to be a factor.
(29:32):
That's an open issue, if you will, that the Trump administration is negotiating some sort of resolution on, but now we have this.
I just read that DeepSeek's AI bot, or assistant, is the number one downloaded application on the Apple App Store.
(29:52):
What are your thoughts here?
And how should enterprises and consumers think about this?
Debbie Reynolds (29:58):
Yeah, well, it's definitely got people's attention.
On one side, we have a lot of talk about how much money these companies need, how much more money they need to be able to do this stuff, how much more power and all this stuff, and
then you have this group out of China doing similar things for a fraction of the cost, not even with
(30:20):
the most advanced chips.
And so, I don't know, maybe necessity really is the mother of invention.
Leonard Lee (30:25):
Yeah,
Debbie Reynolds (30:26):
It's very interesting.
But I mean, to your point, the model and the service are different, but if you have a model, you can create a service.
So I think they wanted to go that route.
And so, yeah, I think these companies should be concerned.
And I know a lot of the talk, the Stargate stuff that we're talking about, it's all about energy, it's all about
(30:49):
chips.
The fact is, we see the stock market go up and down based on who has the most advanced this or the most advanced that.
And so now you have a group that's not using the most advanced this or that, that doesn't have a trillion dollars in its back pocket, and I think these companies
(31:09):
should be concerned, because people don't want to pay an arm and a leg for AI, right?
And so having it at a cheaper cost lowers the cost of entry, and also maybe creates more situations where companies that wanted to leverage AI but couldn't do it because of the cost now can.
Maybe that opens things up as well.
But I think there's probably going to be some friction here,
(31:33):
because this is out of China, right?
Yeah.
You know, it reminds me of decades ago, when there were bans on Russian diamonds.
Apparently Russia has tons of diamonds, and they didn't want those diamonds to get out to these other markets, because it would make diamonds go way down in
(31:53):
terms of cost.
So I think we're probably going to start to see more of that on a global scale, where they're like, hey, you have to buy our expensive model because we're going to ban this other thing.
But then the internet doesn't really have boundaries, so people are going to be able to use the stuff regardless.
(32:13):
So, really interesting.
Leonard Lee (32:15):
Yeah, with the model itself, it can be deployed on device, or it can be deployed in a data center.
A service provider can leverage the model to provide a service.
I think the thing that is a big question mark is DeepSeek's own hosted service, right?
What are the privacy and security risks associated with
(32:36):
that?
And I think there's no lack of concern in the United States at the moment about Chinese entities harvesting information, and maybe even encoding and embedding human-like profiles, personal profiles, onto a model, or at least an AI service platform
(32:56):
that they're hosting.
And I think that these are the things that, after all these years, actually two years, I don't think we've gotten really good at having a clear discussion about: what that risk structure looks like.
I know that there are a lot of people who have applied themselves to this, and you're probably one of the only people
(33:19):
that I know of who has looked at this problem holistically.
But the question of what safe AI looks like, not only from a consumer's perspective but from an enterprise perspective, as they look at external parties providing these services, what
(33:40):
does that look like?
And how do CIOs and tech and business leaders evaluate these companies and these services, right, and these applications?
And I think for consumers, it's almost like a hopeless thing, but there has to be something done.
I know that you're working on a lot of stuff in this regard, but
(34:00):
things seem to be moving faster than human consideration, right, and faster than the things that can be instituted in terms of protective regulations and policies.
I think that's a big gap right now.
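To make the model-versus-service distinction the two of them keep returning to a bit more concrete, here is a hedged sketch of the two deployment paths. In the first, open weights run locally and the prompt never leaves your hardware; in the second, the prompt is shipped to someone else's hosted service and falls under that provider's retention and privacy terms. The model id, endpoint URL, and OpenAI-style response schema are all illustrative assumptions, not specifics from the episode.

```python
# Option 1: run an open-weights model locally; prompts and outputs stay on your hardware.
# (The model id is a placeholder; substitute whatever open checkpoint you have vetted.)
from transformers import AutoModelForCausalLM, AutoTokenizer

def local_generate(prompt: str, model_id: str = "some-org/some-open-model") -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Option 2: call a hosted service; the prompt and any attached context leave your
# environment and are governed by the provider's terms, whoever the provider is.
import requests

def hosted_generate(prompt: str, endpoint: str, api_key: str) -> str:
    resp = requests.post(
        endpoint,  # hypothetical URL supplied by the provider
        headers={"Authorization": f"Bearer {api_key}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumes an OpenAI-style response schema; adjust for the actual provider.
    return resp.json()["choices"][0]["message"]["content"]
```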
Debbie Reynolds (34:14):
Yeah, I think the gap is that it's one thing to say that we think there's danger if you use this tool from wherever, but what hasn't come out is why, and what that risk is, right?
Part of choice, and maybe this goes back to my thing about agency, is for people to know what that risk is.
And so to not articulate what it is...
(34:38):
it's like, instead of me saying, don't touch the oven because it's hot, I just say, don't touch.
You're like, well, why not?
Well, just don't.
Well, why?
I think it's really dusting up people's curiosity more than it needs to.
And so I think it's going to be hard to convince someone
(34:59):
that it's harmful if you don't explain how it's harmful and why.
That's what's missing.
It's almost cartoonish in a bad way.
It's like the movie device where everything revolves around this one thing, the MacGuffin, right?
It's a MacGuffin; we don't know what it is exactly.
It's the light in the suitcase
(35:21):
from Pulp Fiction.
So unless you're going to open the suitcase and tell people what's going on, you can't stop them from trying to go these ways, especially when it comes to money, right?
We see, especially, a lot of these small and medium-sized businesses, they want an edge.
They want to be able to do stuff with AI, and they don't have a trillion dollars in their pocket
(35:43):
to do that stuff.
And so I think any company that can give people access to something that they would not otherwise have had financially is going to have an edge, period, whether it's from Mars
or Venus or China; it doesn't matter.
Leonard Lee (36:00):
Yeah.
And I think that question of how needs to be asked: how are these companies doing the things that they do, in particular with regard to privacy protection and security?
And I think we'll just have to continue to do the good work
(36:23):
that you're doing, and hopefully this podcast is elucidating or highlighting some of the considerations and questions to be asked.
But anyways, let's move on to the last topic that I know you want to chat about: chain of thought, and this new threat or risk.
I don't know exactly how you want
(36:44):
to characterize it, but maybe you can share it with our audience.
Debbie Reynolds (36:48):
Yeah, so there are some researchers who have been doing some work on AI models that use this kind of chain-of-thought reasoning.
And basically they were saying they had come up with a type of adversarial attack that can jump into the chain of thought and maybe change or tweak things that the
(37:09):
model is doing, to be able to give an output that may be wrong in some way.
And so this can have huge implications depending on how companies are using these models to make decisions, right?
I think we've all seen this in movies, right, where someone goes into some computer system and they make some tweak or change that no one is thinking about, and this can really happen.
(37:29):
So basically, as we see all these new architectures come up, and these new ways to manage data in these models, there are always going to be adversaries out there trying to manipulate them.
And it's becoming easier for them to do it, right, because the cost of entry is much lower.
And so as companies are really relying on data, maybe overly
(37:53):
relying on data, for models, they really need more humans in the loop to be able to catch those types of things.
But this is something that's really not on anyone's radar, because they feel like, oh, well, let's just put this thing in a model, and we think it's better than a human, because humans suck and computers are better and all this stuff.
So we have adversaries who can manipulate these things,
(38:16):
and not in obvious ways; maybe things are more subtle, and may play out over days, months, years, as opposed to what people think, that if they're going to come in, they're going to take something.
Maybe they want to come in and throw you off track, or send you in a different direction, toward something that you maybe wouldn't have done.
So, to me, I read these stories and I always think,
(38:38):
well, what could they possibly do?
What would be the risk here?
It fascinated me that this particular attack is not, like, a head-on attack; it's kind of a sideways thing.
Leonard Lee (38:51):
You know, so yeah.
Yeah.
Wow.
I have nothing to say or add to that.
Definitely something I need to look into.
But I think one of the things I'm really looking forward to is RSA Conference 2025, because I think we're going to see a pretty significant
(39:14):
attitude change in the cybersecurity community as they look at these new tools.
Oh, yeah, and here's the other thing that came out: OpenAI fast-following Anthropic, coming up with an agent, right, that basically can take over your computer screen and execute
(39:35):
tasks.
Discover how something works and then execute tasks.
I don't think that anyone has really raised the question of, okay, what are the implications from the perspective of a bad actor
(39:57):
using this and weaponizing it, using it as a tool to basically cause harm.
And I think a lot of these innovations look really cool.
Some of them are not as novel as people think, but their ability to do harm is apparent.
Their ability to provide benefit, not as much; a lot of these are hypothetical.
I think that's one of the things that, business leaders, if you're a board of directors, stop putting pressure on your
(40:21):
organization.
Do a little bit more homework, because some of these technologies are not what they're cracked up to be.
Some of these solutions are actually quite dangerous.
So you might want to listen to your CISO, give them a little bit of credit, and maybe save yourself some headache, and
(40:42):
maybe save yourself some money, instead of investing in these experiments, by working with the CISO to come up with measures for safe adoption of these technologies, right?
Because they're going to come anyway.
I think being first to implement a potentially dangerous application within your enterprise, or a tool within your enterprise, may not exactly be the
(41:04):
best thing for your organization, and for you as a consumer, not a great thing either.
So,
Debbie Reynolds (41:12):
Depending on what you want the agent to do... I still think a lot of the stuff that people should be trying to do with these models should be the kind of low-risk stuff.
Yeah, if it goes bad, it's not going to harm someone, right?
It's not going to create a harm.
But, like, I hear people like, oh, well, let's give this agent your credit card number.
(41:32):
And Leonard likes the aisle seat on United Airlines, or whatever.
And the cybercriminals don't have to infiltrate Leonard and his life.
I just have to get a hold of this agent, and then, yeah, book flights for him.
I could book flights for me and my criminal enterprise and the places that I want to go and do stuff, right?
But then they'll be
(41:53):
like, it was your fault, because you should have known that you shouldn't have done this.
So it's like, stay with the low risk.
Stay with the low-risk, annoying tasks you want to automate.
To me, really, the agent thing... a lot of what I think gen AI and these agents can do will probably be things that no
(42:14):
one would care about.
It wouldn't be on the front page of a paper, right?
It'll probably be these low-hanging-fruit automations.
For people who don't know how to automate stuff already, it may be just an easy way to automate a couple of things, right?
Yeah.
So that's fine.
It depends on what it is.
But me giving an agent my credit card and the code to my house or the car and stuff, because the
(42:36):
cybercriminals are like, hey, this is so cool, we can, like, do theft in an automated
Leonard Lee (42:42):
way.
Yeah, absolutely.
I still want to know, how did you know?
Yes, United, aisle seat.
That is my preference.
That is creepy.
Debbie Reynolds (42:52):
That is my preference.
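Picking up Debbie's advice about keeping agents on low-risk chores, one blunt way to enforce it is an allowlist: the agent may only invoke actions pre-classified as harmless, and anything touching money, credentials, or physical access is refused outright. The action names below are hypothetical, made up purely for illustration.

```python
# Actions the agent is allowed to perform on its own: annoying but harmless automation.
LOW_RISK_ACTIONS = {"sort_inbox", "rename_files", "draft_reply", "summarize_notes"}

# Actions that should never run unattended, no matter what the agent "decides".
BLOCKED_ACTIONS = {"book_flight", "make_payment", "share_credentials", "unlock_door"}

def dispatch_agent_action(action: str) -> str:
    """Refuse anything outside the low-risk allowlist; an attacker who hijacks
    the agent gets no further than the allowlist allows."""
    if action in BLOCKED_ACTIONS:
        return f"refused: '{action}' requires a human to do it themselves"
    if action not in LOW_RISK_ACTIONS:
        return f"refused: '{action}' is not on the allowlist"
    return f"running: {action}"

print(dispatch_agent_action("sort_inbox"))   # running: sort_inbox
print(dispatch_agent_action("book_flight"))  # refused: requires a human
```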
Leonard Lee (42:54):
Oh, okay.
Okay.
Well, hey, Debbie, thanks so much for jumping on and kicking off 2025 with this podcast, and sharing your insights and perspectives on some of these topics that really should start to concern us going into 2025.
And I want to thank
(43:16):
everyone for tuning in.
If you have any questions, or would like to get in touch with Debbie Reynolds, who is the Data Diva, go to www.debbiereynoldsconsulting.com.
She is doing wonderful work, and if you have any questions about what is happening in privacy and security, she is absolutely
(43:39):
the person to be talking to.
If you'd like to engage with Next Curve's research, go to www.next-curve.com, where you can tap our research portal as well as our media center, where you can find links
(44:00):
to all the past and present podcasts and other media that I am engaged with.
Just go there, and obviously you can subscribe to the Next Curve Rethink podcast here on YouTube as well as on Buzzsprout.
And so, once again, Debbie, thank you so much, and we'll see
(44:25):
everyone next time.