Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Imagine this the AI
model you've spent millions of
dollars training, the one meantto power your customer
experience, your operations.
Your future is quietly beingsabotaged, Not by a bug, not by
a disgruntled employee, but bythe data deliberately poisoned,
subtly manipulated, right underyour nose.
Because, as enterprises race todeploy generative AI, a new
(00:24):
class of threat is emerging.
It's not just about hackersbreaching your network anymore.
It's about exploiting the veryfoundation of AI its data.
On today's episode, I talk withtwo AI security experts from
Worldwide Technology, JustinHadler and Chance Cornell, who
are on the front lines of thisshift.
They'll walk us through howprompt injection and data
(00:45):
poisoning attacks actually work,what they mean for enterprise
trust in AI systems, and whysecuring your models may be
harder and more urgent thansecuring your endpoints.
This is the AI Proving Groundpodcast from Worldwide
Technology Everything AI all inone place, and today's episode
is about more than defendingagainst a new threat vector.
(01:06):
It's about waking up to thereality that your AI might be
already compromised, and youwon't know it until it's too
late.
So, without further ado, let'sdive in.
Okay, two of you.
(01:26):
Thank you so much for joiningme here today.
We'll dive right in, Chance.
I think we're all understandingof the fact that large language
models and AI models canhallucinate from time to time,
but we're here to talk todayabout data poisoning, prompt
injections and, basically, modelcorruptibility.
What's the difference betweenhallucinations and some of those
other terms that I justmentioned?
Speaker 2 (01:47):
on that back half,
yeah, so when thinking about
hallucinations, this is going tobe more of the model maybe is
getting tripping over itself.
Long conversations can lead toit and it gives outputs that
aren't correct or it has alittle bit of misinformation in
there, Whereas when you'rethinking about data poisoning,
(02:10):
you're going to be thinking moreof someone has intentionally
modified the underlying dataunderneath the AI model to
intentionally get it to behaveincorrectly or influence its
operation incorrectly orinfluence its operation.
So the difference is going tobe more of AI.
Hallucinations are kind of justit tripping over itself.
(02:31):
Not necessarily someone hasintentionally attacked that
model to get it to behave thatway.
Speaker 1 (02:39):
Yeah and Justin talk
to us a little bit about how
vulnerable these models are tothat malicious activity such as
data poisoning or promptinjection.
Is it a little bit about howvulnerable these models are to
that malicious activity such asdata poisoning or prompt
injection?
Is it a little bit more shakythan we would all like to think,
or is this a growing trend?
Or maybe we're all safe andsound?
Speaker 3 (02:53):
No, I think there's a
couple insertion points there
that you can consider.
The first is these underlyingmodels.
Whether someone's building oneon their own and training with
data, or if people are procuringthose downloading from a
repository someplace likeHugging Face.
There is a baseline set of datathere that needs to either be
(03:17):
inspected or trusted.
And then the more populararchitecture today is retrieval
augmented generation, wheresomeone would download a base
model and then add their owndata on top of that to augment
for their own company's answers,and so at any point in that
(03:40):
development pipeline there'sdata involved and we need to be
sensitive to that.
There's opportunities to injector include things that might be
intentional or unintentional,that could be bad.
So a couple of different areasthere.
Speaker 1 (03:53):
Yeah, and let's walk
through Chance.
I know that you've given acouple of demos of your data
poisoning demo there.
Can you just walk through whatyou did and why it's important
for our listeners to know about?
Speaker 2 (04:06):
Yeah, so in my lab
that I created, it was a RAG
architecture that used a simplerecipe forum where users had
uploaded their favorite recipes,and they decided to implement a
chatbot LM that would have RAGarchitecture in the back so that
, instead of users having to gothrough all these recipes
(04:27):
manually, they could come in andask different questions about
the recipe.
In the lab, it was a disgruntleduser that decided they were
going to put in misinformationin the recipe.
They didn't like their secretfamily recipe being used by this
forum's AI, and what that ledto is that you know, a bad user
experience in this instance.
(04:48):
Right, this forum is trying touse this LLM to maybe bring in
more users, grow their businessand make sure that users are
having a really good experiencewhen they're interacting with
this chatbot.
But because this disgruntledmember had changed the recipe on
the website, it led to usershaving a bad experience, getting
(05:09):
misinformation.
That was obviously incorrect,and so you know, as a business,
you can imagine that it's prettymuch accomplishing the exact
opposite of what that recipeform would want Instead of
bringing more users in it'sdriving users away.
Want, instead of bringing moreusers in, it's driving users
away.
And it was all because theydidn't have the proper security
(05:29):
measures in place for that RAGarchitecture.
Speaker 1 (05:32):
Yeah, and Justin?
What does that make you?
What does that signal to you?
As it relates to security teams, particularly those securing AI
, is this a common type of story, or is this something that'll
be continuing to grow as wecontinue on?
Speaker 3 (05:49):
Yeah, chance just
outlines probably the third area
that I didn't touch uponearlier, which is if models are
taking input to enhance trainingfor future iterations.
What does that look like?
So yeah, just another area tobe aware of.
Certainly, we need to belooking at both the input and
the output as it relates tothese large language models and
if you are allowing your modelto be trained for future
(06:15):
responses based upon some of theinput, just to be aware of
what's going in and having aprocess to at least examine and
or consider it before it'sincluded in future versions.
Speaker 1 (06:28):
Yeah and Justin, if a
model were to spit out you know
wrong information, how would Ibe able to tell, as the end user
, whether it was just ahallucination in an innocent
form, or corrupted data, or isit undetectable?
Speaker 3 (06:58):
Or is it undetectable
?
Overall, against over-reliance,and I think too many people are
running around thinking thatwhat they're getting out of
these models is true and factand we should be training
everyone to at least considerthat hallucinations exist and
while I might get some reallygood output, I need to go double
(07:19):
check that or fact check itagainst another source to make
sure that it is accurate.
Speaker 1 (07:24):
So the human in the
loop continuing.
It's good to know that we'restill needed for the time being.
Speaker 3 (07:28):
Yes, for the time
being.
Speaker 1 (07:30):
Well, justin, what
are we seeing in the market in
terms of mitigation oreradication, in terms of vendors
and how we're able to defendagainst data poisoning or
basically securing these largelanguage models?
Is it a growing market?
Is it something that's, youknow, a lot of our clients are
trying to make sense of rightnow?
Speaker 3 (07:49):
Yeah, I think from
just a what data should be
included either in training ofan application or augmenting.
There may be things that folkswant to include or don't want to
include.
Be things that folks want toinclude or don't want to include
.
There's a whole provenancelineage question of do I have
(08:09):
good, true, unbiased, sound datathat's going in to train these
models and making sure thatyou've got explainability and
that inventory to point back tothat?
Whatever you're using to beincluded with answers out of
these large language models?
You know the underlying datathere and you've got a good view
(08:30):
of what went into thatdevelopment pipeline.
Speaker 1 (08:41):
Yeah, chance, take me
back to your lab there and put
your bad actor hat on.
How did you go about actuallyplanting the malicious or
incorrect data in the firstplace?
Is this something that is ahard lift for hackers or bad
acting organizations to do, oris this something that's
relatively easy?
Speaker 2 (09:00):
In a way, it's not
going to be easy, I would say,
from the lab itself.
Since we wanted to showcase thevulnerability, we obviously
looked at it from the point ofview of this forum had no
existing security controls inplace, which is not very
realistic.
Right?
You're not going to walk into acompany that has its data just
(09:21):
wide open or its RAGarchitecture that anybody can
just upload context data into.
And it kind of goes back towhat Justin was talking about of
these companies need to stickto the basics of data security
and data management, and Iactually wanted to ask Justin
get his opinion.
I saw something the other dayand they were talking about AI
data management and they werelooking at it from the data
(09:43):
management lifecycle things likecollecting, cleaning, analysis
and then governing right, likecollecting, cleaning, analysis
and then governing right and theAI part was just implementing
AI into all of those parts ofthe life cycle to try and make
each one better.
Are you seeing any of that outin these companies?
Speaker 3 (10:02):
Yeah, I definitely am
, and it probably points back to
the different type oforganizations we're interacting
with in this space.
The ones that have been doingit for a while have some tech
and some AI that is far moretargeted when discovering
whatever high value assets anddata are out there.
They're not simply just usingregular expression to try to
(10:25):
pattern match, and so early onin this space, companies would
just be out searching for acertain string of digits or a
format of a type of data andinherently you're going to get
so many false positives in thatarrangement.
It's so hard to separate thesignal from the noise and so,
(10:45):
using intelligence, some arelooking at fields around
potential hits or matches withinthe data and they can make a
decision and filter out a lot ofthat noise and leave users with
just the signal left.
That's one area that we'reseeing.
The next is probably justbuilding some intelligence
(11:10):
around remediation.
So if you are finding a bunchof different areas where you're
sensitive or your high valueassets and data exist, how do we
do something with that oncewe've got a laundry list of
things we need to go fix, andsome of the more advanced
platforms in the space are doingthings like remediation or
(11:32):
retention policies or deletingthings on the fly where there
are duplicates, or findingthings in data that are in areas
that they shouldn't be.
So we are seeing quite a rapidintelligence, as you asked about
, in those areas.
As you asked about in thoseareas.
Speaker 2 (11:48):
Yeah, from that.
When I was looking at that, Iwas most interested in the data
security aspect because it wasinteresting to think about.
You know, we're talking abouthow to secure your AI's data and
make sure that it's beentrained properly and not on any,
you know, poison data, but thenalso using AI to further
strengthen your data securityposture and they talked about
(12:10):
things like data loss preventionor fraud.
Detection has historically beenvery rule-based.
You know you have someonesitting there like, okay, what
are the things we don't want tosee?
We don't want to see peopleputting their social security
into an email or stuff like that, right, right.
But then you know they touchedon that.
Ai is extremely powerful atanalyzing large volumes of data
(12:34):
and recognizing patterns beyondwhat just simple rule-based
detection can see.
So I just thought that wasreally interesting the aspect of
using AI to further strengthenyour own potentially AI data
security posture.
Speaker 3 (12:50):
Yeah, definitely, and
I think one other area you've
just kind of opened up there isthis whole matching up of
persona or permissionsassociated with the sensitive
data that's been discovered, andso there's a whole other area
and a whole other workflowaround making sure that we don't
have open access and some ofthe machine learning, as you
(13:14):
touched upon, and theintelligence is able to take all
that into consideration.
Downstream, where I think wherethis all ties together and gets
really interesting is if we wantto be building applications and
giving the right information tothe right people, provided
they've authenticated correctly.
When I type in something or Ipose a question, I might have
(13:37):
different access to differentthings behind the scenes and
therefore the answers that I getback out of these language
models might be very differentthan someone else and depending
upon what role you're doing orwhat information you're supposed
to have access to.
And there's a trick there totying what sensitive information
and which people have access towhat and making sure that the
(13:59):
answers are both helpful,accurate, but also tied to the
right persona to make sure thatthe right folks are getting the
right information.
Speaker 1 (14:08):
Yeah, we talk about
zero trust.
You know plenty.
We usually talk about it withinthe network setting, but
certainly that applies to thedata setting as well.
Speaker 3 (14:17):
Certainly does, I
think, what we're seeing.
A lot of the customers in thelast couple years they were in a
real rush to turn onproductivity tools like CoPilot,
and I've never in my careerseen the amount of pressure
coming from a board level or aC-level to get some of these
things turned on quickly.
And so we're working with a lotof just the practitioners in
(14:40):
the space and they're nervousbecause they know that they
don't have their arms aroundtheir data appropriately to turn
these things on.
And the nightmare scenario thateveryone's worried about is
someone's able to just sit downin front of Copilot and ask for
sensitive information like giveme everyone's salary in the
company, and unless you've gotyour arms around what documents
(15:02):
have what information and whoshould be accessing those, it
can be very problematic beaccessing those.
It can be very problematic and,like I said, I've not seen this
amount of pressure to getthings turned on quickly, and
it's because of the productivityenhancements and the promise of
productivity that we're seeingthis.
Speaker 2 (15:23):
Yeah, I saw something
similar.
It was less around maybepre-built tools like Copilot,
but companies out there that arelooking for ways that AI can
improve their business andthey're developing their own
tools around AI.
And they were talking about howyou have to be very careful
because if you get moving soquickly and release these tools,
(15:44):
if you do find that there is aproblem or a data security risk
in there, it takes that muchlonger to go back and identify
where that risk came from,Because now you're having to
manually comb through all ofyour data and determine.
You know what is it that led tothis certain vulnerability?
Speaker 3 (16:08):
probably the most
cited examples in this space,
and you know a lot of peoplethink that the advent of a lot
of these tools happened whenChatGPT was released in the end
of 2022.
There's one that I go back toquite a bit.
Microsoft put out a TayChat botback in 2016, and it was
probably the first example I'maware of of something that was
(16:31):
learning on the fly, and whenthey launched this thing, it was
taking all the input andtraining it and within 16 hours
it had so much offensive stuffcoming back that they ultimately
had to shut it down veryquickly.
This episode is supported byCrowdStrike.
Crowdstrike providescloud-native endpoint protection
(16:53):
to detect and prevent breachesin real time.
Secure your organization withCrowdStrike's advanced threat
intelligence and responsecapabilities.
Speaker 1 (17:04):
Well, something that
you guys were just talking about
as well.
You know, justin, you'dmentioned kind of the nightmare
of you know, for a lot oforganizations out there is you
can just sit there in front of acopilot or another you know
model and grab some personalinformation Chance.
That gets into a lot of whatyou've been talking about with
prompt injection, and you know Iwas.
When I first saw you demo this,your prompt injection, I was
(17:26):
like appalled that it was thateasy to just trick, so to speak
I don't know if that's the rightword for it but trick a model
into just giving you moreinformation.
Hopefully you can dive a littlebit deeper in terms of what you
did there.
Speaker 2 (17:38):
Yeah.
So I've done that a coupledifferent places.
One is going to be the trainingdata poisoning lab, which we
took it a step further beyondjust misinformation showing up
in the chat's output and insteadit was entering in a simple
system prompt that was hidden inthe recipe and that led to,
when users asked about thatcertain recipe, the LM would
(18:00):
respond with something saying Ibelieve you should actually go
to a different forum.
I don't think that this forumis the best for you, right,
driving more customers away.
I don't think that this form isthe best for you, right,
driving more customers away.
Or I have a prompt injection labout there that is showing some
of the dangers around promptinjection.
Now I will say that promptinjection is getting harder and
harder with each day.
(18:21):
Companies are definitelycatching on that.
There's ways to get aroundthese security measures.
So you know again, they'reexpanding what they have through
things like input and outputguardrails, ensuring that the
input the user is giving to theLLM passes a company's checks
and ensuring that that outputthe LLM is then returning to the
(18:43):
user passes their checks aswell.
So prompt injection isdefinitely getting more
difficult as time goes on, butpeople are still finding ways
around it and it's constantlyevolving on both sides, both the
attacker trying to getinformation, as well, as, you
know, the security professionalsthat are trying to defend
against these kinds of things.
Speaker 1 (19:03):
Yeah, and Justin, I
know one of the interesting
things that you've said beforeabout prompt injection is it's
really lending itself.
You're seeing more of like apsychologist type background or
just the power of people,persuasion to be able to coax
whatever information you mightneed out of these models.
Speaker 3 (19:17):
Yeah, you know, as
Chance mentioned, there are
plenty of vendors in this spacethat are developing ways to red
team or test against these sortsof attacks, the background of
which really had to tap outsideof traditional cyber thinkers
and hackers.
We found a lot of folks thathad backgrounds in psychology
(19:40):
and masters of manipulation andlanguage to work with cyber
folks to build tools that couldtry to manipulate via language.
It has been very interesting towatch unfold.
Speaker 1 (19:54):
Yeah, Well, as we can
see, whether it's, you know,
data poisoning or promptinjection, it's not like a
hacker or you know a bad actorneeds to own your entire data
set.
They could plant small seeds inthere that can lead to massive
disruption, and that feels likeit's a little bit of like a
needle in the haystack for anorganization to find.
So does that, Justin?
Just speak to the fact toalways be on top of your data
(20:16):
and have it clean, and if that'sthe case, how do you go about
doing that?
Speaker 3 (20:21):
Yeah, so I mean
there's a couple of ways I've
mentioned the two foundationalthings that I think are really
important.
We are also seeing some vendorspop up in the space that aren't
necessarily scanning throughthe payload of the data.
They're looking at otheraspects.
It could be metadata or filename or directory structure or
(20:42):
permissions.
Those are probably far morefocused on one or two types of
uses.
Like I mentioned, Copilot Gleanis another one.
We're seeing a lot withorganizations pulling that data
in, and so there are some thingsout there that can help if
there's a time issue and I'vegot to get to a value or an
outcome very quickly.
(21:03):
When we look at more of thefoundational things I was
talking about before, A lot ofthis just plays into the
standard data governance riskcompliance.
So data source validation wouldprobably be the first thing.
We want to make sure we're onlyusing trusted, verified data
sources lineage where that datacame from, what changes have
(21:31):
taken place, and we've verifiedthat all of it is accurate, and
there's a number of differentways that you can track that and
make sure that that's sane.
Data integrity checking isanother mechanism.
There's things that can beperformed with hashing or
signing to make sure that theyare unchanged or untampered with
, and we also can use versioningcontrol much like we would in a
(21:53):
software development pipeline.
There are some other techniquesaround outlier and anomaly
detection.
So before training, run anunsupervised learning or
clustering, run an unsupervisedlearning or clustering.
We can find through some ofthese intelligence techniques
(22:15):
what might be outliers in dataand maybe inspect that before we
put it into a training orreject it outright before it
enters any of the trainingprocess.
Speaker 2 (22:28):
Yeah, that's building
upon that.
That was kind of the techniqueas I was researching for the
training data poisoning lab thatI landed on as a cool way to
show some of the securitymeasures you can take was
actually using machine learningalgorithms for anomaly detection
.
So in that lab the forum isaround a coffee shop, right, so
you would expect the data thatit contains should be somewhat
(22:50):
related to coffee or anythingadjacent to that.
So when a user puts in thingsthat are outliers and that
anomaly detection algorithm isable to identify it, it would
then sanitize that data from theRAG architecture, not show up
in a user's output.
Speaker 3 (23:10):
Yeah, definitely.
And then I think there's, youknow, ongoing things.
After you've got your baselinemodel and you think you've got
to train with the right data,what you feel is a good set and
it's accurate and it's whatyou're intending.
There's always feedback loopswith that.
So continual model auditing andvalidation.
(23:32):
We want to be running queriesto make sure that we're getting
responses back, and then somewill go as far as to pin up a
separate environment with anidentical application and do
some adversarial training, wherethey're purposely interjecting
poison data to make sure thatthe tools that they're using to
detect such are finding thosethings as well.
(23:54):
So a lot of different ways.
Again, I'll highlight that thisisn't a buy one product and
it's fixed type of approach.
There are many different layersthat companies need to be
paying attention to in thisspace.
Speaker 1 (24:09):
Yeah, justin.
I mean, for the most part we'vebeen talking about protecting
or mitigating risk within modelsthat I assume our organizations
are running for themselves.
But how much does this issue orchallenge amplify when we start
to bring in the fact that youknow there could be agentic
frameworks at play or you couldbe dealing with a different
partner or customer, or just adifferent model from some other
(24:31):
organization, and those arespeaking to each other, or just
a different model from someother organization, and those
are speaking to each other?
How do we ensure that we candownstream also protect
ourselves from maliciousactivity that might be happening
outside of our four walls?
Speaker 3 (24:42):
Yeah, that's a great
question.
It's one that customers bringup often.
I'd say this having thatframework of a governance risk
compliance foundation is so key.
Governance Risk ComplianceFoundation is so key.
With that, you're going to havea questionnaire or a series of
questions that you need to beasking all of those downstream
partners or vendors orcontractors, arrangements, etc.
(25:04):
And sometimes they'll be ableto answer those appropriately
and other times they can't.
And when they can't,organizations are faced to make
a decision around vetting outrisk versus outcome and they can
only make that choice forthemselves.
Right, it is one that's beingbrought up often.
(25:27):
Having that foundational stanceand questions that you're going
to be asking and making surethat those other applications
are doing their job in thisspace is very important
(26:11):
no-transcript.
Speaker 2 (26:15):
nothing will ever be
100% secure and I'm sure that,
as we see, different things likeagents start to come to
fruition, more security risksare going to be out there
available for attackers and it'sgoing to be more challenges
that security professionals haveto tackle as they show up.
Security professionals have totackle as they show up.
(26:35):
I was curious, justin.
Something I've been hearing alot about has been MCP, or Model
Context Protocol when it comesto agents.
Have you had any discussionsaround that?
Do you believe that's going tohave an impact on agentic
security?
Speaker 3 (26:50):
This is an area that
I'm not very well versed in.
I know it is a developingstandard and it is a protocol
that will be in place for a lotof these reasons but chance I'm
not, I'm probably not able tospeak to be a way to allow
agents to connect to a largevariety of data sources without
(27:19):
having to code those connectionsthemselves.
Speaker 2 (27:22):
Right, so Dell
developers don't have to try to
make each connectionindividually, they just connect
it to this server.
That then will give that agentaccess to all these different
data sources which it can beboth data sources and other
applications.
right Right right, which, as Ithink about it, you know good
(27:42):
and bad.
If you have a lot of securitystandards or tools in place to
protect that MCP server, it canbe really useful.
Right, but also there feelslike there's a danger to
allowing that agent access asingle point of access to all
those different sources orapplications, apis and whatnot.
Speaker 3 (28:05):
Yeah, it probably
cuts both ways right.
You've got one place thatyou're defining policy and,
provided you've got that policyset correctly, it can be a huge
time savings and a greatefficiency gain.
If something is compromised inthat one point of contact for
all those different systems, itcould be catastrophic as well.
(28:26):
But I think the intent is toreduce the complexity, pin up
policy at one point and have itpropagate to all these different
systems that are interacting.
Speaker 1 (28:40):
Well, you know,
beyond a malicious external bad
actor trying to do harm to anorganization or an entity, what
about the fact that you know alot of mistakes are just made by
you know innocent people andbystanders within an
organization.
So is that also a situation inwhich you know myself or
somebody else within a companycould inadvertently expose you
(29:02):
know bad data into a model, andthen that would just have a
ripple effect.
Speaker 3 (29:06):
Yeah, definitely so.
Probably one of the last pointsalong ways to defend against
this is all around accesscontrol and monitoring.
So having a very strict list ofwho can submit training data,
especially for user-generatedplatforms, and then monitoring
those incoming streams for anyunusual patterns, spikes in
(29:27):
submissions or specific areasthat users might be really
interested in.
You know as well as this goesoutside of data poisoning, but
just educating users on if theyare out, have found their own
tools online, what it means tobe uploading anything into a
prompt, whether it's a questionor a document, plenty of tools
(29:51):
and platforms in that space,that kind of monitor for things
exiting the walls of anorganization and that probably
gets outside the scope of thisconversation but a baseline of
really educating users.
Listen, when you're going to beputting something into a prop
someplace, where is that dataactually going?
Who owns it?
(30:11):
Is it training, future models?
These are all things that wehave to educate users on.
Speaker 1 (30:18):
Yeah, absolutely Well
, we're running short on time
here so we will wrap up, butbefore we do Chance or Justin,
any parting thoughts or justwords of advice for our audience
out there as it relates tosecuring language models and
making sure that it's not givingback harmful or deceptive
information.
Speaker 2 (30:37):
Any parting advice, I
think a big one that I really
liked that Justin touched on inthe end.
There again, it goes outsidethe scope of data protection and
more towards shadow AI thattopic but education around what
exactly you should input intothese LMs, Also being aware of
(30:59):
over-reliance.
Like Justin said, I thinkpeople have a tendency to act as
if AI is a single source oftruth, which isn't really the
case.
At times, you need to be ableto go out and still ensure that
the information that you aregaining from it is accurate and
correct before, obviously, yougo out and start to, in a way,
(31:19):
spread that misinformationyourself and tell your
colleagues or create articlesaround whatever it is that the
AI has given you.
Speaker 3 (31:29):
Yeah, I guess the
only parting thoughts I have
I've probably already touchedupon.
This is not a buy a pointproduct and fix this particular
problem of data poisoning ordata security within AI
application development.
It's going to be baked into alot of existing things that are
(31:53):
already in place and thenaugmenting those policies
procedures to include new typesof applications like AI models,
and there are different pointsalong that development pipeline
where folks have access to dobad things with data and it's
not just at one point.
There are about three differentareas that that can take place
(32:16):
and there are differentstrategies and different things
that companies can be augmentingwith at each stop along that
development pipeline to at leastminimize the risk of
introducing bad data into thesemodels or unintended effects of
pinning up these applications.
Speaker 1 (32:39):
Yeah, absolutely Well
.
Great words of wisdom from thetwo of you, and thank you both
so much for joining us heretoday on today's episode.
Lots to take into considerationand certainly it's an evolving
landscape moving forward.
So to the both of you, thanksagain.
Speaker 3 (32:50):
Thank you very much
for having me.
Speaker 1 (33:03):
Okay.
So what did we learn in thisepisode?
First, data poisoning andprompt injection.
Thank you very much for havingme.
Securing your models requires anew mindset.
Traditional cyber controlsaren't enough.
Guardrails must be built intoyour data pipelines, your model
training and, yes, your promptengineering.
And third, trust in AI isn'tjust about explainability or
(33:24):
performance, it's aboutintegrity.
If you can't trust the data,you can't trust the outcome.
As enterprises rush to adoptgenerative AI, this moment
demands something more vigilance, because in this new era of
intelligence, the attack surfaceisn't just your network, it's
your knowledge.
If you liked this episode ofthe AI Proving Ground podcast,
please consider sharing withfriends and colleagues, and
(33:46):
don't forget to subscribe onyour favorite podcast platform
or on WWTcom.
This episode of the AI ProvingGround podcast was co-produced
by Naz Baker, cara Kuhn, mallorySchaffran and Stephanie Hammond
.
Our audio and video engineer isJohn Knobloch and my name is
Brian Felt.
We'll see you next time.