Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:30):
Today we'll talk about security and authentication for agentic AI, and who better to talk about it than someone who's actively working on that: Tim Williams. Tim, thanks for joining.
Tim Williams (00:40):
Thanks for having me, Andreas.
Awesome.
Andreas Welsch (00:43):
Why don't you tell us a little bit about yourself, who you are and what you do.
Tim Williams (00:46):
Sure. I've spent 25 years in corporate life, basically trying to make AI make money for organizations. So largely on the go-to-market side of businesses: using things like predictive algorithmic modeling 20 years ago to predict customers that were going to churn, through to optimizing workflows through voice channels and live channels.
(01:10):
Live chat, chatbots, those sorts of things, right through to next-best-action guidance for sales teams and AI-driven marketing strategies through CRM lifecycle marketing teams. I went out on my own about a year ago to help Australian businesses make sense of how to capitalize on agentic AI and take that forward into their
(01:33):
businesses, and got turned onto the idea of identity and security when it comes to AI agents back at the start of this year, so about eight or nine months ago. And it's a really fascinating space.
Andreas Welsch (01:46):
Absolutely. That's why I'm so excited to have you on, and thank you for staying up late for us to have this conversation. Obviously, agents are the buzzword of the year, maybe going into next year as well from what it seems, and hopefully much longer than that. There's so much promise and so much opportunity in what these
(02:07):
agents, these goal-driven systems, will be able to enable, but I think there are also some risks associated with that. Isn't that right?
Tim Williams (02:17):
Absolutely. Very different from human actors, not only in terms of scale, but also in terms of their operating cadence, and just exactly how things can go wrong. If I think about how humans operate: generally speaking, in an eight-hour day, they would
(02:39):
handle a certain number of transactions, and they're largely operating with either bad intention or good intention, and it's generally easy to tell the difference between the two. AI agents can be operating 24/7, 365 days a year. They could be making thousands of decisions a minute. They can self-spawn additional agents to help them achieve their
(03:00):
goals, and, as many different incidents have been reported lately, they can have very good intent and be designed with good intent at heart, yet make very catastrophic decisions and mistakes in the pursuit of those goals.
Andreas Welsch (03:21):
A couple of weeks ago I was reading that the security firm CyberArk said agents already outnumber humans 45 to one. So what changes, then, in terms of identity and trust and verification when you no longer have a human in front of the screen, but an agent taking these actions?

Tim Williams (03:43):
Absolutely. So generally speaking, the way I love to explain it is that humans generally have fingerprints.
That's not just biometric. It's not just the ability to fingerprint every time I want to two-factor into a system; it's also that I tend to leave a trace, an audit trail behind me that can be followed up and that I can't access and change. Agents are obviously very different in that there is no biometric. There's no fingerprint.
(04:04):
And often there have been examples where AI agents have been found to autonomously delete their audit trails and act in ways that humans simply cannot, at a speed and a scale that just can't be replicated by humans.
Andreas Welsch (04:19):
It's really this part of scale, and agents being an even bigger black box than AI and models have been to date.
Tim Williams (04:28):
Yeah, absolutely. And it's these self-spawning capabilities. Claude Code, for example, is a really good example of the ability for an agent to spawn sub-agents who can then continue to act. And so instead of acting to prevent risk from a single human, like you said before, 45 to one, you've got
(04:52):
agents already operating at scale, agents who can then self-spawn multiple sub-agents to help them achieve their goal. And those agents exist one second and don't the next. Very different to how humans operate. Humans are generally operating in a one-to-one kind of environment. Their identity persists; you can go and find where they are, whereas an agent could
(05:14):
exist one second and not the next.
Andreas Welsch (05:16):
Right. Now, if I look at your background image, that kind of is a little scary. There's this guy with a long mustache and the red eyes, holding a bag with a dollar sign in his hand. And the other one, with a tie, seems like they're diligently working. How does that translate to what you're seeing in businesses
(05:36):
emerging, and what that risk could be?
Tim Williams (05:39):
We've got the FBI-looking agent over here who has authority. If you think about an FBI agent, they wear a badge: a way of knowing that agent has a specific identity that has been delegated authority by the state. That's the way we like to think about agent identity. Not only can I tell who they are, but I know that they've been
(05:59):
delegated a level of authority by the human entity that's responsible for their actions, versus old mate over here who doesn't have an identity. He operates in the shadows. He's out there trying to make life difficult for well-meaning humans trying to go about their day.
Andreas Welsch (06:21):
I like that analogy you just shared, about the good, legitimate agent having a badge, us being able to identify them, and them being authorized to represent who they say they represent. How do you do that kind of authentication with agents, and how do you make sure that they stay within limits and are,
(06:42):
yeah, rather on the good side than on the rogue side?
Tim Williams (06:45):
Yeah. So if you look at the way many are approaching it today: in most cases, because this is very new territory for a lot of people, you find that a lot of people are simply providing agents with their usernames and passwords, or their private keys, or their OAuth tokens. And we've seen a number of different breaches that have
(07:07):
attacked what have traditionally been accepted to be relatively secure avenues. There was a breach a couple of weeks ago, the Drift AI breach: 700 organizations were compromised via sales agents. And those agents had persistent OAuth tokens that were able to be
(07:30):
compromised by the attacker, who was then able to use those OAuth tokens to get into their AWS accounts, their Salesforce accounts, all sorts of different things that are quite scary. And those attackers were in for 10 days and managed to delete their trail as they went. And so it's a good example of those
(07:51):
traditional security givens that you would use to manage humans just being significantly insufficient for managing agents. There are a few different approaches emerging. There are some people who are trying to use MCP for the solution, others who are trying to stretch other identity and access
(08:11):
management capabilities. What we're doing at Async is quite different. We're actually approaching it more like social security numbers and FICO scores: the ability to have a unique identifier that tells you exactly who this person is.
(08:31):
Sorry, exactly who this agent is, but also the person that's responsible for any of their actions and who to hold accountable if anything goes wrong. And then, rather than thinking about security and trust in a binary trust-me-or-don't-trust-me model, which is what all of these traditional security solutions are doing, trust when it comes to agents is actually a sliding scale.
(08:54):
An agent could be very trustworthy one minute and very untrustworthy the next, and that's where credit scores came in for humans. If you think about why credit scores became available, it was because the ability to trust a human with credit or with money is not a binary yes or no. It changes over time with circumstance.
(09:15):
The first time I borrow money, I could be very trustworthy; I haven't committed fraud. But then the second or third time, when you can see I've got a payment history or I've done certain things, suddenly that starts to change the dynamic. And the same thing can happen with agents. They could be well built. They have a good origin. They've been built on solid platforms.
(09:36):
You've got a really reputable, accountable entity for their actions. And when they start to do business, they could be quite trustworthy, but they could drift, as we've seen. Anyone who's used AI knows that the longer you interact with an AI, the less reliable some of its actions can become. And then you can also have compromised agents, where an attacker has actually been able to
(09:58):
compromise the agent, whether it's through memory poisoning or through tool misuse, or a number of other different types of attacks that we've seen recently. That ability to trust that agent can degrade over time. And so we are thinking about it in terms of it being a sliding scale, and that trust score
(10:19):
really can also be used to determine what level of risk you want to take on by trusting that agent.
(10:46):
If you're talking about a very simple transaction that the agent's trying to do, maybe a trust score of 50 is fine, versus maybe this agent's trying to finalize a million-dollar transaction.
Andreas Welsch (10:57):
Yeah.
Tim Williams (10:57):
You probably want a trust score in the nineties. And so we are thinking about it quite differently, and our position really is that traditional solutions in this space just aren't going to be adequate for agents operating at the scale that they're already operating at, and will in the future.
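The sliding-scale idea Tim describes can be sketched in a few lines. This is purely illustrative (the risk classes, thresholds, and function names are my own, not Async's product): the point is that authorization is decided per transaction against the agent's current score, rather than granted once and forever.

```python
# Illustrative sketch of transaction-level trust gating: the required
# trust score scales with the risk of the transaction, instead of a
# binary, persistent grant. All names and thresholds are hypothetical.

RISK_THRESHOLDS = {
    "low": 50,      # e.g. a simple read-only lookup
    "medium": 75,   # e.g. updating a CRM record
    "high": 90,     # e.g. finalizing a million-dollar transaction
}

def authorize(agent_trust_score: float, transaction_risk: str) -> bool:
    """Approve a single transaction only if the agent's *current*
    trust score meets the threshold for this risk class."""
    required = RISK_THRESHOLDS[transaction_risk]
    return agent_trust_score >= required

# A score of 50 clears a low-risk call but not a high-risk one,
# mirroring the "50 is fine vs. nineties" example in the conversation.
print(authorize(50, "low"))    # True
print(authorize(50, "high"))   # False
print(authorize(93, "high"))   # True
```

Because the score is re-checked on every call, a drifting or compromised agent loses access as soon as its score drops, with no standing credential to revoke.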
Andreas Welsch (11:14):
So it sounds
like that it becomes a
transaction-based check.
Is the transaction that theagent is trying to complete
something that is high risk,medium risk, low risk, and some
risk modeling going on to saythere maybe some classes or some
categories of tasks or inimpact, and what is that risk?
(11:36):
Is that somewhere in the ballpark?
Tim Williams (11:39):
Yeah, absolutely. But it also comes down to the persistence of that access. What you just talked about is actually a bit that I probably didn't cover off as clearly, but it's really important: traditional access management solutions and identity solutions are a binary and persistent kind of model. I talked about the token breach before, where they were
(12:00):
persistent OAuth tokens, whereas agents really can't be trusted with persistent access, for all the reasons that I've mentioned. And so you're talking about short intervals and transaction-level approval rather than persistent identity access like
(12:20):
you would have in traditional systems. And yeah, as an organization you can make decisions around what trust score you accept for agents, and whether that trust score differs depending on which systems they want to interact with or which transactions they'd like to complete.
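The contrast with persistent tokens can be made concrete with a small sketch. This is an assumption-laden toy, not any real product's API: each credential is minted for one transaction with a short time-to-live, so even a stolen grant is useless for anything else or a moment later, unlike the long-lived OAuth tokens in the breach Tim mentions.

```python
# Hypothetical sketch of transaction-scoped, expiring credentials, in
# contrast to persistent tokens. Each grant covers exactly one
# transaction and a short time window.
import time
import secrets
from dataclasses import dataclass

@dataclass
class TransactionGrant:
    token: str
    transaction_id: str
    expires_at: float

def issue_grant(transaction_id: str, ttl_seconds: float = 30.0) -> TransactionGrant:
    """Mint a single-use credential tied to one transaction, valid briefly."""
    return TransactionGrant(
        token=secrets.token_urlsafe(16),
        transaction_id=transaction_id,
        expires_at=time.monotonic() + ttl_seconds,
    )

def is_valid(grant: TransactionGrant, transaction_id: str) -> bool:
    """A grant is only honored for its own transaction, before expiry."""
    return grant.transaction_id == transaction_id and time.monotonic() < grant.expires_at

grant = issue_grant("txn-42", ttl_seconds=0.05)
print(is_valid(grant, "txn-42"))   # True: right transaction, not expired
print(is_valid(grant, "txn-99"))   # False: scoped to a different transaction
time.sleep(0.1)
print(is_valid(grant, "txn-42"))   # False: expired, unlike a persistent token
```

A real deployment would also bind the grant to the agent's verified identity and log the approval, but the expiry-plus-scoping idea is the core of "short intervals and transaction-level approval."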
Andreas Welsch (12:34):
See, to me
that's been one of the big
questions.
When we talk about agents, howmuch do we delegate to them in
terms of task, risk, impact?
Yes.
And how trustworthy are they?
Because, yes, on paper, yes, inour lab, yes.
These one transactions that wefire off here and there to test
(12:55):
it out, they seem to work fine. But then when you have them in your environment, or in the wild, so to speak.
Tim Williams (13:02):
Yeah.
Andreas Welsch (13:02):
Who knows what really happens, and how quickly can you act? So it sounds like you're really working on something that's very impactful, and can be impactful, for a lot of businesses figuring out: what do we do here? Yeah, absolutely.
Tim Williams (13:15):
Yeah.
Andreas Welsch (13:17):
We've always had
some sort level of trouble with
bad actors and challenges aroundtrust.
People saying I'm this person,or I have these yes privileges,
permissions, but they didn't.
We had the whole thing of socialengineering going on.
I think a year ago or so, therewas something there was some
(13:38):
papers on agents deceiving their users about their intents and the actions they've taken. So how does it all change with agents now, agents saying who they are, and verifying all of that?
Tim Williams (13:55):
Yeah, it's a really good point. I used the example of social security numbers before, and FICO scores, and a number of breaches that we've seen in the past have targeted exactly those kinds of data points. If I can work out what someone's social security number is, and I've got their date of birth and I've got their address, yeah, I
(14:17):
can pretend I'm them. That's not a problem. And so it's another reason why this space demands a different approach, and cryptographically unique, immutable, and unspoofable identities are really important. We are choosing to use blockchain as part of the technology we're using to enable that. We are not a traditional crypto or token platform.
(14:41):
We're actually generally a web-two enterprise-type solution. But the research we've been reading, and the academic research we've been pointed to, really tells us that at the moment, blockchain is the only solution that's reliably providing a cryptographically secure and zero-trust way of
(15:06):
creating these identifiers, and creating them in a way that can be verified in a decentralized manner at sub-second latency. But it requires that level of sophistication because, like you say, it could be very easy for one agent to imitate another. Like I said before, there's no biometric, so you don't have the
(15:29):
fingerprint that's unique. You don't have all those sorts of things that could be used for humans to resolve those issues, and we have to look for more innovative ways to use technology to solve that problem.
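The shape of a verifiable, accountable agent identity can be sketched without any blockchain machinery. To be clear about assumptions: a real system like the one described would use public-key cryptography with decentralized verification; this toy uses stdlib HMAC with a shared secret purely as a stand-in, and every function and field name here is invented for illustration.

```python
# Simplified stand-in for an unspoofable agent identity: an identifier
# bound to an accountable entity and a signing key, so a claim of
# identity can be verified rather than taken on trust. HMAC with a
# shared secret is used here only for brevity; a real system would use
# public-key signatures verified in a decentralized way.
import hmac
import hashlib
import secrets

def register_agent(agent_id: str, responsible_party: str) -> dict:
    """Create an agent record with a secret signing key; the record also
    names the accountable entity, like a badge naming the issuing state."""
    return {
        "agent_id": agent_id,
        "responsible_party": responsible_party,
        "key": secrets.token_bytes(32),
    }

def sign_action(record: dict, action: str) -> str:
    """The agent attaches a signature to each action it takes."""
    msg = f"{record['agent_id']}:{action}".encode()
    return hmac.new(record["key"], msg, hashlib.sha256).hexdigest()

def verify_action(record: dict, agent_id: str, action: str, signature: str) -> bool:
    """Verification fails if another agent tries to imitate this identity."""
    msg = f"{agent_id}:{action}".encode()
    expected = hmac.new(record["key"], msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

agent = register_agent("agent-001", responsible_party="acme-corp")
sig = sign_action(agent, "read:crm")
print(verify_action(agent, "agent-001", "read:crm", sig))   # True
print(verify_action(agent, "agent-999", "read:crm", sig))   # False: imposter
```

The `responsible_party` field is the key design point from the conversation: every verified action traces back to a human entity that can be held accountable.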
Andreas Welsch (15:44):
A curiosity question there. As we're talking about agents, where they might fall short, where they might have some level of impersonation or even a lack of authentication: how do you deal with that kind of technology as you're building the product, as you're building the company? What's your thinking on agents?
(16:06):
Where do you rely on them? Where do you say, no, this is too much, or this is too risky?
Tim Williams (16:11):
Yeah, I think it all comes down to observability, and it comes down to having systems and processes in place to manage trust. And often we have this conversation of, it's a bit chicken-or-egg, right?
(16:31):
Agents aren't quite at that point where you can trust them to be out on their own, and you need a human in the loop at all these different steps. And for us to remove those things really is quite a leap. But that leap is what is required to actually fully realize the innovative potential that agents represent. And I often have this conversation of, Async
(16:58):
would be a solution for this, or whatever technology or solution you've got in place would be required. And the answer I often get is: I don't really trust my agents to get out into production at the moment, so I wouldn't use a solution like that. And it's: imagine what would happen if you could. And so I think it's that thing of, before there were cars,
(17:19):
there were no roads, right? There were tracks for horses. And so you've got two options: you either build roads and wait for the cars to catch up, or you build cars and wait for the roads to catch up. Actually, you can do both. And I think that's what this particular moment in time requires. This technology is advancing at a rate I've never seen before in my lifetime, and I'm not sure how other people feel about it.
(17:42):
And I don't think it's enough to sit back and go, we'll wait for agents to operate at the scale required to build infrastructure. Actually, I think you need to build the infrastructure in parallel, so that you can get the most out of this innovation and these advancements in as short a time as possible.
Andreas Welsch (18:04):
One of my guests earlier this year talked about roads and self-driving cars, autonomous cars, in a similar analogy, talking about the fact that, hey, we need to design for two things (18:14): one is the future of how we want autonomous vehicles to act and behave, and the other, speaking more figuratively about software, is that we still have human drivers on the street, we have bikes, we have motorcycles and all different kinds of transportation on the street. So that was an interesting concept, and it's interesting to
(18:37):
hear you talk about even the analogy that laid the foundation for that. Is it roads? Is it dirt tracks? And don't wait, is what I'm hearing.
Tim Williams (18:48):
Yeah. And I think it's important to recognize that AI is similar to a lot of other technologies in that it comes and goes in bursts. Internet speed is a really good example, right? I remember before even the dial-up modem, the internet was really only big institutions that could afford
(19:08):
really big infrastructure to make it happen. And then all of a sudden someone realized, hey, I can send the internet down a phone line, and all of a sudden everyone's got internet. And then those 56K modems were the thing for quite a while. And then someone went, oh, actually broadband's a thing, and then cable's a thing, and that was the accepted standard. And then all of a sudden you've got fiber, and it takes off.
(19:30):
And so you see AI operating in the same kind of way. AI's been around for 70 years, and you'll see periods of massive acceleration and then periods of very much stagnation in advancements. And I think it's important, when you reach a point where you are seeing technological breakthroughs like we're seeing
(19:52):
at the moment, that you continue to double down. Because what can happen next is a long period of stagnation where you don't get that value. And so being able to enable that innovation, and to help advance how far AI will get in this particular burst period, I think is really important.
Andreas Welsch (20:09):
So we're already pivoting towards: what is the opportunity, what's the potential? That was my next question, and I'll ask it anyway. When you work on a topic that deals with risk aversion, with the prevention of harm, of fraud, of identity theft in that
(20:30):
sense, and all these security topics, I have a feeling that can be pretty daunting, pretty negative for some people, if you always need to think about what can go wrong. So where do you see the opportunities in all of this? Because of the work that you're doing, or despite the work that you're doing, depending on how you want to look at it?
Tim Williams (20:53):
I actually think it's because of. And the reason I say that is, we wouldn't otherwise have to build something as complex or as innovative as we're building with Async. And so we are constantly seeing what's happening at the forefront, at the frontier, of where this technology is going.
(21:13):
And yeah, it can be scary, but at the same time you can start to see the potential. We are having to build these things out because they're so clever, because they're unlocking a capability that you've never seen before. And having to design and build around all of these advancements is actually really exciting, because you can see where this
(21:36):
can go. And there's always the risks, right? There's always the risk of bad actors, and there's always the risk of things going the wrong way. And I think it's really important that the right kind of architecture and the right kind of infrastructure is built to help mitigate those risks. Because ultimately this technology exists.
(21:56):
Even before we got to the level we have with agents, you still had bad actors finding different ways to get into different things. I had a very similar conversation; this is gonna sound like I'm name-dropping, but I'm gonna do it anyway. I was in a conversation at a Salesforce meeting with an ex-head of cybersecurity at the CIA, and we were talking
(22:17):
about quantum computing. And the question that got asked was: quantum computing is exciting, but quantum computing in the hands of bad actors really is quite scary, right? You are talking about processing capability at a level we've never seen before, and then you're putting AI agents on top of that, and what could that become?
(22:38):
And his response was actually quite prescient. It was: bad guys are always gonna find ways to get into things. Our job is to close those holes. There are always going to be new holes that pop up. It's whack-a-mole. They're always gonna find new ways; you've just gotta knock them down as quickly as you can, try and predict what's gonna be out there, and try and close those gaps before they happen.
Andreas Welsch (22:59):
So then, with all of that out there, the challenges and risks and the big opportunities, what should leaders be looking for when they bring agents into their business?
Tim Williams (23:11):
Yeah, I think, for me, it all comes back to strategy. I think everyone's read the different reports that talk about the failure rates of AI experiments or AI projects, and what I see as a common thread between a lot of them (I'm not gonna say all of them) is that the effort can be very knee-jerk, very narrowly focused.
(23:35):
Often you see those efforts really focused on just trying to cut costs. I always say the shareholder's favorite story is the CFO's one, which is: oh, we're cutting costs and we're reducing head count. When actually I don't think that's the right strategy with AI, and with AI agents in particular.
(23:58):
I actually think it's a growth play, and I think it's a very different strategy. But it actually doesn't matter what strategy you are following; it has to be a really clear, well-thought-out strategy. Not just for AI, but: what does your business strategy for the next five years look like, empowered by AI and by AI agents, in a way that is unencumbered by
(24:20):
those constraints of the past? And so the first thing I would suggest is strategy. Think really hard about what it is that you are trying to solve: what is it in your five-year strategy that AI can make faster, better, more profitable, whatever the outcome might be? And particularly, how can you use it to deliver a customer experience that is worthy of what your customers deserve, but
(24:43):
also one that you can protect, improving customer experience through AI and through AI agents. That would be my first piece. The second piece is to really think through the observability and the controls you have in place around knowing what AI is in your business, and
(25:03):
how you will manage decisions around letting new AI into your business, and being really clear on how you can take steps to manage those risks if they go wrong. Whether they are agents that you are responsible for, or agents of other parties that you are letting into your business, be really clear on how you are going to manage it if and when those things go
(25:24):
wrong.
I think that's a big piece that's really important right now.
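The second piece of advice, knowing what AI is in your business and being able to act when something goes wrong, can be sketched as a minimal agent inventory. Everything here (class name, fields, methods) is an illustrative assumption, not a reference to any real tool: the point is that every agent admitted has a named accountable owner, and there is a revocation step ready before anything misbehaves.

```python
# Minimal sketch of the "know what AI is in your business" control: an
# inventory of admitted agents, an accountable owner for each, and a
# revocation step for when things go wrong. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentRegistry:
    _agents: dict = field(default_factory=dict)

    def admit(self, agent_id: str, owner: str, third_party: bool = False) -> None:
        """Record every agent let into the business, with an accountable owner,
        flagging agents operated by other parties."""
        self._agents[agent_id] = {"owner": owner, "third_party": third_party, "active": True}

    def revoke(self, agent_id: str) -> None:
        """The 'if and when things go wrong' step: cut the agent off."""
        self._agents[agent_id]["active"] = False

    def is_allowed(self, agent_id: str) -> bool:
        """Unknown or revoked agents are denied by default."""
        record = self._agents.get(agent_id)
        return bool(record and record["active"])

registry = AgentRegistry()
registry.admit("support-bot", owner="cx-team")
registry.admit("vendor-sales-agent", owner="vendor-x", third_party=True)
print(registry.is_allowed("support-bot"))         # True
registry.revoke("vendor-sales-agent")
print(registry.is_allowed("vendor-sales-agent"))  # False: revoked
print(registry.is_allowed("unknown-agent"))       # False: never admitted
```

Denying unknown agents by default is the design choice that matters: observability fails open if anything not on the list is still allowed to act.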
Andreas Welsch (25:28):
I think that's very actionable and very tangible advice. Tim, we're getting close to the end of the show, and I just wanna say thank you for sharing your experience with us and for staying up late; I know you're in Australia. It was a great conversation. I learned a lot about where we currently are with agents and what needs to happen to make sure that they are secure, that
(25:49):
we can authenticate them, and that we can give them a real identity and tie it back to a user.
Tim Williams (25:55):
Yeah, great. Thank you for having me, Andreas. I really appreciate it. It's been great.
Andreas Welsch (25:59):
Thanks so much, folks.