
April 9, 2025 46 mins

AI isn't just evolving — it's accelerating into every corner of business and society. But while innovation surges ahead, AI policy and regulation is playing catch-up. In this episode of The AI Proving Ground Podcast, two of WWT's foremost AI and cyber experts — Kate Kuehn and Bryan Fite — dive deep into the fragmented and fast-changing world of AI policy, regulation and governance. Plus, what every enterprise should be doing right now to stay ahead of regulatory change while building AI systems that are secure, inclusive and future-proof.

For more on this week's guests:

Kate Kuehn joined WWT in early 2024. She brings more than 25 years of experience leading and advising cybersecurity, technology and AI strategy teams, helping shape the industry toward better business security and risk decisions. Kate is a trusted advisor, thought leader, speaker, published author and mentor. Her current focus is the collaboration between risk executives and boards to achieve a successful, secure and integrated cyber risk management program amid ever-changing regulations and an elevated threat landscape. Kate also has expertise in the correlation between security, traditional IT initiatives and the implications of AI.

Kate's top pick: Trustworthy and Responsible AI at the Global Scale

Bryan Fite is a committed security practitioner and serial entrepreneur who uses Facilitated Innovation to solve "Wicked Business Problems." Having spent over 25 years in mission-critical environments, Bryan is uniquely qualified to advise organizations on what works and what doesn't. He has worked with organizations in every major vertical throughout the world and has established himself as a trusted advisor. "The challenges facing organizations today require a business-reasonable approach to managing risk, trust and limited resources, while protecting what matters."

Bryan's top pick: Shadow AI: The Threat You Didn't See Coming

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
AI has been reshaping the world for decades, but the current wave of AI, led by generative models and autonomous systems, is different. It's not just enhancing industries, it's disrupting them. Governments are beginning to respond with new policies and guidelines, but, if we're being honest, the regulatory landscape is fragmented rather than harmonized. Global governance remains elusive.

(00:23):
For enterprise leaders, this creates a growing challenge: how do we move fast and innovate while staying responsible, compliant and trustworthy? On today's episode of the AI Proving Ground Podcast, we talk with Kate Kuehn, WWT's lead on global cyber advocacy and a governor for the United States on the Global Council for Responsible AI.

(00:46):
We'll also talk to Bryan Fite, a principal security consultant for AI, about how organizations can leverage the TRAI framework to build AI systems that are ethical, transparent, inclusive and secure, even as technology evolves and regulatory expectations shift. A quick disclaimer on this episode: it is based in part on a concept paper that does not provide any specific or actionable legal advice, policy advice or other professional guidance.

(01:10):
Any references to laws, rules, regulations and frameworks are illustrative and intended to demonstrate how a purpose-specific framework for trustworthy and responsible AI could be created. For any specific framework creation, please do your own research and refrain from relying on this paper for the latest facts.

Kate, how's it going?

Speaker 2 (01:40):
It's great. Thanks for having me here. Appreciate the time.

Speaker 1 (01:43):
And Bryan. Bryan with a Y. I won't fault you for that, but welcome.

Speaker 3 (01:48):
Thank you, great to be here.

Speaker 1 (01:49):
Yeah. Well, both of you, just before we get started, have among the best LinkedIn profiles that I've seen. Kate: risk executive, cyber advocate, board member, advisor, investor, speaker, hacker. And, as if that wasn't enough, a mic drop: mom times five. Pretty impressive.

Speaker 2 (02:08):
It's just duct tape and band-aids, my friends. It's my life. I love cyber, I love my kids, and that's pretty much it. That's all I do in life.

Speaker 1 (02:16):
And Bryan, equally impressive resume here. But I am struck by the header image: Deepfake Justice League. Are you moonlighting for us?

Speaker 3 (02:25):
Well, no, it's just a campaign to try to make the world a safer place, given the adversaries out there, and the tools are outpacing us. The fakes don't fall into the uncanny valley anymore; they're very hard to detect, so we need to make sure that we can help our fellow humans.

Speaker 1 (02:46):
No, absolutely. Well, we've got lots to get to today, talking about AI policy, regulation and, you know, the environment overall around cyber and AI. The regulatory landscape, you know, seems to me to be one of the toughest and most difficult areas for enterprise AI leaders, or just organizational leaders, to wrap their heads around.

(03:06):
You know, it's a fractured landscape. What is compliant in the US might not be working over in the EU, or not even viable in a place like China. It's a difficult, complex landscape out there. Kate, I know you've got a lot of expertise in this area.

(03:26):
I was just hoping, before we get into the thick of this conversation, you could level set for us on what the current environment is from an AI policy and regulation perspective, as our clients or organizations experience it.

Speaker 2 (03:35):
Yeah, I mean, let's start with some groundwork on it. So, you know, all organizations today are going through, well, we've used the term digital transformation for a number of years, but the reality is now we're in the middle of a digital revolution, and the reason is that technology impacts every area of a company. I mean, literally, it's in every part of an organization.

(03:56):
I used to say every part except climate, and then a woman much smarter than me showed me it's even in climate. So there's a technology component to all areas a board looks at. When you have technology, there is cyber, because cyber is nothing more than anything to do with a computer or computer transmission.

(04:19):
And now you have AI being considered, because organizations are looking at AI for one of two reasons: it's either going to help them from a cost impact perspective, make employees smarter, faster, more productive, or there's a brand differentiation where it's being used to innovate and create. But it's creating this digital revolution when we look at it from a legal aspect, this concept of trustworthy and ethical AI, because AI can be used for really good things.

(04:40):
It can also be used to spoof people, to make it harder to detect when threats are happening, to hallucinate data and make data do different things that it doesn't need to do, and it's going to continue, and we're going to see AI get further and further into our lives.

(05:03):
So, as that happens, the regulatory community is trying to figure out how do we maintain a technical and an ethical baseline for our consumers and our companies and our government. The issue that we face is, you know, looking back in history, we didn't do a good job of having, in essence, one version of cyber regulation; when cyber policy was starting, we were sporadic with it. Cyber went from the guys in the back of the room, an afterthought, to a forethought in about a 15-year history.

(05:26):
We're trying now with AI not to repeat the mistakes of the past. So you're seeing organizations and governments starting to look at it from a regulatory perspective to say, okay, we can't have 10 different agencies regulate this. We don't want to see states regulating everything differently. We want countries to have kind of a baseline across the board.

(05:52):
But the best of intentions paved the road to hell, because you saw the EU release something. You saw our government release, then pull back, and now release again. You've seen the states start to get in.

(06:13):
So we're actually going down the path of what we did in cyber, where everybody's starting to poke at it, and the goal we're seeing from board organizations, from the ISACs and others, is to try and hit the brakes and get organizations, and especially interconnected governments, to look at the regulatory landscape with one perspective. Because it's a massive concern for boards, for companies that are looking to implement AI: is what they're implementing in one area going to be considered regulatory-safe in another area?

Speaker 1 (06:30):
Yeah. Bryan, what are the implications then on cyber teams for organizations around the world, pick your vertical there? What does all that shifting, all that uncertainty, equate to for how organizational cyber leaders are trying to position their companies?

Speaker 3 (06:50):
Yes, engineers don't like uncertainty, and the rate of change and the impact of those changes create unique challenges. I think the advantage that we have in our practice area is that our North Star is something we call trustworthy and responsible AI, and what that actually is, is a unified compliance framework of all of the global frameworks, the regulations and various decrees.

(07:10):
And by having that, what I'll call a lens or Rosetta Stone, it makes it very easy for us to consult with our clients. Internally, we drink our own champagne, but also when we go out and consult with the industry, because they want to take advantage of all the promise of AI, but they want to do it safely and securely and economically.

(07:35):
And so the benefit of having that joined-up view really can provide the lens that's right for that particular organization and allows us to meet them where they happen to be. Now, if you hear me talk about this to people at conferences or consults, trustworthy and responsible AI is hard, first of all because nobody really agrees on what it is, and that's okay, because it is different for every organization.

(08:00):
But through our lens, we can say: here are the things that matter to you, your mission, your stakeholders, where you operate, the industries that you operate in. And having that kind of view, we can then customize the messaging and the advice and consult that we provide our clients in the industry.

(08:20):
And the beautiful part about this is it's not just fluffy words. We can actually give you the KPIs to measure that, and we can walk it back.

(08:41):
One of the nuances that we found when we started to try to put together this unified compliance framework, even though there are so many of them out there, there's our stuff, there's NIST, and I love NIST, and our North Star, as it were, would be NIST AI 600-1. And that contains what I call the dirty dozen.

(09:13):
They're the 12 bad things, the ways humans can be harmed if we don't get these AI systems correct. And the beauty of that is we can then map from there and framework-walk wherever you need to be to be compliant, and we know compliance doesn't equal security, but it's still a cost of doing business. And then we can equally say: here's a threat catalog, here are the controls that have a high affinity for mitigating that bad thing, and so you can distill the world of possibilities down to two or three decisions.

(09:36):
And if you ask us, we'll be very prescriptive. We'll tell you: if this was our environment, this was what we're responsible for, this is the path we'd take, but ultimately it's the client's choice.
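To make that framework walk a little more concrete, here is a minimal sketch of how a crosswalk from a "dirty dozen" risk category to high-affinity controls might be represented. The category names, control identifiers, descriptions and affinity scores below are illustrative placeholders, not WWT's actual catalog and not the contents of NIST AI 600-1.

from dataclasses import dataclass

@dataclass
class Control:
    control_id: str    # e.g., an NIST SP 800-53-style identifier (illustrative only)
    description: str
    affinity: float    # 0.0-1.0, how strongly the control mitigates this risk

# Illustrative crosswalk: risk category -> candidate controls
RISK_TO_CONTROLS = {
    "data_privacy": [
        Control("AC-3", "Enforce access control on training and inference data", 0.9),
        Control("MP-6", "Sanitize or mask sensitive records before model training", 0.8),
    ],
    "information_integrity": [
        Control("SI-7", "Integrity checks on model artifacts and prompts", 0.7),
    ],
    "intellectual_property": [
        Control("PM-5", "Inventory of licensed data sources used for training", 0.8),
    ],
}

def framework_walk(risk: str, top_n: int = 3) -> list[Control]:
    """Return the highest-affinity controls for a named risk category."""
    controls = RISK_TO_CONTROLS.get(risk, [])
    return sorted(controls, key=lambda c: c.affinity, reverse=True)[:top_n]

if __name__ == "__main__":
    for control in framework_walk("data_privacy"):
        print(control.control_id, control.description)

The point of the sketch is simply that, once risks and controls live in one structure, distilling "the world of possibilities down to two or three decisions" becomes a lookup rather than a research project.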

Speaker 2 (09:46):
It's interesting, Bryan, because this is why Bryan and I are good yin and yang. He can say things like NIST 600 is our North Star, or the dirty dozen. If I said that to a congressman or to a board of directors, they're going to

(10:07):
look at me like I have 10 heads. So I always have to distill and make human language out of Bryan's language. Funny story: this week I was headed out to Washington for an offsite and some meetings and got on the plane in Denver, and it was like, you know, the old song, clowns to the left of me, jokers to the right. I, honest to God, had a senator and a congressman sitting on either side of me, struck up conversations, started talking

(10:30):
about, you know, what's going on. I'm in cyber, I'm in AI, and they were asking opinions. And, you know, it's interesting, because with the frameworks that we're talking about and how we look at the regulatory landscape, everybody's grappling with the same thing.

(10:51):
So from a lawmaker perspective, when I put my hat on from a government view, they're looking at how do you create regulation around a couple of areas. One is data privacy. So you think about consumer data privacy, you think about how businesses are going to use or misuse data in their AI models, and what do we need to regulate around that. The second piece, coming further into what Bryan's saying, is really around trustworthy practice.

(11:12):
You know, how much is it the company's responsibility to ensure that the images that are being used, the things that are being put up, are real, that we're not inundated by deepfakes, and where is that line of responsibility in ensuring the non-malicious use of content from an AI perspective? The third area is around misinformation and disinformation.

(11:33):
You know, we really take to heart in this country the idea of the First Amendment and freedom of speech, but where is that line when you start to bring AI in on misinformation and disinformation? And the fourth is responsible use. You know, there was a big joke that you could go to the Amazon chatbot a couple of months ago and type in, give me examples of malicious code, or your customer list. I'm not just picking on Amazon, this happens a lot, and all of a sudden the wrong data gets spit

(11:56):
out. So one of the things, in talking with, again, the senator on my left and the congressman on my right, was, you know, we highlighted, and I went through, what I love about the work we've done. Bryan's exactly right: there are 32, I think as of today, hopefully this isn't dated by the time we release, global frameworks out right now, and

(12:19):
everyone has a little bit of a different flavor of what you should do from a guidepost. Again, those guideposts are what boards care about: misinformation, disinformation, insider threat, reputational damage, those are the main ones, plus the operational ability to create operational efficiency. Those are the areas that a board looks at AI for.

(12:41):
And the pillars that we've come up with, there are five of them, for trustworthy and responsible AI: data privacy and protection, security standards, regulatory framework alignment, economic impact, and AI governance and ethics guidelines. That really distills down for the layman why we love NIST 600, and why we've pulled these apart into areas that are consumable for the average person to understand where regulation is

(13:04):
heading.

Speaker 1 (13:18):
A lot of what the two of you are talking about is in this research paper that the two of you helped co-author, amongst several others here at WWT: Trustworthy and Responsible AI at the Global Scale. I mean, this is a very deep piece of research, one of the deepest pieces of research that I've seen us come out with. It has everything from those pillars that you just mentioned to implementation to quick and easy wins. I mean, this thing goes down the list for enabling organizations to handle those 32 frameworks.

(13:39):
Bryan, I know Kate was just about to hand it off to you. Talk about how an organization can use this research to help advance their AI strategies amidst that uncertain ground we've been talking about.

Speaker 3 (13:55):
Well, yeah, besides the stories that we hope resonate with them, there are some tools in there. And, you know, Kate, you hit it on the head: you've got to translate to make the complex simple, and even though sometimes I hope I'm doing that, I don't. But the lens that you can take is: who's your stakeholder?

(14:19):
So maybe the CFO is really going to care about the pennies and pounds, and so their lens can be how do we make our data centers more efficient, and at the same time, that efficiency could appeal to the mission statement of the organization around sustainability. So those are, you know, two different stakeholders who care about two different things.

(14:42):
But at the end of the day, if we have that informed framework to say this is the optimum solution that meets both of those stakeholders' needs, we get to yes faster, and they can both feel, hey, I'm getting what I need out of that. And, by the way, that exercise was really excellent; there were a lot of people who helped us. But those use cases and the stories on how we're actually doing it in the field, we're taking, you know, the voice of the customer, the wicked problems that they brought to us when they said: we have to be compliant, but we also don't want to stifle innovation.

(15:03):
Guess what? You don't have to choose one or the other, you can choose both. And actually, it's funny: I used to tell people, because I was in InfoSec, I was the Bureau of No.

(15:24):
Now I'm the facilitator of yes, because I can actually come in and show how we can meet all those stakeholder demands at the same time, and we can be compliant and secure at the same time.

Speaker 2 (15:36):
I can attest that he used to say no a lot, and he says yes a lot more now. So it's totally true, he's now the guy of yes when he used to be the guy of no. But the reality is, as you look at the regulatory landscape, you know, we're going to see a lot of change this year from an AI perspective. Our government's going to be coming out with some new guidance.

(15:58):
There's now the Global Council on Trustworthy and Responsible AI that's been stood up; I think we have 23 countries right now in the council. You're going to see a lot of change. But the reality is what Bryan just said, that kind of comparison of yes or no, and the body of work that was created by Worldwide, and this guide, is an example of where we're headed.

(16:22):
So this piece of research was created by probably the most groups I've ever seen at Worldwide coming together to collaborate. We had sustainability, we had cyber, we had AI, we had digital, we had everybody looking at this, and that really lends to the fact that, for our customers, like I said at the beginning of this, it's no longer just digital transformation, because that concept has been around for a while.

(16:44):
I mean, Bryan, were we saying digital transformation 15 years ago, I think, when cloud came out? Right about that, yeah, forever. We're in a digital revolution, and what I love about the work we're doing at Worldwide is, you know, we have the implementation approach that's also highlighted: initiate regulatory mapping and identification of key stakeholders. Bryan nailed it, it's no longer just a CISO or risk executive,

(17:05):
it's the CEO, it's the CFO, it's HR. Everyone has a role to play in how we look at not just AI adoption but digital revolution adoption. Then pilot projects to test cross-border data sharing. Data is king now, and the cyber component of it is just one facet of how data is going to go through an organization.

(17:27):
Third piece: full-scale implementation and metrics rollout. As we look at these new digital transformation programs that AI is generating, kind of pushing the envelope on the regulatory piece, taking the pilot programs to implementation and then measuring constantly for accuracy and impact is going to be key.

(17:48):
Fourth, monitoring and revision: how do we continue to learn? This is never going to be a static environment. We will never see AI, and how we leverage technology today, be static again. And then, fifth, communication and enhancing your AI culture, because just like we used to say that cyber needed to be culture, now digital revolution, digital embrace, is where culture is going to live in cutting-edge organizations.

(18:10):
So the fact that we can take a regulatory framework mismatch and create a roadmap for digital transformation, I think, is a secret sauce for our company going forward.
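For readers who want to see how the five-phase approach Kate outlines here (regulatory mapping and stakeholder identification, pilots, full-scale implementation with metrics, monitoring and revision, and communication and culture) could be tracked in practice, here is a minimal sketch. The phase names are paraphrased from the conversation, and the fields and checks are illustrative assumptions rather than anything prescribed by the WWT paper.

from enum import Enum, auto
from dataclasses import dataclass, field

class Phase(Enum):
    REGULATORY_MAPPING = auto()     # map applicable frameworks, identify key stakeholders
    PILOT = auto()                  # e.g., cross-border data-sharing pilot projects
    FULL_SCALE_ROLLOUT = auto()     # implementation plus metrics rollout
    MONITORING_REVISION = auto()    # measure accuracy and impact, revise continuously
    CULTURE_COMMUNICATION = auto()  # communicate results and embed an AI culture

@dataclass
class AIInitiative:
    name: str
    phase: Phase = Phase.REGULATORY_MAPPING
    stakeholders: list[str] = field(default_factory=list)    # CEO, CFO, HR, CISO...
    metrics: dict[str, float] = field(default_factory=dict)  # accuracy, impact, cost

    def advance(self) -> None:
        """Move to the next phase only when the current phase has evidence behind it."""
        if self.phase is Phase.REGULATORY_MAPPING and not self.stakeholders:
            raise ValueError("Identify stakeholders before piloting.")
        if self.phase is Phase.FULL_SCALE_ROLLOUT and not self.metrics:
            raise ValueError("Define metrics before moving to monitoring and revision.")
        phases = list(Phase)
        self.phase = phases[min(phases.index(self.phase) + 1, len(phases) - 1)]

The gating in advance() is the only real point of the sketch: each phase has to leave behind an artifact (named stakeholders, defined metrics) before the next one starts.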

Speaker 1 (18:22):
Well, Bryan, Kate just outlined, not the how, but the what to do and how to implement it. But what might some obstacles or challenges be as an organization looks to go through that process, and what are we even driving towards in the end? What are the outcomes that would arise if and when we go through that implementation process?

Speaker 3 (18:44):
Well, great question. So what I'm seeing, it's kind of what we talked about with the Bureau of No: there's a lot of fear. So there are folks who say this is too dangerous, we can't do this, we shouldn't do it. They might be giants, I don't know if you know that reference to an old black-and-white sci-fi show.

(19:07):
But the point is that the bigger fear should be of not adopting some of the most transformational technology available to us. And to get over that fear, we also have the flip side of it, which is FOMO, fear of missing out, or everybody going around with their AI hammer looking for nails to hammer in, when, in fact, a Google search could do just as good a job of surfacing something.

(19:28):
So it's that balance, and it really is most important. And the organizations that I've seen that are doing it the best actually are saying: OK, you want to do something? Here's an intake process. What is your wicked problem that you're going to solve with AI, and what is the business outcome we can expect? And we're going to hold you accountable for delivering that.

(19:50):
And then going in and testing things and doing fast fail, not being afraid to fail, but failing fast, and also making sure that the business case is there before they start running after those value propositions. There's also just basic safety training.

(20:12):
We do see some organizations that are running and doing training, because you can hurt yourself or others, and you can do it, you know, with all the best intentions.

(20:37):
Because the thing about some of these AI agents or systems is it's all about the data, and they will find the data. And if the data is not properly curated, classified, labeled, it's very easy for you to accidentally do something very harmful.

(20:57):
And so we're trying to find ways to, you know, not necessarily say let's do all this huge investment now, but here are some basic rules of the road, safety rules. Here are the guardrails and controls that you already have in the organization, so just turn them on or reconfigure them.

(21:21):
And then, if there is a major kind of threat category or control that you're missing, we quickly identify that and say here are the things that need to be true in order for you to move faster and adopt this transformational technology.

Speaker 2 (21:25):
I was going to say, you bring up a really good point about the adoption and, you know, kind of the FOMO versus the fear. And what I see too, and I was joking about it, we saw it a year ago and we're seeing it even more now, is we have a bit of see no evil, speak no evil, hear no

(21:47):
evil going on. We have about a third of our customers that are running as fast as possible. We have a third going, not really sure I want it in my environment. And we have a third going, you know, one way or another, we're kind of on the fence. And part of that has to do with culture. If you think about it, I'll give you the example of when Bryan and I first went to college many, many years ago: I had one computer in my dorm room, it had a green screen, there was one computer lab on campus, let's start there, and there were no cell phones.

(22:09):
You had to be mega rich to have a cell phone. By the time I entered the workforce, everybody had a computer and everybody had a cell phone. So that's like five, six years, and all of a sudden we've got both. You fast forward: my kids, my older kids, grew up with a computer. They understood cell phones, but they didn't grow up with AI.

(22:30):
They're almost adults; they just are starting that AI journey. And, where we joke about the zero trust baby, my youngest will never know a world that doesn't have AI. So when you think about that from a workforce perspective, you're dealing with companies that have people that came before, in essence, the computer revolution, the cell phone revolution. You have those that have entered the workforce that don't know anything but leveraging and relying on technology, and

(22:52):
we're about to bring in a generation that will have known nothing but a world with AI.

(23:23):
So leveraging the unique depth we have at Worldwide, I think, is one of our superpowers in this space: we can help, and then also help future-proof the investments that are made, so they are things that are going to continue to be measured and impactful to organizations. And that's, I think, where you're seeing these walls come down and why you're seeing so much more collaboration, holistically.

Speaker 1 (23:46):
Bryan, I do want to go back to the trustworthy and responsible AI research that we put out, that we've been talking about. That wasn't just made up out of thin air; that was actually based off of real work, correct me if I'm wrong, from Malaysia and the Association of Southeast Asian Nations. Correct?

Speaker 3 (24:06):
Yes, and even before that. That was where the insights came from, earlier gigs to support that great work, to try to spread the good word about this unified compliance framework, or the lens, the Rosetta Stone, to do it right. If it's okay, I'll give you a little anecdote here. So this AI, machine learning, neural networks, this stuff's

(24:33):
been around for a long time. In fact, WWT made huge investments well before, I believe 10 years ago, in this, and I've had the pleasure of working with data scientists throughout my career. And the one thing that I learned early on in my tenure here, which is, you know, a year, was that when we did the interviews with our clients who

(24:55):
were trying to make sure they had the right policies and everything, we'd always end up talking to the data scientists. And I was like, OK, data scientists, you've got the lab coat, you've got the degree, I'm sure they're doing things securely. Wow, I was wrong. Only because they care about the data, and they don't care how they get the data, and they want more data, and they want

(25:16):
questions to answer, and so that was their focus. And even as we're talking, it's like, well, we don't really worry about that. You know, we're assuming that the data owners had done it, we're assuming that the infrastructure folks had done it. And so that was very telling to me that there was an opportunity here to educate the PhDs on kind of how to do it

(25:37):
right and, at the same time, provide the business the confidence and assurance they need to take on those bigger projects. And Kate, you put it right: culture counts. And so understanding, you know, the sector, the industry or the culture, it always begins with the tone at the top, and that matters.

(26:01):
So when a CEO comes out, or the mission says, we are going to embrace this technology, we are going to do it responsibly and we are not going to harm humans or stakeholders or the planet, those are bold statements, and we love that. Then it's figuring out, okay, how do we actually do it? And so that is where we get to come in and help do it.

(26:24):
And the one sector that probably has the biggest backlog, and you'd think, wow, you know, is financial. And the reason is they love, love, love data science, because they've been doing it forever, and in fact they're heavily regulated, you know, to have to prove the model doesn't drift, and all these things in there, you know, that the harmful bias isn't there, and they've really been

(26:51):
good at that. They're fearful of generative AI, because my calculator doesn't hallucinate, but these LLMs do, and that really was a wake-up call. So being able to actually work with these very mature industries, with these really smart people, to say, hey, look at it this way, and you could measure it this way, and to translate between all those different cultures, it's just very rewarding, and also, you know, it's been

(27:14):
challenging at times. But I think this paper is a culmination of kind of all those experiences, and we have the tool set to really help people do it the right way and be successful.

Speaker 2 (27:27):
I think you just summed it up. And, by the way, before I even say this, if you want me to get your calculator to hallucinate, I'll work on that for you, if you'd like. But I think you just summed up, you know, the relationship that we have here at Worldwide. I see us at WWT, a lot of times, making really bold statements to help customers, and then you

(27:48):
and others and the amazing mad scientists go figure out how we help our customers. And, you know, leveraging the ATC and the Proving Ground and all the different partners we have, we're able to create some really interesting, groundbreaking

(28:08):
solutions across the board.

Speaker 1 (28:11):
Yeah, Kate, you know, the idea of trustworthy and responsible AI seemed to be making its way into policy and regulation, and then certainly geopolitics shift, and, you know, we're trying to understand what's going to be happening with the new administration in terms of this policy. I do want to read a quick quote. I was reading a New York Times article from late March, and it

(28:34):
was from Laura Caroli, senior fellow at the Center for Strategic and International Studies, and she said that issues like safety and responsible AI have disappeared completely from leaders' concerns, just given the nature of where politics are today. I want to just gut check that with you.

(28:55):
Is that what you're seeing out there right now? And then, if so, what does that mean for transparency as it relates to how organizations are thinking about AI?

Speaker 2 (29:04):
No, I totally disagree, 100%, with that statement. I don't think it's totally disappeared. I think that this administration, and I'm not going to speak on behalf of the administration, but what we've done is kind of hit pause and try to figure out what are the lines and the delineation of where we should have regulation versus self-governance.

(29:25):
And what I mean by that is, if you look at the five pillars like we just talked about, data protection and privacy, thinking about responsible use, things like that, there's a line there, because when you have adversaries using AI on one side and us using it on the other side, what is that role and responsibility of companies in between? How much can we govern, and how much are we going to be pushed back on?

(29:46):
So, you know, we've seen some fits and starts. There was an open call with the FTC last year regarding putting in more stringent parameters around deepfakes on consumer-facing websites, and holding companies almost up to criminal charges for not having proper deepfake

(30:12):
protection. We've seen now the last AI executive order be pulled back, and really the reason is that they're trying to understand that line of governance versus self-governance, versus what is possible from a regulatory standpoint. We're dealing with very bleeding-edge technology. We're dealing with bleeding-edge concepts, and one of the big concerns with this administration, and I'm 50-50 on

(30:32):
this one, is there is a big concern about stifling innovation in the name of regulation. So how do we make sure that we're bringing responsible and trustworthy solutions forward, but not over-regulating to decrease our innovative lens? The other reason why you've kind of seen a step back in the safety aspect is there are very interesting geopolitical

(30:55):
scenarios going on right now. That's nothing new. But there's also the question of how our adversaries are going to use AI, and while we don't typically take what's called an offensive stance in cyber, our private organizations are going to have to start looking at offensive cyber, and will AI have to come into that? There's a whole body of questions around that.

(31:16):
So these are tough conversations to have from a regulatory and a legislative perspective. You know, being on the Hill this week, the new cyber and AI leadership in this government is not fully baked. We don't expect it to be for another month or two. It's coming, and then I think we'll see kind of a reshift and

(31:37):
focus on what are the bounds and parameters, from a safety and a regulatory standpoint, that we can stand up to and feel confident that we're protecting our consumers, protecting our businesses, but not overstepping the mark of reducing innovation or the ability to leverage AI, if necessary, in a cyber attack scenario.

Speaker 3 (31:56):
Yeah, and Kate, just to add on to that, I don't think you have to. There's always going to be change. So again, our approach is cognizant of that, and basically our lens can modify, can adjust as needed, and if you don't need those frameworks, if you don't care about them, we can limit that view and be very specific and provide

(32:19):
prescriptive guidance. But at the same time, good is good. So you don't necessarily have to be fully immersed in the ecosystem or the planet to know that using less energy to get more compute or more value is a good business

(32:41):
decision. You don't have to necessarily care about certain social programs to know that if your algorithms can't speak a language or denote different facial recognitions, and that's part of the success criteria, to be able to do that, the

(33:03):
solutions aren't going to be fit for purpose. So I do think that with the lens, if we take the social, political, geo stuff out of there, good is going to be good, because it's fit for purpose.

Speaker 2 (33:17):
Well, and Bryan, I think, you know, and I'll turn it back on you, that's why we wrote this paper. To your point, and to what we've been talking about, we can't wait at this moment for, in essence, the lawyers, the regulators, the legislators in any country to catch up with the innovation that's happening.

(33:38):
And what I love about the pillars and the guide is it takes the best of what's out there right now and boils it down to: look, this should be your North Star. Here are five pillars, here's an implementation guide. If you follow this, you're basically following the best-in-class regulatory landscape that's out there today and the best frameworks.

(33:58):
And you're doing it in a way that, while you might have to tweak a little bit to go towards NIST, or you might have to tweak a little bit to go to the EU or whatever it is, you're at least putting guardrails on your build that are good, that look, to your point, best in class. And that's, I think, what our goal is: to make sure that there's a North Star our customers can align with

(34:20):
that's going to be symbiotic, to some degree, with whatever regulatory pieces come out.

Speaker 1 (34:25):
Yeah, I love that idea of good is good, Bryan. I wonder, are some of us overthinking this or overbaking it, just with, you know, either the fear of missing out or the complexity of the situation? But, you know, in the end, like you said, good is good. Good cyber standards are often the most basic cyber standards.

Speaker 3 (34:46):
They are, and I'm glad you said that: hygiene is the number one thing. I think, and I don't know if I should say this on your podcast, but in a couple of years, or maybe 18 months, people will see it's just another application. If you really look at where the focus is and what's different about generative, it's everything above NIST 800-53, right, the foundation of good security,

(35:08):
and if you don't get that right, you have no hope of getting AI right. But then, if you mature and you do that the right way, that little bit that's left is the life cycle of a model. So the models are trained with data, and even though our friend Sergey Bratus would call them weird machines, because large language models are part code, part data, and then

(35:31):
there's that interaction with humans or other machines through the prompts and the responses. That's where all the action happens, and that's when we talk about the dirty dozen, or the 12 things we've got to be concerned about: they're all happening up there. So you want to make sure that your models aren't using other people's intellectual property, because that could expose, you know, hurt them, but again, it could hurt your

(35:52):
company if it turns out you can be sued. So that's why the foundational models that are being used, and what they're trained on, are so important. So the fact that we're having these conversations is good, you can never have too much conversation about this and raise the awareness, but absolutely we have to do stuff, because if you think that people are not using

(36:13):
AI in your environment, you're mistaken. So this whole idea of shadow AI: if you thought shadow IT was bad, wait until you see shadow AI, because they live amongst us, and it's so much easier to accidentally leak secrets or have your intellectual property consumed. And we were at GTC last week, and the keynote was really about

(36:39):
the fact that we used to do retrieval, go and get data, and now these are thinking machines, and the paradigm is changing. I haven't used that term, paradigm, for a long time, but it is.

Speaker 2 (36:51):
You're going deep today.
You're quoting Sergey and usingparadigm.
I'm proud of you, my friend.

Speaker 3 (36:56):
But I think we have to have the conversations. We should not be afraid, we should boldly go. But let's not be uninformed, let's make sure we're armed properly. And I love the term: guardrails make it easy to do the right thing and hard to do the wrong thing.

Speaker 1 (37:27):
Yeah well, Bryan, I mean, you've already dropped two pretty specific NIST references. Do we need to take the rest of this time to quiz you on what else is in the NIST document there?

Speaker 2 (37:34):
Would you like me to pull out the NIST guide and we
can do like NIST hacker Jeopardyand see how far down he can go?
We could have a lot of fun withthat.

Speaker 3 (37:41):
No, but I would be happy to talk about the human-friendly threat catalogs that I built on that dirty dozen. Because that actually allows you to, you know, seriously, have a human conversation. So some of the tenets are: your AI chatbot

(38:01):
application should not tell you how to hurt others, like how to make bombs. It should not tell you to hurt others, right? That seems pretty logical. It shouldn't use other people's intellectual property, because we value that; we want to make sure that creatives are compensated.

(38:21):
We certainly don't want to expose our private secrets to training models. And so a lot of the very simple things that people could do is read that SLA, read that EULA, I know they're terrible to read, but just, hey, ask a simple question of your vendor: are you training the models with my data and other

(38:44):
people's data? How can I be assured that my secrets aren't being shared or going to be leaked? And it's very telling when you look at service level agreements, because that'll tell you what the remedy typically is for a breach of that confidence. And I can only tell you that typically people don't

(39:06):
necessarily want to take on more liability. So I would think that those SLAs are purposely complicated. So have a partner that can actually help you understand it, or even just give you the right questions to ask them so they can do the right thing.

Speaker 2 (39:23):
I think that's our next blog.
We should do a blog around yourresearch there.
It'd be cool.

Speaker 3 (39:27):
The Magnificent Seven is what I call them, but I'm
not sure anybody really wants towrite that paper.

Speaker 1 (39:33):
Yeah, I mean, one of the most important things I think anybody can say, Bryan, you mentioned it: ask the right questions. Kate, as it relates to responsible AI, trustworthy AI or anything in the regulatory environment right now, what are some questions that leaders aren't asking right now that they should be asking?

Speaker 2 (39:49):
Yeah, I mean, so, first of all, when you're looking at what you should be asking, there's a whole methodology. We talk about the frameworks, and we talk about the fact that there are 32 of them. Most companies aren't ready yet for a framework; we're at a methodology stage. So the first step is assess: get a group of cross-functional leaders, from the actual users that are going to be using the AI up

(40:11):
to your leadership, and assess where AI is going to create value and impact in your organization. The second thing is quantify: what are the risks associated with bringing it in, what are the benefits, the opportunities, and what's the risk? You know, if you're going to create an AI engine

(40:32):
that's going to allow you to understand, say, in a hospital, all the different types of patients that are there, that'd be amazing, what kind of research are you doing, this and that. But if it exposes PII and all the patient data on the back end, that's really bad, and it's going to hurt your reputation. So what's the opportunity, what's the risk? And then, from assess and quantify, it's a

(40:54):
question of going from there: how do you remediate? How do you actually look at it and go, okay, if something does go wrong, how do we roll it back? What can we be focusing on? What's the impact across our risk layers, from operational, reputational, all sorts, you know, IP: if something gets leaked, what is it going to do? So there's a whole process there on methodology, of really

(41:17):
understanding what are the use cases we think we're going to use it for, and then what's the impact if we actually put it in, both from a monetary and a holistic risk perspective. That's the first step. Then you can get into frameworks and everything else, but the first is make sure everyone's on board with both sides of that coin, opportunity and risk.
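As a loose illustration of that assess, quantify and remediate intake, here is a minimal sketch of how a use-case record might be triaged. The field names, scoring scale, risk budget and example use case are illustrative assumptions made for the sake of the example, not part of the methodology Kate describes.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    expected_value: float        # quantified benefit, e.g., estimated annual savings
    risk_scores: dict[str, int]  # 1-5 per risk layer: operational, reputational, privacy, IP...
    rollback_plan: bool          # can it be rolled back if something goes wrong?

def triage(use_case: UseCase, risk_budget: int = 12) -> str:
    """Return a coarse decision: proceed to pilot, remediate first, or reject."""
    total_risk = sum(use_case.risk_scores.values())
    if not use_case.rollback_plan:
        return "remediate: define a rollback plan before piloting"
    if total_risk > risk_budget:
        return "remediate: risk exceeds budget, revisit controls"
    if use_case.expected_value <= 0:
        return "reject: no quantified business outcome"
    return "proceed to pilot"

patient_insights = UseCase(
    name="patient-cohort summarization",
    expected_value=250_000,
    risk_scores={"operational": 2, "reputational": 3, "privacy": 4, "ip": 1},
    rollback_plan=True,
)
print(triage(patient_insights))  # proceed to pilot: total risk 10 is within the budget of 12

The point is simply that opportunity and risk get written down side by side before any framework work begins, which is the "both sides of that coin" Kate describes.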

Speaker 3 (41:43):
Well, I think Kate kind of hit on it: this diverse stakeholder group, call it a center of excellence, or birds of a feather, or even an innovation moratorium day, because people are using it. There is kind of a human dynamic we found in a lot of our interviews where people are ashamed of using AI; they don't want to.

(42:06):
And in fact, if you think about maybe how you first heard about generative AI, it was those scandals of plagiarism where students were having AIs, ChatGPT, do their homework for them, and it's like, oh, that's bad. Well, now the universities are having to figure out, no, these are the tools they're going to have to use in the future. So we have to change as educators:

(42:26):
how are we going to teach people to use it safely and responsibly, and what part do human-machine interfaces have in the future? And so, I think, a center of excellence, or a central place where people can come to see what AI is already approved in the organization, it's like, oh, we've got that, I don't have to go reinvent it. Or moratoriums for citizen innovators, to say, please bring

(42:49):
us your ideas, we want to hear about them, we want to showcase them. I mean, honestly, I've got to pitch the culture here; we've been doing that. There are some pockets of excellence where it's like, come out, show us what you're doing. And then you're like, I want to be a pickpocket of excellence, I want to go take some tools from that one and this one.

(43:11):
Because what we do in our practice area, besides the regulatory bits, we do applied AI, and what's weird is, you know, we did something six months ago, and one way I challenged my team, I said, okay, here's a problem, it looks similar: are we still doing it the best way? Is there a more efficient way? And there are things like copilots that understand the application you're using, and some other tools you could use.

(43:33):
The best thing is bring the people together, showcase what's working, destigmatize the use of this technology, actually raise it up and celebrate it, but also make sure people understand that sometimes everything isn't the nail that the AI hammer hits.

(43:54):
There are other ways that work better. And the one thing I'll tell you: do not automate a broken business process or give it to an AI, because it will only fail at scale.

Speaker 1 (44:05):
Love it. Well, I know we're wrapping up on time here, so I do want to thank the two of you for joining. I know each of you have busy schedules between travel, client engagements and everything else in between that we obviously listed on your LinkedIn profiles. So thanks again, and we'll hope to talk to you soon.

Speaker 3 (44:19):
Thanks for having us.
Bye Kate.

Speaker 1 (44:22):
Bye. Okay, as we wrap today's episode, three key takeaways stand out for any organization looking to lead responsibly in the age of AI. First, building trustworthy and responsible AI starts with a strong foundation, and that means aligning across the five essential pillars: data privacy, security standards, regulatory compliance, economic impact and ethical governance.

(44:43):
These aren't just checkboxes, they're imperatives that must be embedded into every phase of the AI lifecycle. Second, achieving this level of alignment isn't accidental. To stay on top of this shifting landscape, you'll need to start with regulatory mapping and stakeholder engagement, move

(45:03):
through real-world pilots, scale by measuring metrics, and evolve through ongoing monitoring, communication and cultural reinforcement. It's a blueprint for progress that doesn't sacrifice trust for speed. And third, while the regulatory landscape may be fragmented today, the need for a cohesive global framework is becoming clearer by the day.

(45:24):
Organizations that act now, investing in holistic governance strategies, will be far better positioned to navigate change and build public trust. A special thanks to Kate and Bryan for sharing their insight and wisdom today. If you liked this episode of the AI Proving Ground Podcast, please consider leaving a review or rating us, and sharing with

(45:44):
friends and colleagues is always appreciated. This episode of the AI Proving Ground Podcast was co-produced by Mallory Schaffran, Naz Baker, Brian Flavin and Stephanie Hammond. Our audio and video engineer is John Knobloch, and my name is Brian Felt. We'll see you next time.