
July 25, 2025 22 mins


How do you build AI governance that scales without becoming the innovation police? In our final conversation with tech lawyer Gayle Gorvett, we tackle the ultimate balancing act facing every organization: creating robust AI oversight that moves at the speed of business. From shocking federal court rulings that could force AI companies to retain all user data indefinitely, to the Trump administration's potential overhaul of copyright law, this episode reveals how rapidly the legal landscape is shifting beneath our feet. Gayle breaks down practical frameworks from NIST and Duke University that adapt to your specific business needs while avoiding the dreaded legal bottleneck. Whether you're protecting customer data or designing the future of work, this customer success playbook episode provides the roadmap for scaling governance without sacrificing innovation velocity.


Detailed Analysis

The tension between governance speed and innovation velocity represents one of the most critical challenges facing modern businesses implementing AI at scale. Gayle Gorvett's insights into adaptive risk frameworks offer a compelling alternative to the traditional "slow and thorough" legal approach that often strangles innovation in bureaucratic red tape.

The revelation about the OpenAI versus New York Times case demonstrates how quickly the legal landscape can shift with far-reaching implications. A single magistrate judge's ruling requiring OpenAI to retain all user data—regardless of contracts, enterprise agreements, or international privacy laws—illustrates the unpredictable nature of AI regulation. For customer success professionals, this uncertainty demands governance frameworks that can rapidly adapt to new legal realities without completely derailing operational efficiency.

The discussion of NIST and Duke University frameworks reveals the democratization of enterprise-level governance tools. These resources make sophisticated risk assessment accessible to organizations of all sizes, eliminating the excuse that "we're too small for proper AI governance." This democratization aligns perfectly with the customer success playbook philosophy of scalable, repeatable processes that deliver consistent outcomes regardless of organizational size.
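
For teams that want to operationalize this, the NIST AI Risk Management Framework organizes work into four functions: Govern, Map, Measure, and Manage. Below is a minimal sketch of what a risk register keyed to those functions might look like in practice; the field names and the example entry are illustrative assumptions, not part of any official NIST schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RmfFunction(Enum):
    """The four functions of the NIST AI Risk Management Framework."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class RiskEntry:
    system: str                 # AI system or use case under review
    function: RmfFunction       # which RMF function the entry falls under
    description: str            # the risk, in plain language
    owner: str                  # the human accountable for this entry
    next_review: date           # per the episode: audit at least yearly
    mitigations: list[str] = field(default_factory=list)

# Illustrative entry: the data-retention exposure discussed in the episode.
register = [
    RiskEntry(
        system="support-chat-assistant",
        function=RmfFunction.MANAGE,
        description="Hosted LLM vendor may be compelled by court order "
                    "to retain user outputs indefinitely",
        owner="privacy-counsel",
        next_review=date(2026, 7, 1),
        mitigations=["review vendor retention terms", "evaluate local models"],
    ),
]
```

A structure like this keeps the automatable parts (checklists, review deadlines) in code while the judgment calls stay with a named human owner, which is the balance Gayle describes.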

Perhaps most intriguingly, the conversation touches on fundamental questions about intellectual property and compensation models in an AI-driven economy. Kevin's observation about automating human-designed workflows raises profound questions about fair compensation when human knowledge gets embedded into perpetual AI systems. This shift from time-based to value-based compensation models reflects broader changes in how customer success teams will need to demonstrate and capture value in an increasingly automated world.

The technical discussion about local versus hosted AI models becomes particularly relevant for customer success teams handling sensitive customer data. The ability to contain AI processing within controlled environments versus leveraging cloud-based solutions represents a strategic decision that balances capability, cost, and compliance considerations.
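
As a concrete illustration of the containment option discussed in the episode, many teams point an OpenAI-compatible client at a locally hosted model server (Ollama is one common choice) so prompts containing customer data never leave their environment. This is a minimal sketch under stated assumptions, not a vetted implementation: the local URL, placeholder key, and model name all depend on your setup.

```python
from openai import OpenAI  # pip install openai

# Hosted option: data transits the vendor's cloud and is subject to the
# vendor's retention obligations (including court-ordered ones).
hosted = OpenAI()  # reads OPENAI_API_KEY from the environment

# Local option: an OpenAI-compatible server running inside your own
# environment, e.g. Ollama's default endpoint. The key is a placeholder;
# local servers typically don't check it.
local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

def summarize_ticket(client: OpenAI, model: str, ticket_text: str) -> str:
    """Summarize a support ticket with whichever deployment was chosen."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize this support ticket."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

# Sensitive customer data stays on-premises with the local client:
# summarize_ticket(local, "llama3", "Customer reports login failures...")
```

The trade-off is exactly the one named in the episode: the local route buys containment at the cost of capability and operational overhead, while the hosted route buys capability at the cost of exposure to the vendor's legal obligations.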

Gayle's emphasis on human oversight rounds out the discussion: checklists and deadlines can be automated, but humans need to stay in the loop, and AI processes should be audited at least yearly as regulations and tools continue to evolve.


Please Like, Comment, Share and Subscribe.

You can also find the CS Playbook Podcast:
YouTube - @CustomerSuccessPlaybookPodcast
Twitter - @CS_Playbook

You can find Kevin at:
Metzgerbusiness.com - Kevin's personal website
Kevin Metzger on LinkedIn.

You can find Roman at:
Roman Trebon on LinkedIn.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
Customer success.

Kevin Metzger (00:10):
Welcome back to the Customer Success Playbook. I'm Kevin Metzger. Roman is unable to join us again, but we are wrapping up our shows on AI and AI governance with Gayle Gorvett. We've explored the fundamentals of accountability and AI governance. Today we're focusing on scalability: how do you do governance at speed whenever

(00:32):
We have to involve legal, uh, assomebody who's been involved in
customer contracts and workingwith legal teams, some legal
teams are really good aboutspeeding things up.
Some are, it feels like, well,our job is to make sure the
process slows down andeverything's checked, and, and
that is right.
I mean, it's to make sure we doa good job of.

(00:53):
Making sure we protect ourcompanies and making sure we
protect our customers.
At the same time, we don't wantto run into the bottlenecks.
How do we avoid doing that inworking through a governance
policy?
Right.
Well, I think what people are,are, are trying to do the, the
ideal, um, answer would be, ohyeah, we're gonna automate our,
uh, our AI governance process.

(01:14):
And that's unfortunately not theideal, um, solution.
You know, you have to havehumans in the loop.
For one thing, this is.
Uh, complex.
And it sounds weird to say thatit's too complex to be managed
by ai, but it, it just, it, itjust is, things are changing,

(01:37):
um, rapidly, not just on atechnical level, but on a, a, a
regulatory and, um, you know, anoperational level in this area.
Um, and so I guess the answeris.
You, you, you need to adapt toyour business and your
particular use case.

(01:58):
What this is gonna look like anddo your best.
To not thwart, um, theinnovation, again, for smaller
companies, it's gonna lookdifferent than it is for larger
companies.
One of the things that I've beenseeing is that both large and
small companies have been likingthe sort of adaptability of the

(02:20):
risk framework model.
And so that's what NIST.
Has come up with, and that'swhat we are using in the working
group that I'm working with atDuke.
Sorry, you mentioned NIST a few times.
Yeah.
And that, I just wanna make surethe audience knows.
NIST is the National Institutefor Standards and Technology,
right?
Yes.
That you're referring to.
Okay.
Yes.

(02:40):
And they are a federal agencythat often comes out with
guidelines in these very keyareas.
They come out with cybersecurityguidelines that they, in
playbooks and that are veryuseful, um, and, and free to the
public.
Um, and I highly recommend thatpeople go on their website and

(03:00):
look at them.
They've already come out with anAI framework.
They're coming out with agenerative AI framework as well.
And even the federal governmentis encouraged in the various
agencies to use those frameworksas sort of their baseline, um,
for how they start, uh, dealingwith the AI risk assessment in

(03:24):
their agencies and then, um,sort of tweaking it to adapt it
to their particular needs.
And that's one of the thingsthat we used in, um, our working
group at Duke.
And we also looked atinternational, um, regulations
and, um, some of the, you know,regulations that are already out

(03:48):
in, in Europe, um, to create ourrails risk framework.
Um, that's specifically forlegal teams.
Either one of these types offrameworks can be useful for
people who are trying to getsomething that is adaptive to
different needs.
You also wanna take into accountwhether you have specific

(04:10):
regulatory requirements, ifyou're in healthcare or if
you're in financial services,uh, or in industry like legal,
or you deal with defensecontracts where there are
particular requirements that youhave to layer into that.
Now you can, um, you know,potentially try to, um, automate
some of the, you know,checklists and some of the

(04:33):
deadlines, some of those typesof things within your, um,
assessments.
But I would say at least yearlyyou wanna be auditing what
you're doing.
And you, you wanna have humansin the loop on, on these
processes.
And definitely, um, as you'recreating them, you need to be

(04:54):
involving the different, um,teams within your organization
that are gonna be responsiblefor the data that flows into the
AI systems that you're using.
Yeah.
Yeah.
You mentioned auditing, and Iknow in it there's all kinds of
kind of standard security auditsthat we tend to, we tend to run

(05:14):
across organizations.
Have there been any changes tothose types of audit processes
to start, including how AI isbeing used within the
organization that you're awareof?
Yes.
Part of the, this, the riskassessment that that companies
go through when they're creatingthese governance policies is an

(05:36):
audit of the AI processes, um,that the company is doing and
the, and the data that isflowing through.
Um, through the, those AIprocesses.
Yeah.
So like the SOX complianceaudits and things like that, are
they being modified that you'reaware of?
Are you familiar with them?
I am familiar with them.

(05:57):
I'm not aware of those beingmodified specifically.
Gotcha.
Yeah,
I, I assume that'll, that'll probably start
coming down the pipe.
Pretty quickly, I'm guessingfor, for enterprise security
because it's such a big impacton how AI is doing.
I think that'll be, or.
Technology.
I think it'll be a piece of whatcomes down with the SOX

(06:19):
compliance audits.
Right?
That, that definitely prob willbe coming down the pike.
But, um, I'm not currently awareof, of those being modified
specifically for ai.
I, I, I'm not either.
I just.
It occurred to me as you weretalking about the audit process.
I'm like, you know, there's somestandards around some of these
things.
Maybe that's where, maybe that'swhere it'll go and I'm sure from

(06:40):
anything else that you, you kindof wanna share?
Well, I mean, I think one thingpeople should be aware of in, in
addition to what's happening inthe US on a regulatory
perspective in terms of the, thefederal government introducing
this.
Legislation to put a halt tostate AI regulatory initiatives,

(07:01):
which have been really the onlymandatory AI regulation up until
now is there's a very biglawsuit that's happening in the
us.
In New York, which could havevery big impact.
It's open AI versus the New YorkTimes in federal court in New
York.
And there is a, a magistratejudge that's in charge of

(07:22):
evidentiary rulings in that casewho has made an initial ruling,
which I find to be quiteastonishing.
Um, she, I think was about.
A week and a half ago came outwith a ruling that requires open
AI to keep all of their dataoutputs from their large

(07:47):
language model, regardless ofwhat regulations say, like GDPR
or the EU AI Act, regardless ofwhat, uh, whether the customer's
on an API or an enterpriseversion of.
Um, their, uh, chat, GPT, um, orother tools, regardless of what

(08:08):
any, you know, terms andconditions or contracts say
about what they're supposed tobe doing with those outputs.
So the lawsuit is about the NewYork Times, but the decision is
about every single, everybody.
Yeah.
And so it's something thatpeople should really be aware of
because now this has massivepotential privacy implications.

(08:31):
I.
It, it's something that I'm,this judge did to begin with,
and it, it's another, I have toagree.
There's no way
that can stand.
No, there isn't there.
It
can't,
no.
And, and it, it just seems to meanother example of judges going
far beyond their, you know, uh,their purview in terms of their,

(08:54):
you know, their.
The jurisdictional powers and,and, and the, the four corners
of the case that they're meantto be deciding on in our, in the
United States.
I don't know what's happening,but it's something to also
think, have in the back of yourmind as you're, you know,
thinking about enterprise usesof these tools, the privacy

(09:17):
consequences we have to betechnically designing around
this type of thing.
I would just suggest vendors whoare working with companies are
now gonna have to have an answerto how do you protect against
this type of, uh, of incident Asa customer, I would point blank,

(09:40):
ask this type, you know, thattype of question.
If you're in a serious, youknow, negotiation with an AI
vendor because.
This could be a concern.
And
it's not just vendors, right?
This is a really interestingthought, right?
So open AI is being used by allthese SaaS vendors, right?

(10:04):
Right.
That's as the backend.
As the backend for that.
So you've gotta be aware ofthat.
And then if it's not open AI andit's cloud or somebody else, if
it's a public, if it's acloud-based vendor.
Everybody's gotta, I assume, besanctioned by the same law there
or the same ruling?
No, no, no.
This is exclusive to, it's only

(10:25):
for open
ai.
But I, my my point is they allneed to have an answer for how
do they protect customers.
Yeah.
What's the wrapper?
What's the workaround?
What's the technical solution?
What's the, anyone who's, yeah.
Do I have to run local
models instead so that at least it's contained

(10:46):
within my environment and I canprotect my customer that way
versus a hosted model.
So now I can, these aredecisions you gotta actually
consider as you're looking atdesigns of how to implement AI
tools within your business.
Right, right, right, right.
Uh, yeah.
I mean, in, in this.
This decision applied to allenterprise customers, all, you

(11:10):
know, even customers in the EUthat weren't supposed to be
subject to this type of thing.
So it,
yeah.
How is that?
Even PO doesn't the, I wouldeven say the EU data laws,
of course they apply.
This is
Of course, yeah.
How did, how you can't overrule that.
Well,
yeah, I don't
understand the,

(11:31):
it's a huge wrinkle.
And then the other, um, thingthat has.
Come into question in the lasttwo weeks, the application of
existing copyright law to ai.
And if this is, we've beenhearing a lot of chatter
inventors of some, I won't evencall'em inventors'cause they
wouldn't.
Use that term by the creators ofsome of these models who've been

(11:54):
saying delete all IP law, whichI think is really ironic because
some of them have othercompanies, or even within their
own companies, you know, anumber of patents and they don't
seem to see the irony in.
And suggesting to delete all IPlaw with respect
to me.
I think so.
As somebody who doesn't own anypatents.

(12:15):
Right.
I, I think it's a veryinteresting thing.
These models have been written,basically consumed all mankind's
knowledge.
Right.
Um, and basically been used to it.
It's used IP from centuries ofwriting down.
Information and learn from it,and now it's producing

(12:39):
knowledge.
They're producing informationthrough prompts and, and input
and, and it's taking it and thenreproducing new derivative
information.
There's, right now, last Iheard, you still can't put I,
you can't.
If it's produced by ai, there'sa certain amount.

(12:59):
I think that that has to be, thelast time I I read about it,
there's a certain amount thathas to be human generated.
Very little amount can be AIgenerated to be considered new
Copyright.
I'll, I'll, I'll stop and letyou talk.
Yeah, no, that, that, that istrue.
There's two aspects to this.
To get a copyright on somethingthat's generated by ai, there's

(13:21):
a whole in the US a whole seriesof.
Of sort of tests that the, thecopyright office has sort of set
out.
They've, they've said that thereis no ability to get a copyright
on something that is entirelygenerated by ai.
There has to be a human, um,involvement.
There has to be originality and,um, you know, there has to be,

(13:44):
um, it can't be solely based ona number of prompts.
It has to be, uh, you know,something that was with.
Human interaction other thanjust prompting, there has to be
human originality as part of it.
Um, the question is how much,and, and then the copyright

(14:05):
office is continuing to monitorjurisdiction's perspectives on
all of that and how, um, othercountries are viewing the, this
copyright ability.
Um, question.
The, the interesting thingthat's happened in the last
couple of weeks is.
With respect to the, the, um,the inputs into the large

(14:28):
language models, which weresubject themselves to copyright
because copyright only extendsfor a certain number of years.
So things that were, you know,over a hundred years old were
already, you know, no longersubject.
Yeah.
The Trump administration inlike.
A week has turned all this onits head by firing some really

(14:50):
key employees of the USCopyright Office.
Insinuating, although I haven'theard any real like official
statements on this, that theymay be leaning towards the tech
industry in this argument thatcopyright won't apply.

(15:15):
In the information that wassucked into the large language
models.
And I, I, I'm actually acabsolutely stunned by that.
Um, yeah, because copyintellectual property as a, uh,
you know, a field.
Was created to promoteinnovation and fuel investment

(15:41):
in our country.
So it's, it's quite astonishingto me take that approach.
Yeah.
It's interesting though, becausethe other thing is, is today in
the world we live in.
The world we live in is Cha ischanging very rapidly, but when
you go to work for a company,anything you produce is then
owned by that company.

(16:01):
Right?
Well, companies are takingadvantage of all of that work,
all of that human knowledge andcapital that's been given to
them.
And it's been given for pay athourly rates, but now they're
automating it into AI systemsthat move on in perpetuity and
then letting the people go.

(16:22):
And so now how the, and we're,we're moving it rapidly in the
direction of building lots of AIagents and there's all kinds of
concepts as to whether or notthere'll be you if you're able
to produce more agents and haveagents do work, whether you'll
be able to have many peoplemanaging multiple agents or
what, but.

(16:44):
Realistically, you're automatingworkflows that are designed by
humans now through agents thathave been trained by humans,
right?
Humans put an invested capitalinto that, and now you're
letting those humans go andletting those agents continue to
run.
I, I, I think we're going tohave to move towards a, um, new

(17:06):
form of compensation model wheremaybe IP goes into the
blockchain.
You actually.
Somehow have to bring, bring theblockchain along and compensate
based off of the blockchainusage in the blockchain going
forward.
It's, it's weird because we'vealready broken.
That model's already broken andlike you would've had to almost
start with that model to make itwork in perpetuity.

(17:29):
But somehow, if you're usinginformation that's produced by
somebody in perpetuity, how haveyou fairly compensated them for
that?
Like just their time isn't anactual fair compensation.
Right.
Um, right.
Going forward and like we, and we have to come up
with a new model because thealternative is taxing these

(17:50):
companies that are now buildingthese tools and redistributing
it as some kind of universalbasic income that we, we've seen
that doesn't work right.
Coming over time.
So somehow we have to figure outnew ways of.
Awarding people for the workthat they do produce.
I, I, I, I, I don't know whereit goes.

(18:12):
Uh, this is just some wildspeculations, but I keep, keep
thinking about it more.
I am seeing what's happening inthe industry, especially with
the, on the tech side, withcoders and with how these, these
new coding agents are working,which are really incredible.
I mean, mm-hmm.
Did some work with somebody acouple weeks ago where I had a

(18:33):
requirements conversation.
Worked it into a requirementsdocument, and then through using
tools like Cursor and Rep Litand now Claude Code and Open AI
Codex, you can turn them intoproofs of concepts.
It, it took an hour and a halfwow.
Of conversation to proof ofconcept code.

(18:54):
It, I, I have some codingskills, so I don't wanna say I
don't have any, but I, I haveenough to.
To have figured out, Hey, thisis incredible and amazing and,
and, and make it work, but notenough to say, Hey, I can make
this work as a, you know,enterprise tool and enterprise
application and protect itbecause I, I, there's

(19:15):
specialized skills.
You need to make sure that thosethings and rules are in place
and, or code it in a way to makeit do, make it secure.
That is the path we're goingdown.
Right.
We're taking idea to inception,we're automating those paths,
we're automating workflows andas we automate workflows, we're
taking work that people manuallydo.

(19:38):
Today maybe isn't the best, sowork's gonna change how, how we
work's gonna change what, youknow, taste.
I keep.
One of the biggest things thatI've heard in the last several
weeks is taste is really gettingthe human element of looking at
what.
Is being produced and is it goodor is it not good going to, to

(20:01):
try and bring it to the, to thenext level?
But I think that'll change.
But still, we are takingpeople's intellectual work and
processes that they're designingand and working, and now turning
it into workflows.
If we're going to use thoseworkflows in perpetuity, then.
And say, we no longer need theperson for that role, then how

(20:22):
do you manage that?
Because you've had a personcreate a workflow to, to
automate.
That's, that is a piece ofintellectual design.
I don't know whether itqualifies as intellectual
property, but it's intellectualdesign that then company can
profit from in in perpetuity andI think we've gotta figure out
new compensation models.

(20:43):
Yeah,
no, definitely.
I'm definitely working on newcompensation models for my work
because
the old ones were based on time.
Time is changing how you it,it's not the same, uh, it's not
the same models.
Well, thank you so much for yourtime.
I really enjoyed theconversation.
I love when I can get deep onai.
I love being able to talk with,uh, somebody who, who knows as

(21:05):
much as you do about the law andabout what's happening currently
in the law, uh, on ai.
I hope you.
Audience liked and enjoyed thisepisode.
If you did like, subscribe,share the with with others,
we'll be back with morestrategies for your customer
Success Playbook.
And until then, keep on.