
July 16, 2025 12 mins


When AI systems fail spectacularly, who pays the price? Part two of our conversation with global tech lawyer Gayle Gorvett tackles the million-dollar question every business leader is afraid to ask. With federal AI regulation potentially paused for a decade while technology races ahead at breakneck speed, companies are left creating their own rules in an accountability vacuum. Gayle reveals why waiting for government guidance could be a costly mistake and how smart businesses are turning governance policies into competitive advantages. From the EU AI Act's complexity challenges to state-by-state regulatory patchwork, this customer success playbook episode exposes the legal landmines hiding in your AI implementation—and shows you how to navigate them before they explode.


Detailed Analysis

The accountability crisis in AI represents one of the most pressing challenges facing modern businesses, yet most organizations remain dangerously unprepared. Gayle Gorvett's revelation about the federal government's proposed 10-year pause on state AI laws while crafting comprehensive regulation highlights a sobering reality: businesses must become their own regulatory bodies or risk operating in a legal minefield.

The concept of "private regulation" that Gayle introduces becomes particularly relevant for customer success teams managing AI-powered interactions. When your chatbots handle customer complaints, your predictive models influence renewal decisions, or your recommendation engines shape customer experiences, the liability implications extend far beyond technical malfunctions. Every AI decision becomes a potential point of legal exposure, making governance frameworks essential risk management tools rather than optional compliance exercises.

Perhaps most intriguingly, Gayle's perspective on governance policies as competitive differentiators challenges the common view of compliance as a business burden. In the customer success playbook framework, transparency becomes a trust-building mechanism that strengthens customer relationships rather than merely checking regulatory boxes. Companies that proactively communicate their AI governance practices position themselves as trustworthy partners in an industry where trust remains scarce.

The legal profession's response to AI—requiring disclosure to clients and technical proficiency from practitioners—offers a compelling model for other industries. This approach acknowledges that AI literacy isn't just a technical requirement but a professional responsibility. For customer success leaders, this translates into a dual mandate: understanding AI capabilities enough to leverage them effectively while maintaining enough oversight to protect customer interests.

The EU AI Act's implementation challenges that Gayle describes reveal the complexity of regulating rapidly evolving technology. Even comprehensive regulatory frameworks struggle to keep pace with innovation, reinforcing the importance of internal governance structures that can adapt quickly to new AI capabilities and emerging risks. This agility becomes particularly crucial for customer-facing teams, who often serve as the first line of defense.


Please Like, Comment, Share and Subscribe.

You can also find the CS Playbook Podcast:
YouTube - @CustomerSuccessPlaybookPodcast
Twitter - @CS_Playbook

You can find Kevin at:
Metzgerbusiness.com - Kevin's personal web site
Kevin Metzger on LinkedIn.

You can find Roman at:
Roman Trebon on LinkedIn.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Kevin Metzger (00:10):
All right.
Welcome back to the Customer Success Playbook podcast.
I'm Kevin Metzger again.
Roman is unable to join us, but we are continuing our conversation with Gayle Gorvett, uh, fractional, uh, legal counsel and leading expert on AI governance.
Gayle, before we dive in, let's let our audience get to know you a little bit better.

(00:32):
You up for that?

Gayle Gorvett (00:33):
Sure.

Kevin Metzger (00:34):
What's one city, uh, you could see yourself living in, that maybe you've lived in in the past, or somewhere you'd wanna live in in the future?

Gayle Gorvett (00:43):
That maybe I wanna live in again?
That's, yeah.
I've lived a lot of places, so, um, I just got back from Paris and lived there for a long time and I don't know if I want to live there full time again.
I.
But I've thought about the idea of maybe getting a little kind

(01:05):
of, um, pied-à-terre there, um, for retirement, to be able to go back and forth.

Kevin Metzger (01:11):
Nice.

Gayle Gorvett (01:12):
Yeah.
Do

Kevin Metzger (01:13):
you speak French and Japanese?
I believe

Gayle Gorvett (01:16):
I speak French fluently.
Um, Japanese not so fluently.
Um, at one point I knew 500 kanji.
I cannot say that I know the 500 kanji anymore.
Um, I, Japanese has three different written alphabets, and I know the two that are used to translate foreign words, but my

(01:39):
kanji has really slipped.

Kevin Metzger (01:42):
How did you get into that?
I mean,

Gayle Gorvett (01:43):
I, I went to a school in Virginia when I was young that taught French very early, and I started going there at a later age, but I started studying French at that, at 12, and I continued, and then I went to Europe to study a couple of times in high school and college.
And I just became then very interested in, um, international

(02:06):
business and studying languages.
And while I was in college, the European Union was just really getting delayed in their, you know, unionizing, let's say.
Um, so I thought, I think I need a different language if I really wanna do this international business.
I decided to start studying Japanese 'cause they seem to be

(02:27):
the ones kind of pushing the, the economic ball forward in international business.
So I started studying Japanese, and then I went to work in Japan after I graduated, on the JET program.

Kevin Metzger (02:40):
what's a favorite book, uh, that you have?

Gayle Gorvett (02:43):
Oh, yeah.
You know what?
That's a great question.
I love to read.
I have not had a lot of time to read for pleasure lately.
I, I will say, when I was really busy in New York and I needed to try to relax, to calm down after working a lot of hours, I read

(03:04):
all the Harry Potters, like, in a row. But, that, I wouldn't say those are my f..., they're good, but those are not my favorite books.
I love historical fiction and I love books like Memoirs of a Geisha.
I recently read, oh, I'm trying to remember the name of the book.

(03:24):
They made it into a mini series and now I can't remember the name of it.
Oh, A Gentleman in Moscow.
Oh, cool.
That was a really good book.

Kevin Metzger (03:32):
Very cool.
I like historical fiction as well.
I tend to, I, I actually like a lot of the legal thrillers as well.

Gayle Gorvett (03:40):
Oh, do you like John Grisham's

Kevin Metzger (03:41):
stuff and all that?
Yeah.
Yeah.
But I, I gotta imagine that probably isn't very relaxing for you.

Gayle Gorvett (03:48):
Well, it's kind of funny, you know, 'cause it's so unrealistic.

Kevin Metzger (03:52):
Well, let's get back into the, uh, the realm of AI, which is, uh, where we're focusing these days.
And when an AI system fails or causes unintended harm, how do we determine who's really accountable?
What are the rules around that now?
And how are you seeing that from a...

Gayle Gorvett (04:11):
Well, I think we're still kind of figuring out what the rules around that are, and that's one of the reasons why having a good governance policy is so important, and why it's a good idea to... Using innovations like AI is, is of course, you know, great, um, to help you in your work and to help you, you

(04:34):
know, kind of, uh, do research and things like that. But also, if you're using this kind of technology to do something sensitive or to advance your business in an area that's of strategic importance, the question that you ask, it makes, uh, all the difference in terms of look before you leap, because

(04:55):
this is an area where, in some, especially in the US for example, we don't have, um, comprehensive regulation around this.
Um, and there are states that do have AI regulation.
California.
We have some biometric regulation in Illinois and New York, um, and some other states are coming online with AI

(05:16):
regulation.
But in the past few weeks, the federal government has put forth legislation to pause state laws in AI for up to 10 years, to give the federal government the time to enact comprehensive federal AI regulation.

(05:37):
10 years, huh?
Yes.
Yeah.
We'll all have chips in our head and it will be completely irrelevant, so it makes it even more important to kind of, um, as much as you, as much as you can within the realm of possibility, do what we call, you know, sort of private, uh, regulation, which is obviously engaging in, uh, a good contract with a vendor or

(06:01):
making, making sure that you read the terms and conditions of the service that you're using and making sure that you're comfortable with what they say.
Um, so that, you know, um, that you're not, um, putting data into an LLM that, you know, is then going to be, become the, the property of that, um, entity or being

(06:21):
reused for training purposes, for example, or, you know, those types of things.
Because right now, um, in the United States in particular, a lot of the, um, power is going to a lot of these companies as opposed to, um, the users of their technology.
Um, you know, in the EU they've introduced the EU AI Act, um,

(06:46):
which puts some guardrails around the use of AI, especially in the high risk, um, categories, which are biometric, mass, or mass use of areas where, um, they would automate gathering or scraping of data or personal data in a large way.

(07:07):
Um, things like that.
But they haven't fully rolled out the, um, EU AI Act yet.
And there is, um, talk of modification of the act, and the act being, um, paused because it's too complex and too burdensome.

(07:28):
Um, so we're even seeing some, some, um, rethinking in other jurisdictions that already have regulation.

Kevin Metzger (07:35):
Yeah, I mean, it's interesting.
Stuff is moving so fast.
I have a question too, if I have, um, you know, a set of guidelines in place for my company.

Gayle Gorvett (07:45):
Yeah.

Kevin Metzger (07:46):
Does that provide any actual protection for me?
Because I can show I'm using within the guidelines that we set. Or if, like, um, something goes sideways, does it not necessarily provide protection?

Gayle Gorvett (08:00):
Well, I would say a couple things.
If you're in a regulated industry, like mine, for example, um, you may have a legal requirement to have guidelines.
You may have a legal requirement to inform your customers of your use of AI.
For example, in, in the legal profession, the ABA, which is

(08:23):
interesting because they don't have, um, coercive ability over, um, lawyers. They have a sort of, you know, it's kind of like NIST or CISA on the federal level for security.
They put out guidelines that become very strong suggestions, and the ABA, um, the American Bar Association, is a federal,

(08:46):
you know, bar association, but, uh, the legal profession is regulated by the states, so they put out these sort of blanket, you know, statements that are not coercive, but then they become adopted in, in different forms by individual states.
And what they've said is:
You know, lawyers who are using AI, um, have a duty to disclose

(09:11):
that to their clients, and they have a duty to become technically proficient in AI.
Um, and, uh, you know, I would say you should use it as an opportunity to communicate with your customers, and instead of waiting for there to be a law that compels you

(09:34):
to create a, a guideline or, or a governance policy, use it as a way to be the first adopter in your industry.
Having a governance policy is good business.
It's also, you know, good common sense, because you don't want your, your, your employees doing whatever they want, right?

(09:57):
You want to be the one who sets the tone for your employees; you wanna be the one who sets the rules.
And then you also want your customers to know that you're responsible with their information.
You want them to know that you care about their information.
And so in some circumstances, yes, it can, you know, put you

(10:18):
on a better legal footing.
It can provide you what's called an affirmative defense.
If you have a policy, you have a written policy, and you train your employees on that policy, and you can show that you're doing all of that on a regular basis, and then something happens, and you can say, well, we've been doing this, we've been doing this in good faith and we've been complying with

(10:40):
it.
You know, you can put yourself in a much better, um, situation.
But then also with your customers, if you, you adopt these kinds of policies, you also then put that on your website along with your privacy policy.
Then it, it makes them feel better about you as, uh, a service provider.

Kevin Metzger (11:01):
Makes sense.
And really, I mean, basically the responsibility for this isn't just, it's the technical, it's organizational, it's making sure you're getting back to everybody so that they understand how you're intentionally using AI at this point.
Thank you.
Thanks for helping us tackle this, this talk.
It's a tough one.
It's not, it's, we're not with all the details yet.

(11:24):
We don't know where it's going.
I mean, but, thanks for sharing your knowledge with us on this.
On our next show, we explore how to... without stifling innovation.
