
July 7, 2025 10 mins


Ready to navigate the complex world of AI governance without getting lost in legal jargon? This episode delivers a masterclass in building ethical AI frameworks that actually work for your business. Global tech lawyer and fractional general counsel Gayle Gorvett breaks down the essential guardrails every company needs before diving headfirst into AI implementation. From her work with Duke University's AI working groups to real-world enterprise applications, Gayle reveals why treating AI like the "shiny new toy" without proper governance is a recipe for disaster. Whether you're protecting customer data or safeguarding your company's future, this customer success playbook episode provides the foundational knowledge to approach AI adoption with confidence and compliance.


Detailed Analysis

The AI revolution isn't just changing how we work—it's fundamentally reshaping the legal and ethical landscape of business operations. Gayle Gorvett's expertise in AI governance comes at a crucial time when companies are rushing to implement AI solutions without adequate safeguards. Her comparison of current AI hype to the blockchain frenzy of a decade ago serves as a sobering reminder that sustainable innovation requires thoughtful planning, not just technological enthusiasm.

The multidisciplinary approach Gayle advocates represents a significant shift in how businesses should structure their AI initiatives. Gone are the days when technology decisions could be made in isolation. Modern AI governance demands collaboration between business functions, technical teams, and legal counsel—creating a new paradigm for cross-functional leadership in customer success organizations.

For customer success professionals, the implications extend far beyond internal operations. When AI systems interact with customer data, handle support tickets, or predict customer behavior, the governance framework becomes a direct reflection of your company's commitment to customer trust. Gayle's emphasis on informing customers about AI usage highlights how transparency has evolved from a nice-to-have to a business imperative.

The Duke AI Risk Framework and NIST guidelines she references provide actionable starting points for organizations feeling overwhelmed by the governance challenge. These resources democratize access to enterprise-level AI governance, making sophisticated risk assessment accessible to companies of all sizes. This democratization aligns perfectly with the customer success playbook philosophy of scalable, repeatable processes that drive consistent outcomes.

Perhaps most importantly, Gayle's 26-year perspective in technology law offers historical context that many AI discussions lack. Her experience through previous technology waves—from the early internet boom to blockchain—provides valuable pattern recognition for identifying sustainable AI strategies versus fleeting trends. This wisdom becomes particularly relevant for customer success leaders who must balance innovation with the reliability their customers depend on.

Now you can interact with us directly by leaving a voice message at ht


Please Like, Comment, Share and Subscribe.

You can also find the CS Playbook Podcast:
YouTube - @CustomerSuccessPlaybookPodcast
Twitter - @CS_Playbook

You can find Kevin at:
Metzgerbusiness.com - Kevin's personal website
Kevin Metzger on LinkedIn.

You can find Roman at:
Roman Trebon on LinkedIn.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Gayle Gorvett (00:05):
Customer success.

Kevin Metzger (00:10):
Hello everyone, and welcome to the Customer Success Playbook podcast, where we bring you actionable insights for the customer success team of today and tomorrow. I'm your host, Kevin Metzger. Unfortunately, Roman is unable to join us this week, but we are thrilled to welcome Gayle Gorvett to the show. Gayle's a seasoned global tech lawyer and fractional general

(00:32):
counsel with deep expertise in SaaS, cross-border transactions, and AI governance. She's helped companies expand into markets from China to Istanbul, and currently leads Gorvett Consulting. Gayle, welcome to the show. Would you like to share a little bit more about your background?

Gayle Gorvett (00:50):
Thank you. Yes, I would. Before forming my company, I was an associate at a large law firm in New York, and I was in-house counsel at two public companies in France: Nal, which is an Alcatel spinoff, and Brink's EMEA.

(01:10):
Where I was international corporate counsel for the BGS business division, responsible for EMEA and Asia Pacific. And now, in addition to helping clients with their day-to-day legal matters, I do quite a lot of AI governance work, which is a very interesting area to be in.

Kevin Metzger (01:30):
Yeah, and I'm super excited about that, because that's really the topic of today's show. It's getting into what you're doing in AI governance and how it applies. And, you know, we do look at this from a customer success perspective, but quite frankly, this is whole-company stuff, right? So: how people are using AI and what's happening.

(01:53):
And how the governance structures are coming in. It's important to understand that information and how to structure a governance program within a company, because you're working on your customers' data and you're protecting your company's data. This new AI scenario, this new AI workflow, is something where we

(02:15):
need to consider all of these priorities. And so with that said, like I said, I'm super excited to talk to you about this, and I think you're working with a program for governance structure. Can you get into that a little bit more?

Gayle Gorvett (02:32):
Yes, I've been a member of two AI guardrails working groups with the Duke Center on Law and Technology for, I guess, a little more than a year now. And Duke, similar to a lot of big universities like Stanford and MIT, is creating a lot of working groups in this area.
(02:54):
They have five. The two I'm working with are focused on two different user groups. One is end users of AI, you know, just the general population, and the other is legal users: lawyers that are either in law firms or in-house who would

(03:14):
be using AI. And we've been focusing on producing AI guardrails for those two user groups in the working groups that I'm part of.

Kevin Metzger (03:25):
Can you get into kind of how those guidelines are getting developed? What can you share around

Gayle Gorvett (03:31):
that?
Yeah.
So Duke basically put out a call for volunteers to anyone who was really interested in participating, and of course got a pretty large response. And a lot of the people who volunteered for the working groups that I'm part of are lawyers or

(03:55):
people who work in different nonprofit organizations. Some are professors. And they're interested in making sure that we have guidelines and documentation for the general

(04:16):
population, and also especially for lawyers, to help clients and other lawyers, to really provide ethical and compliance guardrails for AI, especially in the United States, where we have a real lack of federal regulation in this space.

(04:38):
To make sure that people use AI, but that they have some ethical and governance guidelines to help them do that.

Kevin Metzger (04:48):
Thank you for the background. Now, let's get into your number one tip. The first show is always about your number one tip: what's the foundational rule for ensuring you have those ethical guidelines in place?

Gayle Gorvett (05:07):
You know, I've been a technology lawyer for going on 26 years now, and I've been working with companies in this space since the beginning, with the likes of monster.com, and the battle between Microsoft and Google in the old days.

(05:28):
And I think AI has a lot of promise. It's very innovative. It's kind of the shiny new toy right now. But I see a lot of the hype in this space as similar to what we saw when blockchain was everywhere about 10 years ago.

(05:48):
And for people who are thinking about using this in an enterprise context, I would say two things. Yes, it definitely has a lot of promise. It shows a lot of

(06:09):
innovation in terms of helping with a lot of administrative tasks, a lot of potential productivity tools. But think about the use case, the specific use case for your business, before potentially investing

(06:31):
financial resources or making AI a really big part of your planning in any strategic way on a business level. Then, when you think about AI and how to look at a

(06:52):
governance policy or an ethical use of AI, you always have to think about how you're using it. Of course, think about your customers and how they want to be informed of your use of AI. And then you have to approach it in a multidisciplinary way.

(07:14):
Make sure that you are involving the business function and the tech people on your team. If you do have in-house counsel, involve them. If you're not big enough to do that, maybe think of someone like me, who's a fractional general

(07:35):
counsel, to help you come up with those kinds of guidelines. You know, there are resources out there to help you go through this process and think about the considerations and the risks that you might be facing in your company as you're going through that. So, one that I would recommend to people in the US is the

(07:59):
NIST AI Risk Management Framework. And the other one, for legal teams, is the one that we've developed through Duke, the Duke AI Risk Framework, through which we've developed a comprehensive AI risk assessment that allows legal

(08:23):
teams to develop their own governance policies that are really appropriate to their business, their industry, their sector, and the use case that they're using it for.

Kevin Metzger (08:36):
Fantastic, and thank you for helping kick things off today. I think we're probably gonna get into more detail on some of these in our show on Wednesday: what happens when you go deep, and who's responsible or accountable when AI goes wrong. Don't miss it.

(08:57):
Like, share, subscribe, anduntil then, keep on playing.