
September 10, 2025 • 14 mins

I spoke with Diane Homolak, the VP of Technology Solutions for Integreon, Sylvain Magdinier, an independent consultant specializing in legal transformation and innovation, and Gayle Gorvett, the CEO and Managing Director of GGorvett Consulting. They are all part of a workstream within RAILS that helped further the deployment and development of AI in the legal profession. We discussed how in-house counsel can use the RAILS Risk Framework to better approach AI risk assessment and governance, and ways that legal teams are managing AI risk, along with privacy and cybersecurity.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Welcome to Reinventing Professionals, a podcast hosted by industry analyst Ari Kaplan, which shares ideas, guidance, and perspectives from market leaders shaping the next generation of legal and professional services.
This is Ari Kaplan, and I'm speaking today with Diane Homolak, the Vice President

(00:24):
of Technology Solutions for Integreon; Sylvain Magdinier, an independent consultant specializing in legal transformation and innovation; and Gayle Gorvett, the CEO and Managing Director of GGorvett Consulting. They're all part of a work stream within RAILS that helped further the deployment and development of AI in the legal profession.

(00:46):
Hi everyone.
How are you?
Good morning.
Good to see you.
Hey, Ari.
Great.
Thanks for having us.
I'm looking forward to this conversation.
Diane, tell us about your background and the genesis of your RAILS work stream.
I come from a 30-year in-house career at Hewlett Packard. I do legal solutions now for Integreon, a legal services provider.

(01:10):
And I see the struggles that our clients are having with creating policy on the use of AI in the legal profession, whether in in-house teams or law firms; even academia is having it as a conversation.
The RAILS organization is a nonprofit that was created by Duke University and Dr. Jeff Ward to bring together all the elements of the industry, be it

(01:35):
academia, law firms, or law departments, and to crowdsource solutions to some of these kinds of concerns.
One of the things that the organization proposes is to start a work stream and see if you can get people together to help create the materials needed to make these things better and more effective. So last year I started a work stream

(01:58):
in the RAILS organization and put out the topic: I wanted to do things to further guidance around drafting policy and assessing risk for AI use in legal.
I was fortunate enough to have 40 people sign up from across the profession.
Sylvain and Gayle were two of our very active participants,

(02:18):
along with a number of others.
And the output of that was a risk management framework to give people guidance on looking at and assessing risk in creating their policy for the use of AI. So that's how we got there, and it's been really exciting to work with all of these folks and produce such a great output.

(02:42):
Sylvain, how does the guidance that your work stream produced help users understand and approach the risks of deploying artificial intelligence?
So the guidance itself says that the starting point for the production of a risk management framework is to develop a good understanding of the risks that need to be managed. In other words, what type of risks could be faced by an organization that's

(03:05):
building or deploying AI technology? And from that point, you then need to calibrate and prioritize those risks.
So as a new technology, AI is a new area for legal risk management, but that doesn't mean that all the risks associated with it are new. In fact, when you break down the risks, the majority can be understood within existing risk frameworks.

(03:25):
For example, general IT failure risk: if a software system fails to perform, that can impact the company's operations and therefore its ability to generate revenue. The same applies to AI. If you're using it to automate a business workflow, for example, and the output is incorrect, then the business workflow can fail.
But of course, generative AI also does raise new types of risk, partly because

(03:48):
of its ability to produce human-like digital content that can then be relied upon to influence the behavior of individuals and groups and even societies as a whole. Also, the ability of AI to create content in the form of software code means that the technology can respond to natural language instructions to operate autonomous agents that control machines and other software-powered systems.

(04:11):
Now, on top of that, you have the problem of obscurity in how generative AI produces its outputs. So even AI experts don't always know exactly why an AI application answers a question or produces output in the way that it might do.
Compounding that problem is the fact that the data upon which the AI models are trained could be flawed, or could be out of date by the

(04:33):
time the model is being put to use. And that includes the risk of human bias creeping into the "thinking" process. I'm putting air quotes around the word thinking.
When we think about risk conceptually, there's an important point for any organization trying to establish a risk management framework, and that's actually the case pretty much in any area of corporate risk, not just for AI.

(04:54):
Conversations about risk management often confuse cause and effect. What I mean by that is that there are risks that certain things might go wrong; let's call that the cause. And there are risks of certain impacts being felt by the business; that's the effect. And very often the discussion tends to focus on cause. Taking an example in AI, we hear a lot about the risk of hallucination

(05:17):
with generative AI. It's a genuine problem, particularly because AI output can look very convincing even though it has invented something that it is presenting as fact. But the risk of hallucination doesn't tell you how serious the impact might be, because, again, it depends on the workflow and how the output is being used.
So to take another example, if you're using AI to review a contract and

(05:41):
you ask the tool to summarize all of the indemnities in the agreement, the impact of an error could be minimal if a human lawyer is quality-controlling the output thoroughly. But of course, in that scenario, you're then losing the benefit of the automation.
The other way to think about risk is in terms of impact, and what the guidance does is suggest that the impact areas can be categorized broadly into three domains.

(06:02):
You've got human risk, that individuals or groups of people could be harmed; that includes social and environmental impacts. You've got operational risk, where an organization cannot function fully or partially, and for a duration that's material to the organization's purpose. And then you've got regulatory risk, where applicable laws or regulations could be breached.

(06:23):
For example, particular laws on privacy, or AI regulation itself. That's all stuff that the guidance actually covers, and it then starts to create a methodology for managing those risks and understanding them conceptually.
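To make the methodology Sylvain describes concrete, here is a minimal, hypothetical Python sketch of a risk register that keeps the cause/effect distinction and the three impact domains, then prioritizes by a simple likelihood-times-impact score. It is an illustration only, not part of the RAILS guidance; every entry, scale, and score below is invented.

    from dataclasses import dataclass
    from enum import Enum

    class ImpactDomain(Enum):
        HUMAN = "human"              # individuals or groups could be harmed
        OPERATIONAL = "operational"  # the organization cannot function fully or partially
        REGULATORY = "regulatory"    # applicable laws or regulations could be breached

    @dataclass
    class Risk:
        cause: str        # what might go wrong, e.g. hallucination
        effect: str       # the impact actually felt by the business
        domain: ImpactDomain
        likelihood: int   # 1 (rare) to 5 (frequent); a hypothetical scale
        impact: int       # 1 (minor) to 5 (severe); a hypothetical scale

        @property
        def priority(self) -> int:
            # Simple likelihood x impact score used to calibrate and prioritize.
            return self.likelihood * self.impact

    # Hypothetical register entries for an AI contract-review workflow.
    register = [
        Risk("hallucinated indemnity summary", "lawyer relies on invented terms",
             ImpactDomain.HUMAN, likelihood=3, impact=4),
        Risk("model outage in an automated workflow", "contract intake stalls",
             ImpactDomain.OPERATIONAL, likelihood=2, impact=3),
        Risk("personal data sent in prompts", "privacy law breached",
             ImpactDomain.REGULATORY, likelihood=2, impact=5),
    ]

    # Highest-priority risks first, so mitigation effort goes where impact is felt.
    for risk in sorted(register, key=lambda r: r.priority, reverse=True):
        print(f"{risk.priority:>2}  [{risk.domain.value}] {risk.cause} -> {risk.effect}")

The same shape scales down or up: a small team can keep the flat list, while a larger organization can swap in whatever scoring scheme its existing risk framework already uses.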
Gayle, how can in-house counsel use the RAILS risk framework to better approach

(06:46):
AI risk assessment and governance?
In-house counsel are struggling with how to approach AI risk and governance with often very limited budgets. They're being asked to incorporate this into their compliance and governance frameworks with

(07:06):
usually no additional resources given to them. They're supposed to do that within existing governance and compliance protocols. It's often part of the privacy area or cybersecurity, or those are lumped together in some organizations. Given how quickly AI moves, there's not a lot of time to catch up to the technology.

(07:32):
This is an invaluable tool for in-house legal departments to assess the use cases that they are using AI for, and the risks that are potentially associated with those use cases, and to begin to develop their own internal governance and compliance protocols

(07:55):
based on those risks and use cases. What we encourage them to do is to develop multidisciplinary teams in order to assess the risks that are associated with their internal use cases. Those teams would also look at specific industry regulations and protocols.

(08:16):
There are regulatory requirements that are already in place, for example, healthcare or financial regulations, that might overlay in some of these areas, where there might be data mapping requirements that have to be layered in, in addition to the AI risk element that we're referring to today.

(08:37):
That adds a layer of complexity.
So having a human in the loop is vital.
And we discuss all of that in the risk framework that we've developed, the RAILS Risk Framework document. We go through all of that analysis step by step and assist in-house teams in how to go about analyzing that and putting together those governance protocols.

(09:03):
Sylvain, how can legal teams manage AI risk along with privacy and security, while also adapting your guidance to their particular organization, large and small?
The guidance is elastic in how it applies to different scales of organization. The methodology

(09:24):
allows an organization to first understand the risk in the context of its business, but then to calibrate and prioritize those risks. And that exercise of calibration and prioritization is always done in the context of the sector that you operate in and the scale of business that you're operating at.
So the methodology is the same, but the results are gonna be different depending

(09:46):
on the kind of business that you are.
The second area where there's elasticity is around the existence of existing frameworks. So a very large corporation, very well resourced, with many years of risk management activity already existing, is going to be prepared. It's gonna have templates and an approach and organizational structure around

(10:08):
risk management that can't simply be ignored when you are introducing a new domain of risk management like AI. Conversely, if you're a small organization, you may be starting from, maybe not a blank sheet, but a relatively light-touch infrastructure in terms of risk management. In either case, you are going to be taking advantage of the approach to risk management that exists within your corporation.

(10:31):
Then the third area of elasticity is by not seeking to reinvent the wheel. So if you have an approach to risk management in existing domains like privacy or cyber, for example, then you're not gonna look to simply superimpose a new approach where there's overlap. For example, if an AI system is used in a workflow that uses

(10:52):
personal data, there's gonna be clear overlap with the data privacy risk frameworks that exist already. Cybersecurity is another area. But similarly, even if you've got a cybersecurity policy for your corporation, it does need to be reviewed and updated with reference to those risks that have been understood, calibrated, and prioritized for AI.

(11:12):
There are new types of cybersecurity risks that are understood in the context of AI. For example, data poisoning, which is where malicious or biased data is injected into training sets. Then there's the manipulation of AI outputs via what's known as prompt injection. So these are new types of cyber risk that are brought about by the use and deployment of AI, which have to be factored in.
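As one hypothetical way to picture folding those AI-specific threats into an existing register rather than superimposing a new framework, the short Python sketch below tags which entries an AI-focused review added; the entries and the source labels are invented for illustration.

    # Hypothetical: an existing cyber risk register extended with AI-specific
    # entries, rather than a separate framework layered on top of it.
    cyber_register = [
        {"risk": "phishing credential theft", "source": "existing policy"},
        {"risk": "ransomware on document servers", "source": "existing policy"},
        {"risk": "data poisoning of training sets", "source": "AI review"},
        {"risk": "prompt injection via untrusted input", "source": "AI review"},
    ]

    # Flag the entries the AI review introduced, so the updated policy can be
    # reviewed as one document rather than two parallel ones.
    for entry in cyber_register:
        flag = "NEW" if entry["source"] == "AI review" else "   "
        print(flag, entry["risk"])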

(11:36):
Gayle, what is a key takeaway for using this guide and approaching AI governance effectively?
Teams and companies that are adopting AI have to be flexible. AI is not static. The regulatory environment surrounding AI is not static.

(11:58):
We've seen that ourselves in the US recently. And you cannot work in silos when implementing any governance methodology with respect to AI. Otherwise, you will not be successful. You have to involve legal, you have to involve the business, you have to involve IT. You have to all work together.

(12:18):
You have to understand how you're implementing AI, what your use case is, and how your sector is involved. And you have to understand how it's evolving within your organization. And I would say be multidisciplinary, be flexible, and keep a

(12:39):
human in the loop at all times.
We've seen very costly examples of not doing that.
Diane, what's the future for RAILS and the framework?
RAILS stands for Responsible Artificial Intelligence in Legal Services, and as an organization, our work stream is just one of many.

(12:59):
So the framework that we produced is one work stream. They have active work streams looking at use cases and other types of guidance that's gonna help the industry. And because it's cross-industry participants, the output is very rich and representative of a lot of different interests. And I think that's gonna be necessary given the

(13:22):
impact that AI has across the industry. It is so much more of an impactful technology than anything we've seen before. So I hope that RAILS will continue to see growth in membership and additional work streams, putting out more content. And I expect that the guide itself will be refreshed at some point, because as new

(13:44):
capabilities reach the market, policy and governance are going to have to respond. We'll be wanting to update the framework to give additional guidance in that area. But RAILS has an opportunity to bring a lot more to the table than just this framework. So, looking forward to seeing it grow for everyone.
This is Ari Kaplan, speaking with Diane Homolak, the Vice President of

(14:06):
Technology Solutions at Integreon; Sylvain Magdinier, an independent consultant specializing in legal transformation and innovation; and Gayle Gorvett, the CEO and Managing Director of GGorvett Consulting. They're all part of a work stream within RAILS that helped further the deployment and development of AI in the legal profession.
Thank you all very much.

(14:27):
Thank you for having us.
Thank you.
Thank you.
Thank you for listening to the Reinventing Professionals podcast. Visit reinventingprofessionals.com or arikaplanadvisors.com to learn more.