Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
Hello, and welcome to On Boards, a deep dive into what drives business success.
I'm Joe Ayoub, and I'm here with my co-host, Raza Shaikh.
Twice a month, On Boards is the place to learn about one of the most critically important aspects of any company or organization: its board of directors or advisors, with a focus on the important issues that are facing boards,
(00:28):
company leadership, and stakeholders.
Joe and I speak with a wide range of guests and talk about what makes a board successful or unsuccessful, what it means to be an effective board member, and how to make your board one of the most valuable assets of your organization.
Before we introduce our guest, we want to thank the law firm of Nutter McClennen
(00:51):
& Fish who are again sponsoring our On BoardSummit this year, which will take place
in October, again, in their beautifulconference center in the Boston Seaport.
They've been incredible partners with us in every way.
We appreciate all they've done to support this podcast.
Our guest today is Andrew Sutton.
(01:11):
Andrew is an attorney with the law firm of McLane Middleton, and is a founding member of his firm's artificial intelligence practice group, which focuses on virtually every aspect of the use of AI, including policy and ethics, intelligence applications by employees, acceptable use policies, and deployment strategy.
(01:34):
Andrew's experience also includes cybersecurity, privacy, and corporate work, including complex transactional and real estate issues.
Andrew is a co-author of AI and Ethics: A Lawyer's Professional Obligations, which is included in the American Bar Association's recent publication on
(01:55):
artificial intelligence, and he regularly presents to local and national audiences regarding matters involving the ethical use of artificial intelligence and the use of artificial intelligence in connection with the practice of law.
Welcome, Andrew.
It's great to have you as our guest today on On Boards.
(02:17):
Thanks, Joe.
Thanks, Raza.
It's great to be here.
So, let's start with how you first got interested in artificial intelligence in a professional context and came to found the artificial intelligence group in your law firm.
Sure, sure.
I've always been a big tech guy.
(02:37):
Going back to when I was a kid, I used to build my own computers, take apart the VCR, and hook up all my video games at the same time to see what would happen.
My legal career really has been that of a bit of a renaissance man.
I handle a broad range of issues, but I find the core of my practice really is
(02:57):
somewhat corporate and transactional.
Commensurate with that, some of the work that I was doing a few years ago in commercial real estate involved combining technology with physical places in the real world, and that led me into data security, privacy, and cyber issues as I started to consider what would happen if a bad actor were to hack a
(03:23):
building instead of perhaps a computer.
And then from there, as things developed, my interest in AI grew.
AI really is a real-world technology.
Once I ended up at McLane Middleton, I worked with John Weaver, who also has similar interests, to develop the AI group so we could stay on
(03:44):
the cutting edge of these issues.
Thanks.
So, as we talked about earlier this week, how AI is going to be used is so important to a company.
It's a discussion that should probably take place in virtually every boardroom.
So, talk a little about the framework that a board might think about as
(04:05):
they address the use of AI for their particular company or organization.
Yeah, I think it's really, really a top-down approach.
I think there are governance and policy issues and managerial issues that a board would want to take into consideration, working closely with their technology group and perhaps outside consultants who are able
(04:28):
to give them some guidance on this.
The really critical piece of this is taking the first step towards implementation and moving forward in a proactive and productive way, because right out of the gate, it's going to be hard to determine what the ROI is on AI,
and it will have compounding returns.
(04:50):
But taking that first step really needs to happen now, and that should really be the emphasis for every board, because I believe that the shareholders are expecting that the boards are going to be on top of this and that the businesses are going to be on top of this.
So, to the extent that that isn't already happening, it would be something that
(05:10):
very soon we would expect to see happen.
So, in terms of governance at the board level, are companies creating new committees that are focused on AI? Are they wrapping it into another committee?
How are they dealing with that issue?
Because it's a pretty big issue, and if you just let it come up in the
(05:30):
boardroom whenever it happens to come up, you're probably not going to get
to it in the depth that it requires.
We are seeing some very robust hierarchies forming within organizations: committees, and committees on top of other committees, and this isn't really something that just gets kicked over to the IT group.
I think that AI is really something different.
(05:52):
It's a bit of a watershed moment in terms of a shifting
paradigm about how work is done.
So, it's not really something that one person or one committee is going to
be able to manage organizationally.
The broader your organization, the more committees are going to need to manage
(06:14):
all the different pieces of this, I think.
From a governance perspective, a lot of that is coming out of
developing AI acceptable use policies.
That's sort of the document that sets out a very broad sketch of what the organization is able to do right
(06:36):
now with AI, and it might say, we're restricting the use of AI pending further testing and investigations of the technology, but it's forward-looking in the sense that we are moving towards an AI ecosystem at the organization.
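To make that kind of policy concrete, here is a minimal, hypothetical sketch in Python, expressing a few such rules as data so they could drive automated checks. Every rule, tool name, and data class here is an invented illustration, not language from the conversation:

# Hypothetical skeleton of an AI acceptable use policy, expressed as data.
# All of these rules and names are invented for illustration only.
ACCEPTABLE_USE_POLICY = {
    "status": "AI use restricted pending further testing of the technology",
    "direction": "moving towards an AI ecosystem at the organization",
    "approved_tools": ["internal-approved-assistant"],
    "prohibited_data_classes": ["board-materials", "client-confidential", "PII"],
    "human_review_required": True,
}

def use_is_permitted(tool: str, data_class: str) -> bool:
    # A proposed use passes only with an approved tool and non-restricted data.
    return (tool in ACCEPTABLE_USE_POLICY["approved_tools"]
            and data_class not in ACCEPTABLE_USE_POLICY["prohibited_data_classes"])

print(use_is_permitted("internal-approved-assistant", "marketing-copy"))   # True
print(use_is_permitted("internal-approved-assistant", "board-materials"))  # False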
But where do these discussions take place?
Do they take place at the audit committee?
If there's a separate risk committee, does it go to tech?
(06:58):
Is there a separate AI and cyber group?
I mean, is there a best practice that has really emerged as to how to best address it?
I think it's critical for the board to start with management and discuss what the implications of AI are for the organization.
Because if there isn't a good way forward from a managerial perspective, if there
(07:22):
isn't a plan that the management and board are working towards here, whatever you do in the rest of the organization is going to start to fall apart.
So, there needs to be a very structured approach.
It starts at the board level and at the managerial level, and everything else then feeds into it.
You might have a committee on the IT side who talks about
(07:42):
legacy systems and structures.
You might have a whole new group of people who come in to assist with AI, and you start to see a different division forming. Maybe you have a chief AI officer who starts to be part of the managerial structure and organization.
Are companies actually hiring chief AI officers?
(08:05):
You've seen that?
They are.
I think that there's definitely a skills gap there.
There are only so many people who have the experience of
implementation at this point.
There's a handful of organizations that have really begun to do
this, at least with generative AI.
I think that there are issues in terms of learning at scale that are
(08:27):
going to be bottlenecks for a lot of organizations as they seek to deploy this.
You really don't want somebody who's kind of making it up as they go, but the pool of individuals who have hands-on experience dealing with an implementation is somewhat narrow.
Just to talk about how important it is, where does the AI officer sit in
the hierarchy of senior management?
(08:51):
I'd say they sit alongside the CISO, the chief information security officer.
Because I think that the policy and the implementation that are going to go into this really need to stay on track with management.
So, if I'm a chief executive or a board member, I want to make sure
(09:11):
that there's a person who is able to manage this process and scale it
appropriately for the organization.
How do boards go about developing a governance framework that will help drive
the strategy that they're going to employ?
What are the steps that they're taking?
Is it the AI officer?
(09:32):
Is it the tech people?
I mean, do they bring in outside people?
Most board members are not going to be conversant with all the issues regarding AI, so how does a board then educate itself sufficiently so that it can begin to make the decisions it's going to need to make that will govern AI policy throughout an organization?
(09:54):
I think it starts at the managerial level, and I think it starts with the chief executive officer taking the board's imperative and bringing it to the organization.
So, if the board says, "Hey, we want to do this.
We're ready to do AI.
You need to make this part of your agenda", and then from there, it goes into the organization, and the manager would determine what the assets
(10:18):
are that he needs in terms of human capital, what's there and what's missing, and seek to sort of fill out what's necessary to start making these decisions and start thinking about: how deeply are we going into AI?
How quickly are we moving into it?
What's happening in our industry?
Then I think that once there's a good understanding of what's happening, and
(10:40):
that might involve bringing in third-party consultants, it might involve management consultants, it might involve technology consultants, you go back to the board with a plan and you say, "Okay, now we're going to work with our attorney.
We're going to draft our governance policy. We have an idea of what we want to accomplish and how we move forward", and then from there, the transformation
(11:00):
and the structure begins to take place.
So, I think there's definitely going to be an adoption period for every organization as they go through this.
There's going to be a transformative period, and it's not going to be easy; it's not like flipping on a light switch.
But I think that organizations that take the time to really dig into this at the
(11:20):
beginning and think about the policy will start to answer the tough questions, and they'll be able to bring that back to the board and make good investment decisions based on the information that they've been able to assemble, I think.
Andrew, earlier you mentioned the watershed moment, and I think it tells us that this is different, and by this, I mean AI, or the current form of AI.
(11:47):
How do you think it is different from any other tech or tech project decision?
What is so special about AI?
AI is different because it changes the way that people work.
It changes how human capital is deployed by adding a degree of automation into processes that were otherwise
(12:09):
knowledge- and education-based and human-decision oriented.
I think we're moving towards a hybrid structure.
It's not going to be a situation where AI does everything, but I think there's going to be a lot of delegation of tasks and information crunching to AI that will inform decision making on a faster basis, on a more detailed
(12:30):
basis, and on a more current basis.
In terms of how it's different, I think that we really haven't seen anything like this maybe since the Internet, where you had a different group of people who understood how to leverage networks and information and data on the Internet and were able to get a really distinct advantage competitively in their market.
(12:52):
I think that you'll see the same thing happen here when we're looking at a market-based kind of competitive approach.
I think that it's important for boards to understand that if you're not moving forward with this, you're being left behind, and it's transformative in that way, where in five or ten years, you might not be a relevant
(13:15):
player, just like Sears went out of business with its catalog over time because it didn't take the appropriate steps to invest in this shift.
Well said, and I think I'll add that one of the things that strikes me is that the ability to make independent decisions was a capability that software and tech never had.
(13:36):
Ultimately, these were just tools assisting the humans, but with AI, under the right guidance and guardrails and all of that, there are a lot of scenarios where AI will be making decisions, and I think that's the key thing that allows those workflows to be more efficient and leverage these capabilities.
(13:58):
Speaking of AI's capabilities, what do you think it means for the boards themselves to be using AI or the output of AI?
How can that make the board's information flow or decision making better?
Everybody uses AI already.
It's already in your cell phone.
It's in your browsers.
(14:19):
It's in your Google search.
So, it's been sort of lurking behind the scenes, even if people don't understand it.
For years, we've had "if this, then that" logic, which is sort of a form of AI.
We've had machine learning.
When we talk about boards using AI, we're really talking right now about boards using generative AI: the specific type of AI that statistically
(14:42):
determines what the next word you are looking for will be, based on your input, in order to generate an output.
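To make that "statistically determines the next word" idea concrete, here is a minimal, purely illustrative Python sketch. A real generative model learns billions of weights from training data; the tiny hand-written probability table below is an invented stand-in:

import random

# Toy next-word table: for each word, plausible next words and their
# probabilities. These numbers are made up; real models learn them.
NEXT_WORD_PROBS = {
    "the": {"board": 0.5, "company": 0.3, "policy": 0.2},
    "board": {"approved": 0.6, "discussed": 0.4},
}

def next_word(current: str) -> str:
    # Sample the next word from the statistical distribution for `current`.
    candidates = NEXT_WORD_PROBS.get(current, {"[end]": 1.0})
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

print(next_word("the"))  # e.g. "board" -- chosen statistically, not by fixed rules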
When boards are using information that is gathered and parsed through AI, I think they need to understand what that means, and I think that's critical for
boards to learn as soon as possible.
(15:04):
If AI is being used to generate reports or insights or information about their organization's processes, compliance, or regulation, they
need to trust that information.
So, in any instance where AI is involved, the biggest issue is going to be trust: can I trust this data?
Is there something in the data that might lead to bias?
(15:27):
Is there something in the data that might lead to a hallucination or an inaccuracy?
Because what we're talking about here is that we want to have a high level of confidence in the information that the board is using, with the greater speed and breadth of information processing that AI provides.
We don't want to sacrifice any of the quality that we have.
(15:51):
We need to understand that if a report is AI-generated, maybe we take it with a grain of salt.
It's kind of like the Internet 30 years ago, where your high school teacher said, "You know, go look at the encyclopedia.
Don't hand me something that you googled."
How does a board get comfortable with trusting AI, and how they
(16:16):
as a board are using it and how their company is employing it?
What should the board be doing in order to develop enough trust that they can move forward?
The important part of that is having a robust structure in place that allows you to trust the AI.
So, for example, if you know that your data is good, your AI is limited to
(16:40):
your data, and your model is tweaked and tested and regularly maintained, then you can feel like what's coming out of the AI is probably high confidence.
And if you have a person, or maybe even another AI process, confirming the accuracy of the AI outputs, then you can say, "All right, this has been checked and parsed,
(17:02):
and the data it started from was clean."
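As a rough sketch of that "limit the AI to your own data, then verify" structure, consider the following Python illustration. Everything in it is an assumption: ask_model stands in for whatever generative AI interface an organization actually uses, and the document store is invented.

# Hypothetical sketch: answer questions only from company data, then verify.
COMPANY_DOCS = {
    "policy-7": "Refunds are available within 30 days of purchase.",
    "minutes-2024-03": "The board approved the Q2 technology budget.",
}

def answer_with_verification(question, ask_model):
    # 1. Restrict the model to our own, known-good documents.
    context = "\n".join(COMPANY_DOCS.values())
    answer, cited_ids = ask_model(question, context)
    # 2. Verification step (a person, or a second AI process, could do this):
    #    every citation must point at a real company document.
    if not cited_ids or any(doc_id not in COMPANY_DOCS for doc_id in cited_ids):
        return "LOW CONFIDENCE: answer is not fully grounded in company data."
    return answer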
One of the things that is a big criticism of AI is where the content is coming from.
Is it all from nonfiction books that the model was trained on?
So, there's value in different models.
One thing for a board to be cognizant of is that AI is highly asymmetrical
(17:24):
based on processing power, capability, compute time, and training data.
I don't have access as a consumer to the same level of AI output as OpenAI has in its own system.
It could ask a question that I'm not even allowed to ask and get an answer.
(17:47):
So, there are all these different perspectives to have as the board, to say, "What is the AI we're using? Have we invested in it? What's it drawing from, and how does it get us to that place we want to be with that confidence?"
How does a board balance the need to implement it and not fall behind their
(18:07):
competitors who may be implementing it, while also avoiding false starts? Going down the wrong road could be very expensive and could also be extraordinarily frustrating.
What steps should they be taking in order to make sure that the implementation is moving quickly, but they're not taking risks that are likely to lead to a
(18:29):
real setback in their implementation?
I think departmentalization is really important in implementation.
I think that success and measurable ROI are really important for boards and management at this point.
I think that a failure of an AI deployment would be extraordinarily costly for any
(18:51):
organization, so it's critical to plan your AI implementations step by step, to ensure that as you are rolling this out piece by piece, it is being rolled out successfully: that on the technical side, it's going out without a hitch and you can see a clear ROI, and that on the human capital side, you're
(19:13):
managing expectations with respect to what it means to have AI in our ecosystem, that we're empowering our workers, not replacing our workers.
So, I think there are definitely some really important pieces to that, and that's going to be a tricky balance for a lot of organizations.
How do you determine what the ROI is in the use of AI?
(19:36):
You start off by looking at the back end of the AI to see who's using it and what they're using it for, and you can look at the amount of time
it takes to conduct certain tasks.
At the outset, when you're using an AI that isn't really suited to a particular task, you might actually find that it's a negative ROI, because the development of the workflow itself to get the result you want is taking
(20:01):
longer than actually doing the thing that you're asking the AI to do on your own.
But once you have a workflow in place (and this is why I think there will be AI divisions that will sort of help you develop these workflows and program and put everything together), you'll see exponential returns, because if the AI system is able to do what you want quickly, you now
(20:26):
have created an automation for a task, but this automation is going to be somewhat granular at the beginning.
Aside from organizations that might say, "Hey, I've already done this, maybe I can sell it; I've taken the risk, and then I sell it to the rest of my industry and sort of teach people how to use what I built," starting at this point from ground zero means you're doing a lot of testing, so it's step by step and very granular.
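As a back-of-the-envelope illustration of that negative-then-positive ROI curve, here is a small worked example in Python. Every number is an invented assumption, not a benchmark:

# Hypothetical ROI sketch for one AI-assisted task (all figures made up).
runs_per_month = 200          # how often the task is performed
minutes_saved_per_run = 15    # time saved once the workflow actually works
hourly_cost = 60.0            # loaded hourly cost of the person doing it
monthly_ai_cost = 1500.0      # licenses, compute, maintenance
workflow_dev_hours = 80       # one-time cost to build and test the workflow

monthly_savings = runs_per_month * (minutes_saved_per_run / 60) * hourly_cost
first_month = monthly_savings - monthly_ai_cost - workflow_dev_hours * hourly_cost
steady_state = monthly_savings - monthly_ai_cost

print(f"first month:  {first_month:+,.0f} USD")     # -3,300 USD: testing phase
print(f"steady state: {steady_state:+,.0f} USD/mo") # +1,500 USD/mo afterwards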
(20:54):
Andrew, going back to the theme of AI in the boardroom itself: thoughts on, let's say, things that come under the heading of AI that the board itself is using, for example, meeting notes or meeting minutes transcription.
Perhaps the board pack and materials software that boards
(21:16):
use is able to summarize or flag questions for the board member.
What do you think about that, the use of AI by the board itself to make it more effective?
The use of AI on board minutes and board confidential information is absolutely not recommended at this point.
It's really important for boards to understand that confidential
(21:38):
or non-public information could potentially be leaked through an AI.
It could be part of the AI's training process.
A big thing that we're seeing with a lot of AI vendor contracts is really trying to track down where that data is going once it sort of passes into the realm of the AI
(22:00):
process, and we've found latent GDPR issues.
We've found issues that might create liability under the California AI Act, and we've found situations where information is being sent overseas because of some sort of round-the-clock service that's being provided, and it's somewhere kind
(22:25):
of buried in the AI hierarchy and the vendors supporting the AI company.
So, it's really, really critical for boards to understand, if they're going to use any of these technologies, that everything stops at their organization.
If you have your attorney at the board meeting to maintain confidentiality and
(22:47):
give you advice, but the AI is recording, it could destroy the attorney-client privilege because the information is being sent to a third party, so we want to absolutely make sure, before anything gets into the board where the directors have liability, that everything is 100% clean.
(23:07):
It's almost like a data security assessment and privacy
assessment at that point.
So, extending all this to maybe a little extreme: in these cases, these technologies are supposed to be, if and when used correctly, augmenting the board or assisting the board in doing their job.
Can we also imagine and think that, in the future, there would be AI
(23:33):
itself as one of the board members on a board, and is that even possible?
What does that look like, and is it real?
Well, I would just jump in and say: not replacing even one board member, but maybe you say, "We're going to have AI in the room, and there are a couple of people we don't need anymore, because all they did was gather some basic
(23:56):
information about some stuff, but AI has so much more, so we're going to have AI here and five board members rather than the seven we had." I mean, that's what I'm thinking of, that AI actually takes the place of one or more board members.
I think that's really tricky.
Yeah, I bet.
(24:17):
From a legal perspective, I'm sure you have an opinion or view, but I think what we're talking about is: pretend that this is a board member sitting in their chair, and AI also interjects because it can understand the conversations, the context, the board materials, the organization, everything, so that it is able
(24:40):
to interject and say, "Yeah, well, have you guys thought about this," and that could potentially be extremely helpful.
So, just putting it out there as a possibility and a thought experiment.
I think there would need to be some pretty significant legal reforms to get to a point where we're seeing AI replacing board members.
(25:02):
As a matter of fact, I think that one of the critical things about a board is that a board is human; it's people, and there are people who are responsible for the organization and for making decisions, and if they do something illegal, they can get into trouble, whereas the culpability of an AI is very different.
If you ever look at the user license agreements for OpenAI
(25:26):
or Google or Copilot, they have no liability for anything.
They could tell you that the sky is green and that the seas are red, and too bad, that's your fault.
So, I don't see that as necessarily something that is on the near horizon, but I do think that it could be possible for people who are training AIs to create
(25:55):
models of different board members, to think about how they would vote, to be able to say, "Okay, if we present something to this person this way, can we move their vote in a particular direction?"
Because we're talking about data and preferences, and every time you vote, or you have meeting minutes, you
(26:16):
could have an AI that could somewhat approximate what a board member might do.
And for big decisions, there might be people who want to head in that direction.
Having the AI help inform the board might be something that happens, but again, as long as it's a closed system, the board and its attorneys are going to feel much better about it.
Yeah, well, believe it or not, Andrew, I know at least one company where it
(26:41):
helps the law firms kind of role-play and predict what a certain judge is going to argue or how it will proceed (I don't know the technicalities), and be able to say, "Here are the pitfalls with this judge and here's how you should argue," and even make a prediction saying, "We think you have a good shot if you do it this way." So, it's fascinating, and it is quite a slippery slope, fraught with
(27:07):
ethics risks and reliability issues.
I've spoken to some judges about that, Raza, and they are not happy.
They do not like the idea that you could feed all of their decisions right into an AI and say, "Okay, am I going to start shopping for a different judge because I know that Judge Jones has this kind of perspective in these cases?"
(27:32):
But isn't that just a more systematic approach to what's already going on?
People are always wondering, if they have an opportunity, is there a judge that's going to be more favorable to this particular case?
This just systematizes it.
What's the difference?
The distinction is that AI can see patterns that people don't see.
It's like when you have a doctor and the AI does a better job of seeing squamous
(27:56):
cancer cells; then the AI is just better at it.
So, why would judges be more upset if they're being selected based on AI feedback rather than just lawyers sitting around going, "That judge did this in this case, you've got to avoid him"?
They're just saying, "I shouldn't be predictable."
Yeah, everyone's predictable.
(28:17):
I think that it speaks to the asymmetry of information and asymmetry of access that AI presents as an issue.
One thing that judges like about AI is that it could provide access to justice, that it could help people who are indigent and can't afford attorneys to sort of go through the legal process without falling into
(28:38):
procedural traps or something like that.
But they definitely don't like the idea of it, say, rendering a decision in their place or being used to sort of manipulate what could happen in the courtroom.
That is something that the judges do not like.
Of course.
Well, yeah, I mean, I understand it's going to be a hard thing to balance:
(29:01):
the enormous impact that AI can have in a positive way, without allowing it to basically substitute artificial intelligence for human intelligence in making really the basic decisions that create the whole rule of law.
As you think about it, do you want artificial intelligence to be processing
(29:24):
this and creating the framework for the future of how law is actually viewed and implemented in this country?
I mean, there are already some issues about the rule of law.
If you add the artificial intelligence factor, where does that take us?
(29:45):
That takes us to a lot of the due process cases and litigation that's out there on AI right now.
I mean, it's not generative AI that has spawned a lot of litigation.
It's systems like facial recognition systems, or systems that are looking at applications and making decisions about an applicant based on some pattern in
(30:07):
their resume, with an inherent bias in the system. I think there are cases involving unfair hiring practices where an AI system was parsing resumes and discarding them based on gender, because the AI was trained on the data that was already at the corporation, and it said, it's 75% male, so I'm only
(30:29):
going to look at male applications.
There are airport systems that have been using facial recognition and pulling people over because their skin is a different color, violating due process on that basis.
So, it's a very fine line, and that's why we get worried about the bias in the system.
We get worried about the data and how it's trained, and that's a challenge.
(30:53):
And all these examples have really highlighted that this is fraught with ethics considerations, risks, and reliability problems.
From the board's perspective, how ought they to think about providing guardrails and systems for addressing these concerns with AI?
(31:13):
Because on the other hand, there's a lot of opportunity for organizations to take these risks.
I think that boards and organizations as a whole need to be smart about AI touch points.
They need to be smart about data.
You don't want the AI being a primary point of contact.
There was an Air Canada case involving an AI where there was some liability, and Air
(31:38):
Canada tried to assign that liability to a company that just owned the AI asset; the AI essentially violated its own policies, and I think it refused a refund to a certain customer who was entitled to a refund, and there was a lawsuit, and it turned into a publicity disaster.
So, when we are dealing with these things and the board's looking
(32:03):
at these things, you have to say, "Where does my control of this begin?
And where does my control of this end?
Is the result something that is clear and expected or traceable?" Because you don't want a system that's just a black-box AI, so it all comes back to
(32:28):
this trust issue of building a system that has repeatable results, that has good information, that is acting the way that you want it to act, and that's an expensive proposition; the upgrades and the time and the personnel, that's a big bite, but it's a bite that somebody's going to have to take, or the organizations
(32:48):
are going to have to take, at some point.
So, really, we're at the stage of the runway where we're figuring out how much runway we need to get into the air with this thing.
And how much time do you really have before you lose the competitive advantage?
Right.
Andrew, it's been great speaking with you.
Thank you so much for joining us today on On Boards.
(33:10):
It's been a pleasure and thanks guys.
And thank you all for listening to On Boards with our special guest, Andrew Sutton.
Please visit our website at OnBoardsPodcast.com.
That's OnBoardsPodcast.com.
We'd love to hear your comments, suggestions, and feedback.
And if you're not already a subscriber, please be sure to
(33:32):
subscribe at Apple Podcasts, Spotify, or wherever you get your podcasts.
Remember to leave us a five-star review.
And we hope you'll tune in forthe next episode of On Boards.
Thanks.