
October 10, 2024 36 mins

In this episode, Professor Kevin Werbach sits with Lara Abrash, Chair of Deloitte US. Lara and Kevin discuss the complexities of integrating generative AI systems into companies and aligning stakeholders in making AI trustworthy. They discuss how to address bias, and the ways Deloitte promotes trust throughout its organization. Lara explains the role and technological expertise of boards, company risk management, and the global regulatory environment. Finally, Lara discusses the ways in which Deloitte handles both its people and the services they provide. 

Lara Abrash is the Chair of Deloitte US, leading the Board of Directors in governing all aspects of the US Firm. Overseeing over 170,000 employees, Lara is a member of Deloitte’s Global Board of Directors and Chair of the Deloitte Foundation. Lara stepped into this role after serving as the chief executive officer of the Deloitte US Audit & Assurance business. Lara frequently speaks on topics focused on advancing the profession including modern leadership traits, diversity, equity, and inclusion, the future of work, and tech disruption. She is a member of the American Institute of Certified Public Accountants and received her MBA from Baruch College. 

Deloitte’s Trustworthy AI Framework

Deloitte’s 2024 Ethical Technology Report

Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Lara, really great to have you as a guest on The Road to Accountable AI.
Great, thanks for having me.
I'm so happy to be here.
There's been a tremendous amount of excitement about generative AI in particular, at least since late 2022.
But recently there have been reports that companies are not necessarily seeing the payoffs they expected, and questions about whether maybe the hype wave is passing.

(00:25):
What are you seeing and what's your perspective at Deloitte?
Generally, that's an accurate statement.
The hype is starting to settle down.
If anything, we're starting to see maybe even more impatience, with companies wanting to unlock value.
Some of it is that there is experimentation going on all over.

(00:46):
You'll hear examples of use cases that are delivering, I'll say, some benefits, most likely in the area of productivity, but these really large, big unlocks of value, we're not seeing that happen.
And there's a variety of reasons driving that, but that is definitely where we are in this curve, but it can and will change quickly.

(01:10):
What's going to make a change, you think?
Well, some of the issue is really just learning.
So with each individual company, each individual department trying to focus on their own use case, at this point it's a bit like whack-a-mole.
So I think part of this is starting to focus on who is going to be best positioned to identify scalable, large ways of developing use cases.

(01:38):
That's one area.
So we're starting to see a lot of tech providers and consulting firms like ourselves starting to look to industry-pervasive solutions.
So a big area is really trying to figure out a place where we can concentrate it.
The other thing is really realizing that most companies were not in the best position to really drive scalable generative AI, whether it's lacking a data strategy that they

(02:06):
really need to be able to do this, having the right flexible technologies, having an environment where there's, I'll just say, a desire to adopt and to move quickly.
So there's been a focus on productivity and, quite frankly, some wasted money.
And that just becomes a vicious cycle of people getting a little bit timid and afraid of this.

(02:28):
It's going to require investment.
It's going to require taking some risks.
And then ultimately, it's more expensive than people probably thought it would be at the onset.
This is very different from all the technologies we've seen in the past; there is no day one when it's all over.
The cost of keeping and maintaining models goes on for a really long time, and people are starting to realize that, you know, building versus buying an LLM and then maintaining an LLM on their own is just more

(02:58):
costly.
So there's a lot of things coming together that are preventing sort of the day one really big unlock, but that won't last forever.
I think this is still an amazing technology that's going to do a lot of great things, but we're not where I think CEOs and boards thought they'd be at this point in the game.
Where does trustworthy or responsible AI fit into that process of scaling up?

(03:24):
Yeah, so it's a really important part of it.
It is something that I'll just say companies are all talking about.
They understand there's a need to be trustworthy.
There are different phases of, I'll say, maturity relative to what that would mean.
And building that in on the front end is yet another thing that is preventing them from maybe moving super fast.

(03:46):
Because not all the companies really have a framework, have the ability to have their compliance organization set up.
As I mentioned, the data, the privacy, all those things need to be really built and thought through on the front end.
And then there's a certain amount of testing of these systems to make sure they're actually creating safe and secure outcomes.

(04:06):
So that's another important barrier that companies are all learning about.
And what you're also seeing is the tech companies and consulting firms are starting to build that into their solutions early on, as opposed to, again, making every company try
to do it themselves.
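To make that concrete, pre-deployment testing of this kind can be as simple as running a candidate model over a set of test prompts and flagging risky outputs before release. The sketch below is illustrative only; the generate function, prompts, and blocked-term list are hypothetical stand-ins, not Deloitte's actual tooling.

```python
# A minimal sketch of pre-deployment output testing. All names here are
# hypothetical stand-ins, not any firm's actual framework.
from typing import Callable

BLOCKED_TERMS = ["ssn:", "password:", "home address"]  # illustrative red flags

def audit_outputs(generate: Callable[[str], str], prompts: list[str]) -> list[dict]:
    """Run a candidate model over test prompts and flag risky outputs."""
    findings = []
    for prompt in prompts:
        output = generate(prompt).lower()
        flags = [term for term in BLOCKED_TERMS if term in output]
        findings.append({"prompt": prompt, "flags": flags, "pass": not flags})
    return findings

# Usage: gate a release on whether any output was flagged.
results = audit_outputs(lambda p: "Here is a safe answer.", ["Summarize this contract."])
assert all(r["pass"] for r in results), "Unsafe outputs found; block deployment."
```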
I mean, it's obviously not something you can generalize across all the different engagements you have, but what are the major things that companies are coming to a

(04:32):
consulting firm like yours for in this trustworthy AI area?
A lot of it is around, is this something that we do separate and distinct from developing the generative AI strategy, or is it something that we should be embedding?
And our philosophy is generally embed this on the front end.
So when they're coming to us, it's really around strategy: what does a gen AI strategy feel like, and how does it get implemented into our overall strategy?

(05:01):
And then how do we then get this trustworthy AI done properly?
What are the steps we need to be taking?
as we start to build solutions, and whether it's the capabilities of the people that are around the development and deployment of the technology, how the data strategies are being built, or compliance with the regulatory environment that this particular client is in.

(05:22):
But the key is they're looking to us to be part of it from the overall strategy, not a separate trustworthy AI strategy.
That in itself will never get it integrated and embedded into the technology.
How convinced are they that there's a need for that trustworthy piece?
That it's not just something that's for show or really peripheral to the business.

(05:46):
They all say the right thing.
I think depending on the company, you're getting legitimately different answers.
There are companies that are global, and because of regulatory environments around the world that are different than if you're a US company, there already are requirements that they have to put in place.

(06:06):
Then you've got sort of this regulatory overhang.
Even today, there was an article about the state of California coming out with some potential legislation, which is, again, maybe contrary to where people were thinking this would go.
So regulation is a big thing that I think is an overhang.
The boards are asking for it.
We can talk later about where the boards are generally.

(06:26):
But if you're a tech company and you're building in a generative solution and you're not talking about trustworthy AI, your board, your management team, this is your brand.
I mean, you can have a vulnerability come to fruition afterwards.
So it's real. But there are other places that are talking about it where it's not necessarily coming with real action.

(06:49):
And, you know, I worry that we're going to have a major foot fault, because if you don't get this right everywhere, you run the risk of getting it wrong everywhere.
It's not something isolated; what people are going to learn is that generative AI is such a powerful technology, and it involves a lot of ecosystems coming together.

(07:10):
One place having a significant foot fault will have a major impact on the technology and how it's, I'll say, affecting other people throughout the environment.
What would you say is the biggest area of concern?
There's a whole range of issues from bias to explainability to intellectual property, privacy, and so forth.

(07:33):
Is there an area that has come to the forefront in terms of these trustworthy AI concerns?
Well, definitely bias is one, because it can mean a lot of different things.
A lot of people jump to bias as meaning, in itself, that it's just taking certain classes of people and impacting them differently.
But in reality, bias could mean so many different things, from how the data is formed to how it's used.

(07:57):
So I'll just say that's an area that comes up in almost every conversation we talk about.
And it's whether the data itself, and how the technology is being run, is actually representing the same information about the people that are affected as what you would expect if a human would do it.
Now the reality is that there's bias in human behaviors as well, and so we could talk about that as well.

(08:20):
People, I think, have a higher expectation for this technology than even the human being, but bias is definitely the one that I would say I hear more frequently.
Of course, safety and security are other ones as well.
You know, is it creating harmful outcomes?
A lot of that can come back to bias.
And what does an organization that sees that problem need to do first?

(08:44):
What's the most important thing to address those concerns about bias?
I know this is going to sound really fundamental, but the first thing is actually having a framework of what their guidelines and guiding principles are around what it means to have trustworthy AI, and what their objectives are for achieving an outcome that's free of bias, and however they measure that.

(09:06):
Again, it could be the same amount of bias as a human being doing it, but starting at the very forefront, there needs to be an intent as to what an organization wants to do relative to how it builds the technology.
And then there's a variety of questions about who's actually sitting around the table.
A lot of companies have AI councils that they formed, and the job of the council is really to start to think about how is our data being parsed and used, and is the data actually

(09:34):
reflecting the information that we want.
And then there's all sorts of testing on the systems themselves.
But it does really start at the very highest level with a set of guiding principles.
And that set of guiding principles should cover all the elements of what's trustworthy.
And it needs to have attention to, I'll just say, the commercial drive for moving AI a lot faster.

(09:57):
And there's going to be a trade-off all the time between the pace you can move to create something that's trustworthy and getting the impact that you want to get.
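One of the tests Abrash alludes to can be made concrete with a simple fairness metric. The sketch below computes a demographic parity gap, one common (and deliberately simplistic) measure; the group labels and tolerance are hypothetical assumptions, not part of any framework she describes.

```python
# A minimal sketch of one common bias test (demographic parity gap).
# Group labels and the tolerance threshold are illustrative assumptions.
from collections import defaultdict

def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Usage: flag the system if positive rates diverge beyond a set tolerance.
gap = demographic_parity_gap([1, 1, 0, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
if gap > 0.2:  # tolerance would come from the organization's guiding principles
    print(f"Parity gap {gap:.2f} exceeds tolerance; review before deployment.")
```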
Yeah, I found it interesting that there was a report that Deloitte put out on generative AI in the enterprise that talked about trust in two different senses.
Trust in terms of trusting the accuracy and reliability of the outputs of the AI system, but also the employees trusting that this is not just going to replace them or make

(10:27):
their lives worse.
Isn't there a tension there, though, that the things that might be most efficient for the business might not be the things that lead in the direction that is most either in the interest of employees or in the interest of the most accurate system?
Yeah, well, the first thing is just if you want to have the organization adopt these changes, there actually has to be a willingness to do it.

(10:51):
At the end of the day, you could drag a lot of time out if the employee base ultimately does not really commit to the change.
So you don't really have a choice.
You do need the employee base to trust you.
It's ultimately about being really honest about what you know and don't know.
A lot of clients I talk to have said to me that

(11:12):
this is ultimately about re-skilling and up-skilling.
And if you believe them, it's ultimately about taking the work that someone does today and taking elements of it and saying, you know, this is going to be done by the technology, which
is not a novel concept.
This is already happening in automation and bots and things like that.
but that the individuals themselves are gonna be at a higher end of the value chain, and they're gonna be doing things that they wouldn't otherwise have been able to do.

(11:38):
But the proof is in the pudding.
If at the end of the day someone says that to the entire organization and then goes and lays off everybody, well, then they haven't been truthful.
And the reason we talk about it is that it is going to drag on the pace of change if the employees are not adopting it, because the people who know the use cases, the people that are doing the processes today, you need them to be bought into helping with the solutions.

(12:02):
It's a circular issue.
So I think companies just need to be really honest about what they know and don't know.
And the reality is we may not have all the answers about what the benefits will be and what some employees' lives will be like after.
But it will make their job a lot easier and better.
And ultimately, we'll see if we can reskill or upskill them.
And that's why it's so important.

(12:23):
Otherwise, you will not have an adoption of change.
How should a company, though, think about where to draw the lines?
What are the activities that it makes sense to try and outsource to the AI, versus the things where you want to ensure that there's still human involvement?
Yeah, I'll say I have a strong bias here that this is about making the human better.

(12:46):
This is not about replacing the human at the end of the day.
I know there's all sorts of videos going back decades about what AI could do, and the AI will build the AI, and then we won't have a need for humans.
I think the reality is, as long as we all live on this earth, human beings, on one hand,need meaning and purpose.
And jobs create meaning and purpose for a human being.

(13:08):
And two, they bring things that ultimately the technology will attempt to do, but I don't believe it will ultimately ever get to the level of a human being: the creativity, the emotional quotient.
So when the technology is being built, it should ultimately be thought of as how is the technology enhancing what a human can do, as opposed to how is the human enhancing the

(13:29):
technology.
And so it's really about driving it to make them better at what they do.
We use the word co-pilot.
The technology is the co-pilot, but I'm ultimately flying the plane.
I've got the years of experience, the expertise.
It creates challenges down the road relative to grooming a workforce to be able to fly a plane and have a co-pilot.

(13:51):
Because sometimes you need to be the co-pilot to be the pilot.
We could talk about that too.
But ultimately, it's got to be something that makes the human perform at a higher level.
And it ultimately takes away things where the human is not really adding value.
It's just not an area where the human being can add value.
You mentioned earlier the position of boards.

(14:14):
I'm curious what you're hearing from boards that you talk to right now about trustworthy AI in particular.
Yeah, this is an area where I would say boards are, again, at different levels of maturity, depending on the board that they sit on.
Most boards, if I take maybe, again, the tech companies or the people that are more sophisticated in this space, they're in the place of trying to understand

(14:40):
what does it mean to actually have a board that understands generative AI?
Is the CEO and the management team doing the things that they should be doing?
They're more focused on the benefits and implications of generative AI and getting themselves rallied around that.
The way I see trustworthy AI come up, they don't necessarily use these words, but a major part of their responsibility is around the reputation, the brand of the organization that

(15:09):
they're governing.
And so the focus they'll have is, are there things that we need to be doing, are there things that could potentially be happening that could inhibit our brand?
Now, if they're regulated, back to that earlier point, that's a core responsibility of the board.
But without that, right now, when I hear it come up in a more overt way, it's really around the brand responsibility, the risk appetite of the company.

(15:33):
Is this driving more risk?
The term trustworthy AI is not used as much in boardrooms, but in essence, it's getting to the same topics.
How are boards, though, going to get the expertise that they need to deal with these kinds of issues, especially if it's not a tech company or a very technical one?

(15:53):
It's an interesting thing.

(16:14):
A few years ago, people, when they were talking about generative AI broadly, not just in the boardroom, they would talk about the need to actually develop deep technologists,

(16:39):
people who had really, really deep understanding of generative AI, technology, and data.
And now if you fast forward to today, the reality is there's much less of a focus on that.
And it's really around the best athlete approach.
And what we want are people that have domain expertise, that know enough about the technology to know the implications.

(16:59):
And I would say you see the same thing in the boardroom.
The more sophisticated companies may have technologists in their boardroom, but it's not necessarily to create a 12- or 8-person technology board.
It's to have some people in the room that may be super deep.
For the others, they're focused on fluency and creating enough of an understanding.

(17:22):
but without losing the domain expertise.
What makes them effective in the boardroom is their skills around governance, their skills around running businesses, and then ultimately the industry and the company they're governing, knowing it so deeply that they can now question and probe those things.
The other thing I'm seeing, and I'm a big supporter of, is sort of on-demand learning, making sure that when in the life cycle it makes sense, they bring a learning environment

(17:51):
into the boardroom.
They're doing that, whether it's bringing in experts, law firms, consulting firms, technology companies to bring an outside objective voice, someone other than the management team sort of articulating, this is what's going on in the outside world, or finding other sorts of educational things they need to do.
So I see a mosaic in the boardroom.

(18:13):
It's not eight or 12 technologists.
It's people that really understand the role of governance, business, and the clients the companies are working with,
but that are sufficiently steeped in this technology that they can ask the right questions.
With generative AI though, it's interesting in that on the one hand, this is very deep, very sophisticated, very complicated frontier technology.

(18:37):
On the other hand, there's this magical experience that you have, that you get the interface and you talk to it in natural language and so forth, which can to some extent create the illusion that it's simpler than it is, or that it really is a person on the other end.
Is having a broad understanding of what's going on beneath the hood

(18:58):
more important in this environment, or less so than with the previous waves of AI and tech development?
Definitely more important.
It doesn't, again, it doesn't mean it's just a gen AI board, but you do need to understand, per the example I gave you earlier, that this is not a six or nine or 12 month sort of technology implementation and then it's off to the races, or some standardized

(19:22):
off-the-shelf package that you're using.
This is ultimately something that's a breathing technology.
It could go off.
It could have hallucinations.
And you ultimately need to understand the risks of that from a broad board perspective.
And I would say the thing that probably makes this, in my opinion, much more board-worthy is there's immense opportunities, but there's also immense risks to business models,

(19:47):
more than other technologies have likely posed, where parts of a business may be disintermediated, or ultimately a new entrant can come in because they can access things like generative AI and create a different type of workforce.
So the role of the board ultimately needs to be about the long-term value of an organization.

(20:07):
And so they need this deeper understanding to really, again, objectively challenge:
What's actually happening with our competitive set?
Be able to sense in real time.
Are there things that are going to change what we do?
We made widgets yesterday.
Are we still able to make widgets in a post-gen AI world?
And really ask those questions.

(20:28):
So to your point, they do need to be deeper than just implementing a SaaS technology for payroll.
I mean, a lot of this is at the heart, the core of the business.
You alluded earlier to the possibility that there would be a big foot fault or some big foot faults around AI.

(20:50):
Do companies need to think any differently than they already are about risk management to anticipate and mitigate those challenges?
Or is it something that if companies have a good risk program, then they're in the right position?
I don't know of any company that would say that their risk program doesn't need to be improved.

(21:10):
I've seen a lot.
There are some that are, I'll call it, black belt, and they're really good.
But for all of them, this changes the risk profile of a company.
It changes the type of workforce that the company is potentially going to have.
It's going to require different sorts of compliance and regulatory exposures, potentially different lawsuits

(21:33):
coming from people coming in and saying that some technology, particularly in the early years, harmed them in a certain way.
So all those things are things that are gonna suggest there's new emerging known and unknown risks that probably haven't been vetted in the past.
And again, you need people with complete expertise in risk to then know enough about what this technology is doing, and to also be sensing.

(21:58):
That's a big part of, I'll say, a change in how the board needs to be thinking about its role.
I think in the past, this idea of being so close to changes in the market wasn't as relevant.
If you're going to be managing and mitigating risk, you can't be risk averse in this environment.
We don't have a choice.
You won't be competitive anymore.
But at the same time, you need to be thinking about how do I prepare for a risk when it does happen?

(22:24):
You know, are there insurance companies, are there captives?
Like, what am I going to do if at the end of the day I get sued by my workforce because of how they get impacted by this? Unions?
All those things are things they need to be thinking about, which is a broad topic that we could talk about today if you'd like, Kevin.
I think one of the things I'm not seeing happen as much is really shaping external stakeholders.

(22:47):
You know, there's this pressure right now to have
generative AI deliver these immediate results and investors are looking for it.
You have people that are in and out of these companies' stocks really quickly, and if they don't see it, they beat them senseless.
But do the investors themselves understand what it means to have trustworthy AI, and why and how is a company making those choices?

(23:10):
That's just an example of there are a lot of constituents that need to be educated by a company around what it's doing relative to how it's embedding gen AI and the choices it's
making.
How do companies do that effectively?
Well, a couple things.
I mean, one is just starting by, I'll say, old-fashioned, coming up with a stakeholder matrix.

(23:32):
Like who are the key constituents that they're going to need to deal with?
And then ultimately deciding, is this something that they do as an individual or are theybetter served doing it as a cohort?
So if it's an industry, as an example, take life sciences.
Life sciences is a regulated business.
Do they come together and talk about how they think they're applying the

(23:54):
regulatory requirements already to what they're doing, you know, should they be subject to even more requirements?
How do they take investors who invest speculatively across this industry?
Do they use trade groups to talk through this?
Do they, you know, hire law firms to go out and engage in lobbying efforts to basically start to bring them along the way?

(24:17):
But until you do that, you've got a group of people.
And I'm using investors as an example.
You could use credit agencies, other shareholders, the employee group.
The list will go on and on.
They need to be thinking about what they want to say to them.
But it all starts with actually, I don't think we've talked about this.
You actually need an integrated strategy that says how gen AI is going to be used in the organization, not a gen AI strategy.

(24:42):
That's probably the one, I'll say, major thing I hear people talk about: they need a gen AI strategy.
The reality is you have a strategy.
How is gen AI going to augment what you do in the execution of your strategy?
So that sounds like it means that it'll vary a lot by organization.
It should.
I mean, depending on what the opportunities are.

(25:03):
In some organizations, this could merely be about back office productivity gains and doing things differently, leveraging gen AI.
In other organizations, it could be, take life sciences.
How a pharmaceutical drug is now developed is completely different.
The whole R&D experience is completely different because of how generative AI is being used.

(25:26):
So once it starts impacting both the, I'll say, core business as well as the back office, it has a much more pervasive effect across the organization.
What's your sense about the way the regulatory environment is developing around the world?
Well, I mean, outside the US, we clearly have a less litigious environment.

(25:46):
So there's a lot of reliance on regulation and that already exists.
We've seen it in almost every, every topic you could speak of.
Here in the US, you know, it is interesting seeing now some states start to focus on it.
Many would say, as I mentioned earlier, that many industries are already regulated, and a lot of the elements that are concerns around generative AI fall under

(26:10):
that.
And then you're seeing all of these executive actions.
It seems to me that there's a bit of swirl.
And I think ultimately the test of this will be, does this make the US not competitive anymore?
And I think if that concern gets to a certain point, that we're now going to lose pace with some parts of the globe, that's when you're going to likely see pressure.

(26:36):
But it's like nothing else.
It's going to have a lot of political debates going on.
Like the pandemic, the states are going to want to have their say.
And you're already seeing this with California.
And you've seen this with other rules.
There's no clear answer for us.
But I think the thing that everybody's going to be focused on, the element of concern, is

(26:56):
being competitive.
And if that all of a sudden puts us at a significant disadvantage, I think that will put a lot of pressure on how far we can hold our companies hostage.
I'm curious about how you and your organization address the challenges of generative AI.
How do you ensure that your people have the skills they need, and are in position for that use of AI to accentuate people, as you talked about?

(27:27):
Yeah, so one of the first things we did, and it took a while to actually get ourselves comfortable that we could do it at scale, is we really started to create, I'll just say,
areas for our people to play.
And allowing them the ability to essentially go in and ideate and incubate with our own technology, things that we're comfortable with them using that meet our standards, not

(27:53):
necessarily some third party.
and allowing them the ability to start to get comfortable with it and start to create their own ideas about what they can do with it, and create, I'll call it, a clearinghouse of ways to make sure we're not doing things externally that we're not comfortable with.
But just the ability to have a place to play, and starting to signal that that's actually something we want our people to do, was a big part of getting people started,

(28:20):
getting them to be more comfortable.
We're still in the same place everyone else is.
We've had some episodic outcomes where, I would say, a really good use case makes a lot of sense and people are excited by it.
But we haven't seen something that all of a sudden does everything we do so differently across the line.
But for our people, it's just talking about it as a comfort level.

(28:43):
We do benefit from the tenure of our organization being on the younger side.
This is one of the things I think over time is going to change this, I'll just say, trepidation.
I think about if you're in junior high or high school right now, you're leveraging generative AI and technology freely, and you're gonna enter a workforce expecting this.

(29:06):
And so we have a younger workforce, so we don't have as much, I'll just say, resistance from that part of the workforce.
I think broadly, all companies will start to benefit from this push; it's going to be a natural push as people join organizations.
Almost like, why aren't you using this?
I used it every day in my college life.

(29:27):
You should be using it here.
So we're going to have the benefit of a bit of a passage of time, where we're going to have natives coming into all companies.
Presumably there are some things that people use consulting firms and other professional services firms for that will no longer need to be outsourced, or they won't be

(29:49):
willing to pay the same amount, given what the tools provide with generative AI.
So how do you think about the potential impacts on your business?
Yeah, well, we definitely see some things where, when we do something more efficiently, will someone say, well, is it the same pricing that you had before?
So we are looking at that.
We're also looking at net new opportunities.

(30:10):
I mean, the area that I would say, forget even just our firm, the professional services firms, the consultancy firms across the board, the big and the small, have really spent a lot of time in the last decade to 15 years helping companies grapple with technology and how to implement and run technologies.

(30:31):
This is going to have the same, I'll say, halo effect for the companies that need that assistance.
They'll look to places like Deloitte, they'll look to some hyperscalers, because at some point that expertise is better served outside your organization.
But what the person outside doesn't know, that you know better than anybody,

(30:53):
is your own organization.
So it's this marriage of both, but I would say we've seen this across the board in almost every wave of technology, the last wave being around the cloud, which has had a pretty
long run.
All of that came down to companies coming to firms like Deloitte and others to get that expertise, get that assistance, and even run some of these technologies for them, as

(31:19):
opposed to paying their own people to do that.
It's going to go in both directions, I think, for us.
But at the end of the day, our role is really about going where the market goes.
And the market is likely helping clients think about how this technology gets deployed.
So last thing, looking to the next few years, what's the biggest development that someone should look to in an organization that's going to impact development of generative AI, in

(31:47):
particular with regard to these concerns about trustworthy AI?
I think inside an organization, the work that companies are doing right now around, I'll say, really focusing on the enablement across the board and building in at the front end
what it means to be trustworthy AI, that's going to be the biggest unlock.

(32:10):
So for some companies, this is the first time they're hearing the words trustworthy AI.
Other organizations are hearing it and starting to think about what does this mean to build it in upfront?
And without regulation, or without a foot fault of any sort, it's going to be this pressure of, can I do something and meet the market?
And in the next two or three years, I think it's going to be ultimately their ability to really pull together that framework that I mentioned, and the ability to start locking in

(32:41):
real examples of what being trustworthy looks like.
So when they have a use case that they are actually ready to pilot and go live into their live system, have they done everything that their framework would suggest, that they feel really good about relative to the, you know, transparency, the safety, the biases, all those types of

(33:02):
things.
I think with those building upon each other, people will start to realize what it means to be trustworthy.
The worries I have are the foot faults, the regulatory environment pushing, and even the employees really resisting it.
All those things are gonna cause people to drag their feet.
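The go/no-go check she describes, piloting only once every element of the framework is satisfied, can be pictured as a simple gate. The sketch below is a hypothetical illustration; the check names loosely echo the dimensions mentioned above (transparency, safety, bias) and are not Deloitte's actual criteria.

```python
# A hypothetical pre-launch gate for a generative AI use case. The check
# names are illustrative assumptions, not any firm's actual framework.
REQUIRED_CHECKS = ["transparency", "safety", "bias", "privacy", "security"]

def ready_to_go_live(review: dict[str, bool]) -> bool:
    """Return True only if every required framework check has passed."""
    missing = [check for check in REQUIRED_CHECKS if not review.get(check, False)]
    if missing:
        print(f"Blocked: unmet checks -> {', '.join(missing)}")
        return False
    return True

# Usage: a use case with an open bias finding should not ship.
review = {"transparency": True, "safety": True, "bias": False,
          "privacy": True, "security": True}
assert not ready_to_go_live(review)
```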

(33:23):
That's interesting.
You focused really on internal expertise in the organization, as opposed to some new development in technology.
Well, you could. You mentioned inside.
Outside, I think ultimately what we're starting to see a trend of is companies, just like with almost any other business challenge, saying, do I need to do this all myself?

(33:45):
And so they're starting to look to third-party hyperscalers and to consulting firms and saying, is this something you could do for me?
So if you look at the big hyperscalers now, the idea of having trustworthy AI is part of their brand.
And if they can get some big unlocks in use cases, in focusing on either industry-specific things or, I'll say, pervasive generic areas of an organization, and the organization says,

(34:12):
wait, I can go to somebody who has a built-out use case.
It's more cost-effective, and they're taking responsibility for the trustworthiness of the data and all the outcomes.
That to me is the likely big unlock when it comes to trustworthy AI.
Other than your bread and butter, for the average company it's going to be hard to invest in all it needs to do to get that outcome, which is why I think the consulting firms

(34:39):
and the technology firms, the lawyers, all these folks are going to be really well positioned to help companies think about how they make that next big push.
Lara, thanks so much for having the conversation with me.
All right, thank you, Kevin.
Really appreciated you having me.
So thank you.