Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
You
Welcome, really great to have you guys on the podcast.
Thanks, Kevin.
Great to be here.
Thank you.
Thank you so much.
So Meta, tell us first, what are the kinds of projects that companies usually hireMcKinsey for in the AI?
(00:21):
So maybe I'll anchor more on the AI trust topics and then I'll pass it on to Michael togive a kind of big picture also view of other AI topics that we work on.
Within AI trust, I think we've had across the spectrum different kinds of questions thatcome up.
And I think it's also evolved over time from earlier versions of when the executive ordercame out.
(00:46):
to kind of now when the questions are much more around tactical solutions, evaluations,and actually looking at third party tooling solutions to implement it in a more
operational way.
So the range is from, hey, help us think through our overarching AI strategy, thinkingthrough what use cases are the highest value, how do I measure the business value of these
(01:10):
use cases, how do I run the pilots, and how do I measure the impact of these pilots?
and how do I know when to scale these solutions?
And then on the AI trust side, there are also conversations around, okay, what should bethe governance model around actually identifying these risks?
What does, know, in the presence of evolving AI regulations, what does a high risk usecase mean versus low risk in my industry context, in my application context?
(01:41):
And then also thinking about training, education,
within the executive leaders and also designing it for specific cohorts, for instance, thelegal enterprise risk, InfoSec, product leaders, and kind of designing their education
curriculum as well.
So I would say that's kind of like a broad spectrum of questions that we've answered overthe past many years on the topic.
(02:10):
I if I just wanted to add, right, I mean, as Maeda was saying, you know, we've embeddedthis idea of AI trust, responsibility, accountability, and all the work that we do.
But, I've been around for a while, right?
I mean, oftentimes McKinsey is considered a strategy shop, right?
But actually for decades, it's been a minority of the work that we do.
it's, I barely recognized the firm that I joined, right?
(02:30):
I mean, so when I joined, had PhD in computer science.
I was sort of a weird person at the time.
We do do, as Maeda says, AI strategy work.
Right?
What should I do for, how do I deploy AI within my enterprise?
And of course, you know, we also serve investors who think of AI as an investment theme.
That's the kind of work we've done literally for almost a hundred years now.
(02:51):
But now, like, you know, like I said, it was a bit of a weird person when I arrived.
We have all these people who are hands on keyboard.
We're never be a, you know, a system integrator, right?
But not only do we provide advice, we're oftentimes
you know, we describe ourselves as an impact partner, a shoulder to shoulder with ourclients developing not only the technical applications and embedding AI trust and
(03:13):
responsibility, but building businesses with them.
And, know, it might be staffed with some McKinsey people.
And then over time, you know, it turns out to be a completely, you know, a client drivenbusiness.
so that the range of things we look at is incredibly wide and the ability to embed AItrust at the beginning and throughout the process is something that
(03:33):
our clients really demand.
And Michael, as you said, you've been in this field for a while, like me.
We've seen many waves of technology development and hype cycles.
What's different about this AI moment today?
Yeah, we've known each other for a while, but let's not go on about how old we both are.
(03:55):
Look, it is a remarkable point in time, I think, where we are now.
You know, there's a somewhat famous paper that was published that said GPTs are GPTs.
It's a funny pun, right?
But GPTs and the second part being general purpose technologies.
And so one of the things that's remarkable, of course, the technology is remarkable.
All of us had that experience when we first used chat GPTs.
(04:16):
wait, wow, look at all these things.
And we continue to have those sorts of experiences.
But it's beyond those experiences.
It's not just the breadth, but it is the breadth.
There's no industry where this stuff doesn't make a difference.
There's no business function where you can't use AI to increase your performance.
There's no role where it doesn't.
But I think one of the interesting things, if I really get down to the experience or maybeeven the political economy of it,
(04:43):
Here at Afford, lot of the technologies that affected people's jobs on a day-to-day basisprimarily affected frontline workers, oftentimes people who were paid the least in an
organization, oftentimes the roles that had the lowest requirements for education.
labor economists call it low skill.
I think that's a terrible term, but what they mean is relatively fewer years in formalschooling.
(05:09):
But when we did some research, we published this paper, The Economic Potential ofGenerative AI, and we looked at the roles in which the generative AI technologies
particularly had the most differential impact.
And it was almost the exact reverse of what the labor economists called skill biastechnological change.
It was the people who have graduate degrees.
It was the people you paid most in the organization.
(05:32):
And so suddenly the people in places of power and privilege say, wait, this affects metoo?
And so suddenly.
that raises the salience of these technologies and it has, right?
Because now the entire organization, we talk about transformational technological change.
This means transforming not just the frontline work, but the entire organization.
(05:52):
And as a result, I think it is really a transformative moment in terms of the potentialimpact these technologies can have, not just because the technology is advancing, it
really changes the way we think about an organization if we operate it and the way that,you
we all will do our work.
Meta, does that transformative aspect of generative AI impact on how firms think abouttrust and responsibility?
(06:19):
Yeah, so I think there I've had multiple conversations with enterprises where they dorecognize that to Michael's point, there is tremendous value in integrating AI into their
solutions, right?
And they also recognize that trust is a major barrier to adoption.
(06:41):
So for instance, we recently did a survey with about
you know, over 760 organizations in about 38 countries on their responsible AI practices.
And we did hear from them about how there is, you know, kind of clear gaps in theircurrent processes of audits or third party controls, or how trust will be the kind of big
(07:07):
thing that will prevent them from adopting AI solutions.
And...
On the flip side of it, there is also data to show that the AI assurance market or themarkets for solutions for these problems is consistently growing.
So one of the nuances of that fact is that it kind of varies based on industries.
(07:30):
So when we ask the question to, for instance, the technology companies or pharmaceuticalcompanies in terms of what their spend was going to be,
you know, in 2025 on operationalizing AI solutions, the range was around 10 to 13 million.
(07:50):
And the same question if you ask for kind of retail and some of the other industries, it'slow around 5 million.
But the, and that is the average spend.
So the question is also which industries will focus much more on trust as a factor.
And the other thing that I wanted to highlight within this space is
Also, how does it disrupt the startup ecosystem in terms of developing solutions?
(08:15):
So one of the big things that has come up in conversations is the need fordomain-specific, industry-specific, and use case-specific standardized third-party
evaluations.
In the absence of regulations, or when organizations are thinking about eitherself-regulations or third-party reliance, what solutions exist in the space?
(08:37):
And then the second question is,
how would evaluations vary as capabilities evolve?
So evaluations for text space or chat models, there are some that exist in terms ofbenchmarks, but what would that mean when we start talking about multimodal or multi-agent
systems?
And then the third piece is also thinking about evaluations in the enterprise context at abroad spectrum.
(08:59):
So thinking not just about safety, but thinking about safety, accuracy, correctness, ordifferent dimensions together.
And maybe at some...
point, even in competition with each other.
And so how would enterprises make selection when you're picking models across thesemultiple dimensions and what is going to be the trade off for you?
(09:19):
So I think those are some of the big kind of open questions that enterprises think aboutwhen trying to think about the value that Michael highlighted and grounding it on the
reality of trust.
You mentioned that the survey showed there was a divergence in terms of what firms areinvesting in AI deployment.
(09:40):
Does the investment or the focus on governance or trust track that or is there adivergence in terms of which firms are more concerned about the need to deal with some of
those responsible AI issues?
So just to understand the question, is it more around what kind of industries areprioritizing investments?
(10:04):
Yeah, can you characterize what kinds of firms are more focused on the AI trust issues?
So I think there is a good correlation between enterprises that are AI forward or kind ofhave been on the frontline and also enterprises that are heavily investing in responsible
(10:24):
AI, which is probably an obvious conclusion, but we do see that as an outcome from thesurvey.
I think the second thing, which is actually very interesting to me is also geographicdistinctions in terms of how different parts of the world
are investing in AI governance.
And one of the things that we found that India and US are at the forefront of our AImaturity or AI trust maturity, and their scores were about 23 % and 19 % kind of
(10:56):
respectively higher than the average scores.
So there is also merit in understanding why these geographies forward.
And I can wear my, I'm Indian, so can wear like kind of my India hat.
And one of the things that was top of mind for me when we saw those outcomes is,
India has a lot of policy think tanks, industry consortiums that have been consistentlypublishing responsible AI principles, AI procurement guidelines for public sector.
(11:25):
And the narrative of AI adoption has been linked with the narrative of responsible AI.
And I think understanding how different governments are taking that approach will becomeas important.
as understanding how different industries are taking the approach to AI trust as well.
(11:45):
just going to stop a sec.
We're getting a little bit of lag from you.
I mean, it'll probably be OK when we get the recording, because it's coming down locally.
But it's cutting out occasionally.
Do you want me to repeat that answer?
I'm happy to do it.
I the answer was totally fine.
We're just towards the end and I'm pretty sure when we get the local recording will bejust fine.
But yeah, I don't know if there's any, it may just be something with the connection.
(12:08):
yeah, I'll let you know if it's a problem.
You also froze for me for a second.
So I don't know.
Maybe it's my internet.
Yeah, okay.
It's about, mean, yeah.
on?
Is your VPN on?
I think so, yes.
Could turn it off.
We'll probably lose the connection, though.
So I don't know, Kevin, if we should do that.
(12:30):
OK.
I'll just let you know if there's anyone that I think is a problem.
That was just at the end.
And again, I'm pretty sure when we get the local recordings, it'll be fine and he canclean it up.
So yeah, I'll let you know.
I definitely wouldn't need to record it.
OK, so I'll come back and answer Michael's question.
No, was plenty.
That was plenty.
(12:50):
No, don't worry about it.
Totally fine.
OK.
Michael, what's your sense for the enterprises you're dealing with in terms of thematurity stage they're at with adoption?
Are they still trying to find those killer use cases or are they now at the next stagebeyond identifying?
(13:11):
Yeah, mean, a few things.
If we would just look overall at AI and generative AI adoption in addition to AI trust andresponsible AI as Mita was talking about.
Look, for organization of a certain size, almost everyone is doing something.
I mean, we've been surveying thousands of executives around the world in terms of how muchthey're using AI and when they're using AI.
(13:36):
And in the discussions with executives, see that it is...
it's very difficult to find an organization that isn't doing something.
But again, one of those lessons from age is oftentimes the investment in technologyprecedes capturing real value from it.
(13:56):
How many CFOs are talking about how many cents per share they've added when they're doinga quarterly earnings report?
Very few at this point as a result of generative AI.
But many organizations are doing something.
So I think everyone is at the stage of recognizing there's value here.
And in fact, real value is being created oftentimes within a business function.
(14:16):
So whether or not it's customer service, for instance.
If you look at every company is a software company now.
we're seeing software languages.
You talk about large language models.
Now you can just ask it to write some code.
And it really does.
recently heard this term vibe coding, right?
(14:38):
People who aren't coders at all are just, you know, asking it to write code for you andhaving it, you know, do different things, right?
So that technology is really moving forward, right?
And so, and again, in marketing, being able to create either brand marketing orpersonalized marketing, these things really are, you know, going into production.
But with regard to scale, we're very, very early.
(14:58):
And again, you know, there's the technology.
There's piloting or experiments, and then there's really capturing value.
And there's a lot of work that has to be done.
There are a lot of time and effort that has to go into that.
so I think most organizations of a certain size, again, we differentially look at thelargest organizations or the most complex organizations.
(15:19):
They're in that process of saying, look, what works and then how do we achieve scale?
And achieving scale, again, is, yes, there's a lot of fun technology stuff about achievingscale.
But then what do you have to do organizationally?
How do you change the incentives?
How do you train and re-skill a lot of those?
And how do you change the process?
Again, in some of our surveying, we found the organizations that are refashioning theirprocesses, the ones that are really rewiring their processes are the ones that capture the
(15:45):
most value.
But that, again, in some ways is quite disruptive to your organization to do that.
And so I think that's where we're oftentimes seeing that.
And it's interesting, we do see these differences by sector or industry or geography asMinda was talking about, but we also within an industry and within a geography, we often
find big differences with regard to the leaders versus laggards.
(16:08):
And so oftentimes you have to look at the individual enterprise level to say, how far areyou, even though we could say healthcare is here and industrial is here or what have you.
And those differences are real.
But then oftentimes we see lots of individual variation about the sophistication andmaturity of an organization in identifying the risks that they need to deal with, the
biases they might not do, or the degree of trust that their employees, their businesspartners, and their customers have in these systems that they're deploying.
(16:39):
Yeah, so Meta, given that uncertainty and the flux that's happening, how do you work withthe clients to see the value in making those investments specifically in the responsible
AI or AI governance activity?
Yeah, so I think, you know, the good thing about working in AI trust is that there are alarge variety of organizations, multi-stakeholder organizations that have come up with
(17:06):
frameworks that exist as a foundation layer on which you can base how you work withorganizations.
And, you know, some examples of that would be, for instance, NIST RMF or ISO, and theseprovide like kind of guidebooks as a starting point.
When we work with organizations, I think we look at it in multiple tiers of how theseframeworks need to be adapted.
(17:31):
And the frameworks need to be adapted at the industry level, at the organization level,and then at the application or the use case level.
So at the industry level, there are some industries that have high risk exposure, right?
Like if you think about the security incidents or if you look at the AI incident databaseor GDPR fines.
you will see that some of the industries and the financial services are just more highlyrepresented than other industries.
(17:57):
And so you have to kind of customize your solution to that industry's risk exposure aswell.
I think the second dimension is where is the organization at in terms of its ownreadiness?
And we measure it in different ways, right?
We measure it in the point of view of their overall RAI approach.
Do they have an understanding of which risks matter to them as an organization?
(18:20):
Do they have an AI governance body that is actually responsible for adapting it toregulations, adapting it to evolving AI capabilities?
We look at the second dimension, which is the processes that they have.
Do people know what their roles are across the entire AI development lifecycle?
Or is it kind of just one team that is working on innovating and deploying?
(18:43):
And then I think the third piece is around the...
the tooling solutions and the technology that exists.
So you have to think about how are you evaluating your models?
How frequently you're evaluating your models?
Which tools do you have in place?
And are those tools reliable?
Because a lot of tools right now exist are probably in startup phase.
(19:05):
And you have to think about something which is robust and reliable in the longer run.
And then the third piece is change management and training.
So those are the four ways that you think about an organizational readiness.
on the dimension of responsible AI.
And then I think we kind of start looking at the use cases.
you know, for instance, are you deploying a generative AI tool that is summarizing yourmeeting notes?
(19:30):
And what would the assessment of risk look like in that dimension versus if the same toolis helping your customers make investment decisions on an app, on a bank?
And the kind of risk associated with each is kind of very different levels.
And the controls you need for those two are also very different.
So we then kind of start adapting it to the use cases that the organization has identifiedthat are of high value in terms of assessing the risk parameters.
(19:57):
And for each of those risks, what kind of controls do they already have in place and whatdo they not have in place that we can supplement with?
So I think it's a longer drawn process.
And along that process, it's also about bringing together different stakeholders withinorganizations.
you do need to have the legal team and the enterprise risk or the procurement team or theproduct team and all of them in the same room.
(20:20):
And a lot of it is being able to translate to what different people are saying.
But if there's two companies that are in broadly the same industry and doing broadly thesame use case, do you wind up with the same structure and process for AI trust or is it
more granular to the particular organization?
(20:42):
I think it is very much grounded on their current state and their aspiration for theirfuture state.
So one example would be that, you know, we have worked with organizations where thereisn't an understanding of what does the risk mean at the level of their use case that
(21:04):
they're deploying.
And then we have also worked with organizations that have a highly mature framework.
of architectural board, architectural review board, or having, you know, the kind of thehiring a product general counsel to actually write product policies that are adapted to EU
AI Act.
And you kind of see a whole spectrum of maturity of trust as well.
(21:29):
And so you kind of have to anchor and meet them where they're at.
And that's how it varies, even if they're in the same industry, and even if they're kindof broadly deploying the same use cases.
And Michael, you talked about, McKinsey now has this range of capabilities, not just doingstrategy, but are there still distinctive ways in the AI area that your firm would engage
(21:53):
with companies doing these kinds of projects?
Yeah, I mean, you know, I've been talking about McKinsey's history, right?
I mean, people again think of strategy firm.
At one point, we started doing operations work.
And you know, sometimes we're like, what are you doing on shop floors?
That's not us, right?
And then, but that actually became something that we were able to do.
(22:13):
We can go from the shop floor to the boardroom and make that connection.
And I think that's one of the things that, again, it's not just that we have these peoplewho have these incredible technical skills.
but it connects it with a corporate strategy.
And from an AI trust standpoint, right?
It's easy to think about this as a risk management question, i.e.
avoiding the bad stuff, right?
(22:34):
I mean, there are all kinds of risks you can identify, like we need to guard against it,all those sorts of things.
But if you frame it as AI trust, then as Maita was talking about, that's part of yourvalue proposition to your employees, but it's part of your value proposition to your
customers too.
And so there's an upside to doing this stuff well.
There is an upside to avoiding bad things happening, there's an upside to avoiding cyberrisk, there's an upside to avoiding lawsuits because of bias, all those sorts of things.
(23:04):
But there's also real upside in terms of if your customers have trust in you, if yourbusiness partners have trust in you, if your employees have trust in you, then you can get
more done.
Then you can actually create value for them and that value, part of that value will accrueto your company.
I think that's part of the approach.
Again, people often talk about from the beginning or AI trust or responsibility embeddedin everything.
(23:31):
Yes, but for what purpose?
And being able to understand what the goal of the business is and the metrics and tyingthose both on the upside and as well as avoided downside, I think are some of the things
that distinguish what we do from someone who's just, number one, just executing on thetechnical stuff or number two,
(23:51):
only looking at this as a way to avoid the negative impacts.
We view this as a way, holistically, as driving the business forward.
Yeah, this goes back, I guess, to what I was asking you, Metta, before about incentives.
Certainly everyone, you know, it's appealing to say this investment in AI trust orgovernance or responsibility will actually give you some upsides, some benefit.
(24:17):
Is it possible, though, to measure that?
Can we actually demonstrate that or can you actually demonstrate that to firms?
Because obviously that's the ideal.
Michael, you want to go first?
I'm curious of both of your takes on this.
Yeah, I mean, again, there's as for many things, the question about attribution, right?
(24:37):
You look at what your EPS was and what all are the things that went into it.
certainly on the negative side, it's very obvious, right?
When negative things happen, there's a cyber breach or whatever, and suddenly your revenuegoes down or your market cap goes down because the reputational hits and all those things
are definitely true.
But you can also do other things to measure.
(25:00):
There are these correlations between how much do you trust this brand?
Did you use the following application?
As I said, for instance, if you're getting some sort of advice, financial advice, oryou're getting a next product to buy recommendation, how helpful was that to you?
Those sorts of things, we can find correlations with those as well to revenue, forinstance.
(25:24):
And so again, it is hard to do attribution.
But we do see these types of things really making a difference.
Yeah, I think the one thing I would add is that organizations do think about quote unquotethe ROI of REI, right?
(25:45):
Like what is my return?
I you could look at it in two ways.
You could look at it to Michael's point in terms of what are the kind of GDPR fines or theEU AI Act regulatory fines or the incidents that you're avoiding that is actually helping
benefit your brand.
and also helping you from financial loss point of view.
(26:07):
I think the second way you could look at it in terms of actually evaluating is when youconduct evaluations.
So if you do actually do red teaming of models or if you actually make selections ofmodels based on benchmarks that exist, whether it's MMLU or it's the safety benchmarks, AI
illuminate, et cetera.
(26:28):
How do you, you know, that's a measurable way to look at what is the problem that you'regetting into.
And from a data layer perspective, you know, there are solutions that exist.
For instance, the data nutrition label that tells you what kind of data is going into themodel.
So I think there are measurable ways to look at the input and there are measurable ways tolook at the output from the downside perspective.
(26:51):
But there is also a measurable way to look at it from a value perspective, but that isself-decided in terms of what is the value that you're actually aspiring to achieve from,
which is your own organizational risk appetite as well.
Yeah, so we're talking about, you know, to being able to explain and understand thebenefits of these kinds of investments.
I want to ask you, Metta, about something that you've written about, which is explainingthe impact and the causes of AI decisions, the whole area of explainability and
(27:21):
interpretability.
Why is that important for trust in AI?
And where are we in terms of the capability, especially with
deep learning systems and generative AI to get a good understanding of why certain outputsare resulting.
(27:42):
Yeah, so I think it's pretty much evolving.
I would, you even before answering that question, make a distinction between, and we dotalk about it in terms of what is explainability, what is interpretability, and what is
transparency.
And I think sometimes people conflate all of them.
So explainability is kind of thinking about it in terms of when a model is sort of ablock.
(28:07):
a black box and you don't understand what kind of input leads to what kind of output.
And can you explain that in a human understandable way to the end user, right?
Interpretability is actually understanding the internal workings of the model andembedding that aspect in the development of models.
So you're not thinking about it once the model is deployed, but it's actually embedded intraining of the model.
(28:30):
And then when you're thinking about it from the prospect of transparency, it's kind ofmuch more broader, which is
once you're deploying the model, how are you informing the users where the model is usedand what are the harms associated with the model?
And then even broader approach to transparency, which I have been reading more recently ina lot of journals, is transparency in the format of disclosing the training data sets or
(28:56):
weights or actually the idea of openness in the model ecosystem.
And I think there is that distinction in the different contexts.
I think the explainability aspect is very important just in the context of one, winningtrust of not only your consumers, but it is also important from a regulatory disclosure
(29:19):
perspective.
And there are a lot of tools that exist.
We talk about it from the point of view of how do you think about explainability in ahuman-centric way?
And so how do you kind of think about explainability in terms of the end user?
If there is an end user,
that is a government employee or a regulator, you have to be able to understand the modelso that it displays that you are complying with the regulation.
(29:45):
But if you're thinking about explainability from the point of view of your internalbusiness leader, they need to be able to understand how the model is making decisions and
how it leads to kind of business insights.
And so I think that aspect of human centricity where explainability is
designed to the context of the different stakeholders who interact with the model isprobably not as evolved as it can be.
(30:12):
And I think there's a lot of opportunity and scope over there.
Got it.
Michael, what's the biggest or what are the biggest stumbling blocks that you run intowith clients in these areas?
Everything.
mean, to a certain extent, there's this old saying that history doesn't repeat itself, butit rhymes.
(30:35):
And so I think a lot of the things that we've seen before are true here, too.
And look, if you just start at a funnel of understanding, just being aware of some of thechallenges, you're not going to manage for an AI trust or a responsibility or safety
question if you don't know that it's a problem yet.
(30:56):
And so, you know, for a lot of people just understanding, are things, issues, number one,to be addressed.
then number two, know, Mayda was just talking about all the things you can do withinterpretability, transparency, explainability, not knowing that there, what are the full
range of different ways in which you can address some of those topics, issues, questions.
(31:16):
And by the way, some of those are technical.
You know, there are these tools that we can bring to bear.
But if you talk about hallucination, for instance, as another challenge around a lot ofgenerative AI, again, I'm sure you've talked about this too, right?
But if you view the performance of these things as being like a summer intern or somewherebefore they go to work or whatever, we still have ways to make workers productive even if
(31:47):
they're a new joiner.
And what do we do?
mean, yeah, they're
technologies you can use, but also we have management and we have all these other humanways in which we can manage the way that different outputs are used, again, for safety,
for accuracy, for bias, and for testing.
I mean, it is funny too, because there's a linguistic thing that happens.
(32:10):
mean, for many years we talked about benchmarks, but now around here where I live in SanFrancisco, evals.
And you know that if someone says eval, they actually have a type of sophistication.
And so there are these linguistic markers of someone understanding something more.
And as Minda was talking about, what are the evals that you're using in order to test forsafety?
(32:32):
And so again, not just the awareness of being able to do them.
And then I think of the other thing that, again, there's the list, this is as long as myarm.
But there's a difference.
Minda made reference to it before.
If you say, look, I'm going to use the NIST framework.
Well, great.
But there's a big difference between this PIPA
piece of paper that says NIST framework and me stating that and operationalizing it,right?
(32:56):
The execution, not just whether or not you're executing, which is a big deal, but how wellyou do it is it makes a huge difference, right?
And that is where the learning curve matters, right?
You sometimes people talk about speed as a strategy.
It's not just because you're fast, but if you get there earlier, you start learningearlier.
It takes time to learn how to do this and do it well.
(33:20):
And so again, you you need to be aware, you need to know the types of things you can do,but then you need to learn and then you need to achieve scale because again, you know,
again, as long as the day is, people will say, I had a successful pilot.
Okay, when is the CFO going to talk about it?
Right?
And all of that, all of that practice, all of that discipline with getting from asuccessful pilot.
(33:46):
both in terms of the AI value, but also the AI trust, transparency, interpretability,explainability, all of those things, getting that to scale are some of the things that are
both hard to do.
The flip side of that is when you achieve it, then you'll be running faster.
So anyway, I went on a little bit there, Mehta, you should jump in too.
(34:10):
Yeah, well, let me ask you both one last questions we're going to have to wrap up, whichsort of dovetails from that.
Obviously, things do change really fast in this area and there's a lot of uncertainty outthere.
But let's say we project, say, three years into the future from where we are today.
As of today, we're what, about two years, two plus years in since since ChatGPT.
(34:34):
So let's go about that much more time into the future.
Will we be at a point where there's more clarity and stability and widespread adoption,especially in these areas of AI trust governance and so forth?
Or will we still be having this discussion that it's variable, which companies areadopting and there's still uncertainty about the standards and so forth?
(34:59):
Mehta, you first.
I think my dream would be, I wish I said that there'll be 100 % clarity on entirely whattrust means and everyone would be on the same page.
Maybe I'm not the most optimistic on that dimension, but I think what I'm optimistic aboutis the attention that people will give to trust with evolving capabilities of AI.
(35:26):
I think there are a lot of enterprises that are looking at multimodality.
I had told myself that I will count the number of times I say agentic in the podcast, butI didn't say so many times, but, or deep seek, but I think with the enterprise focus on, I
want to get value from the most evolved AI capabilities.
(35:50):
The focus on governance of it from multiple stakeholders is going to rise a lot.
I think the second change, which I would love to see in
more of public discourse is how do you bring enterprises into the conversation aroundtrust?
So I think there are a lot of multi-stakeholder discussions that I've had the privilege ofparticipating in.
(36:11):
And I think there is a really good representation of large technology companies, policythink tanks, and governance bodies.
I think there is room or opportunity for more enterprise, know, CLOs, CROs, CIOs.
and creating space for them to understand their trust concerns and voice theirinnovations, I'm optimistic that that will happen more.
(36:36):
Mike Lee gets the last word.
Yeah, I mean, look, if I look at trend lines, right, you know, they're all in some ways,quote unquote, moving in the right directions, right, the technology to, you know, to to
to achieve responsible AI continues to move forward.
The awareness as we've been doing surveys and talking with the corporation and corporateleaders continues to increase and the actions being taken in order to implement AI trust
(37:01):
and transparency and responsibility continue to increase.
And yet.
It takes a lot of effort and time.
And so do I think all these problems will be solved in two or three years?
I do not.
I am very hopeful that more and more leaders and organizations will be taking concretesteps to make things better and to really advance forward.
(37:22):
But I think there'll still be work to be done in two to three years.
All right, well maybe we'll have to have you back on and we'll see if we take a bet, whowins the bet.
But I would be on your side as well.
Thank you both so much for this conversation.
Thank you, Kevin.
Thank you so much.
This was super fun.