Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Navrina, it's a pleasure to have you on the podcast. People have been talking about and studying questions about risks of AI and bias and so forth for a long time, relatively speaking. What gave you the idea in 2020 to set up a company to do the kind of work that you do?
Kevin, thank you so much for having me.
(00:23):
So, you know, I've spent almost two decades in high tech building products focused on mobile and artificial intelligence. And I would say that, especially in the past decade, I saw not only the proliferation of this very powerful technology impacting pretty much every individual, every citizen you can potentially think about, but also I saw a
(00:46):
massive shift in the enterprises, where the way we were building machine learning applications had a massive oversight deficit. So let me just unpack what that means for you. Most of the teams that I've managed in my career are product and data science teams. And as you can imagine, the incentives for those teams are really to just create the
(01:08):
fastest-performing models, put them out in the market, and get to the business outcomes that we are looking for. But given the impact of artificial intelligence, and given that it is a socio-technical technology, it has a more far-reaching impact than some of the traditional software has. What we found was that the way machine learning was getting built in organizations
(01:31):
needed more compliance, needed more policy perspective, needed more understanding of business objectives, so that you could actually put these machine learning capabilities out in the market in a way that was not only serving the users, but also the businesses. And I would say that's what I found: in the way MLOps was showing up in
(01:53):
the market, there was an increasing deficit in terms of accountability, in terms of how we were building these systems, designing these systems, testing these systems, especially in production. And so the idea really was: is there a standardized and scalable way to bring that oversight and accountability to this socio-technical technology, which
(02:18):
is going to have such a far-reaching impact in the world, one that we've not even studied and grappled with in the past technological revolutions. And that's what really led me to the creation of Credo AI in 2020.
What do you say to companies that are still of the view that they just want to do what you said, get the best business benefits out of their AI?
(02:39):
Of course, maybe they need to test it. Maybe they'll do some bias evaluation, but they don't want to invest the resources or take the time to do that kind of oversight.
You know, Kevin, I'm a firm believer that in this new age of human productivity powered by artificial intelligence, the only organizations and enterprises that
(03:05):
are going to exist are the enterprises that are going to lead with trust, that are going to be the enterprises that double down on governance and use that as a competitive advantage, not as just a checkbox. And so the thing that I would like to say to those enterprises is: you won't exist tomorrow if you don't invest in governance and oversight today, so that you can have
(03:31):
accountable technology that is actually serving not only your business objectives, but also consumer needs.
So maybe you could talk a little bit more specifically about what kinds of functionality you provide at Credo AI, and how it actually promotes that kind of trust that you described.
Yeah.
So Kevin, as I mentioned, AI is a very socio-technical challenge.
(03:54):
And so one of the things that we are finding is that defining what you're going to measure, to make sure that the AI system actually works on your behalf and in your favor, is very challenging. So for us at Credo AI, we sort of stepped back and really went back to first principles and thought about:
(04:14):
what does socio-technical really mean, and how do we actually figure out what to measure? And as you can imagine, this was a very interesting interplay, not just in building the tech, but also in the humans who are involved in building that tech. So of the three things we do really well with our software solution, the first is alignment.
(04:36):
As you can imagine, AI is so context dependent, and we are seeing that with foundation models, which are multimodal and, you know, dual purpose and can be adapted to any use case. It is really important to understand the context of use. And within that context of use, it is really important to understand what are
(05:00):
you going to be testing these systems against. And this is where the alignment problem becomes really critical. So the first step in the Credo AI product, which we've spent years building and pioneering, is really alignment capabilities, where we are helping organizations define and understand and align on what does good
(05:21):
look like.
And within our platform, the way that we are making that happen is, first and foremost, really providing out-of-the-box policy intelligence, where we have codified what the world is saying good looks like. And this might come from regulations; it can come from standards like NIST and ISO. It can come from industry best practices, whether it is the Defense Innovation Unit's
(05:45):
responsible AI guidelines or the Data & Trust Alliance's requirements, or it can come from the companies themselves. As you can imagine, there is a need for AI-forward organizations to define what they are willing to be held accountable to. So how you codify that within the product is one critical thing.
(06:07):
And I think for us at Credo AI, the way we are solving that alignment problem is really bringing these stakeholders together around this codified set of guardrails. Once you've done that codification, Kevin, what we end up doing next is using those codified policies as a set of controls to really test your entire stack.
(06:29):
And this goes back to the socio-technical nature, because the entire stack does not just mean your datasets, your models, your use cases; it also means the processes within the organizations. It also means having an understanding of who is reviewing these systems, who is getting impacted by these systems. So in the second phase of the Credo AI product, we basically do an interrogation of
(06:54):
your tech stack as well as your process stack against those requirements. And then lastly, as you can imagine, something that I'm deeply passionate about is: how do you actually get those diverse voices actually engaging on understanding whether this AI is working on our behalf? So in the third step of the Credo AI product, there is a massive translation engine,
(07:18):
which is basically taking all the outputs of this testing and interrogation against a set of guardrails and translating them so that, Kevin, the stakeholders on both sides, whether they are coming from technical backgrounds or they're coming from business backgrounds, can really understand what's happening within those systems. And so the outputs could be things like audit reports, impact assessment reports,
(07:42):
model cards, governance dashboards, risk dashboards. But we are really making a very, I would say, concerted and intentional effort so that these stakeholders actually can have conversations at the right level, even though they might not have the same level of expertise in AI, in risk, in policy, in compliance. They are really coming together to solve a collective problem:
(08:05):
is this AI working for the end users, and do they have the confidence to deploy these AI systems?
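To make the three capabilities she describes concrete, here is a minimal, purely hypothetical Python sketch of that policy-as-code pattern: guardrails codified as controls, test evidence interrogated against them, and the results translated into a plain-language summary. The control names, metrics, and thresholds are invented for illustration; this is not Credo AI's actual API.

```python
# Illustrative sketch only -- not Credo AI's actual product or API.
# It mirrors the three steps described above: (1) codify guardrails as
# controls, (2) interrogate test evidence against them, (3) translate
# the results into language non-technical stakeholders can act on.
from dataclasses import dataclass

@dataclass
class Control:
    """One codified guardrail, e.g. derived from a regulation or standard."""
    name: str
    metric: str       # key expected in the evidence dict
    threshold: float  # minimum acceptable value (hypothetical)
    source: str       # where the requirement came from

def evaluate(controls: list[Control], evidence: dict[str, float]) -> list[dict]:
    """Check each control against measured evidence and record pass/fail."""
    results = []
    for c in controls:
        value = evidence.get(c.metric)
        results.append({
            "control": c.name,
            "source": c.source,
            "value": value,
            "passed": value is not None and value >= c.threshold,
        })
    return results

def to_report(results: list[dict]) -> str:
    """Translate raw results into a plain-language summary."""
    lines = []
    for r in results:
        status = "PASS" if r["passed"] else "NEEDS REVIEW"
        lines.append(f'{status}: {r["control"]} ({r["source"]}) measured at {r["value"]}')
    return "\n".join(lines)

# Hypothetical guardrails and evidence for a hiring model:
controls = [
    Control("Demographic parity ratio", "parity_ratio", 0.80, "internal RAI policy"),
    Control("Holdout accuracy", "accuracy", 0.90, "business requirement"),
]
evidence = {"parity_ratio": 0.72, "accuracy": 0.93}
print(to_report(evaluate(controls, evidence)))
```

In a real platform the controls would come from codified regulations and standards, and the outputs would feed artifacts like model cards and governance dashboards, but the flow is the same: define what good looks like, test against it, and translate the result.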
So what's really different in doing this in the AI context from what many organizations have already been doing for data governance, compliance with GDPR and so forth, or even just overall good IT governance?
(08:27):
Is it those particular regulatory obligations, or is there something fundamentally new in the AI context?
You know, that's a great question, and we get asked this question quite a lot. I think the way I would like to answer it is: having a foundation of technology governance, corporate governance, data governance helps, because that just
(08:49):
indicates that your organization has already thought about what oversight for those different functions looks like, and you have individuals who are incentivized to do that oversight and governance in those different areas. The reason AI governance is very different, and I would say it certainly does build on that foundation but needs a new kind of thinking, is primarily because
(09:12):
of three reasons. The first reason is that it really needs very multi-stakeholder engagement, because there are so many nuances to how these socio-technical systems are going to impact users and businesses that it is not just one person who will hold the keys to understanding how this
(09:34):
very complicated technology works. And so I would say the first big change that we are seeing is you really do need the perspectives of data science, machine learning, risk, compliance, audit, you name it, in this new governance methodology, which can be built on the foundation of what you have.
(09:54):
The second thing that is critical, Kevin, is this: unlike the previous technologies, and if you think about privacy and cloud, yes, there were evolutions in their journey, but they weren't changing as much; they weren't as dynamic. With artificial intelligence, especially with these large frontier models and
(10:15):
foundation models, the dual nature of those is actually something that we should be very concerned about, but also be aware that there are a lot more unknown unknowns. As a result, this technology is moving so fast and gaining new capabilities so exponentially that the governance methodologies and frameworks
(10:36):
that you might have need to be constantly evaluated and adapted, to make sure that they are putting the right guardrails in place for AI. So that's the second reason I think AI governance is different. And I would say that the third reason AI governance is different is absolutely because of the kind of guardrails.
(10:56):
And as you can imagine, those guardrails first and foremost are still coming to fruition, whether it is through regulations or standards. But the second thing is, because of the massive increase in capabilities, new kinds of guardrails really need to start surfacing very quickly. And this is something we call emergent properties of these models:
(11:19):
there are things that we don't even know about these systems that are going to change very drastically. And so how you actually have a governance practice that can keep pace, not only with the regulatory changes but also with capability changes, is really critical. So I think just to summarize, Kevin: certainly if you have privacy, data
(11:39):
governance, cloud, you can build on those governance capabilities, but AI governance does need a new accountability leader. It does need a new kind of thinking. It does need this multi-stakeholder perspective to really make sure that you can keep pace with this evolution of AI that's happening.
You've used the term socio-technical a number of times, and I had another
(12:03):
conversation with Elham Tabassi of NIST, and as you know, they very much have that orientation as well in their AI Risk Management Framework. What does socio-technical mean to you, and what's so important about that framing?
Yeah, you know, the way I define socio-technical is when you have a technological change or capability set that is brought to the market in a way
(12:26):
that literally is going to change the way society, this world, our democracy operates. And the impact of that is really far-reaching, going beyond just a small set of stakeholders. So for us, socio-technical really requires two things, and we think about it as inputs and outputs. In terms of outputs, the impact of this technology, whether it is ML models
(12:50):
being used in hiring or whether they are used in climate change, it is really far-reaching. And it really has a very important consequence that we need to think about as stakeholders, which goes beyond just getting the next customer. So I would say the output part is really important to think about. And then there's the input part: this goes back to
(13:13):
the development of this technology being dependent on so many different components. As an example, where are you getting the datasets for training these systems from? Are those datasets demographically diverse? And as you can imagine, most of the data that is feeding even these large language models is coming from the internet. And, you know, does it represent all the different demographics that exist on
(13:38):
this earth?
No.
So how do we actually make very thoughtful decisions about what the inputs to these systems are? So for us, socio-technical really is this input-output understanding of where and how these systems are getting built and who's building them, what their accountability is, but also thinking about the very pervasive nature of the impact that
(14:00):
these systems are gonna have on this world and on the businesses and on the impacted users, which, with the previous technological changes, we didn't have to pay as much attention to.
Given that incredible importance, though, is a compliance orientation enough? Shouldn't companies really be thinking about ethics and values and how to raise
(14:25):
the level of what they're doing? Isn't there a danger that this becomes just checking a box?
Absolutely.
And I think this is where, you know, we started this conversation: with who's going to win in this age of AI. And if you are going to use AI governance and responsible AI as a checkbox mechanism, you can already count yourself out.
(14:47):
I think organizations that are really going to start emerging as winners are taking a very comprehensive approach. They are thinking through not only their structures, whether it is who is actually at the table, who's looking at and reviewing these systems, but all the way to: are you sourcing the data from the right, compliant individuals?
(15:08):
Are you making sure that the testing is done in a comprehensive way? Are you making sure, once these systems are put in production, that the reviews and incident reporting are happening in a comprehensive way? So I would say that if you're going to use it just to get to my EU AI Act compliance checkbox, my New York City Local Law 144 checkbox, I think you're really missing out on an opportunity to build that trust, which
(15:31):
actually can really power your business growth in ways that you can't even imagine. And we are already seeing, classically, in the companies that we work with at Credo AI, that organizations who are taking that comprehensive approach, which is not just rubber-stamping, are already establishing, I would say, core
(15:52):
competencies within their organizations that are going to really set them up for success, to really be able to innovate and iterate fast, because that's what this new age of AI will need, so that you can respond to the technological advances very quickly within the organization.
How does one measure or evaluate that? If you're looking at a company, or someone else is looking at it, what should we look
(16:16):
to, to evaluate whether they've got that real competency?
Yeah, great question, Kevin. And this is where, at Credo AI, we've come up with a readiness index. And that readiness index is really focused on a couple of core components, which look not only at the AI maturity of the organization, but also the organizational commitment, leadership commitment, as well as the accountability stakeholders within the
(16:41):
organization. And the readiness index basically categorizes businesses: we've surveyed thousands of businesses and looked at some of these core vectors of what defines, sort of, a company with a solid AI governance structure. I would say, in that readiness, what we are finding is: if you're at these early stages
(17:05):
where you are still exploring, the signals that we get from still exploring are that those organizations are trying to baseline themselves against the rest of the world. They're looking at and reading the responsible AI principles that might be emerging in the world. They are trying to do maybe some light form of audits.
(17:25):
They're trying to bring in some outside consultants, and they're trying to bring in some outside competencies to really do assessments of their systems. Because they are at these very early stages of exploration, they don't have a stakeholder who's accountable for it. And they're just trying to figure out who they want to be in this age of AI. Now, when you go beyond this exploration stage, you start really aligning.
(17:49):
And this is when you're like: you know what? AI governance is going to be critical for us to become leaders in this age of AI. You at least have that understanding. Now you start forming or formulating a set of responsible AI guidelines for yourself. You maybe have actually created a committee, and that committee is starting to discuss: what does good look like, based on the baselining that you've done in the earlier
(18:12):
stages? And the alignment stage, I would say, is a very important stage, because that is really where you start to see signals from an organization: are they going to take actions to further their people, processes, and tooling so that they can start becoming a leader in AI governance? And then, as you can imagine, there are two other stages, but finally, when you get
(18:36):
to an optimized stage or a leadership stage in AI governance, you actually have an accountable stakeholder. It could be a chief AI officer. You actually have a budget, as high as your cybersecurity and privacy budgets, which is now going to be dedicated to AI governance. You actually have a team of individuals who are responsible for that oversight and
(18:59):
accountability across your entire business. You have not only your ops tooling in place for MLOps and LLMOps, but now you also have this plane of governance, where many of the companies are using Credo AI, to really have that holistic tool chain that is providing that continuous oversight. And this is the time you actually start externally sharing assets with your
(19:23):
customers.
As an example, if you are a financial services organization selling a fraud model to a bank, now along with that fraud model you will also maybe share your governance report. And the governance report would have all the transparency around how this fraud model got built, how it was tested, who reviewed it, what kind of industry standards it is compliant with, et cetera.
(19:45):
And that's when you start seeing the benefits. Because now, as the customer, if I am an enterprise making a choice between vendor A and vendor B, and vendor A provides me AI products while vendor B provides me AI products but also transparency reports and governance reports, I as the user am going to choose vendor B, because they are giving me more
(20:08):
confidence that there are these transparent practices to really make sure that that AI is going to work for me at the end of the day.
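As a purely illustrative aside, a shareable governance report of the kind described for the fraud model might carry fields like the following. The schema and field names are hypothetical, not a real Credo AI format:

```python
# Hypothetical shape of a shareable governance report -- the fields echo
# what she lists: how the model was built, how it was tested, who
# reviewed it, and which standards it maps to. Not a real schema.
from dataclasses import dataclass, field

@dataclass
class GovernanceReport:
    model_name: str                # e.g. the fraud model sold to a bank
    intended_use: str              # the context of use the guardrails assume
    training_data_summary: str     # how the model was built
    tests_performed: list[str]     # how it was tested
    reviewers: list[str]           # who reviewed it
    standards: list[str]           # industry standards it maps to
    caveats: list[str] = field(default_factory=list)

report = GovernanceReport(
    model_name="fraud-detector-v3",
    intended_use="transaction fraud scoring for retail banking",
    training_data_summary="24 months of anonymized transactions with documented lineage",
    tests_performed=["holdout accuracy", "drift checks", "false-positive rate by segment"],
    reviewers=["model risk management", "compliance"],
    standards=["NIST AI RMF mapping", "ISO/IEC 42001 alignment"],
)
```

The point of such an artifact is exactly the vendor A versus vendor B comparison above: the report travels with the model and gives the buyer evidence, not just assurances.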
So Kevin, like, you know, in this AI readiness journey, what we are finding is there are different signals. Some early-stage organizations are posturing, they're writing blog posts about responsible AI, but they actually start to move the needle when they start
(20:31):
to provide assets.
that can help them land more customers,that can help faster procurement cycle,
that can help retain customers longer.
They can actually adopt AI and Genitive AIfaster to increase employee productivity.
Their employees actually have a very goodsense of what they can and cannot use the
Gen.
AI systems for.
And I think that clarity exists.
(20:52):
So that journey is where we are seeing organizations need to get to, rather than just using compliance as a mechanism to do responsible AI governance.
In financial reporting, there's a whole infrastructure of external audit, and that gets regulated and filed with the SEC and so forth and standardized, in addition to
(21:16):
the kinds of things that companies are doing internally. Do you think that we need to get to something similar to that in AI?
Absolutely, Kevin. I think we're going to see, again, a journey to getting to that level. And the journey, in my view, has already started. So as an example, we are seeing a very interesting ecosystem of auditors emerge.
(21:38):
And these auditors, as you can imagine, are primarily third-party assessment capability providers. Because right now there are no standards against which you can audit an AI system, audit is the wrong word to use right now. But think about them as third-party assessors who have already been brought in
(22:00):
to come and assess some of these models and AI use cases. A classic example of that is New York City Local Law 144, which came into effect last July. And it requires a third-party review, whether that third party is within your organization or external, of your ML-based systems which are used for hiring.
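For a sense of what such a review measures, here is a minimal sketch of the selection-rate and impact-ratio arithmetic at the center of a Local Law 144-style bias audit. The numbers are invented, and a real audit carries many more requirements than this:

```python
# Minimal sketch of the core arithmetic behind a Local Law 144-style
# bias audit: the selection rate for each group, divided by the
# selection rate of the group selected most often (the impact ratio).

def impact_ratios(selected: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    """Selection rate per group relative to the highest group's rate."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes for an automated hiring tool:
selected = {"group_a": 48, "group_b": 30}   # candidates advanced
total = {"group_a": 100, "group_b": 100}    # candidates assessed
print(impact_ratios(selected, total))       # {'group_a': 1.0, 'group_b': 0.625}
```

A low impact ratio for a group is the kind of signal a third-party assessor would flag for review.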
(22:21):
Now, an evolution of that that's happening in the ecosystem is the emergence of standards like ISO 42001. Now, ISO 42001 is the first, I would say, certifiable AI risk management standard that has emerged in the market. And there needs to be a second standard, which is coming out later this year, which is ISO 42006, which actually will provide auditors the capability to certify against
(22:46):
42001.
That hasn't happened yet.
But we are already seeing a lot of auditors showing up in this market who are like, I can get you to ISO 42001 compliance, which I think is a good starting point. But as you can imagine, it is very broad. It does not have an understanding of use case context. It is really, sort of, a carry-forward of: do you as an organization have
(23:11):
a risk management framework for AI in place? And that's what ISO 42001 does. The evolution beyond that that you're going to see with auditors is, when more and more of these standards actually show up, AI audits and AI auditors will actually start to emerge. And this is where you as an auditor can say: not only am I certified by a
(23:33):
certification body, but now I can actually provide you an independent review and assessment and audit, similar to the SOC 2 audits that have existed in that category. And then lastly, the vision that I have for this world is especially about the public and private side. On the public side, my hope is that there are responsible AI disclosures that are
(23:57):
mandated by the SEC, to really make sure that when an organization is doing their, you know, sort of, earnings reporting, they're also required to disclose how they are building these AI systems. How are they making sure that responsibility is central and paramount to it? And I think those responsible AI disclosures are not very far out.
(24:20):
We are already activating the ecosystem so that on the public side, not only are the investors holding companies accountable, but also bodies like the SEC are holding these public entities accountable. And I would say a flip side that we've already started to see on the private sector side is: we actually are founding members of something called Responsible
(24:42):
Innovation Labs, where we came up with our own set of commitments that the private investors will hold early-stage startups accountable to as they are building or buying AI capabilities. And with Responsible Innovation Labs, really the initial focus was: how do we have that contract between investors and founders who are starting companies in AI
(25:05):
to be responsible by design? And if we do our jobs right and these companies become super successful, then when they go IPO, on the flip side, they will have these requirements around public disclosures for responsible use, which should not be a new requirement, because you've done that from the beginning. So I know that's a long answer to your question around the audit ecosystem, but I am thrilled
(25:29):
about the formation of the audit ecosystem. It's just going to be a while till it comes to fruition, but there are bits and pieces that we at Credo AI are already activating, or we are working with the ecosystem to make happen.
What's different for AI governance with the rise of generative AI?
Yeah, a lot.
(25:50):
So, a couple of things. Again, the way I would like to tackle this is in terms of companies that care about AI governance, as a first vector. What has been really interesting for us is, when I started the company four and a half years ago, we were focused on predictive ML governance, and it was really primarily focused on the regulated sector.
(26:13):
Twelve months ago, everything changed. And the reason everything changed is generative AI truly became a democratization of artificial intelligence, where anyone and everyone in any industry could have access to these powerful systems and use them to their benefit, and mostly for their own productivity.
(26:33):
So I would say that what has shifted in AI governance because of GenAI, first, is we are seeing organizations who are not even in a regulated sector actually caring about AI governance, because they recognize that with or without them, their employees are going to be using generative AI systems, and their
(26:53):
risk surface area has drastically, exponentially increased. And how do they not only manage that risk, but is there a way that they can actually use it to their advantage and build that trust that we've been talking about? So that's one thing that has shifted. The second thing that has shifted with AI governance is an acceleration in what I like to call adaptive policymaking.
(27:16):
We have never seen our government, or global governments or global legislatures, move as fast as in the past 12 months, because there truly is a requirement for: how do we not only rein in this technology, but make sure that this technology can be adopted very fast by businesses, with the right intentional guardrails?
(27:38):
And so I would say that the second big movement that we have seen is this rise of: let's make sure that there are some guardrails in place, with an understanding that these guardrails will change. And so I've been a pretty vocal advocate for this adaptive policymaking, where you don't have to get everything right from day one. You really have to work with some of the core AI experts like Credo AI and the
(28:02):
hyperscalers to really understand what at least the beginning stages of good policy look like. And then we need to have, sort of, new mechanisms in our government to be able to iterate very fast on it. And, you know, with the EU AI Act, I think they've done a fantastic job in really thinking through what that might look like. Obviously it has lots of gaps and challenges, but rather than focusing on
(28:25):
gaps and challenges, I would say that at least it is the first, you know, global framework that organizations have started to pay attention to. And then the third big thing that has shifted, I would say, is it has put enterprises and stakeholders on alert that now they can't just rest on a
(28:45):
technology that's not going to change for another decade. This technology literally is changing on a monthly and quarterly basis. So how do you actually embed AI literacy within your organization to be able to respond to the speed of capabilities that are emerging in generative AI? And I think, Kevin, that is really critical,
(29:06):
because what we are going to see is an evolution in what kinds of skills and jobs emerge in this ecosystem. Individuals who are able to learn quickly, try out new technologies, are creative problem solvers, and are not just holding on to their expertise forever but deploying their expertise in AI, are the
(29:26):
ones who are going to actually start showing up, and their jobs are not going to be at risk. So I think we are seeing a very new model of AI literacy emerge within organizations, to really respond to this very fast-changing AI technology.
Just following up on one thing you said: you seem pretty optimistic that regulation will actually promote better development.
(29:50):
Are you worried at all about some of the concerns about chilling innovation, or, especially now in the US where we have so many states passing legislation, that there's going to be this thicket of laws that actually makes it difficult, especially for the smaller companies, to comply?
You know, Kevin, the way I think about regulation, it's not like I wake up
(30:10):
in the morning and say, my God, I'm all for regulation. No. I think what regulation is really doing is bringing the focus onto how important it is for us to put guardrails around this technology. So I see regulation as a way to wake up enterprises and stakeholders: you need to take action. Because unfortunately, as humans,
(30:34):
I always say this: we are motivated by two things. We are motivated either by ambition or fear, right? And the ambition here is: my God, generative AI is going to bring in so much prosperity for humans and for our world, and it's gonna make sure that this planet continues to be a thriving planet. And I truly, 100% believe that.
(30:54):
However, on the flip side, there are individuals who take time to respond to technological changes. And the only way to get their attention is through fear. And I think that's what, unfortunately, regulatory movements are used as: a fear that you need to be compliant, otherwise there are these fines. I think what I'm hoping to do is change that entire narrative.
(31:18):
The narrative really is: think about who you are going to be as an individual in this new age of AI, what characteristics and skills you need as an individual. But secondly, what characteristics you need as an enterprise to actually be a thriving business in this age of AI. And for both of those, I would say there are certain core characteristics:
(31:38):
adaptability, being able to learn, and being able to build trust to consistently deliver are basically table stakes. And for me, when I think about regulation, I think about it as a wake-up call, not as the only way that we're going to adopt AI responsibly.
(31:58):
I don't think that's the only way. So to answer your question: am I worried about regulation actually being a barrier to innovation? Yes, if it is not done thoughtfully. But the good news right now in artificial intelligence is that policymakers and regulators have been very heartened by the private-public partnership that has
(32:19):
emerged, where policymakers are recognizing they don't have the expertise. They will need to lean on AI experts like us. At the same time, we also understand that we don't understand the nuances of actually creating good policy. And this is where we need to lean on the policymakers, and many a time sort of go against the bureaucracy that has existed, and many a time go against the
(32:43):
power that has existed, to make sure that this technology serves humanity. So, long story short, I think regulation is a way to rally the troops around a common mission. Now, can this be a barrier for smaller companies? That's why Credo AI exists. We want to make sure that not only are we holding big tech accountable through Credo
(33:08):
AI products, but we are reducing the cost of our technology and our product so that all the emerging innovators, wherever they are on the globe, can actually use governance as a competitive advantage and compete with some of the big tech at global scale. And that is actually a very important mission for us here at Credo AI, where we
(33:32):
don't want this to be a compliance burden, but rather governance as an opportunity for them to build their technical as well as capability moats.
Okay, last question. What's the number one thing that someone should look to, the biggest development or biggest area to focus on, to understand the development of responsible AI or AI governance in the next few years?
(33:55):
My God, that is a hard question, Kevin. I would say certainly tracking some of the core frontier model developments that are coming out of some of the large companies is really, I would say, interesting to watch and track, because that is going to impact how we live, work, and play.
(34:17):
I am also a big fan of really engaging with academia, whether it is Stanford's Human-Centered AI Institute or CMU's AI Institute. I think they're really doing some phenomenal work, especially on accountability and transparency, to really start thinking about what the ethical
(34:38):
challenges we are going to run into are, and who needs to be at the table. And then lastly, I would say from the capability perspective, I think we need to be a little bit more open to trying out all the new tools. And this is something that I coach pretty much everybody on, whether you're a nine-year-old kid like my daughter, who is
(35:00):
very well versed in some of these new emerging tools, or whether you are a high school student, or a grad, or early in your employment: don't be afraid of artificial intelligence. This is the moment in time to try out all the tools that are showing up in your way, because I'm a firm believer that,
(35:21):
again, we are going to emerge as winners if we know how to use these tools to our benefit much more efficiently. And so I think getting AI literate becomes really critical as we move forward.
Terrific, Navrina, thanks so much. Really wonderful speaking with you.
Yeah, thank you so much for having me, Kevin.