Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:09):
Hi, I'm Kevin Werbach, professor of legal studies and business ethics at the Wharton School of the University of Pennsylvania.
For decades, I studied emerging technologies from broadband to blockchain.
Today, AI is promising to transform our world.
But AI needs accountability, mechanisms to ensure it's developed and deployed in responsible, safe, and trustworthy ways.
(00:32):
On this podcast, I speak with the experts leading the charge for accountable AI.
I often say that while every company is going to have to think about AI ethics and governance, the way they do so will depend greatly on the nature of the organization, its
scale, its industry, its culture, and so forth.
(00:54):
That really came out in the conversation that I had with Uthman Ali.
He is the global responsible AI officer at BP and also advises the World Economic Forum's AI Governance Alliance and the British Standards Institute.
He's an advisory board member of Oxethica, an Oxford University AI ethics spinoff.
In our conversation, Uthman shares great perspectives on distinctive aspects of doing responsible AI at a major global energy company, as well as the practicalities of setting
(01:25):
up a program, making the case for AI governance and responsible AI, how to create an effective AI ethics committee, and what success means, with a little bit of robot ethics
thrown in.
Uthman, welcome to the Road to Accountable AI.
Thank you for having me.
You were one of the first, if not the first, responsible AI lead executives at an energy company.
(01:50):
How is it different, do you think, doing this kind of work at a place like BP versus a company in, say, financial services or tech?
I think I'm the first. I'll claim it until I find someone else that might have secretly been doing this for years without anyone knowing.
But I think the difference for us is that we're such a massive organization, and we do so many different things as a company, whether it's more customer-facing stuff or just
(02:15):
hardcore engineering.
But when you're comparing it to financial services, what's interesting is that a lot of our systems might not necessarily be making predictions about people, but they have safety implications as well.
So it comes down to safety and just good engineering design practices. That's one of the most interesting things about working here, I'd say.
(02:35):
And presumably safety is something that a company like BP has a lot of prior expertise on.
So does that translate over in terms of how you and the company think about it with AI?
Yeah, because I was even thinking about this recently, right?
We don't really say responsible aviation.
The implication is that safety is just part of what the product should be doing.
(02:59):
You just expect that if you're flying on a commercial flight, it should be safe.
So, you know, it's the question of, do we need specific roles dedicated to this?
And the answer to that in the short term is yes, because you need to build that maturity.
And as you look at how you actually embed responsible practices within existing workflows and mitigate risks,
(03:21):
you need to be doing that.
But eventually you will get to a point where this is not just responsible AI, it's just AI.
And a lot of the work that we're doing right now in terms of, like, bias mitigation, or thinking about when do you use a black box versus a more explainable model.
Eventually you reach a maturity where it's kind of an expected competence that this should be being done across the board.
(03:46):
A lot of organizations, though, see that as just something that is an additional drag on deployment and innovation.
I would like to think that you're right, that all companies reach that point of seeing this as being essential to actually doing development and deployment of AI.
But is it a challenge to make that case ever internally?
(04:09):
I think for us we're able to make the case pretty strongly because it is what we do as a company, right?
Safety-related stuff is pretty essential, and we already operate in a super regulated environment.
So we kind of don't have a choice and it makes sense for us as a business because it iswhat we do.
But to your point, I think lots of companies fundamentally just misunderstand what responsible AI is or what AI ethics is.
(04:31):
A lot of them, I think, still think it's this vague, fluffy, tech-for-good type thing.
When actually, it's just basically building good products and so on.
So for example, I use this one, right: if you have a healthcare AI chatbot that's doing, like, diagnostic predictions.
Let's say you didn't do any ethical assessment on this at all, or any customer-centric thinking at all, right? And your development team says this has, like, a 93% accuracy
(04:57):
rate.
You might say, like, that might look great on the slide deck, but the implication you're stating is that seven out of a hundred times this thing might be getting something wrong?
Even when you're looking at development metrics and best practices, how are you even coming up with thresholds or even doing technical reviews and making these decisions without first thinking about your customer?
(05:18):
And part of that is, what are the ethical implications?
Because if the thing isn't being used well, or if it's constantly giving incorrect predictions that people are over-relying on, it's not really solving a problem.
It's actually just creating a burden, right?
How far along are we to having the practices and standards and tools to ask those questions effectively?
(05:46):
I would say the window of opportunity now is probably in the next 18 months to two years to really go about having a comprehensive AI strategy where you actually just bake these
things in.
Because right now the EU is coming out with new standards from CEN-CENELEC, like there's the high-risk AI system standards.
There's the ISO standards.
Every regulator across the world is looking at, do we need AI-specific regulation now?
(06:09):
What are we meant to be doing?
Do we uplift existing compliance requirements?
So whilst this is happening,
This is the golden opportunity for the companies to think what is our actual AI strategyor plan?
Which regulations do we comply with?
Do we take the AI Act as our gold standard across the board?
Do we take a more federated approach?
(06:29):
But in order to do that, you need to at least have a system of tracking how systems get built and deployed within the company, or even procured.
And when you figure that out, you can figure out which actual guardrails we need for specific use cases.
Isn't it hard though at this point, given all of that uncertainty about how regulation is going to develop?
(06:51):
And for example, you know, the United States under the Trump administration is taking avery different approach, a much more skeptical approach to regulation.
So from a company standpoint, it might be challenging to decide in this 18-to-24-month window how we're going to move forward.
I think there is a risk that people sort of go around in these talking circles with responsible AI.
(07:17):
Which regulation do we go with?
China's approach versus the US versus the EU.
But if you're just really pragmatic about this, the EU is such a huge market.
Eventually, for most companies, they'll need to be selling stuff to it, placing things onto the market, right?
So you might start there and say, this is the strictest one in terms of requirements.
What do we need to do to actually be compliant here?
(07:39):
Like what does the documentation need to do?
Even sending things for like third party audits, like conformity assessments.
What do we actually need to do to be compliant with this one to begin with?
When you figure out the most difficult one, you can look at other regulations and say, actually, this is overly burdensome.
We don't need to be doing this.
We don't need it as a company, even to protect our customers or our reputation.
(08:00):
But it's only when you figure out the baseline of the hardest one that you can make that informed decision.
Got it.
And if a company doesn't have a responsible AI or AI governance program, where do they start, given how many different elements there are now in this area?
I'd say in terms of where to begin, the easiest thing would be to look at which fields are most adjacent within the company, which departments already exist, right?
(08:31):
Before even looking at do we hire someone external or build a whole new team?
You've got to look at like, what do we actually have right now?
So companies would probably look to data, digital security, or data privacy.
I would say step one is to look at digital compliance.
What do we have right now?
What actually happens?
What's the processes?
Are they even joined up, or are they parallel sort of approval streams?
(08:52):
Then you can look at actually, do we even have an internal AI team?
Do we have internal data scientists?
Do we just buy everything off the shelf?
Do we rely on third party contractors?
But when you have those, you can figure out, okay, how many systems do we even have?
Like, even when you look at it, these might be fragmented inventories.
Companies might not realize how many high-risk AI systems they actually have in the company until they basically do
(09:17):
essential housekeeping.
So that's the starting point, doing that inventory.
Yeah, the inventory is one of the most important ones.
That's because we split governance in two ways, right?
There's what responsible AI is, which we think of as ethics, legal, and compliance.
But then there's just good AI governance, which is the commercial governance, which is, are we actually making money from this?
(09:42):
Is it delivering value?
Do people actually like the stuff that we're building?
And a lot of this will start from the same point, though: you need a system of tracking all the AI stuff you're doing in the company, or how your employees are using it.
Got it.
How did you personally get engaged in this work?
Because when I was studying the law, I thought this is really, really boring.
(10:02):
I really just did not want to do traditional corporate legal work.
But I was always interested in human rights, and something that really fascinated me was robot ethics and the ethics of autonomous vehicles.
Because I thought this was one where, if you can figure out how to do AVs safely, you could largely get most other technologies done safely too, because
(10:25):
there are so many interesting ethical issues there: privacy, surveillance, even anthropomorphism, where people humanize the technology, safety issues, even just getting incorrect predictions leading to accidents, job displacement.
I thought that was, like, a really interesting case study.
If you can figure out your governance around this, then for most of the technologies that are super complicated, you can figure out how to govern them as well.
(10:50):
But what I found was a lot of traditional lawyers really did not like talking about digital ethics at all, which I found interesting, because they said, look, it's really complicated.
It's a gray area.
There's no real right or wrong with a lot of this stuff because there's no existing case law, right?
It's still being formed.
So what I found is that, quite similarly, engineers and lawyers often think quite procedurally.
(11:12):
It's sort of what's the flow chart, what's the process, how do I think about this?
If-then, except lawyers often do it with words.
Engineers just do it with flow charts and diagrams.
And part of the job of being, like, a global responsible AI officer is taking the critical thinking out of people's hands with an expert committee to help actually guide them
through what is the right decision on the company's behalf.
(11:35):
So essentially taking it and turning it into something that can be implemented in aflowchart.
Kind of, yeah. Flowcharts, unfortunately, are a big part of the job.
Because you've got to figure out the flow.
We joke about it so much, but so much of it comes down to, like, just the flow charts of, internally, how do things work?
And engineers love flow charts.
And if you can explain things in flow charts, for some reason the penny drops.
(11:58):
And they just sort of get things.
When I was first doing this years ago, I just couldn't understand how people were wrapping their heads around it.
But when you explain, like, even how governance processes work internally in a flowchart, people will be like, oh.
That makes sense.
But for me, because I come from a legal background, you're trained on words and text.
It made no sense to me, but for them, I was like, look, you give your customer what they want.
(12:19):
They like the flow charts and diagrams.
So that's how they like even ethical guidance, things to be done diagrammatically as opposed to long blocks of text.
That's really interesting.
That's the second half, after the issues have been reduced to procedures and requirements.
How does a company address those hard ethical questions at the front end, where there's not an obvious right or wrong answer?
(12:43):
This is obviously one of the most difficult things for companies to grapple with, because when it comes to even issues like bias and fairness, right?
Or even is it the right thing to be doing this or not?
The question often turns to who's going to make that call within the company.
So the first step is to look at like, what is your risk management approach?
In other areas, even outside of digital ethics or AI ethics, there already should be ethics-based guidance or procedures for how to do this, even with just physical systems or
(13:07):
even when it comes to just engineering stuff.
So there's already things you can tap into.
It doesn't necessarily need to be reinventing the wheel.
But to do it effectively and scalably, the best thing you can get is top-down sponsorship to create the expert committee with the interdisciplinary skills to basically create that ethical case law, to say, like, look, when it comes to digital assistants and co-pilots, this is how we're going to deploy this across the company.
(13:33):
This is the company's basically approved stance.
These are the people to speak to if you need guidance, because what you don't want is seven different teams in these huge organizations each having their own interpretation of what ethical AI is and just sort of doing it, and it becomes very patchwork and a bit of a mess with internal inconsistencies.
And for this expert committee, one of the key things is that it's not really about just your own personal beliefs and views.
(13:57):
You have to look at what are the company statements around this, even externally or internally, because it has to be based around something like even the company code of
conduct, sustainability aims, ESG, human rights policies.
There's a lot of stuff
that these large organisations have already signed up to that you sort of need to harmonise.
Because the ethical guidance you give can't just be, this is just my opinion, off the cuff.
(14:21):
It needs to be rooted in the company's sort of ethical stance that makes sense and is just coherent.
Many companies have these AI ethics committees or steering committees or governance committees.
What do you think is most important to build them in ways that they actually are effective and can reach that kind of convergence that you describe, where it's not just
(14:42):
a group of opinions?
The first thing you need to do is probably start with top-down sponsorship over what the committee is actually going to be doing.
Because even approaching C-level, that's the chance for everyone to write down what is the point of, like, this committee, and do we need one committee, or does each business unit have
their own committee?
But before approaching one of the C-level executives, it forces everyone to actually write down what is the plan?
(15:08):
What is the purpose of this?
How are we going to measure success?
Because a committee to me might be something different to you.
A committee to someone might be a board, right?
Some sort of governance board mechanism.
A committee to others might be some advisory committee that gives recommendations, but they don't have teeth.
So that really needs to be ironed out, because that's often the case in responsible AI.
(15:29):
We use lots of similar words, but they mean different things to different people.
And if you have a committee, having people in meetings talking about something doesn't solve the problem.
If they don't have the skills, you've got to look at what skills do you actually need?
For many companies, what they'll be missing is an actual ethicist.
Someone who is skilled in negotiations and making trade-offs and who has applied experience with AI.
(15:53):
That's often the missing component, because they might have lawyers, cybersecurity professionals, risk management people, but they probably will struggle to find dedicated
professional ethicists that have AI experience.
And the committee as well, one of the risks I often see is that
They're sort of left to the risk management side of the company without people on the innovation side.
(16:16):
You don't want a group of people coming up with all the reasons why you shouldn't do something without someone saying, this is why we should be doing it.
Right.
It needs to be a balanced approach.
Let me get back to your own experience.
You talked about your interest in robot ethics and some of these questions.
At BP, did you make the case for creating a responsible AI role, or was the company already going to do it and they found you?
(16:43):
Interestingly, when I joined, it was originally as a digital ethics lead within the research team, because the company was doing a huge transformation, investing a lot in
digital, AI, and other technologies too.
But they said, we need an ethics specialist because of lots of the dilemmas, like people concerned with job role changes, or what does this mean for the future of work.
Like, we need someone here that can basically teach us what digital ethics is, but also work with some leading universities.
(17:08):
And what happened was I worked with the University of Oxford on a research paper called capAI, which was a detailed conformity assessment procedure for the EU AI Act.
And people were saying, the AI Act's being negotiated right now.
People are talking about it.
How do you actually do this?
Like what does ethics look like from the start of a project all the way to the end?
So working with Oxford Business School, they developed a protocol for testing projects.
(17:33):
And we worked with them and several other companies.
Now they've actually created an AI governance startup out of this called Oxethica, which sells AI governance tools, a risk assessment platform.
But out of doing this work, having a legal background, but also being trained in ethics and being embedded within the technology teams, it
was just a natural progression.
So we needed a dedicated role for this, to be specifically looking at AI.
(17:55):
And then post-gen AI, I mean, the world changed, right?
This went from being a side thing, which may be important.
We know it's important, but we've got some time to...
We don't have time anymore, we need to do it now.
We need to fast-track a lot of the things that we were doing.
Right, well, clearly companies had to fast-track their initiatives in this area, but it's interesting that BP
also felt like this was the time to fast track the ethics and responsibility.
(18:20):
I think for us it was such an important strategic decision for the company, because there's also clarity over, again, what is our approach to AI going to be in the future?
But one thing that I really liked about even having this role at the company was sort of planting a flag in the sand that this is not an area we're going to walk back on.
This is part of the company's future now.
It's cemented in the company's legacy.
(18:42):
We'd be the first one to even create this role or start building out this department.
And I think, again, for us, BP had foresight, like even me having been hired into the R&D team.
It was more a gut feeling or acknowledgement that in the future, this is going to be super important for us as a company.
But even creating this role now for responsible AI, and the stuff that we're doing right now, five years from now, you would hope that people working here would
(19:07):
really benefit from the stuff that we're doing today.
What's the hardest thing in terms of either creating or implementing this kind of program?
The hardest thing is probably the size of a company like ours.
It's like, when you're looking at creating the program and embedding it, it's often, where do you start?
(19:27):
Because even creating the AI inventory, we've got many different departments across the company, different business units, and you really have to learn how the company works
and how the different departments work to actually make this effective and create a frictionless process for projects to be screened without people thinking this is
additional red tape.
So a lot of it was actually understanding, what even are our current governance approval streams?
(19:50):
How can we consolidate all this?
If there's any duplication, how can we just make a better customer experience, approaching it through, like, a customer-first lens, like our employees are our customers.
If we're asking them to send projects for approval, or asking them to do ethics-based technical reviews.
Where do you go?
How can we make this as simple as possible?
And it was also just getting a lot of buy-in from different departments.
(20:11):
So everyone is clear on what we're trying to govern.
Because when you say AI governance to the commercial side of the AI team, they're thinking about reusable components.
How do we build better code?
How do we make sure projects are delivering value? Speak to the legal team, and it's, are we legally compliant?
Then ethics is actually a bit in between.
It's protecting the reputation, but also building better stuff, right?
(20:33):
Digital security, again, is preventing cyber risks.
So you have to be clear, again, on the language: this is what good looks like for all of us.
How do we create a process or program here that basically delivers everything that we need?
Where do we converge, or when there are differences, where do we diverge?
And in the context of generative AI, how do you deal with the fact that it may be impossible to be certain, that there's no such thing as a fully unbiased system, you know,
(21:03):
hallucination is a somewhat inherent risk, although it can be mitigated.
How do you think about the fact that you can have a very good program and yet, you know, there might still be issues?
I think the pragmatic approach is that there will always be issues, but not all unintended consequences are just unforeseeable.
For example, if you actually think through, okay, this gen AI
(21:25):
chatbot might produce some inaccurate outputs.
Before we go deploying this to 10,000 people, let's just actually think for these usecases, what could go wrong?
Which bits of the business need targeted training, or do we say this use case is inappropriate, for example, or do we say look,
use a traditional machine learning method where there isn't that hallucination rate.
One of the most effective ways to even deal with this is also just to recognize that within each different department of a company of our size, how the risks manifest or what those
(21:54):
consequences are, are vastly different.
If you're dealing with a team in marketing and you've got AI producing inaccurate outputs, and you get, like, the classic extra-fingered hand type thing, that can be ethically
embarrassing or reputationally bad, and you might need due diligence
before anything gets put out externally, for example.
But if that's being used in a safety-critical context, that same risk of using the technology is manifested very differently.
(22:21):
The harm is completely different.
So you just have to be very pragmatic and understand the context of how the thing is going to be used.
And you need sector-specific guidance, guidance that's tailored for each department.
Broadly speaking, how do you think about what success means?
Success for me is actually pretty simple: it's just, are we building good stuff?
(22:45):
Honestly, what it comes down to is, are we actually building and using AI in ways that are actually useful for the company?
Because if we don't know, for example, that means, okay, we don't have a system of tracking, so we can't credibly say that.
If you've got projects being built that are very cool, but they're just spewing misinformation or leading to loads of unplanned job role changes,
(23:09):
where it's just frustrating loads of employees.
That's not great as like a business ROI outcome either.
So you need something that asks, are we building good things that are reliable, accessible, inclusive, scalable?
Do people actually want to be interacting with them and using them?
And if we're doing that, then we're actually just delivering value, because AI is very exciting, but it is just a technology at the end of the day.
(23:35):
What are some of the new developments that you're looking at now in terms of new technologies or the way that AI is evolving?
I think one of the most interesting things is going to be human-AI interaction.
I think this is something I find particularly interesting.
It's just because we went from conversational agents to now, with the multimodal stuff, where you can start to speak to them.
(23:58):
Then the next thing is just, they can basically be around you, an AI companion, actually seeing things, like this door in front of you, right?
That you can't see off camera.
Those sorts of interactions are going to be interesting because I think they're going to fundamentally influence us as human beings.
Like for example, I'm severely colorblind.
So sometimes, before even knowing what to wear, because I've embarrassed myself publicly before where I've worn mismatching colors, just to be sure, I take a picture and use an AI
(24:22):
to be like, do these colors match?
And the next thing that would be interesting is, like, if you over-rely on that for every part of your life, it's an enabler, but it can become a crutch.
But if in the future you say, should I wear this?
Do the colors match?
Then the AI says, they do match, but you should buy from this brand.
Then what if you find out that company has a strategic partnership with that brand that you didn't know about? That's what I find really
(24:50):
interesting: the influence and persuasion that AI is going to have on us over the next few years.
Yeah, and that's a tough ethical challenge in a lot of ways.
The example you gave seems like a more familiar kind of issue in terms of corporate influence and disclosure and so forth, but the broad problem, it seems like, goes well beyond
that.
Yeah.
I think it does, because even when you're looking at the ethics of large-scale deployments of, for example, is it ethically acceptable to use generative AI as, like, a therapy bot?
(25:17):
Now for most people it could be, if you're experiencing milder issues, it might just be a good assistive tool.
But if you are someone that needs genuine medical help and you're relying on this and it's not helping you, it might be making you worse.
How are we going to balance that trade-off with tools that are so accessible
for the entire population.
(25:38):
How do we make sure everyone is getting the experience they need?
Because if you're just putting, like, a blanket tool out there that's the same for everyone, regardless of their own individual circumstances, there is a risk here that some
will benefit more than others.
And what about, you know, going back to this area of robot ethics and autonomous vehicles and devices, what do you see today in terms of the challenges in that area and
(26:07):
how far along we are in terms of coming up with good answers?
I'd say in terms of what's happening next, what I find really fascinating is actually going from interacting with AIs to actually merging with them.
So the ethical dilemmas over things like transhumanism, robotic prosthetics, brain-computer interfaces, which is creating a whole new area of ethics around neuroethics
(26:33):
and AI, right?
Things like, do we need the right to free will now?
Because I don't want a brain implant that could be hacked, for example.
Or the right to mental privacy, where I don't want companies to be able to give me brain adverts, that type of thing, unless I have, like, a premium sponsorship model.
And for anyone that's seen the latest season of Black Mirror, I think they'll know what I'm referring to.
(26:54):
But these types of things I think are really important.
And we can even learn a lot from science fiction, because it's great digital ethics training, because it's just, how do we not get here?
That's the question we're asking ourselves, right?
How do we just not get to this scenario?
I used Black Mirror in my ethics class for a number of years, the class I've been teaching since 2016.
(27:18):
And I would always have the students, you know, watch an episode and then relate it to what we talked about in the class.
And one of the reasons I stopped was the distance between what was actually happening in the real world and what you'd see in a Black Mirror episode was compressed so much that it
was not like crazy science fiction.
It was, well, sure, we have that right now.
It doesn't sound all that hard to think about.
(27:40):
But you're right, we need to look forward.
It's interesting, when you talk about some of the transhumanism things, that I think to most people still feels off in the future, but it sounds like now is the time that we
need to be considering those issues.
I mean, it's already happening.
Like, you already have human trials, you have companies selling robotic prosthetics.
(28:03):
I mean, this could be amazing if we do it well.
But there are ethical questions over it.
When do you stop augmenting yourself?
Or at what point do you stop being human?
Right?
It's like the ship of Theseus parable.
It's, at what point do you basically automate yourself, augment yourself, into a place where you lose that human essence of dignity?
(28:24):
Because we've not done this before as a species.
So we are experimenting as we go.
And if we get it right, again, this could be amazing, but the risks, even in terms of just inequality, like if certain populations have access to these technologies first, which
means better medical treatment, global inequality would just be exacerbated a lot more.
(28:45):
Yeah, broadly speaking, it sounds like you're relatively optimistic that organizations and governments and civil society can address the risks of these new developments, not just in
terms of robots and neural technology, but AI generally.
I think this is the first time I've ever been called optimistic.
(29:08):
The previous guests on this show have tended to be pretty dour.
It's more, we're looking at all the challenges.
We're looking at what's hard to do.
It's a mixture, but at least what I've heard you say seems more comfortable that there are responses and there are ways that smart people can get to them.
(29:28):
I think there are ways of doing it.
The one thing that, it's frustrating about human nature, but it gives me encouragement, is that I think as a species, we're very good at doing things when we
absolutely have to.
Like during an assignment, when the deadline is literally around the corner, then everyone's a genius and they're getting stuff done.
It's just, sometimes you just need a kick up the backside to say, just get it done, please.
(29:49):
And can we just do it a lot quicker?
And I think the world really just needs credible experts working together.
Being very open and sharing what's working and what isn't.
Even when it comes to things like sector-specific guidelines and compliance.
Like, even which bias and fairness protocols actually work effectively for AI in recruitment.
(30:10):
What's the difference when deploying this in the US, where there might be specific regulation around affirmative action.
Versus in the UK with the Equality Act.
It's only when you start being open and sharing these things that you develop the best practices and development guides.
And is there that level of openness that you're seeing?
(30:32):
I think you get some, right?
I think that you get some because, within energy, most of these companies are actually very open about what they're doing, because they have the safety culture where
this is like a non-negotiable for a license to operate, so it is not seen as much as a commercial advantage.
Although, like, if you can do governance well, of course you can deploy things quicker, it's seen as everyone has to be doing this as an industry.
(30:55):
And if you look at the rise of, like, AI safety institutes, and with the Trump administration those have been pulled back on certain things, but
these were good attempts to really send signals to say, how do we attract the experts from companies to be working with civil society to just develop guardrails, right?
That are actually effective and empirically proven.
And I think we just need more of that.
(31:16):
You see more really qualified people with experience working together, bouncing ideas off of each other and being allowed to explore and implement things, but finding out how can
you just make them scalable.
I don't know, you sound like an optimist to me, but that's a compliment.
The thing is just...
That's what the field does to you over time.
This is honestly shocking.
(31:37):
I need to tell my colleagues that.
To tell them I'm not as negative as they said.
Uthman, thank you.
It's been really great talking with you.
This has been the road to accountable AI.
If you like what you're hearing, please give us a good review and check out my Substackfor more insights on AI accountability.
Thank you for listening.
(32:02):
This is Kevin Werbach.
If you want to go deeper on AI governance, trust, and responsibility with me and other distinguished faculty of the world's top business school, sign up for the next cohort of
Wharton's Strategies for Accountable AI Online Executive Education Program.
Featuring live interaction with faculty, expert interviews, and custom-designed asynchronous content, join fellow business leaders to learn valuable skills you can put to
(32:26):
work in your organization.
Visit execed.wharton.upenn.edu slash A-C-A-I for full details.
I hope to see you there.