
November 17, 2021 30 mins

Creating trust and transparency in AI isn’t just a business requirement, it’s a social responsibility. In this episode of Smart Talks, Malcolm talks to Christina Montgomery, IBM's Chief Privacy Officer and AI Ethics Board Co-Chair, and Dr. Seth Dobrin, Global Chief AI Officer, about IBM’s approach to AI and how it’s helping businesses transform the way they work with AI systems that are fair and address bias so AI can benefit everyone, not just a few. This is a paid advertisement from IBM.

Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hello, hello. This is Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio and IBM about what it means to look at today's most challenging problems in a new way. I'm Malcolm Gladwell. Today I'll be chatting with two IBM experts in artificial intelligence about the company's approach to building and supporting trustworthy AI as a force

(00:30):
for positive change. I'll be speaking with IBM's Chief Privacy Officer, Christina Montgomery. She oversees the company's privacy vision and compliance strategy globally. Looking at things like immunity certificates and vaccine passports: not what could we do, but what were

(00:50):
we willing as a company to do? Where were we going to put our skills and our knowledge and our company brand in response to technologies that could help provide information in response to the pandemic. She also co-chairs IBM's AI Ethics Board. I'll also be talking with Dr. Seth Dobrin, Global Chief AI Officer at IBM. Seth leads

(01:14):
corporate AI strategy and is responsible for connecting AI development with the creation of business value. Seth is also a member of IBM's AI Ethics Board. We want to make sure that the technology behind AI is as fair as possible, is as explainable as possible, is as robust as possible,

(01:37):
and is as privacy-preserving as possible. We'll talk about the need to create AI systems that are fair and address bias, and how we need to focus on trust and transparency to accomplish this. What might the future look like with an open and diverse ecosystem, with governance across the industry? There's only one way to find out. Let's

(02:00):
dig in. One of the things I'm curious about is the origin of this interesting concern about the ethics and trust component of AI. Was it there from the beginning, or is this a later kind

(02:22):
of evolutionary concern? About ten years ago, when we started down this journey to transforming business using what we think about as AI today, the concept of trust came up, but not in the same context that we think about it today. The context of trust was really focused on, how do I know it's given me the right answer

(02:43):
so that I can make my decision. Because we didn't have tools that helped explain how an AI came to a decision, you tended to have to get into these bakeoffs where you had to kind of set up experiments to show that the AI was at least as good as a human, if not better, and understand why. Over time it's progressed, as AI has started to

(03:04):
come up against real human conditions. And I think that's when we started thinking about what is going on with AI when it relates to bias. Particularly, you know, about five to eight years ago there was an issue with mortgage lending, particularly related to zip code, that started introducing, you know, biases against people of certain races. And so I

(03:29):
think those things combined have led us to the point where we are today. Plus, you know, the social justice movement over the last two years has really accelerated a lot of the concern. Mm hmm. I noticed you're a lawyer by trade. It's an interesting subject because it seems like this is where AI experts

(03:51):
like Seth and lawyers work together. It sounds like a kind of classic cross-disciplinary endeavor. Can you talk about that a little bit? It's absolutely cross-disciplinary in nature. For example, our AI Ethics Board: I'm the co-chair. The other co-chair is our AI Ethics Global Leader, Francesca Rossi, who's a well-renowned researcher in AI ethics,

(04:15):
so she comes with that research background. So we had a board in place, an AI ethics board in place, before I stepped into this job, and there were a lot of great discussions among a lot of researchers and a lot of people that deeply understood the technology, but it didn't have decision-making authority. It didn't have all stakeholders,

(04:36):
or many stakeholders across the business, at the table. And so when I came into the job as a lawyer and as somebody with a corporate governance background, I was sort of tasked with building out the operational aspects of it: to make it capable of implementing centralized decision making, to give it authority, to bring in those perspectives from

(04:58):
across the business and from people with different focuses within the IBM corporation, lots of different backgrounds. And we have very robust conversations, and we also engage the individuals throughout IBM who, either from an advocacy perspective because they care very

(05:19):
much about the topic, or because they're working in the space individually and have thoughts around the topic, are doing projects in the space, want to publish in the space. We have a very organic way of having them be involved as well. It's absolutely necessary to have that cross-disciplinary aspect. You mentioned at the beginning of your answer, you talked about robust conversations, a

(05:42):
phrase I love. Can both of you give me an example of an issue that's come up with respect to trust and AI? So, one example might be the technologies that we would employ as a company in response to the COVID-19 pandemic. So there are a lot of things we could have done, and it became a

(06:05):
question not of what we're capable of deploying from a technology perspective, but whether we should be deploying certain technologies, whether it be facial recognition for fever detection or certain contact tracing technologies. Our Digital Health Pass is a good example of a technology that came through the board multiple times

(06:30):
in terms of like if we are going to deploy
a vaccine passport, which is not necessarily what this technology
turned out to be, but looking at things like immunity
certificates and vaccine passports, not what could we do, but
what were we willing as a company to do? Where
were we going to put our skills and our knowledge
and our company brand in response to technologies that could

(06:53):
help to either bring about a cure or help to
provide information in response to the pandemic. COVID is a
great example because it highlights the value and the acceleration
that good governance can bring. Because the way that we
as an ethics board laid out the rules, the guardrails,

(07:14):
if you will, around what we could, would, and wouldn't do for COVID helped people just do stuff without worrying that they needed to bring it to the board. It also laid out very clearly, for this type of use case, we need to go have a conversation with the board. It also provided a venue for us as a company to make decisions, and make risk-based decisions, where, okay,

(07:39):
this is in a little bit of a fuzzy area, but we think, given what's going on right now in the world and the importance of this, we're willing to take this risk so long as we go back and we clean everything up later. And so I think that's really important: that, number one, governance is set up so that it accelerates things, not stops them. And number two,

(07:59):
that there's clear guidance, you know. It's not no, it's here's what you can do and here's what you can't do, and help the teams figure out how they can still move things forward in a way that doesn't infringe on our principles. Yeah, I want to sort of get a concrete sense about how a concern

(08:22):
about trust and transparency and such would guide what a technology company might do. Now, a real example. So, if I want to make sure that people are wearing face masks and then just highlight that there is someone in this area that's not wearing a face mask, and you're not identifying the person, I think we'd be okay with that.

(08:46):
What we wouldn't be okay with is if they wanted to identify the person in a way that they did not consent to and that was very generic. So, I'm going to go through a database of unknown people and I'm going to match them to this person; that would not be okay. And a fuzzy area would be, you know, I'm going to match this

(09:06):
to a known person, so I know this is an employee and I know this is him. This is something that we as a board would want to have a conversation about. If this employee is not wearing a mask, can I match them to a name, or do I just send security personnel over here because the employee is not wearing a mask? That's a harder one, I think, and that's a real-world example that we faced during COVID. Yeah,

(09:29):
let's talk a little bit about diversity and shared responsibility as principles that matter in this world of AI. What do those terms mean as applied to AI, and what's the kind of practical effect of seeking to optimize those goals? You know, I think, first of all,

(09:49):
we need to have good representation of society doing the work that impacts society. So A, it's just the right thing to do. B, there's tons of research out there that shows that diverse teams outperform non-diverse teams. There's a McKinsey report that says, you know, companies in the top quartile for diversity outperform their peers that aren't by like,

(10:12):
so tons of good research. The second thing is you just don't get as good results when you don't have equal representation at the table. There's lots of good examples of this. So there was a hiring algorithm that was evaluating applicants and passing them forward, but all the applicants in the past for this company, you know, the vast majority

(10:32):
of them were male, and so females were just summarily wiped out, regardless to some extent of their fit for the role. I wanted to ask Christina: a project comes
before the board, and so a conversation might be the
team you put together and the data you're looking at

(10:52):
is insufficiently diverse, we're worried that you're not capturing the reality of the kind of world we're operating in. Is that an example of a conversation you might have at the board level? Well, I think the best way to look at it is what the board is doing to try to address those issues of bias. I mean, so,

(11:13):
for example, we've got a team of researchers that work on trusted technology, and one of the early things that they've done is to deploy toolkits that will help detect bias, that will help make AI more explainable, that will help make it trustworthy in general. But those tools were initially very focused on bias, and they deployed them to

(11:35):
open source so they could be built on and improved.
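The conversation doesn't name these toolkits, but a minimal sketch of the kind of check they automate, computing the disparate impact ratio (the "four-fifths rule" from US employment guidance) over a model's hiring decisions, might look like the following. The decision lists, function names, and the 0.8 threshold are illustrative assumptions, not IBM's actual tooling.

```python
# Minimal sketch, not IBM's toolkit: the arithmetic behind a common bias check
# on a model's hiring decisions, split by a protected attribute.
# The decision lists and the 0.8 threshold are illustrative assumptions.

def selection_rate(decisions):
    """Fraction of applicants the model advanced (1 = advance, 0 = reject)."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact(privileged, unprivileged):
    """Ratio of selection rates; values below roughly 0.8 are a common
    red flag (the 'four-fifths rule' used in US employment guidance)."""
    priv_rate = selection_rate(privileged)
    return selection_rate(unprivileged) / priv_rate if priv_rate else float("nan")

# Hypothetical model outputs for two groups of past applicants.
male_decisions = [1, 1, 0, 1, 1, 1, 0, 1]      # 75% advanced
female_decisions = [0, 1, 0, 0, 1, 0, 0, 0]    # 25% advanced

ratio = disparate_impact(male_decisions, female_decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below 0.8: flag for review
```

A real toolkit would compute many such metrics across protected attributes and suggest mitigations; this shows only the basic arithmetic behind one of them.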
Right. And right now, the board is focused more broadly, not looking at an individual problem in an individual use case with respect to bias, but instilling those ethical principles across the business through something we're calling Ethics by Design. Bias was the first focus area of this Ethics by Design,

(12:00):
and we've got a team of folks, being led by the Ethics Board, who are working on the question you asked, Malcolm, about how do we ensure that the AI we're deploying internally, or the tools and the products that we're deploying for customers, take that into account throughout the life cycle of AI. So through this Ethics by Design,

(12:20):
the guidance that's coming out from the Board starts at that conceptual phase and then applies across the life cycle: in the case of an internal use of AI, up through the actual use, and in the case of AI that we're deploying for customers or putting into a product, up through that point of deployment. So it's very

(12:41):
much about embedding those considerations into our existing processes across
the company to make sure that they're thought of, not
just once and not just in the use cases that
the Board has an opportunity to review, but in our
practices as a company and in our thinking as a company.
Much like, you know, we did this, and companies did this years ago, with respect to privacy and security,

(13:06):
that concept of privacy and security by design, which some may be familiar with, that stemmed from the GDPR in Europe. Now we're doing the same thing with ethics. How unusual is what you guys are doing? I mean, if I lined up all the tech companies
that are heavily into AI right now, would I find
similar programs in all of them? Or are you guys

(13:28):
off by yourselves? So I think we take a little bit of a unique perspective. In fact, we were recently recognized as a leader in the ethical deployment of technology and responsible technology use by the World Economic Forum. So the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University did an independent case study

(13:51):
of IBM that did recognize our leadership in this space, because of the holistic approach that we take. We're a little bit different, I think, than some other tech companies that do have similar councils in place, because of the broad and cross-disciplinary nature of ours. We're not just researchers, we're not just technologists. We literally have representation from backgrounds

(14:15):
spanning across the company, whether it be, you know, legal, or developers, or researchers, or, you know, just HR professionals and the like. So that makes the program itself a little bit unique. And then I think we hear from clients that are thinking for themselves about, how do I make sure that the technology I'm deploying or

(14:36):
using externally or with my clients is trustworthy? Right? So they're asking us, how did you go about this, how do you think about it as a company, what are your practices? So on that point, our CEO is the co-chair of something called the Global

(14:57):
AI Action Alliance initiated by the WEF, and as part
of that, we've committed to sort of open source our approach.
So we've been talking a lot about our approach. I
think it is a little bit unique, as I said,
but we are sharing it because again, we don't want
to be the only ones that have trustworthy AI and that have this holistic, cross-disciplinary approach, because we

(15:18):
think it's the right approach. It's certainly the right approach
for our company, and we want to share it with
the world. It's not secret or proprietary, but if you
talk to the analyst community that serves the

(15:40):
tech sector, they say far and wide, IBM is ahead in terms of things that we're actually doing, as opposed to just talking about it, all while making sure that it is enforceable and impactful. So for instance, you know, we were talking about how we review use cases and we can require that the teams adjust them.

(16:04):
That's unique, right? Most of the other tech companies do not have that level of oversight in terms of ensuring that their outcomes are aligned. There's a lot of good talk, but I think, you know, the WEF case study that came out in, I think it was, September really supports that we're ahead. And then if you look at companies just in general that have AI

(16:26):
ethics boards: my experience is that, with all the companies, and I interact with hundreds of leaders and companies a year, less than five percent of them have a board in place, and even fewer of those really have a rhythm going and know how they're going to operate as a board yet. Mm hmm. I

(16:49):
wanted to talk a little bit about the role of government here. Is government leading or following here? I would say they're catching up. I think following is probably the more accurate description, right? Because, look, I think over the last couple of years, as we talked about,

(17:12):
or maybe it's been almost ten years at this point in time, as these issues have come to light, companies have largely been left to themselves to impose guardrails upon their practices and their use of AI. That's not to say that there aren't laws that regulate; for example, discrimination laws would apply to technology that's discriminatory. But

(17:36):
for the unique aspects, to the extent there are unique aspects, or issues that get amplified through the application of AI systems, the government is really just catching up. So we've got the EU's proposed comprehensive regulatory framework for AI in the spring time frame. We see in the

(17:56):
US the FTC is starting to focus on algorithmic bias and, just in general, on algorithms and that they be fair and the like. So there are numerous other initiatives following the EU that are looking at frameworks for governing AI and regulating AI, and we've been involved, as I

(18:17):
mentioned earlier on our Precision Regulation recommendation. So we have
something called the IBM Policy Lab, and what differentiates our
advocacy through the Policy Lab is that we try to
make concrete, actionable policy recommendations, so not just again articulating principles,
but really concrete recommendations for companies and for governments and

(18:41):
policymakers around the globe to implement and to follow. Things like, you know, out of our precision regulation of AI, that's where our recommendation is that regulation should be risk-based, it should be context-specific, it should look at and allocate responsibility to the party that's closest to the risk,

(19:01):
and that may be different at different times in the life cycle of an AI system. So we deploy some general-purpose technologies and then our clients train those over time, so, you know, the risk should sit with the party that's closest to it at the different points in time in the AI life cycle. You know, one of the interesting things about this issue today: we're

(19:26):
now in a situation where someone like IBM, I'm guessing, would be as sensitive to public reaction to the uses of AI as they would be to government reaction to the uses of AI. And I wanted to just weigh those, you know. This is a kind of fascinating development in our age, that

(19:47):
all of a sudden it almost seems like whatever form public reaction takes can be a more powerful lever in changing corporate behavior than what governments are saying. Do you think this is true in this AI space? I think the government regulation that we're seeing is responding to public sentiment. So I agree with

(20:11):
you a hundred percent that this is being moved by the public. And, you know, oftentimes when we have conversations at the ethics board, okay, Christina and the lawyers say, okay, this is not a legal issue, then the next conversation is, what happens if this story shows up on the front page of the New York Times or the Wall Street Journal? So absolutely we consider that.

(20:34):
So I would add to that. Like, we've been, well, probably I think the oldest technology company; we're over a hundred years old, and our clients have looked to us for that hundred-plus years to responsibly usher in new technologies, right, and to manage their data, their most sensitive data, in a trusted way. So for us,

(20:55):
it's not just about the headline risk. It's about ensuring that we have a business going forward, because our clients trust us and society trusts us. So the guardrails we put in place, particularly around the trust and transparency principles, or the guardrails we put in place around responsible data use in the COVID pandemic: there

(21:17):
was nothing that, from a legal perspective, said we couldn't do more. There was nothing that said in the US we can't use facial recognition technology in our sites. But we made principled decisions, and we made those decisions because we think they're the right decisions to make. And when I look back at the Ethics Board and the analysis

(21:40):
and the use cases that have come forward over the course of the last two years, I can think of very few where we said we're not going to do this because we're afraid of regulatory repercussions. In fact, I can't think of any, because it wouldn't have come to the board if it was illegal. But yet we did refine, and in some cases stop,

(22:07):
actual transactions, right, and solutions, because we felt they were not the right thing to do. Yeah. A question for either of you: can you dig a little more into this, into the real-world applications of this? What are some of the very kind of concrete kinds of things that come out of this focus on trust?

(22:32):
So, you know, some real-world examples of how trust plays into what we're doing gets back to a couple of things Christina said earlier around how we're open sourcing a lot of what we do. So our research division builds a lot of the technology that winds up in our products. And then, particularly

(22:54):
related to this topic of AI ethics and trustworthy AI, the default is to open source the base of the technology. So we have a whole bunch of open-source toolkits that anyone can use. In fact, some of our competitors use them as much as we do in their products. And then we build value-adds

(23:14):
on top of those. And so that is something that we advocate strongly for, and the Ethics Board helps support us with that, as do, you know, our product teams. Because the value is, you know, AI is one of those spaces where when something goes wrong, it affects everyone, right? So if there's a big issue with AI, everyone's

(23:38):
going to be concerned about all AI, and so we
want to make sure that the technology behind AI is
as fair as possible, is as explainable as possible, is
as robust as possible, and is as privacy-preserving as possible. So toolkits that address those are all publicly available, and then we build value-added capabilities on top of

(23:59):
that when we bring those things to our customers in the form of an integrated platform that helps manage the whole life cycle of an AI. Because AI is different than software, in that the technology under AI is machine learning. What that means is that the machine keeps learning over time and adjusting the model over time.

(24:20):
Once you write a piece of software, it's done, it
doesn't change. And so you need to figure out how
do you continuously monitor your AI over time for
those things I just described and integrate them into your
security and privacy by design practices so that they're continuously
updating and aligned to your company's principles as well as

(24:42):
societal principles as well as any relevant regulations.
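As a rough illustration of the continuous-monitoring idea Seth describes, one could recompute a fairness metric on every new batch of live predictions and flag drift. The batch format, the threshold, and the alert wording below are assumptions for the sketch, not the integrated platform he refers to.

```python
# Minimal monitoring sketch, not IBM's platform: recompute a fairness metric
# on each new batch of live predictions and flag drift below a policy floor.
# Batch format, the 0.8 floor, and the alert wording are assumptions.

from statistics import mean

FAIRNESS_FLOOR = 0.8  # assumed policy threshold for the disparate impact ratio

def disparate_impact(batch):
    """batch: list of (group, decision) pairs, decision 1 = favorable outcome."""
    priv = [d for g, d in batch if g == "privileged"]
    unpriv = [d for g, d in batch if g == "unprivileged"]
    if not priv or not unpriv or mean(priv) == 0:
        return None  # not enough signal in this batch to evaluate
    return mean(unpriv) / mean(priv)

def monitor(batches):
    """Yield an alert for every batch whose ratio drifts below the floor."""
    for i, batch in enumerate(batches):
        ratio = disparate_impact(batch)
        if ratio is not None and ratio < FAIRNESS_FLOOR:
            yield f"batch {i}: disparate impact {ratio:.2f} below {FAIRNESS_FLOOR}, review the model"

# Two hypothetical batches: the first looks fine, the second has drifted.
batches = [
    [("privileged", 1), ("privileged", 0), ("unprivileged", 1), ("unprivileged", 1)],
    [("privileged", 1), ("privileged", 1), ("unprivileged", 0), ("unprivileged", 0)],
]
for alert in monitor(batches):
    print(alert)
```

The point of the sketch is the loop, not the metric: because the model keeps changing as it learns, the check has to run continuously rather than once at deployment.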
Yeah. One last question: give me one suggestion, one prediction, about what AI looks like five years or ten years from now. Yeah. So that is a really, really good question. And you know,

(25:05):
when we look at what AI does today: AI, while it's very insightful, helps us realize things that as humans we may not have picked up on our own. And so, to augment our intelligence, AI surfaces insights and maybe reduces the complexity from almost infinite and incomprehensible to humans, to: I have five choices now

(25:27):
that I can make based on the output of an AI. But AI is unable, for the most part today, to provide context or reasoning. Right? So AI provides an answer, but there's no reasoning, as we think about it as humans, associated with it. There's a new technology that's coming up;

(25:48):
well, there's a bunch of them that are lumped under something called neurosymbolic reasoning. And what neurosymbolic reasoning means is using mathematical equations, so AI algorithms, to reason similarly to how a human does. So, for instance, you know, the

(26:09):
Internet contains all sorts of things, good and bad. And let's look at something that's relevant to me at least, being of Jewish background. Right, you want algorithms to know about the Nazi regime, but you don't want algorithms spewing rhetoric about the Nazi regime. Today,

(26:30):
when we build an AI, it's almost impossible for us to get the algorithm to differentiate those two things. With a tool like reasoning around it, you could exclude, prevent an algorithm from saying, from learning, rhetoric that is, you know, not conducive to norms. It's just, you know, an example.
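As a very loose toy illustration of that idea, pairing a learned component with explicit, human-readable rules so disallowed outputs are rejected rather than emitted, a sketch might look like this. It is not actual neurosymbolic reasoning, and the scoring function, rule list, and candidate texts are all invented for illustration.

```python
# Toy illustration only, not real neurosymbolic reasoning: a learned scorer
# proposes answers, and an explicit symbolic rule layer rejects outputs that
# violate stated constraints. Scorer, rules, and candidates are all invented.

def learned_relevance_score(text: str, topic: str) -> float:
    """Stand-in for a trained model; here just naive keyword overlap."""
    text_words = set(text.lower().split())
    topic_words = set(topic.lower().split())
    return len(text_words & topic_words) / max(len(topic_words), 1)

# Symbolic constraints written and audited by people, not learned from data.
BANNED_FRAMINGS = ("glorify", "glorifies", "praise", "praises")

def violates_rules(text: str) -> bool:
    """Explicit rule check the learned component cannot 'learn around'."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED_FRAMINGS)

def answer(candidates, topic):
    """Return the best-scoring candidate that also passes the rule layer."""
    allowed = [c for c in candidates if not violates_rules(c)]
    return max(allowed, key=lambda c: learned_relevance_score(c, topic), default=None)

candidates = [
    "A factual, historical summary of the Nazi regime and its crimes.",
    "Text that glorifies the regime.",  # rejected by the rule layer
]
print(answer(candidates, "Nazi regime history"))
```

The learned part knows about the topic; the symbolic part encodes what must never be said, which is the separation Seth is pointing at.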

(26:52):
So those are the kinds of things you'll see over the next three to five years. I think we'll see a lot more explainability and transparency around AI. So, for example, whether it may be you're seeing this ad because you, you know, went on and searched for X, Y and Z,

(27:14):
you're seeing a shoe ad because you visited this site, you know, to the extent it's that. Or there'll be more transparency that you're dealing with a chatbot, you know, just when AI is being applied to you. I think you'll see a lot more transparency and disclosure around that. And then the sort of less practical, more aspirational

(27:36):
answer, I think, is, you know, we know AI is changing jobs, it's eliminating some, it's creating new jobs, and I think, hopefully, with principles around AI, that it be used to augment, to help humans, that it be human-centered, that it put people first at the heart

(27:57):
of the technology, that it will make people better and smarter at what they do, and there'll be more interesting work. Right? So I'm hoping that that will ultimately be something that will come out of AI, as there's more awareness around where it's being used in your life already day to day, more transparency around that, more explainability

(28:20):
around that, and then ultimately more trust. Well, wonderful. I think that covers our bases. This has been really, really fascinating. Thank you for joining me for this, and I expect that we will be having, both as a company inside IBM and as a society, many, many, many more conversations about AI in the coming years. So

(28:43):
I'm glad to be on the early end of that process, because we're not done with this one, are we? Not by a long shot. The beginning, I guess, just the beginning. Thank you again. Yeah, thanks for having us. Thank you. Thank you again to Christina Montgomery and Seth Dobrin for the discussion

(29:05):
about trust and transparency around AI and for their insights about what may be possible in the future. It will be fascinating to see how IBM can help foster positive change in the industry. Smart Talks with IBM is produced by Emily Rostak, with Carly Migliori and Katherine Gurda, edited

(29:27):
by Karen Shakerdge, mixed and mastered by Jason Gambrell. Music by Gramoscope. Special thanks to Molly Sosha, Andy Kelly, Mia Lobel, Jacob Weisberg, Heather Fain, Eric Sandler and Maggie Taylor, and the teams at Eight Bar and IBM. Smart Talks with IBM is a production of Pushkin Industries and iHeartRadio. This is a paid advertisement from IBM.

(29:52):
You can find more episodes at ibm.com/smarttalks. You'll find more Pushkin podcasts on the iHeartRadio app, Apple Podcasts, or wherever you like to listen. I'm Malcolm Gladwell. See you next time.
