
December 10, 2024 · 27 mins

To deploy responsible AI and build trust with customers, businesses need to prioritize AI governance. In this episode of Smart Talks with IBM, Malcolm Gladwell and Laurie Santos discuss AI accountability with Christina Montgomery, Chief Privacy and Trust Officer at IBM. They chat about AI regulation, what compliance means in the AI age, and why transparent AI governance is good for business.

Visit us at: ibm.com/smarttalks

Explore watsonx.governance: https://www.ibm.com/products/watsonx-governance

This is a paid advertisement from IBM.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Hey, Malcolm Gladwell here. I'm back in your feed today
because we are re-releasing an episode of Smart Talks
with IBM on a very timely topic, AI governance and
why regulation is critical to building responsible and accountable AI.
I hope you enjoy it. Hello, hello. Welcome to Smart

(00:24):
Talks with IBM, a podcast from Pushkin Industries, iHeartRadio and IBM.
I'm Malcolm Gladwell. This season, we're continuing our conversation with
new creators, visionaries who are creatively applying technology in business
to drive change, but with a focus on the transformative
power of artificial intelligence and what it means to leverage

(00:47):
AI as a game changing multiplier for your business. Our
guest today is Christina Montgomery, IBM's Chief Privacy and Trust Officer.
She's also chair of IBM's AI Ethics Board.
In addition to overseeing IBM's privacy policy, a core part
of Christina's job involves AI governance, making sure the way

(01:10):
AI is used complies with international legal regulations customized
for each industry. In today's episode, Christina will explain why
businesses need foundational principles when it comes to using technology,
why AI regulation should focus on specific use cases over

(01:31):
the technology itself, and share a little bit about her
landmark congressional testimony last May. Christina spoke with Dr. Laurie Santos,
host of the Pushkin podcast The Happiness Lab, a cognitive
scientist and psychology professor at Yale University. Laurie is an
expert on human happiness and cognition. Okay, let's get to

(01:55):
the interview.

Speaker 2 (01:58):
So, Christina, I'm so excited to talk to you today.
So let's start by talking a little bit about your
role at IBM. What does a Chief Privacy and Trust
Officer actually do?

Speaker 3 (02:07):
It's a really dynamic profession and it's not a new profession,
but the role has really changed. I mean, my role
today is broader than just helping to ensure compliance with
data protection laws globally. I'm also responsible for AI governance.
I co-chair our AI Ethics Board here at IBM,
and for data clearance and data governance as well for

(02:29):
the company. So I have both a compliance aspect to
my role, really important on a global basis, but I also
help the business to competitively differentiate, because really trust is
a strategic advantage for IBM and a competitive differentiator as
a company that's been responsibly managing the most sensitive data
for our clients for more than a century now and

(02:52):
helping to usher new technologies into the world with trust
and transparency, and so that's also a key aspect of
my role.

Speaker 2 (02:59):
And so you joined us here on Smart Talks back in
twenty twenty one, and you chatted with us about IBM's
approach of building trust and transparency with AI, and that
was only two years ago. But it almost feels like
an eternity has happened in the field of AI since then,
and so I'm curious how much has changed since you
were here last time. Were the things you told us before,
you know, are they still true? How are things changing?

Speaker 3 (03:21):
You're absolutely right, it feels like the world has changed
really in the last two years. But the same fundamental
principles and the same overall governance apply to IBM's program
for data protection and responsible AI that we talked about
two years ago, and not much has changed there from
our perspective. And the good thing is we've put these

(03:44):
practices and this governance approach into place, and we have
an established way of looking at these emerging technologies. As
the technology evolves, the tech is more powerful for sure,
foundation models are vastly larger and more capable, and are creating
in some respects new issues, but that just makes it
all the more urgent to do what we've been doing

(04:06):
and to put trust and transparency into place across the business,
to be accountable to those principles.

Speaker 2 (04:13):
And so our conversation today is really centered around this
need for new AI regulation and part of that regulation
involves the mitigation of bias. And this is something I
think about a ton as a psychologist, right, you know,
I know, like my students and everyone who's interacting with
AI is assuming that the kind of knowledge that they're
getting from this kind of learning is accurate, right, But
of course AI is only as good as the knowledge

(04:35):
that's going in. And so talk to me a little
bit about like why bias occurs in AI and the
level of the problem that we're really dealing with.

Speaker 3 (04:44):
Yeah, well, obviously AI is based on data, right? It's
trained with data, and that data could be biased in
and of itself, and that's where issues come up.
They can come up in the data, and they can also come
up in the output of the models themselves. So it's
really important that you build bias consideration and bias testing

(05:05):
into your product development cycle. And so, what we've been
thinking about and doing here at IBM: our research teams
delivered some of the very first toolkits to help detect
bias years ago now, right, and deployed them to open
source, and we have put into place for our developers
here at IBM an Ethics by Design playbook, sort of a step-by-step approach

(05:28):
which also addresses bias considerations very fully. We provide
not only guidance like, here's the point when you should test
for it, and here's how you consider it in the data; you
have to measure it both at the data level and the
model or outcome level, and we provide guidance
with respect to what tools can best be used to

(05:49):
accomplish that. So it's a really important issue. It's one
you can't just talk about. You have to provide essentially
the technology and the capabilities and the guidance to enable
people to test for it.
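
As a concrete illustration of the data-level testing Christina describes, here is a minimal sketch using AI Fairness 360 (AIF360), the open-source bias-detection toolkit IBM Research released. The tiny loan-approval dataset and the choice of `gender` as the protected attribute are hypothetical, purely for illustration.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical loan-approval data: 1 = favorable outcome (approved).
df = pd.DataFrame({
    "gender":   [0, 0, 0, 0, 1, 1, 1, 1],  # 0 = unprivileged, 1 = privileged
    "approved": [0, 0, 1, 0, 1, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact well below ~0.8 is a common red flag for bias in the data.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```

The same metrics can be recomputed on a model's predictions rather than the raw labels, which is the model- or outcome-level measurement she mentions.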

Speaker 2 (05:59):
Recently, you had this wonderful opportunity to head to
Congress to talk about AI, and in your testimony before Congress,
you mentioned that it's often said that innovation moves too
fast for government to keep up, and this is something
that I also worry about as a psychologist, right? Are our
policymakers really understanding the issues that they're dealing with?
And so I'm curious how you're approaching this challenge of

(06:19):
adapting AI policies to keep up with the sort of
rapid pace of all the advancements we're seeing in AI
technology itself.

Speaker 3 (06:27):
I think it's really critically important that you have foundational
principles that apply not only to how you use technology,
but whether you're going to use it in the first place,
and where you're going to use and apply it across
your company. And then your program from a governance perspective,
has to be agile. It has to be able to
address emerging capabilities, new training methods, etc. And part of

(06:51):
that involves helping to educate and instill and empower a
trustworthy culture at a company, so you can spot those
issues and ask the right questions at the
right time. We talked about this during the
Senate hearing, and IBM has been talking about it for years: regulate
the use, not the technology itself, because if you try

(07:13):
to regulate technology, you're very quickly going to find out
regulation will absolutely never keep up with that.

Speaker 2 (07:21):
And so in your testimony to Congress, you also talked
about this idea of a precision regulation approach for AI.
Tell me more about this. What is a precision regulation
approach, and why could that be so important?

Speaker 3 (07:31):
It's funny because I was able to share with Congress
our precision regulation point of view in twenty twenty three,
but that precision regulation point of view was published by
IBM in twenty twenty. So we have not changed our
position that you should apply the tightest controls, the strictest

(07:52):
regulatory requirements to the technology where the end use and
risk of societal harm is the greatest. So that's essentially
what it is. There's lots of AI technology that's used
today that doesn't touch people, that's very low risk in nature.
And even when you think about AI that delivers a
movie recommendation versus AI that is used to diagnose cancer, right,

(08:16):
there are very different implications associated with those two uses of
the technology. And so essentially what precision regulation is:
apply different rules to different risks, right? More stringent regulation
to the use cases with the greatest risk. And then
also we build that out calling for things like transparency

(08:37):
You see it today with content, right? Misinformation and
the like. We believe that consumers should always know when they're
interacting with an AI system, so be transparent, don't hide your
AI. Clearly define the risks. So as a country, we
need to have some clear guidance, right, and globally as
well, in terms of which uses of AI are higher

(08:59):
risk, we'll apply higher and stricter regulation, and have sort
of a common understanding of what those high risk uses
are and then demonstrate the impact in the cases of
those higher risk uses. So companies who are using AI
in spaces where they can impact people's legal rights, for example,

(09:20):
should have to conduct an impact assessment that demonstrates that
the technology isn't biased. So we've been pretty clear about
applying the most stringent regulation to the highest risk uses
of AI.
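
To make the risk-tiering idea concrete, here is a purely illustrative sketch. The tiers, example use cases, and obligations are hypothetical; they are not drawn from IBM's policy paper or any actual regulation.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"   # e.g., a movie recommender
    HIGH = "high"         # e.g., cancer diagnosis, credit decisions

# Hypothetical obligations per tier, echoing ideas from the conversation:
# transparency for everyone, impact assessments only at the high-risk end.
OBLIGATIONS = {
    RiskTier.MINIMAL: [
        "disclose to consumers that they are interacting with an AI system",
    ],
    RiskTier.HIGH: [
        "disclose to consumers that they are interacting with an AI system",
        "conduct and document a bias impact assessment",
        "explain individual decisions on request",
    ],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the regulatory obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

# A movie recommender and a diagnostic model face very different rules.
print(obligations_for(RiskTier.MINIMAL))
print(obligations_for(RiskTier.HIGH))
```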

Speaker 2 (09:34):
And so, so far we've been talking about your congressional
testimony in terms of, you know, the specific content that
you talked about. But I'm just curious, on a personal level,
you know, what was that like? Right now,
it feels like, at a policy level, there's a
kind of fever pitch going on with AI right now.
You know what did that feel like to kind of
really have the opportunity to talk to policy makers and
sort of influence what they're thinking about AI technologies like

(09:56):
in the coming century.

Speaker 3 (09:57):
It was really an honor to be able to do
that and to be one of the first set of
invitees to the first hearing. And what I learned from
it essentially is, you know, really two things. The first
is really the value of authenticity. So both as an
individual and as a company, I was able to talk

(10:19):
about what I do. You know, I didn't need a
lot of advance prep, right? I talked about what my
job is, what IBM has been putting in place for
years now. So this wasn't about creating something. This was
just about showing up and being authentic. And we were
invited for a reason. We were invited because we were
one of the earliest companies in the AI technology space.

(10:43):
We're the oldest technology company and we are trusted, and
that's an honor. And then the second thing I came
away with was really how important this issue is to society.
I don't think I appreciated it as much until following
that experience. I had outreach from colleagues I hadn't worked
with for years. I had outreach from family members

(11:05):
who heard me on the radio. You know, my mother
and my mother-in-law, and my nieces and nephews,
and friends of my kids, were all like, Oh,
I get it. I get what you do now. Wow,
that's pretty cool, you know. So that was really probably
the best and most impactful takeaway that I had.

Speaker 1 (11:22):
The mass adoption of generative AI, happening at breakneck speed,
has spurred societies and governments around the world to get
serious about regulating AI. For businesses, compliance is complex enough already,
but throw an ever-evolving technology like AI into the mix,
and compliance itself becomes an exercise in adaptability. As regulators

(11:46):
seek greater accountability in how AI is used, businesses need
help creating governance processes comprehensive enough to comply with the law,
but agile enough to keep up with the rapid rate
of change in AI development. Regulatory scrutiny isn't the only
consideration, either. Responsible AI governance, a business's ability to prove

(12:10):
its AI models are transparent and explainable, is also key
to building trust with customers, regardless of industry. In the
next part of their conversation, Laurie asked Christina what businesses
should consider when approaching AI governance. Let's listen.

Speaker 2 (12:29):
So what's a particular role that businesses are playing in
AI governance? Like why is it so critical for businesses
to be part of this?

Speaker 3 (12:36):
So I think it's really critically important that businesses understand
the impacts that technology can have, both in making them
better businesses and the impacts that those technologies can have
on the consumers that they are supporting. You know, businesses
need to be deploying AI technology that is in alignment

(12:59):
with the goals that they set for it, and that
can be trusted. I think for us and for our clients,
a lot of this comes back to trust in tech.
If you deploy something that doesn't work, that hallucinates, that discriminates,
that isn't transparent, where decisions can't be explained, then you
are going to very rapidly erode the trust of your

(13:21):
clients at best, and at worst, you're
going to create legal and regulatory issues for yourself as well.
So trusted technology is really important, and I think there's
a lot of pressure on businesses today to move very
rapidly and adopt technology. But if you do it without
having a program of governance in place, you're really risking
eroding that trust.

Speaker 2 (13:42):
And so this is really where I think strong
AI governance comes in. Talk about, from your perspective, how
this really contributes to maintaining the trust that customers and
stakeholders have in these technologies.

Speaker 3 (13:53):
Yeah. Absolutely, I mean you need to have a governance
program, because you need to understand that the technology, particularly
in the AI space, that you are deploying is explainable. You
need to understand why it's making decisions and recommendations that
it's making, and you need to be able to explain
that to your consumers. I mean, you can't do that
if you don't know where your data is coming from,

(14:15):
what data you are using to train those models, or if
you don't have a program that manages the alignment of
your AI models over time, to make sure, as AI
learns and evolves over use, which is in large part
what makes it so beneficial, that it stays in alignment
with the objectives that you set for the technology over time.

(14:38):
So you can't do that without a robust governance process
in place. So we work with clients to share our
own story here at IBM in terms of how we
put that in place, but also in our consulting practice
to help clients work with these new generative capabilities and
foundation models and the like, in order to put them

(14:59):
to work for their business in a way that's going
to be impactful to that business, but at the same
time be trusted.
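
One lightweight way to picture the bookkeeping Christina describes, knowing where training data came from and what objectives a model is held to, is a simple provenance record. The sketch below is a generic illustration under assumed names; it is not IBM's internal tooling or any watsonx API, and the model, sources, and owner are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelGovernanceRecord:
    """Minimal provenance and intent record for a deployed model."""
    model_name: str
    training_data_sources: list[str]   # where the training data came from
    intended_use: str                  # the objective the model is aligned to
    owner: str                         # the accountable individual or office
    last_reviewed: date
    known_limitations: list[str] = field(default_factory=list)

record = ModelGovernanceRecord(
    model_name="loan-approval-v3",     # hypothetical model
    training_data_sources=["crm_exports_2023", "bureau_feed_q4"],
    intended_use="rank loan applications for human review",
    owner="chief-privacy-office",
    last_reviewed=date(2024, 11, 1),
    known_limitations=["underrepresents applicants under 25"],
)

# Answering an explainability request starts with knowing these facts.
print(record)
```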

Speaker 2 (15:05):
So now I wanted to turn a little bit towards
watsonx.governance. IBM recently announced their AI platform, watsonx,
which will include a governance component. Could you
tell us a little more about watsonx.governance?

Speaker 3 (15:18):
Yeah, I mean before I do that, I'll just back
up and talk about the full platform and then lean
into watsonx, because I think it's important to understand
the delivery of a full suite of capabilities, to get data,
to train models, and then to govern them over their
life cycle. All of these things are really important. From

(15:42):
the onset, you need to make sure that you have them.
Our watsonx.ai, for example, is the studio
to train new foundation models and generative AI and machine
learning capabilities, and we are populating that with some IBM-trained
foundation models, which we're curating and tailoring more specifically

(16:07):
for enterprises. So that's really important. It comes back to
the point I made earlier about business trust and the
need to have enterprise-ready technologies in the AI space.
And then watsonx.data is a fit-for-purpose
data store, or a data lake, and then watsonx.governance.
So that's a particular component of the platform

(16:32):
that my team and the AI Ethics Board has really
worked closely with the product team on developing, and we're
using it internally here in the Chief Privacy Office as
well to help us govern our own uses of AI
technology and our compliance program here. And it essentially helps

(16:53):
to notify you if a model becomes biased or gets
out of alignment as you're using it over time. So
companies are going to need these capabilities. I mean they
need them today to deliver technologies with trust. They'll need
them tomorrow to comply with regulation which is on the horizon.
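
As a rough picture of what "notify you if a model becomes biased over time" can mean in practice, here is a generic monitoring sketch. It is not the watsonx.governance API; the fairness metric choice, threshold, window size, and simulated data are all assumptions for illustration.

```python
import numpy as np

def statistical_parity_difference(preds: np.ndarray, protected: np.ndarray) -> float:
    """P(favorable | unprivileged) - P(favorable | privileged) for binary predictions."""
    return preds[protected == 0].mean() - preds[protected == 1].mean()

def check_batch(preds: np.ndarray, protected: np.ndarray, threshold: float = 0.10) -> None:
    """Alert when the fairness metric drifts past the tolerance set at deployment."""
    spd = statistical_parity_difference(preds, protected)
    status = "ALERT" if abs(spd) > threshold else "ok"
    print(f"{status}: statistical parity difference = {spd:+.3f}")

# Simulate four weekly batches of production predictions (hypothetical data).
rng = np.random.default_rng(42)
for week in range(4):
    protected = rng.integers(0, 2, size=500)
    # Inject a growing skew so later weeks trip the alert.
    favorable_rate = 0.5 + 0.05 * week * (protected == 1)
    preds = (rng.random(500) < favorable_rate).astype(int)
    check_batch(preds, protected)
```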

Speaker 2 (17:11):
And I think compliance becomes even more complex when you
consider international data protection laws and regulations. Honestly, I don't
know how anyone on any company's legal team is keeping
up with this these days. But my question for you
is really how can businesses develop a strategy to maintain
compliance and to deal with it in this ever changing landscape.

Speaker 3 (17:30):
It's increasingly challenging. In fact, I saw a statistic just
this morning that the regulatory obligations on companies have increased
something like seven hundred times in the last twenty years,
so it really is a huge focus area for companies.
You have to have a process in place in order
to do that, and it's not easy, particularly for a

(17:52):
company like IBM that has a presence in over
one hundred and seventy countries around the world. There are
more than one hundred and fifty comprehensive privacy regulations, there
are regulations of non-personal data, there are AI regulations emerging,
so you really need an operational approach to it in

(18:14):
order to stay compliant. But one of the things we
do is we set a baseline, and a lot of
companies do this as well. So we define a privacy baseline,
we define an AI baseline, and we ensure then, as
a result of that, that there are very few deviations,
because the requirements are incorporated in that baseline. So that's one of
the ways we do it. Other companies, I think, are

(18:34):
similarly situated in terms of doing that. But again, it
is a real challenge for global companies. It's one of
the reasons why we advocate for as much alignment as
possible in the international realm as well as nationally here
in the US, as much alignment as possible to make

(18:56):
compliance easier, and not just because companies want
an easy way to comply. But the harder it is,
the less likely there will be compliance. And it's not
the objective of anybody, governments, companies, consumers, to
set legal obligations that companies simply can't meet.
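
A toy sketch of the "baseline" idea: collapse many jurisdictions' rules into one internal standard strict enough to satisfy all of them at once, so individual deployments rarely need exceptions. The jurisdictions and requirements below are entirely hypothetical; real ones would come from counsel.

```python
# Hypothetical per-jurisdiction requirements (illustrative only).
REQUIREMENTS = {
    "jurisdiction_a": {"max_retention_days": 30, "consent_required": True},
    "jurisdiction_b": {"max_retention_days": 90, "consent_required": False},
    "jurisdiction_c": {"max_retention_days": 60, "consent_required": True},
}

def strictest_baseline(reqs: dict) -> dict:
    """Build one internal baseline that satisfies every jurisdiction at once."""
    return {
        "max_retention_days": min(r["max_retention_days"] for r in reqs.values()),
        "consent_required": any(r["consent_required"] for r in reqs.values()),
    }

baseline = strictest_baseline(REQUIREMENTS)
print(baseline)  # {'max_retention_days': 30, 'consent_required': True}
```

Meeting the baseline once means a deployment deviates only where a locality is stricter still, which is the "very few deviations" Christina describes.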

Speaker 2 (19:18):
And so what advice would you give to other companies
who are looking to rethink or strengthen their approach to
AI governance?

Speaker 3 (19:24):
You need to start with, as we did, foundational principles,
and you need to start making decisions about what technology
you're going to deploy and what technology you're not, What
are you going to use it for, and what aren't
you going to use it for? And then when you
do use it, align to those principles. That's really important.
Formalize a program, have someone within the organization, whether it's

(19:45):
the Chief Privacy Officer, whether it's some other role, a
Chief AI Ethics Officer, but have an accountable individual and an
accountable organization. Do a maturity assessment, figure out where you
are and where you need to be, and really start,
you know, putting it into place today. Don't wait for

(20:05):
regulation to apply directly to your business because it'll be
too late.

Speaker 2 (20:10):
So Smart Talks features new creators, these visionaries like yourself
who are creatively applying technology in business to drive change.
I'm curious if you see yourself as creative.

Speaker 3 (20:20):
You know, I definitely do. I mean, you need to
be creative when you're working in an industry that evolves
so very quickly. So you know, I started with IBM
when we were primarily a hardware company, right, and we've
changed our business so significantly over the years. And the
issues that are raised with respect to each new technology,

(20:44):
whether it be cloud, whether it be AI now, where
we're seeing a ton of issues, or you look at
emergent issues in the space of things like neurotechnologies and
quantum computing. You have to be strategic and you have
to be creative in thinking about how you can adapt

(21:04):
a company, agilely and quickly, to an environment that is changing so fast.

Speaker 2 (21:11):
With this transformation happening at such a rapid pace. Do
you think creativity plays a role in how you think
about and implement, specifically, a trustworthy AI strategy?

Speaker 3 (21:22):
Yeah, I absolutely think it does, because again, it comes
back to these capabilities, and there are ways. I guess
how you define creativity could be different, right? But I'm
thinking of creativity in the sense of sort of agility
and strategic vision and creative problem-solving. I think that's
really important in the world that we're in right now,

(21:44):
being able to creatively problem-solve with new issues that
are arising sort of every day.

Speaker 2 (21:53):
And so, how do you see the role of chief
privacy officer evolving in the future as AI technology continues
to advance? Like, what steps should CPOs take to stay
ahead of all these changes that are coming their way?

Speaker 3 (22:04):
So the role is evolving in most companies, I would
say pretty rapidly. Many companies are looking to chief privacy
officers who already understand the data that's being used
in the organization and have programs to ensure compliance with
laws that require you to manage that data in accordance
with data protection laws and the like. It's a natural

(22:27):
place and position for AI responsibility. And so I think
what's happening to a lot of chief privacy officers is
they're being asked to take on this AI governance responsibility
for companies and if not take it on at least
play a very key role working with other parts of
the business in AI governance. So that really is changing.

(22:50):
And if chief privacy officers are in companies who maybe
haven't started thinking about AI yet, they should. So I
would encourage them to look at different resources that are
already available in the AI governance space. For example, the International
Association of Privacy Professionals, which is the seventy five thousand

(23:10):
member professional body for privacy professionals,
just recently launched an AI Governance Initiative and an AI
Governance Certification program. I sit on their advisory board. But
that's just emblematic of the fact that the field is
changing so rapidly.

Speaker 2 (23:30):
And so, you know, speaking of rapid change, when you
were here on Smart Talks in twenty twenty one, you
said that the future of AI will be more transparent
and more trustworthy. You know, what do you see the
next five to ten years holding? You know, when you're
back on Smart Talks in, you know, twenty twenty six,
you know twenty thirty, you know what are we going
to be talking about when it comes to AI technology
and governance.

Speaker 3 (23:50):
So I try to be an optimist, right? And I
said that two years ago, and I think we're seeing
it now come to fruition. And there will be requirements,
whether they're coming from the US, whether they're coming from Europe,
whether they're just coming from voluntary adoption by clients of
things like the NIST Risk Management Framework, really important voluntary frameworks.

(24:14):
You're going to have to adopt transparent and explainable practices
in your uses of AI. So I do see that happening.
And in the next five to ten years, boy, I
think we'll see more research into trust and into techniques,
because we don't really know, for example, how to watermark.
We're calling for things like watermarking; there'll be more research

(24:37):
into how to do that. I think you'll see, you know,
regulation that's specifically going to require those types of things.
So I think again, I think the regulation is going
to drive research. It's going to drive research into these
areas that will help ensure that we can deliver new capabilities,
generative capabilities and the like with trust and explainability.

Speaker 2 (24:59):
Thank you so much, Christina, for joining me on
Smart Talks to talk about AI and governance.

Speaker 3 (25:04):
Well, thank you very much for having me.

Speaker 1 (25:07):
To unlock the transformative growth possible with artificial intelligence, businesses need
to know what they wish to grow into first. Like
Christina said, the best way forward in the AI future
is for businesses to figure out their own foundational principles
around using the technology, drawing on those principles to apply

(25:28):
AI in a way that's ethically consistent with their mission
and complies with the legal frameworks built to hold the
technology accountable. As AI adoption grows more and more widespread,
so too will the expectation from consumers and regulators that
businesses use it responsibly. Investing in dependable AI governance is a

(25:50):
way for businesses to lay the foundations for technology that
their customers can trust while rising to the challenge of
increasing regulatory complexity. Though the emergence of AI does complicate
an already tough compliance landscape, businesses now face a creative
opportunity to set a precedent for what accountability in AI

(26:14):
looks like and rethink what it means to deploy trustworthy
artificial intelligence. I'm Malcolm Gladwell. This is a paid advertisement
from IBM. Smart Talks with IBM will be taking a
short hiatus, but look for new episodes in the coming weeks.
Smart Talks with IBM is produced by Matt Romano, David

(26:36):
Jha, Nisha Venkat, and Royston Beserve, with Jacob Goldstein. We're
edited by Lydia Jean Kott. Our engineer is Jason Gambrell.
Theme song by Gramoscope. Special thanks to Carly Migliori, Andy Kelly,
Kathy Callahan, and the 8 Bar and IBM teams, as
well as the Pushkin marketing team. Smart Talks with IBM

(26:59):
is a production of Pushkin Industries and Ruby Studio at iHeartMedia.
To find more Pushkin podcasts, listen on the iHeartRadio app,
Apple Podcasts, or wherever you listen to podcasts.