October 8, 2024 35 mins

In this episode of Smart Talks with IBM, Jacob Goldstein speaks with Rebecca Finlay, CEO of Partnership on AI, about the importance of advancing AI innovation with openness and ethics at the forefront. Rebecca discusses how guardrails — such as risk management — can advance efficiency in AI development. They explore the AI Alliance’s focus on open data and technology, and the importance of collaboration. Rebecca also underscores how diverse perspectives and open-mindedness can drive AI progress responsibly.

 

This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions.

 

Visit us at https://ibm.com/smarttalks

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. This season on Smart Talks with IBM, Malcolm Gladwell and team are diving into the transformative world of artificial intelligence with a fresh perspective on the concept of open. What does open

(00:24):
really mean in the context of AI? It can mean
open source code or open data, but it also encompasses
fostering an ecosystem of ideas, ensuring diverse perspectives are heard,
and enabling new levels of transparency. Join hosts from your
favorite Pushkin podcasts as they explore how openness in AI
is reshaping industries, driving innovation, and redefining what's possible. You'll

(00:49):
hear from industry experts and leaders about the implications and
possibilities of open AI, and of course, Malcolm Gladwell will
be there to guide you through the season with his
unique insights. Look out for new episodes of Smart Talks
every other week on the iHeartRadio app, Apple Podcasts, or
wherever you get your podcasts, and learn more at IBM

(01:09):
dot com slash Smart Talks.

Speaker 2 (01:18):
Pushkin Hello, Hello, Welcome to Smart Talks with IBM, a
podcast from Pushkin Industries, iHeartRadio and IBM. I'm Malcolm Gladwell.
This season, we're diving back into the world of artificial intelligence,
but with a focus on the powerful concept of open:

(01:40):
its possibilities, implications, and misconceptions. We'll look at openness from
a variety of angles and explore how the concept is
already reshaping industries, ways of doing business, and our very
notion of what's possible. In today's episode, Jacob Goldstein sits
down with Rebecca Finlay, the CEO of the Partnership on AI,

(02:03):
a nonprofit group grappling with important questions around the future
of AI. Their conversation focuses on Rebecca's work bringing together
a community of diverse stakeholders to help shape the conversation
around accountable AI governance. Rebecca explains why transparency is so
crucial for scaling the technology responsibly, and she highlights how

(02:27):
working with groups like the AI Alliance can provide valuable
insights in order to build the resources, infrastructure, and community
around releasing open source models. So, without further ado, let's
get to that conversation.

Speaker 3 (02:48):
Can you say your name and your job?

Speaker 4 (02:50):
My name is Rebecca Finlay. I am the CEO of
the Partnership on AI to benefit people and society. Often
referred to as PAI.

Speaker 3 (03:00):
How did you get here? What was your job before
you had the job that you have now?

Speaker 4 (03:06):
I came to PAI about three years ago, having had
the opportunity to work for the Canadian Institute for Advanced Research,
developing and deploying all of their programs related to the
intersection of technology and society. And one of the areas

(03:27):
that the Canadian Institute had been funding since nineteen eighty
two was research into artificial intelligence.

Speaker 3 (03:35):
Wow, early, they were early.

Speaker 4 (03:38):
It was a very early commitment and an ongoing commitment
at the Institute to fund long term fundamental questions of
scientific importance in interdisciplinary research programs that were often committed to and funded for well over a decade. The AI,

(04:01):
Robotics and Society program that kicked off the work at
the Institute eventually became a program very much focused on
deep learning and reinforcement learning, neural networks, all of the
current iteration of AI, or certainly the pre-generative AI iteration

(04:22):
of AI that led to this transformation that we've seen
in terms of online search and all sorts of ways
in which predictive AI has been deployed. So I had
the opportunity to see the very early days of that
research coming together. And when, in the early two thousand and tens, compute capability came together

(04:46):
with data capability through some of the Internet companies and otherwise, we really saw this technology start to take off.
I had the opportunity to start up a program specifically
focused on the impacts of AI in society. There were, as you know, at that time, some concerns both about

(05:06):
the potential for the technology, but also in terms of
what we were seeing around data sets and bias and
discrimination and potential impact on future jobs. And so bringing
a whole group of experts, whether they were ethicists or
lawyers or economists, sociologists into the discussion about AI was

(05:29):
core to that new program and continues to be core
to my commitment to bringing diverse perspectives together to solve
the challenges and opportunities that AI offers today.

Speaker 3 (05:40):
So specifically, what is your job now? What is the
work you do? What is the work that PAI does?

Speaker 4 (05:46):
I like to answer that question by asking two questions.
First and foremost, do you believe that the world is
more divided today than it ever has been in recent history?
And do you believe that if we don't create spaces
for very different perspectives to come together, we won't be

(06:06):
able to solve the challenges that are in front of the world today? My answer to both of those questions is: one, yes, we're more divided, and two, we need to seek out
those spaces where those very different perspectives can come together
to solve those great challenges. And that's what I get
to do as CEO of the Partnership on AI. We

(06:29):
were founded in twenty sixteen with a fundamental commitment to
bringing together experts, whether they were in industry, academia, civil society,
or philanthropy, coming together to identify what are the most
important questions when we think about developing AI centered on
people and communities, and then how do we begin to

(06:52):
develop the solutions to make sure we benefit appropriately.

Speaker 3 (06:56):
So that's a very big picture set of ideas. I'm
curious on a sort of more day to day level.
I mean, you talk about collaborating with all these different
kinds of people, all these different groups, what does that
actually look like. What are some specific examples of how
you do this work?

Speaker 4 (07:13):
So right now we have about one hundred and twenty
partners in sixteen countries. They come together through working groups
that we look at through a variety of different perspectives.
It could be AI, labor and the economy. It could
be how do you build a healthy information ecosystem. It

(07:34):
could be how do you bring more diverse perspectives into
the inclusive and equitable development of AI. It could be
what are the emerging opportunities with these very very large
foundation model applications and how do you deploy those safely?
And these groups come together most importantly to say, what

(07:54):
are the questions we need to answer collectively, So they
come together in working groups. I have an amazing staff
team who hold the pen on synthesizing research and data
and evidence, developing frameworks, best practices, resources, all sorts of
things that we can offer up to the community, be
they in industry or in policy, to say this is

(08:16):
how we can, well, this is what good looks like,
and this is how we can do it on a
day to day basis. So that's what we do, and
then we publish our materials. It's all open. We make
sure that we get them into the hands of those
communities that can use them, and then we drive and
work with those communities to put them into practice.

Speaker 3 (08:34):
You used the word open there in describing your publications.
I know, in the world of AI, on the sort
of technical side, there's a lot of debate, say, or
discussion about kind of open versus closed AI, and I'm
curious how you kind of encounter that particular discussion. What
is your view on open versus closed AI?

Speaker 4 (08:57):
So the current discussion between open and closed release of
AI models came once we saw ChatGPT and other
very large generative AI systems being deployed out into the
hands of consumers around the world, and there emerged some

(09:18):
fear about the potential of these models to act in
all sorts of catastrophic ways. So there were concerns that
the models could be deployed toward the development of viruses or biomedical weapons, or even nuclear weapons, or through manipulation or otherwise. So there emerged, over

(09:41):
the last eighteen months, this real concern that these models,
if deployed openly, could lead to some level of truly
catastrophic risk. And what emerged actually is that we discovered, through a whole bunch of work that's been done over the last little while, that releasing them openly has

(10:03):
not led, and doesn't appear to be leading in any way, to catastrophic risk. In fact, releasing them openly allows for much greater scrutiny and understanding of the safety
measures that have been put into place. And so what
happened was sort of the pendulum swung very much towards
concern about really catastrophic risk and safety over the last year,

(10:26):
and over the last year we've seen it swing back
as we learn more and more about how these models
are being used and how they are being deployed into
the world. My feeling is we must approach this work openly,
and it's not just open release of models or what
we think of as traditional open source forms of model

(10:48):
development or otherwise, but we really need to think about
how do we build an open innovation ecosystem that fundamentally
allows both for the innovation to be shared with many people,
but also for safety and security to be rigorously upheld.

Speaker 3 (11:04):
So when you talk about this kind of broader idea
of open innovation, beyond open source or, you know, transparency in models, what do you mean specifically? How does that look in the world?

Speaker 4 (11:17):
So I have three particular points of view when it
comes to open innovation because I think we need to
think both upstream around the research that is driving
these models and downstream in terms of the benefits of
these models to others. So, first and foremost, what we
have known in terms of how AI has been developed,
and yes, I had an opportunity to see it when

(11:39):
I was at the Canadian Institute for Advanced Research, is a very open form of scientific publication and rigorous peer review.
And what happens when we release openly is you have
an opportunity for the research to be interrogated to determine
the quality and significance of that, but then also for

(11:59):
it to be picked up by many others. And then secondly,
openness for me is about transparency. We released a set
of very strong recommendations last year around the way in
which these very large foundation models could be deployed safely.
They're all about disclosure and documentation

(12:19):
right from the early days pre R and D development
of these systems, right in terms of thinking about what's
in the training data and how is it being used,
all the way through to post deployment monitoring and disclosure.
So I really think that this is important transparency through it.
And then the third piece is openness in terms of
who is around the table to benefit from this technology.

(12:42):
We know that if we're really going to see these new models being successfully deployed into education or healthcare
or climate and sustainability, we need to have those experts
in those communities at the table charting this and making
sure that the technology is working for them. Those are the three ways I think about openness.

Speaker 3 (13:02):
Is there like a particular project that you've worked on
that you feel like, you know, reflects your approach to responsible AI?

Speaker 4 (13:12):
So there's a really interesting project that we have underway
at PAI that is looking at responsible practices squarely when
it comes to the use of synthetic media. And what
we heard from our community was that they were looking
for a clear code of conduct about what does it
mean to be responsible in this space. And so what

(13:33):
happened is we pulled together a number of working groups
to come together. They included industry representatives. They also included
civil society organizations like WITNESS, a number of academic institutions
and otherwise. And what we heard was that there were
clear requirements that creators could take, that developers of the

(13:55):
technology could take. And then also distributors. So when we
think about those generative AI systems being deployed across platforms
and otherwise. And we came up with a framework for
what responsibility looks like. What does it mean to have consent,
what does it mean to disclose responsibly, what does it
mean to embed technology into it? So, for example, we've

(14:18):
heard many people talk about the importance of watermarking systems, right, and making sure that we have a way to watermark them. But what we know from the
technology is that is a very very complex and complicated problem,
and what might work on a technical level certainly hits
a whole new set of complications when we start labeling
and disclosing out to the public about what that technology

(14:39):
actually means. All of these I believe are solvable problems,
but they all needed to have a clear code underneath
them that was saying this is what we will commit to.
And we now have a number of organizations, many many
of the large technology companies, but also many of the
small startups who are operating in this space, civil society,

(15:00):
media organizations like the BBC and the CBC, who have
signed on. And one of the really exciting pieces of
that is that we're now seeing how it's changing practice.
So a year in we asked each of our partners
to come up with a clear case study about how
that work has changed the way they are making decisions,

(15:21):
deploying technology and ensuring that they're being responsible in their use.
And that is creating now a whole resource online that
we're able to share with others about what does it
mean to be responsible in this space. There's so much
more work to be done, and the exciting thing is
once you have a foundation like this in place, we
can continue to build on it. So much interest now

(15:42):
in the policy space, for example, about this work as well.

Speaker 3 (15:46):
Are there any specific examples of those sorts of case studies or the real-world experiences that, say, media organizations had, that are interesting, that are illuminating?

Speaker 4 (15:57):
Yes. So, for example, what we saw with the BBC
is that they're developing a lot of content as a
public broadcaster, both in terms of their news coverage but
also in terms of some of the resources that they
are developing for the British public as well. And what
they talked about was the way in which they had

(16:19):
used synthetic media in a very very sensitive environment where
they were hearing individuals talk about personal experiences, but
wanted to have some way to change the face entirely
in terms of the individuals who were speaking. So that's
a very complicated ethical question, right, how do you do

(16:39):
that responsibly? And what is the way in which you
use that technology, and most importantly, how do you disclose it?
So their case study looked at that in some real
detail about the process they went through to make the decision responsibly to do what they chose, and how they intended to use the technology in that space.

Speaker 3 (17:00):
As you describe your work and some of these studies, the idea
of transparency seems to be a theme. Talk about the
importance of transparency in this kind of work.

Speaker 4 (17:11):
Yeah, transparency is fundamental to responsibility. I always like to
say it's not accountability in a complete sense, but it
is a first step to driving accountability more fully. So,
when we think about how these systems are developed, they're
often developed behind closed doors inside companies who are making

(17:33):
decisions about what and how these products will work from
a business perspective. And what disclosure and transparency can provide
is some sense of the decisions that were made leading
up to the way in which those models were deployed.
So this could be ensuring that individuals' private information was

(17:54):
protected through the process and won't be inadvertently disclosed, or otherwise,
it could be providing some sense of how well the
system performs against a whole level of quality measures. So
we have all of these different types of evaluations and
measures that are emerging about the quality of these
systems as they're deployed. Being transparent about how they perform

(18:16):
against these measures is really crucial to that as well.
We have a whole ecosystem that's starting to emerge around
auditing of these systems. So what does that look like? We think about auditors in all sorts of other sectors of the economy. What does it look like to be auditing these systems to ensure that they're meeting all of those legal, but also additional ethical, requirements that we want

(18:37):
to make sure are in place.

Speaker 3 (18:40):
What are some of the hardest ethical dilemmas you've come
up against in AI policy?

Speaker 4 (18:48):
Well, the interesting thing about AI policy, right, is that what works very simply in one setting can be highly complicated in another setting. And so, for example, I have
an app that I adore. It's an app on my
phone that allows me to take a photo of a bird,
and it will help me to better understand, you know,
what that bird is, and give me all sorts of

(19:10):
information about that bird. Now, it's probably right most of
the time, and it's certainly right enough of the time
to give me great pleasure and delight when I'm out walking.
You could think about that exact same technology applied. So,
for example, now you're a security guard and you're working
in a shopping plaza, and you're able to take photos

(19:31):
of individuals who you may think are acting suspiciously in
some way and match that photo up with some sort
of a database of individuals that may have been found,
you know, to have some sort of connection to other
criminal behavior in the past. Right. So it goes from being a delightful 'Oh, isn't this an interesting bird?' to a very, very creepy 'What does this say about surveillance

(19:55):
and privacy and access to public spaces?' And that is
the nature of AI. So much of the concern about
the ethical use and deployment of AI is how an
organization is making the choices within the social and systemic
structure in which they sit. So, so much about the ethics of

(20:16):
AI is understanding: what is the use case, how is it being used, how is it being constrained? How does
it start to infringe upon what we think of as
the human rights of an individual to privacy? And so you have to constantly be thinking about ethics. What could
work very well in one situation absolutely doesn't work in another.

(20:39):
We often talk about these as sociotechnical questions. Right,
just because the technology works doesn't actually mean that it
should be used and deployed.

Speaker 3 (20:49):
What's an example of where the Partnership on AI influenced changes, either in policy or in industry practice?

Speaker 4 (20:59):
We talked a little bit about the Framework for Synthetic
Media and how that has allowed companies and media organizations
and civil society organizations to really think deeply about the
way in which they're using this. Another area that we
focused on has been around responsible deployment of foundation and
large scale models. So, as I said, we issued a

(21:21):
set of recommendations last year that really laid out for
these very large developers and deployers of foundation and frontier models,
what does good look like right from R and D
through to deployment monitoring. And it has been very encouraging
to see that that work has been picked up by

(21:42):
companies and really articulated as part of the fabric of
the deployment of their foundation models and systems moving forward.
So much of this work is around creating clear definitions
of what we're meaning as the technology evolves and clear
sets of responsibilities. So it's great to see that work
getting picked up. The NTIA in the United States just

(22:04):
released a report on open models and the release of
open models. Great to see our work cited there as
contributing to that analysis. Great to see some of our
definitions around synthetic media getting picked up by legislators in
different countries. Really just it's important, I think, for us
to build capacity, knowledge and understanding among our policy makers

(22:26):
in this moment as the technology is evolving and accelerating
in its development.

Speaker 3 (22:32):
What's the AI Alliance and why did Partnership on AI
decide to join?

Speaker 4 (22:37):
So you had asked about the debate between open versus
closed models and how that has evolved over the last year,
and the AI Alliance was a community of organizations that
came together to really think about, Okay, if we support
open release of models, what does that look like and

(22:59):
what does the community need? And so that's about
one hundred organizations. IBM, one of our founding partners, is also one of the founding partners of the AI Alliance.
It's a community that brings together a number of academic
institutions from many countries around the world, and they're really focused
on how do you build the resources and infrastructure and

(23:22):
community around what open source in these large scale models
really means. So that could be open data sets, that
could be open technology development. Really building on that understanding
that we need an infrastructure in place and a community
engaged in thinking about safety and innovation through the open lens.

Speaker 2 (23:44):
This approach brings together organizations and experts from around the
globe with different backgrounds, experiences, and perspectives to transparently and
openly address the challenges and opportunities AI poses today. The collaborative nature of the AI Alliance encourages discussion, debate, and innovation.

(24:05):
Through these efforts, IBM is helping to build a community
around transparent open technology.

Speaker 3 (24:12):
So I want to talk about the future for a minute.
I'm curious what you see as the biggest obstacles to
widespread adoption of responsible AI practices.

Speaker 4 (24:24):
One of the biggest obstacles today is an inability, and really a lack of understanding, about how to use
these models and how they can most effectively drive forward
a company's commitment to whatever products and services it might
be deploying. So I always recommend a couple of things

(24:46):
for companies to really think about this and to
get started. One is think about how you are already
using AI across all of your business products and services.
Because already AI is integrated into our workforces and into
our workstreams and into the way in which companies are
communicating with their clients every day. So understand how you

(25:08):
are already using it and understand how you are integrating
oversight and monitoring into those. One of the best and
clearest ways in which a company can really understand how
to use this responsibly is through documentation. It's one of
the areas where there's a clear consensus in the community.
So how do you document the models that you are using,

(25:29):
making sure that you've got a registry in place. How
do you document the data that you are using and
where that data comes from. This is sort of the
first system, first line of defense in terms of understanding
both what is in place and what you need to
do in order to monitor it moving forward. And then secondly,
once you've got an understanding of how you're already using
the system, look at ways in which you could begin

(25:51):
to pilot or iterate in a low risk way using
these systems to really begin to see how and what
structures you need to have in place to use it
moving forward. And then thirdly, make sure that you structure a team in place internally that's able to do some of this cross-departmental monitoring, knowledge sharing and learning. Boards

(26:12):
are very, very interested in this technology. So think about
how you can have a system or a team in
place internally that's reporting to your board, giving them a
sense of both the opportunities that it identifies for
you and the additional risk mitigation and management you might
be putting into place. And then, you know, once you have those things in place, you're really going to need

(26:33):
to understand how you work with the most valuable asset
you have, which is your people. How do you make
sure that AI systems are working for the workers, making
sure that they're going into place? The most important and impressive implementations we see are those where you have the workers who are going to be engaged in this process

(26:53):
central to figuring out how to develop and deploy it in order to really enhance their work. That's a core part of a set of Shared Prosperity Guidelines that we issued last year.
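
For readers who want to act on the documentation and registry advice above, here is a minimal sketch of what an internal model registry entry might look like. It assumes a simple in-house Python setup; the field names, the register helper, and the example values are illustrative only and are not drawn from PAI's guidance or from the episode.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ModelRecord:
    # What the model is and where it came from.
    name: str
    version: str
    provider: str
    # Provenance and consent status of the training or fine-tuning data.
    data_sources: List[str]
    consent_notes: str
    # What the model is approved for, how it scored, and who monitors it.
    intended_use: str
    evaluations: Dict[str, float]
    owner: str

# A simple in-memory registry; a real one would live in a shared database.
registry: List[ModelRecord] = []

def register(record: ModelRecord) -> None:
    """Record a documented model so oversight teams can review and audit it."""
    registry.append(record)

register(ModelRecord(
    name="support-chat-assistant",  # hypothetical example entry
    version="2024-09",
    provider="example-vendor",
    data_sources=["licensed support transcripts", "public product docs"],
    consent_notes="Customer data used only with contractual consent; PII removed",
    intended_use="Drafting support replies that a human agent reviews",
    evaluations={"helpfulness": 0.87, "toxicity_rate": 0.002},  # illustrative numbers
    owner="AI governance team",
))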

Speaker 3 (27:05):
And then from the side of policy makers, how should
policy makers think about the balance between innovation and regulation?

Speaker 4 (27:17):
Yeah, it's so interesting, isn't it that we always think of,
you know, innovation and regulation as being two sides of
a coin, when in fact, so much innovation comes from
having a clear set of guardrails and regulation in place.
We think about all of the innovation that's happened in
the automotive industry, right we can drive faster because we

(27:43):
have brakes, we can drive faster because we have seat
belts in place. So I think it's often interesting to
me that we think about the two as being on
either side of the coin, but in actual fact, you
can't be innovative without being responsible as well. So I
think from a policy maker perspective, what we have been

(28:04):
really encouraging them to do is to understand that you've
got foundational regulation in place that works for you nationally.
This could be ensuring that you have strong privacy protections
in place. It could be ensuring that you are understanding
potential online harms, particularly to vulnerable communities. And then look

(28:24):
at what you need to be doing internationally to be
both competitive and sustainable. There's all sorts of mechanisms that
are in place right now at the international level to
think about how do we build an interoperable space for
these technologies moving forward.

Speaker 3 (28:40):
We've been talking in various ways about what it means
to responsibly develop AI, and if you're going to boil
that down, you know, the essential concerns that people should
be thinking about, like what are the key things to
think about in responsible AI?

Speaker 4 (29:00):
So if you are a company, if we're talking specifically
through the company lens, when we're thinking about responsible use
of AI, the most important difference between this form of
AI technologies and other forms of technologies that we have
used previously is the integration of data and the training

(29:22):
models that go on top of that data. So when
we think about responsibility, first and foremost, you need to
think about your data. Where did it come from, What
consent and disclosure requirements do you have on it? Are
you privacy protecting? You can't be thinking about AI within
your company without thinking about data, and that's both your

(29:43):
training data. But then once you're using your systems and
integrating and interacting with your consumers, how are you protecting
the data that's coming out of those systems as well?
And then secondly, when you're thinking about how to deploy that AI system, the most important thing you want to think about is: are we being transparent about how

(30:06):
it's being used with our clients and our partners. So
you know, the idea that if I'm a customer, I
should know when I'm interacting with an AI system, I
should know when I'm interacting with a human. So I
think those two pieces are the fundamentals. And then of
course you want to be thinking carefully about, you know,

(30:27):
making sure that whatever jurisdiction you're operating in, you're meeting
all of the legal requirements with regard to the services
and products that you're offering.

Speaker 3 (30:37):
Let's finish with the speed round. Complete the sentence: in five years, AI will...

Speaker 4 (30:45):
Drive equity, justice and shared prosperity if we choose to
set that future trajectory for this technology.

Speaker 3 (30:55):
What is the number one thing that people misunderstand about AI?

Speaker 4 (31:00):
AI is not good, and AI is not bad, but
AI is also not neutral. It is a product of
the choices we make as humans about how we deploy it in the world.

Speaker 3 (31:15):
What advice would you give yourself ten years ago to
better prepare yourself for today?

Speaker 4 (31:25):
Ten years ago, I wish that I had known just
how fundamental the enduring questions of ethics and responsibility would
be as we developed this technology moving forward. So many
of the questions that we ask about AI are questions

(31:47):
about ourselves and the way in which we use technology
and the way in which technology can advance the work
we're doing.

Speaker 3 (31:57):
How do you use AI in your day to day life?

Speaker 4 (32:00):
I use AI all day every day, So whether it's
my bird app when I go out for my morning walk,
helping me to better identify birds that I see, or
whether it is my mapping app that's helping me to
get more speedily through traffic to whatever meeting I need
to go to. I use AI all the time. I

(32:22):
really enjoy using some of the generative AI chatbots, more for fun than for anything else, as a creative partner in thinking through ideas. And integrating it into all aspects of our lives is just so much about the way in which we live today.

Speaker 3 (32:38):
So people use the word open to mean different things,
even just in the context of technology. How do you
define open in the context of your work.

Speaker 4 (32:49):
So there is the question of open as it is
deployed to technology, which we've talked a lot about. But
I do think a big piece of PAI is open-mindedness. We need to be truly open-minded to listen to,
for example, what a civil society advocate might say about
what they're seeing in terms of the way in which

(33:11):
AI is interacting in a particular community. Or we need
to be open minded to hear from a technologist about
their hopes and dreams of where this technology might go
moving forward. And we need to have those conversations listening
to each other to really identify how we're going to
meet the challenge and opportunity of AI today. So open

(33:34):
is just fundamental to the Partnership on AI. I often
call it an experiment in open innovation.

Speaker 3 (33:44):
Rebecca, thank you so much for your time.

Speaker 4 (33:46):
It is my pleasure. Thank you for having me.

Speaker 2 (33:51):
Thank you to Rebecca and Jacob for that engaging discussion
about some of the most pressing issues facing the future
of AI. As Rebecca emphasized, whether you're thinking about data
privacy or disclosure, transparency and openness are key to solving
challenges and capitalizing on new opportunities. By developing best practices

(34:13):
and resources, Partnership on AI is building out the guardrails
to support the release of open source models and the
practice of post deployment monitoring. By sharing their work with
the broader community, Rebecca and PAI are demonstrating how working responsibly,
ethically and openly can help drive innovation. Smart Talks with

(34:38):
IBM is produced by Matt Romano, Joey Fishground, Amy Gaines McQuaid,
and Jacob Goldstein. We're edited by Lydia Jean Kott. Our
engineers are Sarah Bugaier and Ben Holliday. Theme song by Gramoscope.
Special thanks to the eight Bar and IBM teams, as
well as the Pushkin marketing team. Smart Talks with IBM

(34:59):
is a production of Pushkin Industries and Ruby Studio at iHeartMedia.
To find more Pushkin podcasts, listen on the iHeartRadio app,
Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell.

(35:19):
This is a paid advertisement from IBM. The conversations on
this podcast don't necessarily represent IBM's positions, strategies or opinions.
