November 14, 2023 · 53 mins

In order to stay competitive in a rapidly changing marketplace, businesses need to adapt to the potential of generative AI. In this special live episode of Smart Talks with IBM, Malcolm Gladwell is joined onstage at iHeartMedia's studio by Dr. Darío Gil, Senior Vice President and Director of Research at IBM. They chat about the evolution of AI, give examples of practical uses, and discuss how businesses can create value through cutting-edge technology.

Watch the live conversation here: https://youtu.be/WOwM__St6aU

Hear more from Darío on generative AI for business: https://www.ibm.com/think/ai-academy/

Visit us at: ibm.com/smarttalks

This is a paid advertisement from IBM.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production from iHeartRadio. Today we
are witness to one of those rare moments in history,
the rise of an innovative technology with the potential to
radically transform business and society forever. That technology, of course,

(00:24):
is artificial intelligence, and it's the central focus for this
new season of Smart Talks with IBM. Join hosts from
your favorite Pushkin podcasts as they talk with industry experts
and leaders to explore how businesses can integrate AI into
their workflows and help drive real change in this new
era of AI. And of course, host Malcolm Gladwell will

(00:46):
be there to guide you through the season and throw
in his two cents as well. Look out for new
episodes of Smart Talks with IBM every other week on
the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts,
and learn more at IBM dot com slash Smart Talks.
All right, welcome everybody. You guys excited? Here we go.

Speaker 2 (01:13):
Hello, hello. Welcome to Smart Talks with IBM, a podcast
from Pushkin Industries, iHeartRadio and IBM. I'm Malcolm Gladwell. This season,
we're continuing our conversations with new creators and visionaries who are
creatively applying technology in business to drive change, but with
a focus on the transformative power of artificial intelligence and

(01:34):
what it means to leverage AI as a game-changing
multiplier for your business. Today's episode is a bit different.
I was recently joined on stage by Dario Gil for
a conversation in front of a live audience at the
iHeartMedia headquarters in Manhattan. Dario is the senior vice president
and director of IBM Research, one of the world's largest

(01:56):
and most influential corporate research labs. We discussed the rise
of generative AI and what it means for business and society.
He also explained how organizations that leverage AI to create
value will dominate in the near future. Okay, let's get
on to the conversation. Hello everyone, welcome. I'm here

(02:19):
with doctor Dario Gil, and I wanted to say, before
we get started, this is something I said backstage: that
I feel very guilty today, because you're, you know,
arguably one of the most important figures in
AI research in the world, and we have taken you
away from your job for a morning. It's like, if

(02:42):
you know, Oppenheimer's wife in nineteen forty four said, let's
go and have a little getaway in the Bahamas. It's
that kind of thing. You know, what do you say
to your wife? I can't, we have got to work
on this thing I can't tell you about. She's like,
get me out of Los Alamos. No, so I do
feel guilty. We've set back AI research by about

(03:05):
four hours here. But I wanted to ask: you've been
with IBM for twenty years, twenty years this summer.
So, how old were you when you... not to
give away your age, but you were how old when
you started?

Speaker 3 (03:17):
I was twenty eight?

Speaker 2 (03:18):
Okay, yeah. So I want to go back to your
twenty-eight-year-old self. Now, if I asked you
about artificial intelligence, if I asked twenty-eight-year-old Dario,
what does the future hold for AI? How quickly will
this new technology transform our world? Et cetera, et cetera.
What would twenty-eight-year-old Dario Gil have said?

Speaker 4 (03:37):
Well, I think the first thing is that even though
AI as a field has been with us for a
long time, since the mid nineteen fifties, at that time
AI was not a very polite word to say, meaning
within the scientific community, people didn't use that
term. They would have said things like, you know, maybe
I do things related to machine learning, right, or statistical

(03:58):
techniques in terms of classifiers and so on.

Speaker 3 (04:01):
But AI had a mixed.

Speaker 4 (04:03):
Reputation, right. It had gone through different cycles of hype,
and it had also had moments of, you know, a lot
of negativity towards it because of a lack of success. And
so I think that would be the first thing; we'd
probably say, like, AI, what is that?

Speaker 3 (04:19):
Like, you know, respectable scientists are not working.

Speaker 4 (04:21):
On AI, the field as such. And that really changed
over the last fifteen years only, right? I would say,
with the advent of deep learning over the last decade
is when AI re-entered the lexicon, saying
AI, and that was a legitimate thing to work on.
So I would say that's the first thing I
think we would have noticed in contrast to twenty years ago.

Speaker 2 (04:40):
Yeah. So at what point in your twenty-year tenure at
IBM would you say you kind of snapped into present-day,
kind of wow mode?

Speaker 4 (04:51):
I would say in the late two thousands, when IBM
was working on the Jeopardy project, and just seeing the
demonstrations of what could be done in question answering.

Speaker 2 (05:09):
So literally, Jeopardy is this crucial moment in the history
of... yeah.

Speaker 4 (05:14):
You know, there had been a long and wonderful history
inside IBM on AI. So for example, you know,
in terms of these grand challenges, at the very
beginning of the field's founding, which is this famous Dartmouth
conference that actually IBM sponsored to create, there
was an IBMer there called Nathaniel Rochester, and there

(05:35):
were a few others who, right after that, started
thinking about demonstrations of this field. And then, for example,
they created the first, you know, game to play checkers,
to demonstrate that you could do machine learning on that.
Obviously we saw later, in the nineties, chess; that
was a very famous example, with Deep Blue,
right, playing with Kasparov. And then, but

(05:58):
I think the earlier moments really felt like,
you know, kind of like brute force, anticipating sort
of like moves ahead. But this aspect of dealing with language
and question answering felt different, and I think for
us internally, and many others, it was a moment of
saying, like, wow, you know, what are the possibilities here?
And then soon after that it connected to the sort of

(06:19):
advancements in computing and with deep learning. The last decade
has just been an all-out, you know, sort of
like front of advancements, and I just continue
to be more and more impressed. And the last few
years have been remarkable too.

Speaker 2 (06:30):
Yeah. So let me ask you three quick conceptual questions before
we dig into it, just so we all get
a feel for the shape of AI.
Question number one is: where are we in the evolution
of this? You know, the obvious question: we're
all suddenly aware of it, we're talking about it. Can

(06:52):
you give us an analogy about where we are in
the kind of likely evolution of this as a technology?

Speaker 4 (06:59):
So I think we're at a significant inflection point that
feels the equivalent of the first browsers, when they
appeared and people could imagine the possibilities of the Internet, or
more than imagine, experience the Internet. The Internet had been around,
right, for quite a few decades. AI has been around
for many decades. I think the moment we find ourselves in

(07:21):
is that people can touch it. Before,
there were AI systems that were, like, behind the scenes,
like your search results or translation systems, but people didn't
have the experience of, like, this is what it feels
like to interact with this thing.

Speaker 3 (07:34):
So that's what I mean.

Speaker 4 (07:36):
I think maybe that analogy of the browser is appropriate,
because all of a sudden it's like, whoa, you know,
this network of machines and content can be distributed and
everybody can self-publish. There was a moment that
we all remember that, and I think that is
what the world has experienced over the last nine months
or so. But fundamentally, what is also important
is that this moment is where the ease and the

(07:57):
number of people that can build and use AI
has skyrocketed. So over the last decade, you know, technology
firms that had large research teams could build AI that
worked really well, honestly. But when you went down to
say, hey, can everybody use it? Can a data science
team in a bank, you know, go and develop these applications?

Speaker 3 (08:19):
It was, like, more complicated. Some could do it, but
the barrier to entry was high.

Speaker 4 (08:24):
Now it's very different because of foundation models and the
implications that that has.

Speaker 2 (08:28):
So we're at the moment where the technology is being democratized.

Speaker 4 (08:32):
And democratized, frankly. It works better for classes of problems
like programming and other things; it's really incredibly impressive what
it can do. So the accuracy and the performance of
it is much better, and the ease of use and
the number of use cases we can pursue is much bigger.
So that democratization is a big difference.

Speaker 2 (08:49):
But when you make an analogy to
the first browsers: if we do another one
of these time-travel questions, back at the beginning of
the first browsers, many of the potential uses
of the Internet and such we hadn't even begun to see. We
couldn't even anticipate them, right? Right. So we're at the point
where the future direction is largely unpredictable.

Speaker 4 (09:11):
Yeah, I think that is right, because it's such
a horizontal technology. The horizontal capability,
which is about expanding our productivity and tasks that we
wouldn't be able to do efficiently without it, has to
marry now the use cases that reflect the diversity of
human experience, our institutional diversity. So as more and more

(09:32):
institutions say, you know, I'm focused on agriculture, you know,
to be able to improve seeds, you know, in these
kinds of environments, they'll find their own contexts in which
this matters that the creators of AI did not
anticipate at the beginning. So I think that is
where the fruit of surprises will be: like, wow, I
wouldn't even think that it could be

Speaker 3 (09:49):
Used for that?

Speaker 5 (09:50):
And also clever people will create.

Speaker 4 (09:52):
New business models associated with that, like it happened
with the Internet, of course, as well, and that will
be its own source of transformation and change in its own right.

Speaker 5 (10:01):
So I think all of that is yet to unfold.

Speaker 2 (10:03):
Right.

Speaker 4 (10:03):
What we're seeing is this catalyst moment of technology that
works well enough and it can be democratized.

Speaker 2 (10:09):
My next sort of conceptual question.

Speaker 3 (10:12):
You know?

Speaker 2 (10:12):
We can loosely understand or categorize innovations in terms of
their impact on the kind of balance of power between
haves and have-nots. Some innovations, you know, obviously favor
those who already have, will make the rich richer. Some,

(10:32):
it's a rising tide that lifts all boats. And some bias
in the other direction: they close the gap.
Is it possible to predict which of those
three categories AI might fall into?

Speaker 3 (10:47):
It's a great question, you know.

Speaker 4 (10:49):
A first observation I would make on your first two categories.

Speaker 5 (10:53):
Is that it will likely both be true that the use.
the use.

Speaker 4 (10:58):
Of AI will be highly demarketized, meaning the number of
people that have access to its power to make improvements
in terms of efficiency and so on will be fairly universal,
and that the ones who are able to create AI
UH may be quite concentrated. So if you look at
it from the lens of who creates wealth and value

(11:19):
over sustained periods of time, particularly in, say, a context
like business, I think just being a user of AI
technology is an insufficient strategy. And the reason
for that is, like, yes, you will get the immediate
productivity boost of, like, just making API calls, and, you
know, that will be a new baseline for everybody, but
you're not accruing value in terms of representing your data

(11:43):
inside the AI in a way that it.

Speaker 3 (11:44):
Gives you a sustainable competitive advantage.

Speaker 4 (11:47):
So what I always try to tell people is: don't just
be an AI user, be a, you know, AI value creator.
And I think that that will have a lot of
consequences in terms of the haves and have nots as
an example, and that will apply both to institutions and
regions and countries, et cetera. So I think it would
be kind of a mistake, right to just develop strategies

(12:07):
that are just about.

Speaker 2 (12:09):
Usage. But to come back to that question, then, and
to give you a specific example: suppose I'm
an industrial farmer in Iowa with ten million dollars of
equipment, and I'm comparing it to a subsistence
farmer somewhere in the developing world who's got a cell phone.

(12:29):
Over the next five years, whose well-being rises
by a greater amount?

Speaker 4 (12:37):
Yeah, I mean, it's a good question,
but it might be hard to do a one-to-one
sort of attribution to just one variable, in
this case AI. But again, provided that you
have access to a phone, right, and some kind of,
you know, ability to be connected, I do think so.
For example, in that context, we now

(12:57):
work with NASA, as an example, to build geospatial
models using some of these new techniques. And I think,
for example, our ability to do flood prediction, I'll tell
you why it would be a democratization
force in that context. Before, to build a flood model
based on satellite imagery was actually so

Speaker 3 (13:15):
Onerous and so complicated and difficult.

Speaker 4 (13:17):
That you would just target very specific regions, and
then obviously countries prioritized their own, right. But what we've
demonstrated is actually you can extend that technique to have
global coverage. So in that context, I would
say it's a force towards democratization, that everybody sort of
would have access if you have some connectivity.

Speaker 2 (13:34):
So today, the Iowa farmer might have a flood model. The guy
in the developing world definitely didn't, and now he's got a
shot at getting one.

Speaker 3 (13:41):
Yeah, and now he has a shot at getting one.

Speaker 4 (13:42):
So there are aspects of it where, so long as we
provide connectivity and access to it, they can be
democratization forces.

Speaker 3 (13:49):
But I'll give you.

Speaker 4 (13:49):
Another example that can be quite concerning, which is language, right?
So there's so much language in, you know, English,
and there is sort of a reinforcement loop that
happens: the more you concentrate, because it has obvious
benefits for global communication and standardization, the more you can
enrich, like, base AI models based on that capability. If

(14:11):
you have very resource-scarce languages, you tend to develop
less powerful AI with those languages and so on. So
one has to actually worry about and focus on the ability
to actually represent, in this case language as
a piece of culture, also in the AI, such that
everybody can benefit from it too. So there's a lot

(14:33):
of considerations in terms of equity about the data and
the data sets that we accrue and what problems we
are trying to solve. I mean, you mentioned agriculture or
healthcare and so on. If we only solve problems that
are related to marketing, as an example, that would be
a less rich world in terms of opportunity than if
we incorporate many, many other, broader sorts of problems.

Speaker 2 (14:53):
What do you think are the
biggest impediments to the adoption of AI, as
you think AI should be adopted? I mean,
what are the sticking points that you would look at?

Speaker 4 (15:05):
In the end, I'm going to give a non-technological answer as
a first one. It has to do with workflow, right? So even if the technology is
very capable, the organizational change inside a company to incorporate
it into the natural workflow of people, or how we work,
is... it's a lesson we have learned over the last
decade: it's hugely important.

Speaker 3 (15:26):
So there's a lot of design considerations.

Speaker 4 (15:29):
There's a lot of: how do people want to work, right?
How do they work today?

Speaker 3 (15:34):
And what are the natural entry points for AI? So
that's, like, number one.

Speaker 4 (15:38):
And then the second one is, you know, for the
broad value-creation aspect of it, the understanding inside
companies of how you have to curate and create
data to combine it with external data sets, so that you
can have powerful AI models that actually fit your needs.
And that aspect of what it takes to actually

(16:00):
curate the data for this modern AI, it's still a work in progress, right?
I think part of the problem that happens very often
when I talk to institutions is that they say, yeah, yeah, yeah,
I'm doing it, I've been doing it for a long time.
And the reality is that that answer can sometimes be
a little bit of a cop-out, right? It's like, I
know you were doing machine learning, you were doing some

(16:22):
of these things.

Speaker 5 (16:23):
But actually the latest version of AI.

Speaker 4 (16:25):
What is happening with foundation models, not only is it
very new, it's very hard to do. And honestly, if
you haven't been, you know, assembling very large teams and
spending hundreds of millions of dollars on compute,
you're probably not doing it, right? You're doing something else
that is in the broad category. And I think the
lessons about what it means to make this transition to

(16:45):
this new wave are still in the early phases of understanding.

Speaker 2 (16:48):
So what would you say? I want to give you
a couple of examples of people in kind of real-world
positions of responsibility. I'm sitting right here, so imagine
that I am the president of
a small liberal arts college, and I come to you
and I say: Dario, I keep hearing about this AI.
My college, you know, I'm making this much money, and every

(17:11):
year my enrollment's declining. I feel like this maybe is
an opportunity. What is the opportunity for me? What would
you say?

Speaker 3 (17:20):
So it's probably in a couple of segments around that,
right? One has.

Speaker 4 (17:24):
To do with, well, what are the implications of this
technology inside the institution itself, in this case the college, and
how we operate, and can we improve, for example, efficiency?
Like, if you have very low levels of, sort
of, margin to be able to reinvest. You know,
you run, you know, infrastructure, you run

(17:46):
many things inside the college. What are the opportunities to
increase the productivity, or automate and drive savings, such that
you can reinvest that money into the mission of
education, right, as an example?

Speaker 2 (17:56):
So number one is operational efficiency.

Speaker 4 (17:58):
Operational efficiency is a big one. I think the
second one is, within the context of the college, there's
implications for the educational mission on its own, right? How
will, you know, how.

Speaker 3 (18:08):
Does a curriculum need to evolve or not?

Speaker 4 (18:10):
What are acceptable-use policies for some of this
AI? I think we've all read a lot about what
can happen in terms of exams and so on,
and cheating and not cheating, or what are the actually
positive elements of it, in terms of how curriculum should
be developed and professions sustained around that. And then there's
a third dimension, which is the outward-oriented element of it,
which is, like, prospective students, right? Which is, frankly speaking,

(18:32):
a big use case that is happening right now, which
in the broader industry is called customer care or client
care or citizen care. So in this case it will be education:
like, you know, hey, are you reaching the right students?
That may apply to the college. How can
you create, for example, an environment to interact with
the college and answer questions? That could be a chat
bot or something like that, to learn about it, and personalization.

Speaker 3 (18:54):
So I would say there.

Speaker 4 (18:55):
Is like at least three lenses with which I would
give advice right the positive.

Speaker 2 (19:00):
Let's go to the second, because it's really interesting. So I really can't
assign an essay anymore, can I?

Speaker 3 (19:07):
Can I assign an essay?

Speaker 2 (19:09):
Can I say, write me a research paper and come
back to me in week three? Can I do that anymore?

Speaker 3 (19:13):
I think you can?

Speaker 2 (19:14):
How do I do that?

Speaker 3 (19:15):
And then you can do that.

Speaker 4 (19:16):
Look, there are two questions around that. I think that
if one goes.

Speaker 5 (19:22):
And explains in the context like what is it?

Speaker 3 (19:24):
Why are we here?

Speaker 5 (19:25):
Why are in this class? What is the purpose of this?

Speaker 4 (19:28):
And one starts with assuming, like, an element of,
like, decency, that people are there, like, to
learn and so on, and you just give a disclaimer: look,
I know that one option you have is, like, just,
you know, to put in the essay question and click go, and
it gives an answer. You know, but that is
not why we're here, and that is not the intent
of what we're trying to do. So first I would
start with the sort of, like, norms of intent

(19:51):
and decency and appeal to those as step number one.
Then we all know that there will be a distribution
of use cases, of people that will come in,
one way or the other, and do that.
And so for a subset of that, you know, I
think the technology is going to evolve in such
a way that we will have more and more of
the ability to discern, right, you know, when something has

(20:11):
been AI-generated, right, and when not.

Speaker 3 (20:14):
It won't be perfect, right.

Speaker 4 (20:15):
But there's some elements that you can imagine: putting in
the essay, and it says, hey, this is likely to
be generated, right, around that. And for example, one way
you can do that, just to give you an intuition:
you could just have an essay that you write with
pencil and paper at the beginning, so you get a baseline
of what your writing is like. And then later, when
you, you know, generate it, there will be obvious differences around

(20:37):
what kind of writing has been generated.

Speaker 2 (20:39):
On the other hand, everything you're
describing makes sense. But in this respect,
at least, it seems to greatly complicate the life
of the teacher, whereas the other two use cases seem
to kind of clarify and simplify the role. Right? Suddenly,
you know, reaching prospective students sounds like I can

(21:01):
do that much more efficiently. Yeah,
I can bring down administration costs. But the teaching thing
is tricky.

Speaker 4 (21:07):
Well, until we develop the new norms, right? I mean, again,
I know it's an abused analogy, but calculators,
we've dealt with that too, right? And it was, well, with the calculator,
what is the purpose of math?

Speaker 3 (21:18):
How are we going to do this?

Speaker 2 (21:19):
And so can I tell you my dad's calculator story?

Speaker 3 (21:22):
Yes? Please.

Speaker 2 (21:23):
My father was a mathematician, taught mathematics at the University of Waterloo, in Canada,
and in the seventies, when people started to get pocket calculators,
his students demanded that they be able to use them,
and he said no, and they took him to the
administration, and he lost. So he then completely threw out
all of his old exams and introduced new exams where

(21:46):
there was no calculation. It was all, like, deep think,
you know, figure out the problem on a conceptual level
and describe it to me. And his students were all
deeply unhappy that he'd made their lives more complicated. But
to your point, the result was
probably a better education. He just removed the element that

(22:08):
they could game with their pocket calculators. I suppose it's
a version.

Speaker 4 (22:11):
Of... I think it's a version of that. And so
I think people will develop the equivalent of what your
father did. And I think people will say, you know, if,
like, these kinds of things, everybody's doing it generically,
then none of this has any meaning, because all you're
doing is pressing buttons, and, like, the intent of this
was to teach you how to write
or to think or something.

Speaker 5 (22:26):
There may be a variant of how we do all
of this. I mean, obviously some.

Speaker 4 (22:29):
Version of that that has happened is, like, okay, we're
all going to sit down and do it with pencil and
paper, and no computers in the classroom. But there'll be other
variants of creativity that people will put forth to say,
you know what, you know, that's a way to solve
that problem too.

Speaker 2 (22:41):
But this is interesting, because, to stay on this analogy,
we're really talking about a profound rethinking. Just using a
college as an example, a real profound rethinking:
there's no part of this college that's unaffected by
AI. In one case, I've made everyone's job easier.

(23:02):
In another case, I'm asking us to really
rethink from the ground up what teaching means. In another case,
I've automated systems that I hadn't thought about. I mean,
that's a lot
to ask someone who got a PhD in medieval
language literature, you know, forty years ago.

Speaker 4 (23:20):
Yeah, but you know, I'll tell you a positive sort
of development that I'm seeing in the sciences around this, which
is, you see more and more examples
of applying AI technology within the context of, like, historians,
as an example, right? You have archives, you know,
and you have all these books, and it can
actually help you as an assistant, right, around that, but

(23:41):
not only with text now, but with diagrams, right. And
I've seen it in anthropology too, right, and archaeology, with
examples of engravings and translations and things that can happen.
So as you see, in diverse fields, people applying
these techniques to advance how to do physics or
how to do chemistry, they inspire each other, right, and

(24:02):
they say, you know, how does it actually apply to
my area? So once that happens, it becomes less
of a chore of, like, my god, you know, how
do I have to deal with this? Actually, it's
triggered by curiosity. There'll be, like,
you know, faculty that will be like, you know what,
you know, let me explore what this means for my area,
and they will adapt it to the local context, to
the local, you know, language, and the profession itself. So

(24:25):
I see that as a positive vector, that it's not
all going to feel like homework.

Speaker 5 (24:30):
You know, it's not going to feel like, oh my god,
this is so overwhelming.

Speaker 4 (24:33):
But rather it's very practical: see what works.
What have I seen others do that is inspiring?

Speaker 3 (24:38):
And what am I inspired to do? You know,
what is... how is this going to help my career?

Speaker 4 (24:42):
I think that that's going to be an interesting question
for you know, those faculty members, for the students.

Speaker 2 (24:46):
The professions. Sorry, I'm going to stick with this example a while longer,
because it's really interesting. I'm curious, following up on what
you just said: one of the most persistent critiques
of academic, but also of many corporate, institutions
in recent years has been siloing. Right? Different parts of
the organization are going off on their own

(25:08):
and not speaking to each other. Is a real potential
benefit of AI that it's a kind of
simple tool for breaking down those kinds of barriers?
Is that an elegant way of
sort of saying what.

Speaker 3 (25:24):
I really think?

Speaker 4 (25:25):
And I was actually just having a conversation with a provost,
very much on this topic, very recently, exactly
on that, which is all this, you know, this
appetite, right, to collaborate across disciplines. There's a lot of
attempts towards that goal, right: creating interdisciplinary centers, creating dual-degree
programs or dual-appointment programs. But actually, a

(25:46):
lot of progress in academia happens by methodology too, right?
Like, you know, when some methodology gets adopted,
I mean, the most famous example of that is the
scientific method. But when you
have a methodology that gets adopted, it also provides a
way to speak to your colleagues across different disciplines. And
I think what's happened in AI is linked to that,

(26:08):
that within the context of the scientific method, as an example,
the methodology about what we do in discovery,
the role of data, the role of these neural networks, of
how we actually find proximity of concepts to one another,
is actually fundamentally different than how we've traditionally applied it.
So as we see, across more professions, people applying this

(26:32):
methodology is also going to give some element of common
language to each other, right? And in fact, you know,
in this very high-dimensional representation of information that is
present in neural networks, we may find amazing adjacencies or
connections of themes and topics in ways that the individual
practitioners cannot describe, but that will yet be latent.

Speaker 3 (26:53):
In these large neural networks.

Speaker 4 (26:56):
We are going to suffer a little bit from causality,
from the problem of, like, hey, what's the root cause
of that? Because I think one of the unsatisfying aspects
of this methodology is that it may give you
answers for which it doesn't give you good reasons for
where the answers came from. And then there will be
the traditional process of discovery of saying, if that is

(27:16):
the answer, what are the reasons? So we're going to
have to do this sort of hybrid way of understanding
the world. But I do think that common layer of
AI is a powerful new thing.

Speaker 2 (27:27):
Yeah. Well, a couple of random questions. I can't not ask
you about the writers' strike that just
ended in Hollywood. One of the sticking points was how
the studios and writers would treat AI-generated content. Do
writers get credit if their material was somehow the source
for AI? But more broadly, do the writers need protections

(27:49):
against the use of AI? I could go on. You know,
you'll be familiar with all of this. Had you been,
I don't know whether you were, but had either side
called you in for advice during that? If the
writers had called you and said, Dario, what should we do
about AI, and how should that be reflected
in our contract negotiations, what would

(28:10):
you have told them?

Speaker 5 (28:12):
The way I think about that is,

Speaker 3 (28:15):
I would divide it into two pieces.

Speaker 4 (28:16):
First is what's technically possible, right, and anticipating scenarios,
like, you know, what can you do with voice cloning, for example?
You know, now, for example, it is possible. Take
dubbing, right, let's take that topic. Around the world,
there were all these folks that would dub people into
other languages. Well, now you can do these incredible renderings.

(28:38):
I don't know if you've seen them, where, you know,
you match the lips, it's your original voice, but
speaking any language that you want. As an example,
so that has a set of implications around it. I mean,
just to give an example. So I would say, create
a taxonomy that describes technical capabilities that we know of
today, and applications to the industry, and examples of,

(28:58):
like, hey, you know, I could film you for five
minutes and I could generate two hours of content of
you, and then, if you get paid by the hour,
obviously I'm not paying you for that other part. So
I would say: technological capability, and then map, with their
expertise, the consequences of how it changes the way they
work or the way they interact or the way they negotiate, and so on.

Speaker 3 (29:16):
So that would be one element of it, and then the.

Speaker 4 (29:19):
Other one is like a non technology related matter, which
is an element of almost distributed justices like who deserves
what right and who has the power to get what?
And then that's a completely different discussion. That is to say, well,
if this is the scenario of what's possible, you know,
what do we want and what are we able to get?
And I think that that's a different discussion, which.

Speaker 2 (29:40):
Which is, with all of that, which one do you do

Speaker 3 (29:42):
First?

Speaker 4 (29:43):
I think it's very helpful to have an understanding of
what's possible and how it changes a landscape.

Speaker 3 (29:51):
As part of a broader.

Speaker 4 (29:53):
Discussion, right, and a broad negotiation, because you also have
to see the opportunities, because there will be a lot
of ground to say, actually, you know, if we can
do it this way and we can all be
that much more efficient in getting this piece of work done
or this filming done, and we have a reasonable agreement
about how both sides benefit from it, right, then

(30:16):
that's a.

Speaker 3 (30:16):
Win win for everybody.

Speaker 2 (30:17):
Yeah.

Speaker 5 (30:18):
Right, so that, I think, would be the golden triangle.

Speaker 3 (30:21):
Right.

Speaker 2 (30:21):
Here's my reading, and I would like you to correct
me if I'm wrong, and I'm likely to be wrong.
When I looked at that strike, I said, if they're
worried about AI, if the writers are worried about AI, that
seems silly. It should be the studios who are worried
about the economic impact of AI. Doesn't, in the
long run, AI put the studios out of business long
before it puts the writers out of business? I only

(30:43):
need the studio because the costs of production are as
high as the sky, and the costs of production are overwhelming.
Whereas if I have a tool
which introduces massive technological efficiencies to the production of movies,
then why aren't the studios the scared ones?

Speaker 4 (31:02):
Or maybe, maybe you need, like, a different kind of
studio, a different kind of studio.

Speaker 2 (31:06):
But I mean, in this strike, the frightened
ones were the writers, not, you know, the
studios. Wasn't that backwards?

Speaker 3 (31:18):
I haven't thought about it.

Speaker 5 (31:19):
Uh, it can be about the implications of it.

Speaker 4 (31:21):
It goes back to what we were talking about before. The implications
are so horizontal that it is right to think about it,
like, what does.

Speaker 3 (31:27):
It do to the studios as well? Right?

Speaker 2 (31:28):
Yeah?

Speaker 4 (31:29):
But then, you know, the reason why that happens is
the order of either the negotiations, or who
first got concerned about it and did something about it, right,
which, in the context of the strike. You know,
I don't know what the equivalent conversations are going on inside
the studios, and whether they have a war room saying,
what is this going to mean to us? Right? But

(31:50):
it doesn't get exercised through a strike, but maybe through
a task force inside, you know, the companies, about what
are they going to do?

Speaker 3 (31:55):
Right?

Speaker 2 (31:56):
Well, and to go back to the thing you said,
the first thing you do is you make a list
of what the technological capabilities are. But don't technological capabilities change
every, I mean, they do. You're racing ahead so fast,
so you can't. Can you have a contract? I'm sorry
for getting into the weeds here, but this is interesting.
You can't have a five-year contract if

(32:16):
the contract is based on an assessment of technological capabilities
in twenty twenty-three, because by the time we get
to twenty twenty-eight, it's totally different. Right.

Speaker 4 (32:28):
Yeah. But, like, you know, I mean, where I was
going is, there are some abstractions around that. It's like,
you know, what can we do with my image? Right? Like,
if I get the general category that my image can
be reproduced, generate content and so on, it's like, let's
talk about the abstract notion of who has rights to
that, or do we both get to benefit from that?

(32:48):
If you get that straight, yes, the nature of how
the image gets altered or created will change underneath,
but the concept will stay the same. And so I
think what's important is to get the categories right.

Speaker 2 (33:00):
Yeah. Yeah, if you had to think about
the biggest technological revolutions of the postwar era, the last
seventy five years, you could all come up with a list. Actually,
it's really fun to come up with a list. I
was thinking about this earlier. You know, containerized
shipping is my favorite, the green revolution, the internet.

(33:25):
Where is AI in that list?

Speaker 4 (33:29):
So I would put it first. In that context that
you put forth, since World War Two, undoubtedly,
computing as a category is one of those trajectories that
has reshaped.

Speaker 3 (33:42):
Right, our world.

Speaker 4 (33:43):
And I think, within computing, I would say the
role that semiconductors have had has been incredibly defining. I
would say AI is the second example of that, as
a core architecture that is going to have an equivalent
level of impact. And then the third leg I would
put to that equation.

Speaker 3 (34:03):
Will be quantum and quantum information.

Speaker 4 (34:05):
And that's sort of, like, I like to summarize that
the future of computing is bits, neurons, and qubits, and
it is that idea of high precision computation, the world
of neural networks and artificial intelligence, and the world of
quantum, and the combination of those things is going to
be the defining force of the next hundred years in
that category of computing.

Speaker 3 (34:23):
But it makes the list for sure.

Speaker 2 (34:24):
If it's that high up on the list, here's
a total hypothetical. If you were starting over, if you
were starting IBM right now, would you say, oh,
our AI operations actually should be way bigger? Like,
how many thousands of people are working for you?

Speaker 4 (34:41):
So within the research division it's about like three thousand,
five hundred scientists.

Speaker 2 (34:46):
So, in a perfect world, if it's that big,
isn't that too small as a group?

Speaker 4 (34:51):
Yeah. Well, that's just the research division. I mean,
IBM overall, but I mean, like.

Speaker 2 (34:58):
So, starting from first principles: we've got
a technology that you're ranking with computing, you know,
up there in terms of a world changer.
So what I'm basically asking is, are we
underinvested in this huge, you.

Speaker 4 (35:16):
Know, but so so yeah, it's a it's a good question.
So like what I would say is that I think
we should segment how many people do you need on
the creation of the technology itself, and what is the
right size of research and engineers and compute to do that,
And how many people do you need in the sort
of application of the technology to create better products, to

(35:38):
deliver services and consulting and then ultimately to diffuse it
through you know, sort of all spheres of society. And
the numbers are very different, and that is not different
than anywhere else. I mean, I mean, if you give
examples of since you were talking about in context of
World War two, how many people does it take to
create you know, an atomic weapon as an example, it's
a large number.

Speaker 3 (35:58):
I mean, it wasn't just Los Alamos.

Speaker 4 (35:59):
There were a lot of people involved. Okay, it's a
large number, but it wasn't a million people, right? So
you can have highly concentrated teams of people that, with
enough resources, can do extraordinary scientific and technological achievements, and
that, almost by definition, is going to be a fraction,
like one percent, compared to the total volume that
is going to be required to then

(36:20):
deal with it.

Speaker 2 (36:21):
But the application side is almost infinite.

Speaker 3 (36:23):
That's exactly it.

Speaker 5 (36:24):
So that is where like in the end, the bottleneck
really is.

Speaker 4 (36:28):
So with, you know, thousands of scientists and engineers, you
can create world-class AI, right? And so, no, you
don't need ten thousand to be able to create the
large language models and the generative models and so on. But
you need thousands, and you need, you know, a very significant
amount of compute and data.

Speaker 3 (36:44):
You need that.

Speaker 4 (36:45):
The rest is: okay, I build software, I build databases,
or I build a software product that allows you to
do inventory management, or I build, you know, a photo
editor and so on. Now, that product incorporating the AI, modifying it,
expanding it and so on, well, now you're talking about
the entire software industry. So now you're talking about millions

(37:06):
of people, right, who are needed, you know, who are
required to bring AI into their products. Then you go
a step beyond the technology creators in terms of
software and you say, well, okay, now what about the skills to
help organizations go and deploy it in the Department of, you know,
the Interior, right? And then I say, okay, well, now
you need, like, consultants and experts and people to work

(37:28):
there, to integrate it into the workflow. So now you're
talking about the many tens of millions of people around that.
So I see it as these concentric circles of it.
But to some degree, in many of these core technology areas,
just saying, like, well, I need a team of, like,
one hundred thousand people to create, like, AI or
a new transistor or a new quantum computer, it's
actually diminishing returns, right? In the end, like,

(37:49):
too many people connecting with each other.

Speaker 2 (37:50):
Is very difficult. But on the application side, just
to go back to our example of that college,
just the task of sitting down with the faculty and
working with them to reimagine what they do with this
new set of tools in mind, with the understanding that
the students coming in are probably going to know more

(38:12):
about it than they do, that's a lot. I mean,
that is a herculean people problem.

Speaker 3 (38:18):
It's a people problem.

Speaker 4 (38:19):
Yeah. That's why I started with the barriers
to adoption of that. In the context of IBM, as
an example, that's why we have a consulting organization, IBM Consulting,
that complements IBM technology, and the IBM Consulting organization
has over one hundred and fifty thousand employees, because of
this question, right? Because you have to sit down and
you say, okay, what problem are you trying to solve?

(38:41):
What is the methodology we're going to use, and here's
the technology options that we have to be able to
bring to the table. In the end, the adoption across
our society will be limited by this part. The technology
is going to make it easier and more cost-effective to
implement those solutions. You first have to think about what

(39:01):
you want to do, how you're going to do it,
and how you're going to bring it into the
life of, in this context, the faculty member or, you know,
the administrator and so on in the college.

Speaker 2 (39:10):
With that Hollywood notion, which I thought was
really interesting, that in a Hollywood strike
you have to have this conversation about distributive justice,
a conversation about how do we. That's a really hard
conversation to have. So this brings me
to my next question, which is that, as we were talking
about backstage, you have two daughters, one in college,

(39:34):
one about to go to college. That's right. So they're
both science minded. So tell me about the conversations you
have with your daughters. You have a unique conversation
with your daughters, because your conversation, your advice to them,
is influenced by what you do for a living.

Speaker 3 (39:51):
Yes, it's true.

Speaker 2 (39:52):
So did you warn your daughters away from certain fields?
Did you say, whatever you do, don't be No, no.

Speaker 4 (40:01):
No, that's not my style. I mean, for me, no,
I try not to be, like, you know, preachy about that.
So for me, it was just about showing by example
the things I love, right, and things I care about,
and then, you know, bringing them to the lab and
seeing things, and then the natural conversations about things I'm working
on or interesting people I meet. So to the extent

(40:21):
that they have chosen that, and obviously this has an
influence on them, it has been through seeing it, you know,
perhaps through my eyes, right.

Speaker 3 (40:29):
And what they see me do, and that I
like my profession, right.

Speaker 2 (40:31):
But one of your daughters, you said, is thinking that
she wants to be a doctor. But being a doctor
in a post-AI world is surely a very different
proposition than being a doctor in a pre-AI world.
Have you tried to prepare her
for that difference? Have you explained to her what you
think will happen to this profession she might enter?

Speaker 4 (40:51):
Yeah, I mean, not in, like, you know, an incredible amount
of detail, but yes, at the level of
understanding what is changing, like the lens of information, the lens
with which you can look at the world, and what
is possible and what it can do. Like, what is
our role and what is the role of the technology,
and how that shapes things. At that level of abstraction, for sure,

(41:13):
but not at the level of, like, don't be a radiologist,
you know, because this is what we.

Speaker 2 (41:17):
Want for you. I was gonna say, if you'ren't happy
with your current job, you could do a podcast called
Parenting Tips with Dario, which is just an AI person
gives you advice on what your kids should do based
on exactly this, Like should I be a radiologist? Dario
tell me, Like it seems to be a really important question. Yeah,
let me ask this question in a more I'm joking,

(41:37):
but in a more serious way. Surely it would If
I don't mean to use your daughter as an example.
But let's imagine we're giving advice to someone who wants
to enter medicine. A really useful conversation to have is
what are the skills that are will be most prized
in that profession fifteen years from now, and are they

(41:58):
different from the skills that are prized now. How would
you answer that question?

Speaker 3 (42:02):
Yeah?

Speaker 4 (42:03):
I think, for example, this goes back to how the
scientific method in this context, like the practice of
medicine, is going to change. I think we will see more
changes in how we practice the scientific method and so
on, as a consequence of what is happening with the
world of computing and information, how we represent information, how
we represent knowledge, how we extract meaning from knowledge as

(42:26):
a method, than we have seen in the last two
hundred years. So therefore, what I would strongly encourage
is not about, like, hey, use this tool for doing
this or doing that, but the curriculum itself, an
understanding of how we do problem solving in the age of,
like, data and data representation and so on. That needs
to be embedded in the curriculum of everybody, you know.

Speaker 3 (42:48):
That is I would say, actually quite horizontally.

Speaker 4 (42:50):
But certainly in the context of medicine and science and
so on, for sure. And to the extent that that
gets ingrained, that will give us a lens, so that no
matter what specialty they go into within medicine, they will say, actually,
the way I want to be able to tackle improving
the quality of care, the way to do that, is,
in addition to all the elements that we have practiced

(43:11):
in the field of medicine, this new lens.

Speaker 3 (43:14):
And are we.

Speaker 4 (43:14):
Representing the data the right way? Do we have the
right tools to be able to represent that knowledge? Am
I incorporating that, along with my own
knowledge, in a way that gives me better outcomes?

Speaker 3 (43:25):
Right?

Speaker 4 (43:25):
Do I have the rigor of benchmarking, too, and the quality
of the results? So that is what needs to be incorporated.

Speaker 2 (43:32):
In a perfect world, if I asked you and your
team to rewrite the curriculum for American medical schools, how dramatic
a revision is that? Are we tinkering with ten percent
of the curriculum, or are we tinkering with fifty percent of it?

Speaker 4 (43:50):
I think there would be a subset of classes that
is about the method, the methodology, what has changed, like,
having this lens to understand it. And then within
each class, that methodology will represent something that is embedded
in it, right? So it will be substantive, but not,

(44:11):
but it doesn't mean replacing the specialization and the context
and the knowledge of each domain. But I do think
everybody should have sort of a basic knowledge of the horizontal, right?
What is it, how does it work, what tools do you have,
what is the technology, and, like, you know, what are.

Speaker 3 (44:25):
The dos and don'ts around that?

Speaker 4 (44:27):
And then in every area you say, you know that
thing that you learned? This is how it applies to anatomy,
and this is how, you know, it applies to,
you know, radiology, if you study that. Or this is
how you apply it, you know, in the context of discovery,
right, of cell structure, and this is how we can
use it. Or protein folding, and this is how it
does it. So that way you'll see a connective tissue

(44:48):
throughout the whole thing.

Speaker 2 (44:49):
Yeah. I mean, I would add to that, because I
was thinking that it's also this incredible opportunity
to do what doctors are supposed to do but don't
have time to do now, which is, they're so consumed
with figuring out what's wrong with you that they have
little time to talk about the implications of the diagnosis,

(45:11):
and what we really want, if we can free them
of some of the burden of what is actually quite
a prosaic question, what's wrong with you, and leave
the hard human thing of, should you be
scared or hopeful? What do you
need to do? Let me put this in
the context of all the patients I've seen. That conversation,
which is the most important one, is the one that

(45:33):
gets lost, it seems to me. So, if I had to, I
would add that, if we're reimagining the curriculum of med school,
and, by the way, there's very little time,
maybe we have to add two more years to med school,
the whole thing about bringing
back the human side. You know, now, if I

(45:56):
can give you ten more minutes, how do you use
those ten more minutes?

Speaker 4 (46:00):
But that reconceptualization that you just did
is what we should be doing around this, because I
think the debate as to, like, well, are we going
to need doctors or not, is actually a not very
useful debate. Rather, this other question: how is
your time being spent? What problems are you getting stuck on?
I mean, I generalize this with the obvious observation
that if you look around in your professions, in our

(46:22):
daily lives, we have not run out of problems to solve.
So, as an example of that: hey, if I'm
spending all my time trying to do diagnosis, and I
could do that ten times faster, and it allows me
actually to go, you know, and take care of the
patients and all the next steps of what we have
to do about it, that's probably a trade-off that
a lot of doctors would take, right? And then you say, well,
you know, to what degree does it allow me to

(46:43):
do that? And I can do these other things, and
these other things are critically important for my profession.
So when you actually become less abstract, and, like, we
get past the futile conversation of, like, oh, there are no
more jobs and AI is going to take all of them,
which is kind of nonsense, you go back to saying,
in practice, in your context, right, for you, what does

(47:05):
it mean? How do you work?

Speaker 3 (47:06):
What can you do differently around that?

Speaker 4 (47:08):
Actually, that's a much richer conversation, and very often we
would find that there's a portion of the
work we do where we say, I would rather do less
of that. This other part I like a lot.
And if it is possible that technology could help us
make that trade-off, I'll.

Speaker 3 (47:22):
Take it in a heartbeat.

Speaker 4 (47:23):
Now, poorly implemented technology can also create another problem.

Speaker 3 (47:27):
You say, Hey, this was supposed to solve.

Speaker 4 (47:29):
Me things, but the way it's being implemented is not
helping me, right? It's making my life more miserable,
and so on, or I've lost the connection to how I
used to work, et cetera. So that is why design
is so important. That is why workflow, also, is
so important in being able to solve these problems. But
it begins by, you know, going from the intergalactic to

(47:52):
the reality of it, of that faculty member in the
liberal arts college or, you know, a
practitioner in medicine in a hospital, and what it
means for them.

Speaker 3 (48:01):
Right.

Speaker 2 (48:02):
Yeah. What struck me, Darío, throughout our conversation is how
much of this revolution is non-technical. That's to say,
you guys are doing the technical thing here, but the
revolution is going to require a whole range
of people doing things that have nothing to do with software,
that have to do with working out new human arrangements.

(52:24):
Talking about that, I mean, I keep coming back to
the Hollywood strike thing, that you have to have a
conversation about our values as creators of movies.
How are we going to divide up the credit
and the like? That's a conversation about philosophy,
and, you know, it is.

Speaker 5 (48:46):
And it is. It's in the grand tradition of why, you know,
a liberal.

Speaker 4 (48:51):
Education is so important in the broadest possible sense. Right,
there's no common conception of the good, right? That is
always tested in a dialogue that happens within our society, and
technology is going to fit in that context too, right?
So that's why I personally, as a philosophy, am not
a technological determinist, right? And I don't like it when colleagues

(49:12):
in my profession start saying, like, well, this is
the way the technology is going to be, and as a consequence,
this is how society is going to be. I'm like,
that's a highly contested claim. And if you want to
enter into the realm of politics or the real world, then
go and stand up on a stool and discuss whether
that's what society wants. You will find that there's a
huge diversity of opinions and perspectives, and that's what makes

(49:34):
you know, in a democracy, the richness of
our society. And in the end, that is going to
be the centerpiece of the conversation: what do we want,
you know, who gets what, and so on. And that
is actually, I don't think it's anything negative. That's as it
should be, because in the end it is anchored in who
we want to be as humans, you know, as friends, family, citizens,

(49:55):
and we have many overlapping sets of responsibilities, right? And
as a technology creator, my own responsibilities are not just
as a scientist and a technology creator.

Speaker 3 (50:03):
I'm also a member.

Speaker 4 (50:04):
Of a family, I'm a citizen, and I'm many other
things that I care about. And I think that sometimes,
in the debate, the technological determinists start
butting into what is the realm of, you know,
justice and, you know, society and philosophy and democracy.
And that's where they get the most uncomfortable, because it's

(50:25):
like, I'm just telling you, like, you know, what's possible,
and when there's pushback, it's like, yeah, but now we're
talking about how we live and how we work and
how much I.

Speaker 3 (50:36):
Get paid or not paid.

Speaker 4 (50:38):
So technology is important. Technology shapes that conversation, but
we're going to have the conversation with a different language,
as it should be, and technologists need to get accustomed
to that if they want to participate in that world with
its broad consequences. Hey, get accustomed to dealing with
the complexity of that world of politics, society, institutions, unions,

(50:59):
all that stuff. And, you know, you can't be, like,
whiny about it, like, they're not adopting my technology.
That's what it takes to bring technology into the world.

Speaker 2 (51:07):
Yeah, well said. Thank you, Darío, for this wonderful conversation.
Thank you to all of you for coming and listening,
and thank you.

Speaker 1 (51:19):
Thank you.

Speaker 2 (51:23):
Darío Gil transformed how I think about the future of AI.
He explained to me how huge a leap it
was when we went from chess-playing models to large language
models, and he talked about how we still have
a lot of room to grow. That's why it's important
that we get things right. The future of AI is
impossible to predict, but the technology has so much potential

(51:47):
in every industry. Zooming in on an academic or medical setting
showed just how close we are to the widespread adoption
of AI. Even Hollywood is being forced to figure this out.
Institutions of all sorts will have to be at the
forefront of integration in order to unlock the full power
of AI thoughtfully and responsibly. Humans have the power and

(52:10):
the responsibility to shape the tech for our world. I,
for one, am excited to see how things play out.
Smart Talks with IBM is produced by Matt Romano, Joey Fishground,
David Jaw, and Jacob Goldstein. We're edited by Lydia Jane Kott.
Our engineers are Jason Gambrel, Sarah Bruguier, and Ben Holliday.

(52:32):
Theme song by Gramoscope. Special thanks to Andy Kelly, Kathy Callahan,
and the 8 Bar and IBM teams, as well as
the Pushkin marketing team. Smart Talks with IBM is a
production of Pushkin Industries and Ruby Studio at iHeartMedia. To
find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts,

(52:55):
or wherever you listen to podcasts. I'm Malcolm Gladwell. This
is a paid advertisement from IBM.
