
June 25, 2024 44 mins

In a rapidly evolving world, we need to balance the fear surrounding AI and its role in the workplace with its potential to drive productivity growth. In this special live episode of Smart Talks with IBM, Malcolm Gladwell is joined onstage by Rob Thomas, senior vice president of software and chief commercial officer at IBM, during NY Tech Week. They discuss “the productivity paradox,” the importance of open-source AI, and a future where AI will touch every industry.

 

This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions.

 

Visit us at ibm.com/smarttalks

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production from iHeartRadio. Today, we
are witness to one of those rare moments in history,
the rise of an innovative technology with the potential to
radically transform business and society forever. That technology, of course,

(00:24):
is artificial intelligence, and it's the central focus for this
new season of Smart Talks with IBM. Join hosts from
your favorite Pushkin podcasts as they talk with industry experts
and leaders to explore how businesses can integrate AI into
their workflows and help drive real change in this new
era of AI, and of course, host Malcolm Gladwell will

(00:46):
be there to guide you through the season and throw
in his two cents as well. Look out for new
episodes of Smart Talks with IBM every other week on
the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts,
and learn more at ibm.com/smarttalks.

Speaker 2 (01:11):
Pushkin.

Speaker 3 (01:16):
Welcome, Welcome, Welcome to Smart Talks with IBM.

Speaker 4 (01:23):
Hello, Hello, Welcome to Smart Talks with IBM, a podcast
from Pushkin Industries, iHeartRadio and IBM. I'm Malcolm Gladwell. This season,
we're diving back into the world of artificial intelligence, but
with a focus on the powerful concept of open: its possibilities, implications,
and misconceptions. We'll look at openness from a variety of

(01:46):
angles and explore how the concept is already reshaping industries,
ways of doing business, and our very notion of what's possible.
And for the first episode of this season, we're bringing
you a special conversation.

Speaker 2 (01:59):
I recently sat

Speaker 4 (02:00):
down with Rob Thomas. Rob is the senior vice
president of software and chief commercial officer of IBM. We
spoke to him in front of a live audience as
part of New York Tech Week. We discussed how businesses
can harness the immense productivity benefits of AI while implementing
it in a responsible and ethical manner. We also broke

(02:21):
down a fascinating concept that Rob believes about AI, known
as the productivity paradox. Okay, let's get to the conversation.

Speaker 2 (02:38):
How are we doing? Good. Rob,

Speaker 4 (02:40):
This is our second time. We did one of these
in the middle of the pandemic. But it's all
such a blur now that neither of us can figure out when
it was.

Speaker 3 (02:49):
I know, it's hard. Those were like blurry years.
You don't know what happened, right?

Speaker 4 (02:54):
Well, it's good to see you, to meet you again.
I wanted to start by going back. You've been at IBM...

Speaker 3 (02:59):
Twenty-five years, right? Twenty-five in July, believe it or not.

Speaker 2 (03:04):
So you were a kid when you joined.

Speaker 3 (03:06):
I was four.

Speaker 4 (03:09):
So I want to contrast present day Rob and twenty
five years ago Rob. When you arrive at IBM, what
do you think your job is going to be?

Speaker 2 (03:20):
Where do you think your career is going?

Speaker 4 (03:21):
Where do you think the kind of problems you're going
to be addressing are?

Speaker 3 (03:26):
Well, it's kind of surreal because I joined IBM in
consulting, and I'm coming out of school, and you quickly realize, wait,
the job of a consultant is to tell other companies
what to do. And I was like, I literally know nothing,
and so you're immediately trying to figure out, so how
am I going to be relevant given that I know
absolutely nothing to advise other companies on what they should

(03:48):
be doing. And I remember it well, like we were
sitting in a room. When you're a consultant, you're waiting
for somebody else to find work for you. A bunch
of us sitting in a room and somebody walks in
and says, we need somebody that knows Visio. Does anybody
know Visio? I'd never heard of Visio. I don't know
if anybody in the room had. So everybody's like sitting

(04:09):
around looking at their shoes. So finally I was like,
I know it. So I raised my hand. They're like, great,
we got a project for you next week. So I
was like, all right, I have like three days to
figure out what Visio is, and I hope I can
actually figure out how to use it. Now, luckily, it
wasn't like a programming language. I mean, it's pretty much

(04:30):
a drag-and-drop capability. And so I literally left
the office, went to a bookstore, bought the first three
books on Visio I could find, spent the whole weekend
reading the books, and showed up and got to
work on the project. And so it was a bit
of a risky move, but I think that's kind of
the lesson, well, if you don't take risk, you'll

(04:53):
never achieve. And so to some extent, everybody's
making everything up all the time. It's like, can you
learn faster than somebody else? That's what the difference is
in almost every part of life. And so it was
not planned, it was an accident, but it kind
of forced me to figure out that you're going to
have to figure things out.

Speaker 4 (05:13):
You know, we're here to talk about AI, and I'm
curious about the evolution of your understanding, or IBM's understanding,
of AI. At what point in the last twenty
five years do you begin to think, oh, this is
really going to be at the core of what we
think about and work on at this company.

Speaker 3 (05:33):
The computer scientist John McCarthy, he's the person
that's credited with coining the phrase artificial intelligence, back
in the fifties. And he made an interesting comment. He said,
once it works, it's no longer called AI,
and that then became what's called the AI effect,

(05:54):
which is, it seems very difficult, very mysterious, but once
it becomes commonplace, it's just no longer called AI.
And so if you put that frame on it, I
think we've always been doing AI at some level. And
I even think back to when I joined IBM in
ninety nine. At that point there was work on rules

(06:14):
based engines, analytics, all of this was happening. So it
all depends on how you really define that term. You
could argue that you know, elements of statistics, probability. It's
not exactly AI, but it certainly feeds into it, and
so I feel like we've been working on this topic

(06:35):
of how do we deliver better insights better automation since
IBM was formed. If you read about what Thomas Watson
Junior did, that was all about automating tasks. Is that AI?
Well, probably not, certainly not by today's definition, but it's in
the same zip code.

Speaker 4 (06:53):
So from your perspective, it feels a lot more like
an evolution than a revolution.

Speaker 2 (06:57):
Is that a fair statement?

Speaker 3 (06:58):
Yes, yeah, which is how I think most great things in technology
tend to happen. Many of the revolutions,
if you will, tend to fizzle out.

Speaker 4 (07:09):
But even given that is there, I guess what I'm
asking is, I'm curious about whether there was a moment
in that evolution when you had to readjust your expectations
about what AI was going to be capable of.

Speaker 2 (07:21):
I mean, was there, you know, was.

Speaker 4 (07:24):
There a particular innovation or a particular problem that was
solved that made you think, oh, this is different than
what I thought.

Speaker 3 (07:35):
I would say the moments that caught our attention: certainly
Kasparov winning the chess tournament, or Deep Blue
beating Kasparov, I should say. Nobody really thought that
was possible before that, and then it was Watson winning Jeopardy.
These were moments that said, maybe there's more here than

(07:56):
we even thought was possible. And so I do think
there are points in time where we realized maybe way
more could be done than we had even imagined. But
I do think it's consistent progress every month and every
year versus some seminal moment. Now, certainly large language models

(08:20):
as of late have caught everybody's attention because they have
a direct consumer application. But I would almost think of
that as what Netscape was for the web browser. Yeah,
it brought the Internet to everybody, but that didn't become
the Internet per se. Yeah.

Speaker 4 (08:39):
I have a cousin who worked for IBM
for forty-one years. I saw him this weekend. He's
in Toronto, by the way. I said, you work for
Rob Thomas. He would like this, he goes.

Speaker 2 (08:52):
He said, I'm five layers down.

Speaker 4 (08:55):
But so I always whenever I see my cousin, I
ask him, can you tell me again what you do?

Speaker 2 (08:59):
Because it's always changing, right? I guess this is a function
of working at IBM.

Speaker 4 (09:04):
So eventually he just gives up and says, you know,
we're just solving problems, that's what we're doing, which I
sort of loved as a kind of frame. And I
was curious, what's the coolest problem you ever worked on?
Not biggest, not most important, but the coolest, the one
that's like that sort of makes you smile when you
think back on it.

Speaker 3 (09:22):
Probably when I was in microelectronics, because it was a
world I had no exposure to. I hadn't studied computer science,
and we were building a lot of high performance semiconductor technology,
so just chips that do a really great job of
processing something or other. And we figured out that there

(09:45):
was a market in consumer gaming that was starting to happen,
and we got to the point where we became the
chip inside the Nintendo, the Microsoft Xbox, the Sony PlayStation,
so we basically had the entire gaming market running on
IBM chips.

Speaker 4 (10:05):
And so every parent basically is pointing at you
and saying you're the cause.

Speaker 3 (10:12):
Probably, well, they would have found it from anybody, but
it was the first time I could explain my job
to my kids, who were quite young at that time,
like what I did. It was more tangible for
them than saying we solve problems or, you know,
build solutions. It became very tangible for them, and
I think that's, you know, a rewarding part of the

(10:35):
job is when you can help your family actually understand
what you do. Most people can't do that. It's probably
easier for you. They can see the books,
but for some of us in the
business world, it's not always as obvious. So that was
like one example where the dots really connected.

Speaker 4 (10:51):
There were a couple of examples I wanted to stick with, to put a
little bit of this into the context of AI. I
love, because I love the frame of problem solving as
a way of understanding what the function of the technology is.
So I know that you guys did some
work with, I never know how to pronounce it, is
it Seville, Sevilla? With the football club Sevilla in Spain?

(11:14):
Tell me a little bit about that.
What problem were they trying to solve, and why did
they call you in?

Speaker 3 (11:21):
Every sports franchise is trying to get an advantage, right?
Let's just be clear about that. Everybody's asking, how can I use data, analytics, insights,
anything that will make us one percent better on the
field at some point in the future. And Sevilla reached

(11:44):
out to us because they had seen some of that.
We've done some work with the Toronto Raptors in the
past and others, and their thought was maybe there's something
we could do. They'd heard all about generative AI, they'd
heard about large language models, and the problem, back to
your point on solving problems, was we want to do

(12:04):
a way better job of assessing talent, because really the
lifeblood of a sports franchise is can you continue to
cultivate talent, can you find talent that others don't find?
Can you see something in somebody that they don't see
in themselves, or maybe no other team sees in them? And
we ended up building something with them called Scout Advisor,

(12:27):
which is built on watsonx, which basically just ingests
tons and tons of data and we like to think
of it as finding the needle in the haystack of
you know, here's three players that aren't being considered. They're
not on the top teams today, and I think working

(12:48):
with them together, we found some pretty good insights that's
helped them out.

Speaker 2 (12:51):
What was intriguing to

Speaker 4 (12:52):
Me was we're not just talking about quantitative data. We're
also talking about qualitative data. That's the puzzle part of
the thing that fastens me. How does what incorporate qualitative
analysis into that sort of so you just feeding in
scouting reports and things like that.

Speaker 3 (13:11):
I've got to think about how much I can
actually disclose. But if you think about it, quantitative
is relatively easy. Every team collects that, you know, what's
their forty-yard dash, though I don't think they use that term, certainly
not in Spain. That's all quantitative. Qualitative is what's happening

(13:33):
off the field. It could be diet, it could be habits,
it could be behavior. You can imagine a range of
things that would all feed into an athlete's performance, and
also relationships. There are many different aspects, and so it's trying
to figure out the right blend of quantitative and qualitative

(13:55):
that gives you a unique insight.

Speaker 4 (13:57):
How transparent is that kind of system? It's
telling you, pick this guy, not this guy. But is it
telling you why it prefers this guy to this guy?

Speaker 2 (14:06):
Was it?

Speaker 3 (14:08):
I think for anything in the realm of AI, you
have to answer the why question. Yeah, otherwise you've fallen
into the trap of the you know, the proverbial black box,
and then it's, wait, I made this decision, I never understood
why, it didn't work out. So you always have to
answer why, without a

Speaker 2 (14:25):
Doubt. And how is the why answered?

Speaker 3 (14:30):
Sources of data, the reasoning that went into it, and
so it's basically just tracing back the chain of how
you got to the answer. And in the case of
what we do in watsonx is, we have IBM models.
We also use some other open-source models. So it
would be, which model was used, what was the data
set that was fed into that model, how is it

(14:51):
making decisions? How is it performing? Is it robust, meaning
is it reliable in terms of, if you feed it
the same data set twice, do you get the
same answer? These are all the you know, the technical
aspects of understanding the why.
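To illustrate the kind of traceability Rob describes, here is a minimal sketch in Python, with hypothetical model and data set labels and a toy predict function standing in for a real deployment. It records which model and which data set produced an answer, and runs the robustness check he mentions: feed the same input twice and confirm you get the same answer.

import hashlib
import json

def predict(model_version: str, features: dict) -> str:
    # Toy stand-in for a real model call; deterministic on purpose.
    digest = hashlib.sha256(
        (model_version + json.dumps(features, sort_keys=True)).encode()
    ).hexdigest()
    return "consider" if int(digest, 16) % 2 == 0 else "pass"

def traced_prediction(model_version: str, dataset_id: str, features: dict) -> dict:
    # Answer the "why" question: record the model, the data set, and a robustness check.
    first = predict(model_version, features)
    second = predict(model_version, features)   # same input fed a second time
    return {
        "model_version": model_version,          # which model was used
        "dataset_id": dataset_id,                # which data set fed it
        "input": features,                       # what went into the decision
        "answer": first,
        "robust": first == second,               # same input, same answer?
    }

report = traced_prediction(
    model_version="scout-model-2024-01",         # hypothetical version label
    dataset_id="scouting-reports-v3",            # hypothetical data set label
    features={"sprint_seconds": 4.4, "matches_played": 28},
)
print(json.dumps(report, indent=2))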

Speaker 4 (15:05):
How quickly do you expect all professional sports franchises to
adopt some kind of, are they already there? If I
went out and polled the general managers of the one
hundred most valuable sports franchises in the world, how many
of them would be using some kind of AI system
to assist in their efforts?

Speaker 3 (15:24):
One hundred and twenty percent, meaning that everybody's doing it,
and some think they're doing way more than they probably
actually are. So everybody's doing it. I think what's weird
about sports is everybody's so convinced that what they're doing
is unique that they generally speaking don't want to work
with a third party to do it because they're afraid

(15:46):
that that would expose them. But in reality, I think
most are doing eighty to ninety percent of the same things.
But without a doubt, everybody's doing it.

Speaker 2 (15:55):
Yeah. Yeah.

Speaker 4 (15:58):
The other one that I loved, there was one about a
shipping line, Tricon, on the Mississippi River. Tell me
a little bit about that project. What problem were they
trying to solve?

Speaker 3 (16:10):
Think about the problem that I would say everybody noticed,
if you go back to twenty twenty, was things were
getting held up in ports. There was actually an
article in the paper this morning kind of tracing the
history of what happened in twenty twenty, twenty twenty-one, and why
ships were basically sitting at sea for months at a time.
And at that stage we just had a massive

(16:33):
throughput issue. But moving even beyond the pandemic, you can
see it now with ships getting through, like, the Panama Canal.
There's like a narrow window where you can get through,
and if you don't have your paperwork done, you don't
have the right approvals, you're not going through and it
may cost you a day or two and that's a

(16:53):
lot of money in the shipping industry. In the Tricon example,
it's really just about, when you pull into a port,
if you have the right paperwork done, you can get
goods off the ship very quickly. They ship a lot
of food which, by definition, since it's not packaged food,

(17:14):
it's fresh food, there is an expiration period and so
if it takes them an extra two hours, certainly multiple
hours or a day, they have a massive problem because
then you're going to deal with spoilage and so it's
going to set you back. And what we've worked with
them on is using an assistant that we've built in

(17:35):
watsonx called Orchestrate, which basically is just AI doing
digital labor, so we can replicate nearly any repetitive task
and do that with software instead of humans. So, as
you may imagine, the shipping industry still has a lot of
paperwork that goes on, and so being able to take

(17:57):
forms that normally would take multiple hours of filling out,
oh, this isn't right, send it back, we've basically built
that as a digital skill inside of watsonx Orchestrate,
and so now it's done in minutes.

Speaker 4 (18:12):
Did they realize that they could have that kind
of efficiency by teaming up with you? Or is that
something you came to them and said, guys, we can
do this way better than you think.

Speaker 2 (18:23):
What's the.

Speaker 3 (18:25):
I'd say it's always both sides coming together
at a moment that for some reason makes sense, because
you could say, why didn't this happen, like, five years ago,
this seems so obvious. Well, the technology wasn't quite ready then,
I would say. But they knew they had a need
because, I forget what the precise number is, but you know,

(18:46):
reduction of spoilage has massive impact on their bottom line,
and so they knew they had a need. We thought
we could solve it and the two together.

Speaker 2 (18:58):
Did you guys go to them, or did they
come to you?

Speaker 3 (19:02):
I recall that this one was an inbound, meaning they
had reached out to IBM: we'd like to solve
this problem. I think it went into one of our
digital centers, if I recall, it was literally a phone

Speaker 4 (19:13):
Call, but the other the reverse is more interesting to
me because there seems to be a very very large
universe of people who have problems that could be solved
this way and they don't realize it.

Speaker 2 (19:26):
What's your.

Speaker 4 (19:28):
Is there a shining example of this, of someone you
just think could benefit so much and
isn't benefiting right now?

Speaker 3 (19:38):
Maybe I'll answer it slightly differently. I'm surprised by
how many people can benefit that you wouldn't even logically
think of first. Let me give you an example. There's
a franchiser of hair salons, Sport Clips is the name.
My sons used to go there for haircuts because they

(19:59):
have, like, you can watch sports, so they loved that.
They got entertained while they would get their haircut. I
think the last place that you would think is using
AI today would be a franchiser of hair salons. But
just follow it through. The biggest part of how they
run their business is can I get people to cut hair?

(20:21):
And this is a high-turnover industry because there's a
lot of different places you can work if you want
to cut hair. People actually get injured cutting hair because
you're on your feet all day, that type of thing.
And they're using the same technology, Orchestrate, as part of their
recruiting process: how can they automate, with a lot of people
submitting resumes, who they speak to, how they qualify them

(20:44):
for the position. And so the reason I give that
example is the opportunity for AI, which is unlike other technologies,
is truly unlimited. It will touch every single business. It's
not the realm of the Fortune five hundred or the
Fortune one thousand. This is the Fortune any size. And

(21:06):
I think that may be one thing that people underestimate
about AI.

Speaker 4 (21:11):
What about, I mean, I was thinking about education as
a kind of, I mean, education is a perennial whipping
boy, you know, for living in the nineteenth century, right?
I'm just curious about, if a superintendent of a public
school system or the president of a university sat down

(21:31):
and had lunch with you and said, let's do the university first:
my costs are out of control, my enrollment is down,
my students hate me, and my board is revolting.

Speaker 2 (21:44):
Help.

Speaker 4 (21:46):
How would you think about helping someone
in that situation?

Speaker 3 (21:52):
I spend some time with universities. I like to go
back and visit the alma maters where I went to school,
and so I do that every year. The challenge I
have with seemingly every university is, there has to be
a will. And I'm not sure the incentives are
quite right today, because bringing in new technology, let's say

(22:13):
we want to go after, we can help you figure
out student recruiting or how you automate more of your education,
everybody suddenly feels threatened at that university. Hold on, that's my job.
I'm the one that decides that, or I'm the one
that wants to dictate the course. So there has to
be a will. So I think it's very possible, and

(22:36):
I do think over the next decade you will see
some universities that jump all over this and they will
move ahead, and you'll see others that do not, because
it's very possible.

Speaker 4 (22:48):
Where, how does, when you say there has to be
a will, is that the kind of
thing that people at IBM think about?
Like, in this hypothetical conversation
you might have with a university president, would you give
advice on where the will comes from?

Speaker 3 (23:08):
I don't do that as much in a university context.
I do that every day in a business context because
if you can find the right person in a business
that wants to focus on growth or the bottom line
or how do you create more productivity. Yes, it's going
to create a lot of organizational resistance potentially, but you

(23:29):
can find somebody that will figure out how to push
that through. I think for universities, I think that's also possible.
I'm not sure there's a return on
investment for us to do that.

Speaker 4 (23:40):
Yeah, yeah, yeah, let's define some terms.

Speaker 2 (23:47):
AI years, a term I'm told you'd like to use. What does
that mean?

Speaker 3 (23:52):
We just started using this term literally in the last
three months, and it was, it was what we
observed internally, which is, most technology you build, you say,
all right, what's going to happen in year one, year two,
year three, and it's, you know, largely by a calendar.
AI years are the idea that what used to be

(24:14):
a year is now like a week, and that is
how fast the technology is moving. To give you
an example, we had one client we're working with, they're
using one of our Granite models, and the results they
were getting were not very good. Accuracy was not there.
The performance was not there. So I was, like, scratching
my head, I was like, what is going on? They

(24:37):
were financial services, a bank. So I'm scratching my head,
like, what is going on? Everybody else is getting this,
and these results are horrible. And I said to
the team, which version of the model are you using?
This was in February. Like, we're using the one from October.
I was like, all right, now we know precisely the
problem, because the model from October is in effect useless

(25:00):
now since we're here in February.

Speaker 2 (25:02):
Seriously? Actually useless? Completely useless?

Speaker 3 (25:06):
Yeah, that is how fast this is changing. And so
the minute, same use case, same data, you give them
the model from late January instead of October, the results
are off the charts.

Speaker 2 (25:20):
Yeah.

Speaker 4 (25:21):
Wait, so what exactly happened between October and January?

Speaker 3 (25:24):
The model got way better?

Speaker 2 (25:25):
Can you dig into that? Like, what do you mean? In
what way?

Speaker 3 (25:27):
We are constantly, we have built large compute infrastructure where
we're doing model training. And to be clear, model training
is the realm of probably, in the world, my guess
is, five to ten companies. And so you build a model,
you're constantly training it, you're doing fine tuning, you're doing

(25:48):
more training, you're adding data, and every day, every hour it
gets better. And so how does it do that? You're
feeding it more data, you're feeding it more live examples.
We're using things like synthetic data at this point, which
is we're basically creating data to do the training as well.
All of this feeds into how useful the model is,

(26:10):
and so using the October model, those were the results
in October, just a fact, that's how good it was then.
But back to the concept of AI years, two weeks
is a long time.
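To make the AI years idea concrete, here is a toy sketch in Python, not IBM's actual training pipeline: a continual-improvement loop in which each cycle adds newly collected and synthetically generated examples before re-fitting a trivial stand-in model, so a later checkpoint typically scores better than an earlier one on the same evaluation data.

import random

random.seed(0)
TRUE_SLOPE = 2.0  # hidden relationship the toy model is trying to learn

def collect_real_examples(n):
    # Newly gathered "live" examples: noisy (x, y) observations.
    examples = []
    for _ in range(n):
        x = random.uniform(0.0, 10.0)
        examples.append((x, TRUE_SLOPE * x + random.gauss(0.0, 1.0)))
    return examples

def synthesize(examples, n):
    # Synthetic data: small perturbations of existing examples to enlarge the set.
    return [(x + random.gauss(0.0, 0.1), y + random.gauss(0.0, 0.1))
            for x, y in random.choices(examples, k=n)]

def fit(examples):
    # Trivial stand-in for "training": least-squares slope through the origin.
    return sum(x * y for x, y in examples) / sum(x * x for x, _ in examples)

def error(slope, heldout):
    # Mean absolute error on a fixed evaluation set (the client's use case).
    return sum(abs(slope * x - y) for x, y in heldout) / len(heldout)

heldout = collect_real_examples(200)   # evaluation data stays the same
training = collect_real_examples(5)    # what the early checkpoint was trained on

for checkpoint in ("October", "late January"):
    slope = fit(training)
    print(f"{checkpoint} checkpoint: slope={slope:.3f}, error={error(slope, heldout):.3f}")
    # Between checkpoints, keep feeding in real and synthetic examples.
    training = training + collect_real_examples(200) + synthesize(training, 200)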

Speaker 4 (26:23):
Does that, are we in a steep part of the
model learning curve, or do you expect this to continue
at this pace?

Speaker 3 (26:32):
I think that is the big question, and I don't have
an answer yet. By definition, at some point you would
think it would have to slow down a bit, but
it's not obvious that that is on the horizon.

Speaker 2 (26:44):
Still speeding up. Yes, how fast can it get?

Speaker 3 (26:50):
We've debated, can you actually have better results in the
afternoon than you did in the morning? Really, it's nuts. Yeah,
I know. But that's why we came up with
this term, because I think you also have to think
of, like, concepts that get people's attention.

Speaker 4 (27:08):
You're basically turning into a bakery. You're like, the bread
from yesterday, you know, you can have it for twenty-five
cents. But I mean, you could do preferential pricing. You
could say, we'll charge you x for yesterday's model, two
x for today's model.

Speaker 3 (27:25):
I think that's dangerous as a merchandising strategy, but I
get your point.

Speaker 2 (27:30):
Yeah, but that's crazy.

Speaker 4 (27:32):
And this, by the way, so is the
same true for, you're talking specifically about a model
that was created to help some aspect of financial
services. So is that kind of model accelerating faster and
running faster than other models for other kinds of problems?

Speaker 3 (27:48):
So this domain was code. Yeah, so by definition, if
you're feeding in more data, so more code, you
get those kinds of results, depending on the model type.
There's a lot of code in the world, and so
we can find it, we can create it. Like I said,
there are other aspects where there are probably fewer inputs available, which

(28:13):
means you probably won't get the same level of iteration.
But for code, that's certainly the cycle times that we're seeing.

Speaker 4 (28:18):
Yeah, and how do you know that? Let's stick with
this one example of this model you have. How do
you know that your model is better than

Speaker 2 (28:27):
Big company B down the street?

Speaker 4 (28:29):
A client asks you, why would I go with IBM as
opposed to some firm in the Valley that says
they have a model on this. What's your, how
do you frame your advantage?

Speaker 3 (28:41):
Well, we benchmark all of this, and I think the
most important metric is price performance. Not price, not performance,
but the combination of the two. And we're super competitive there.
With what we just released, with what we've done in
open source, we know that nobody's close to us right
now on code. Now, to be clear, that will probably change,

(29:03):
because it's like leap frog. People will jump ahead, then
we jump back ahead. But we're very confident that with
everything we've done in the last few months, we've taken
a huge leap forward here.
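As a hedged illustration of judging models on the combination Rob describes rather than on either number alone, here is a tiny Python comparison; the benchmark scores and prices are invented for the example, not real figures for any model.

# Toy comparison: neither score nor cost alone decides, the ratio does.
models = {
    "model_a": {"benchmark_score": 82.0, "cost_per_million_tokens": 4.00},
    "model_b": {"benchmark_score": 74.0, "cost_per_million_tokens": 1.50},
}

for name, m in models.items():
    price_performance = m["benchmark_score"] / m["cost_per_million_tokens"]
    print(f"{name}: score={m['benchmark_score']}, "
          f"cost=${m['cost_per_million_tokens']:.2f}/M tokens, "
          f"score per dollar={price_performance:.1f}")

best = max(models, key=lambda n: models[n]["benchmark_score"] / models[n]["cost_per_million_tokens"])
print("Best on price performance:", best)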

Speaker 2 (29:15):
Yeah.

Speaker 4 (29:16):
I mean this goes back to the point I was
making in the beginning, about the difference between your
twenty-something self in ninety-nine and yourself today.

Speaker 2 (29:26):
But this time.

Speaker 4 (29:27):
Compression has to be a crazy adjustment. So the concept
of what you're working on and how you make decisions
internally and things has to undergo this kind of revolution,
if you're switching from, I mean, back in the day,
a model might be useful for how

Speaker 2 (29:44):
Long years years?

Speaker 3 (29:46):
I think about, you know, statistical models that sit inside
things like SPSS, which is a product that a lot
of students use around the world. I mean, those have
been the same models for twenty years and they're still
very good at what they do. And so yes, it's
a completely different moment for how fast
this is moving. And I think it just raises the

(30:08):
bar for everybody, whether you're a technology provider like us,
or you're a bank or an insurance company or a
shipping company, to say, how do you really change your
culture to be way more aggressive than you normally would be?

Speaker 4 (30:28):
Does this mean, it's a weird question, but does this
mean a different set of personality or character
traits is necessary for a decision maker in tech now
than twenty-five years ago?

Speaker 3 (30:42):
There's a book I saw recently, it's called
The Geek Way, which talked about how technology companies have
started to operate in different ways, maybe, than many, you know,
traditional companies, and more about being data driven, more about delegation.

(31:04):
Are you willing to have the smartest person in the
room make decisions as opposed to the highest-paid person in
the room. I think these are all different aspects that
every company is going to face. Yeah.

Speaker 4 (31:15):
Yeah. Next term: let's talk about open. When you use that
word, open, what do you mean?

Speaker 3 (31:23):
I think there's really only one definition of open, which,
for technology, is open source. And open source means
the code is freely available: anybody can see it, access it,
contribute to it.

Speaker 4 (31:39):
And what is, tell me about why that's an important principle.

Speaker 3 (31:46):
When you take a topic like AI, I think it
would be really bad for the world if this was
in the hands of one or two companies, or three
or four, doesn't matter the number, some small number. Think
about like in history, sometime early nineteen hundreds, the Interstate
Commerce Commission was created, and the whole idea was to

(32:09):
protect farmers from railroads. Meaning they wanted to allow free trade,
but they knew that, well, there's only so many railroad tracks,
so we need to protect farmers from the shipping costs
that railroads could impose. So good idea, but over time
that got completely overtaken by the railroad lobby and then
they used that to basically just increase prices, and it

(32:33):
made the lives of farmers way more difficult. I think
you could play the same analogy through with AI. If
you allow a handful of companies to have the technology,
you regulate around the principles of those one or two companies,
then you've trapped the entire world. That would be very bad.
So the danger of that happening is there for sure. I mean

(32:56):
there are companies in Washington every week trying to
achieve that outcome. And so the opposite of that is
to say it's going to be open source, because
nobody can dispute open source because it's right there, everybody
can see it. And so I'm a strong believer that
open source will win for AI. It has to win.

(33:17):
It's not just important for business, but it's important for humans.

Speaker 4 (33:23):
I'm curious about the list of things
you worry about. Actually, before I ask, let
me ask this question very generally. What is the list
of things you worry about? What's your top five business
related worries right now?

Speaker 3 (33:38):
Top five? That's quite a first question. We could be
here for hours for me to answer.

Speaker 2 (33:44):
I did say business related.

Speaker 4 (33:45):
We can leave, you know, your kids' haircuts
out of it.

Speaker 3 (33:49):
The number one is always, it's the thing that's probably
always been true, which is just people. Do we have
the right skills? Are we doing a good job of
training our people? Are our people doing a good job
of working with clients? Like, that's number one. Number two
is innovation. Are we pushing the envelope enough? Are

(34:15):
we staying ahead? Number three, which kind of feeds
into the innovation one, is risk taking. Are we taking
enough risk? Without risk, there is no growth. And I
think the trap that every larger company inevitably falls into
is conservatism: things are good enough. And so it's, are

(34:37):
we pushing the envelope? Are we taking enough risk to
really have an impact? I'd say those are probably the
top three that I spend time on.

Speaker 4 (34:45):
Last term to define: productivity paradox, something I know you've
thought a lot about. What does that mean?

Speaker 3 (34:51):
So I started thinking hard about this because all I
saw and read every day was fear about AI.
And I studied economics, and so I kind of went
back to like basic economics. And there's been like a
macro investing formula. I guess I would say it's been

(35:12):
around forever that says growth comes from productivity growth plus
population growth plus debt growth. So if those three things
are working, you'll get GDP growth. And so then you
think about that and you say, well, debt growth, we're

(35:34):
probably not going back to zero percent interest rates, so
to some extent there's going to be a ceiling on that.
And then you look at population growth. There are shockingly
few countries or places in the world that will see
population growth over the next thirty to fifty years. In fact,
most places are not even at replacement rates. And so

(35:55):
I'm like, all right, so population growth is not going
to be there. So that would mean if you just
take it to the extreme, the only chance of continued
GDP growth is productivity. And the best way to solve
productivity is AI. That's why I say it's a paradox.

(36:17):
On one hand, everybody's scared to death it's going to take
over the world, take all of our jobs, ruin us.
But in reality, maybe it's the other way, which is
it's the only thing that can save us. Yeah, and
if you believe that economic equation, which I think has
proven quite true over hundreds of years, I do think

(36:37):
it's probably the only thing that can save us.
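The macro identity Rob is leaning on can be written out directly; the numbers in the second line are hypothetical, just to show why productivity ends up carrying the load once the population and debt contributions flatten.

\[
g_{\text{GDP}} \approx g_{\text{productivity}} + g_{\text{population}} + g_{\text{debt}}
\]
% Hypothetical example: with population growth near 0% and debt-fueled growth
% capped around 0.5%, reaching 2.5% GDP growth requires roughly 2% productivity
% growth, which is the gap AI is being asked to fill.
\[
2.5\% \approx 2.0\% + 0\% + 0.5\%
\]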

Speaker 4 (36:40):
I actually looked at the numbers yesterday, for a totally random reason,
on population growth in Europe. And here's a
special bonus question. We'll see how smart you are. Which
country in Europe, continental Europe, has the highest population growth?

Speaker 3 (36:56):
It's small continental Europe, probably one of the Nordics.

Speaker 2 (37:01):
I would, yes, close. Luxembourg.

Speaker 4 (37:06):
Okay, something's going on in Luxembourg. I feel like,
well, all of us need to investigate there. At one
point four nine, which back in the day, by the way,
would be relatively, that's the best-performing country. I
mean, back in the day, countries routinely had
two point something, you know, percent growth in a given year.

Speaker 2 (37:26):
Last question: you're writing a book now. We were
chatting about

Speaker 4 (37:29):
it backstage, and I appreciate the paradox of this book,
which is, in a universe where a model is better
in the afternoon than it is in the morning, how
do you write a book that's, like, printed on paper
and expect it to be useful?

Speaker 3 (37:46):
This is the challenge. And I am an incredible author
of useless books. I mean most of what I've spent
time on in the last decade is stuff that's completely useless,
like, a year after it's written. And so when
we were talking about it, I said I would like to
do something around AI that's timeless, that would be useful

(38:08):
ten or twenty years from now. But then, to your point,
how is that even remotely possible if the model's better
in the afternoon than in the morning? So that's the
challenge in front of us. But the book is around
AI value creation, so kind of links to this productivity paradox,
and how do you actually get sustained value out of AI,

(38:34):
out of automation, out of data science. And so the
biggest challenge in front of us is, can we make
this relevant the day that it's published?

Speaker 2 (38:44):
How are you setting out to do that?

Speaker 3 (38:47):
I think you have to, to some extent, level it
up to bigger concepts, which is kind of why I
go to things like macroeconomics, population, geography, as opposed to
going into the weeds of the technology itself. If
you write about, this is how you get better performance
out of a model, we can agree that will be

(39:08):
completely useless two years from now, maybe even two months
from now, and so it will be less in the
technical detail and more of what is sustained value creation
for AI, if you think over what is hopefully
a ten- or twenty-year period. We're kind
of substituting AI for technology now, I've realized, because I

(39:30):
think this has always been true for technology. It's just
now AI is the thing that everybody wants to talk about.
But let's see if we can do it. Time will tell.

Speaker 4 (39:40):
Did you get any inkling that the pace, that this
AI years phenomenon, that the
pace of change was going to accelerate so much? Because
you had Moore's law, right, you had a model in
the technology world for this kind of exponential increase. So
were you thinking about that kind of, a similar kind of

(40:03):
acceleration in the.

Speaker 3 (40:07):
I think anybody who said they expected what we're seeing
today is probably exaggerating. I think it's way faster than
anybody expected. Yeah, but technology, back to your point about
Moore's law, has always accelerated through the years. So I
wouldn't say it's a shock, but it is surprising.

Speaker 4 (40:29):
Yeah. You've had a kind of extraordinarily privileged position to
watch and participate in this revolution, right? I mean, how
many other people have ridden this wave

Speaker 2 (40:43):
Like you have.

Speaker 3 (40:44):
I do wonder, is this really that much different, or
does it feel different just because we're here? I mean,
I do think on one level, yes. So in the
time I've been at IBM, the internet happened, mobile happened, social
networks happened, blockchain happened, AI. So a lot has happened.

(41:06):
But then you go back and say, well, but if
I'd been here between nineteen seventy and ninety-five, there
were a lot of things that were pretty fundamental then too.
So I wonder, almost, do we always exaggerate the timeframe
that we're in? I don't know. Yeah, but it's a
good idea though.

Speaker 4 (41:28):
I think the ending with the phrase, I don't know,
it's a good idea though.

Speaker 2 (41:34):
It's probably a great way to wrap this up.

Speaker 4 (41:36):
Thank you so much, Thank you, Malcolm. In a field
that is evolving as quickly as artificial intelligence, it was
inspiring to see how adaptable Rob has been over his career.
The takeaways from my conversation with Rob have been echoing
in my head ever since. He emphasized how open source

(41:57):
models allow AI technology to be shaped by many players. Openness
also allows for transparency. Rob told me about AI use
cases like IBM's collaboration with the Sevilla football club. That example
really brought home for me how AI technology will touch
every industry. Despite the potential benefits of AI, challenges exist

(42:21):
in its widespread adoption. Rob discussed how resistance to change,
concerns about job security and organizational inertia can slow down
implementation of AI solutions. The paradox, though, according to Rob,
is that rather than being afraid of a world with AI,
people should actually be more afraid of a world without it. AI,

(42:44):
he believes, has the potential to make the world a
better place in a way that no other technology can.
Rob painted an optimistic version of the future, one in
which AI technology will continue to improve at an exponential rate.
This will free up workers to dedicate their energy to
more creative tasks. I for one am on board. Smart

(43:09):
Talks with IBM is produced by Matt Romano, Joey Fishground,
and Jacob Goldstein. We're edited by Lydia Jean Kott. Our
engineers are Sarah Bruguier and Ben Tolliday. Theme song by
Gramoscope. Special thanks to the 8 Bar and IBM teams,
as well as the Pushkin marketing team. Smart Talks with
IBM is a production of Pushkin Industries and Ruby

(43:32):
Studio at iHeartMedia. To find more Pushkin podcasts. Listen on
the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.
I'm Malcolm Gladwell. This is a paid advertisement from IBM.
The conversations on this podcast don't necessarily represent IBM's positions, strategies,

(43:54):
or opinions.
