Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey everyone, it's Robert and Joe here. Today we've got
something a little bit different to share with you. It
is a new season of the Smart Talks with IBM
podcast series.
Speaker 2 (00:09):
Today we are witness to one of those rare moments
in history, the rise of an innovative technology with the
potential to radically transform business and society forever. The technology,
of course, is artificial intelligence, and it's the central focus
for this new season of Smart Talks with IBM.
Speaker 1 (00:25):
Join hosts from your favorite Pushkin podcasts as they talk
with industry experts and leaders to explore how businesses can
integrate AI into their workflows and help drive real change
in this new era of AI. And of course, host
Malcolm Gladwell will be there to guide you through the
season and throw in his two cents as well.
Speaker 2 (00:43):
Look out for new episodes of Smart Talks with IBM
every other week on the iHeartRadio app, Apple Podcasts, or
wherever you get your podcasts. And learn more at IBM
dot com slash Smart Talks.
Speaker 3 (01:02):
Pushkin.
Speaker 4 (01:06):
Welcome, Welcome, Welcome to Smart Talks with IBM.
Speaker 3 (01:14):
Hello, Hello, Welcome to Smart Talks with IBM, a podcast
from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season,
we're diving back into the world of artificial intelligence, but
with a focus on the powerful concept of open: its possibilities, implications,
and misconceptions. We'll look at openness from a variety of
(01:37):
angles and explore how the concept is already reshaping industries,
ways of doing business, and our very notion of what's possible.
And for the first episode of this season, we're bringing
you a special conversation. I recently sat down with Rob Thomas.
Rob is the Senior Vice President of Software and Chief Commercial Officer of IBM. I spoke to him in front
(01:59):
of a live audience as part of New York Tech Week.
We discussed how businesses can harness the immense productivity benefits
of AI while implementing it in a responsible and ethical manner.
We also broke down a fascinating concept that Rob believes
about AI, known as the productivity paradox. Okay, let's get
(02:20):
to the conversation. How are we doing? Good? Rob, this is our second time. We did one of these in the middle of the pandemic, but it's all such a blur now that neither of us can figure out when it was.
Speaker 4 (02:39):
I know, it's hard. Those were like blurry years.
Speaker 5 (02:42):
You don't know what happened, right.
Speaker 3 (02:43):
But well, it's good to see you, to meet you again.
I wanted to start by going back. You've been at
IBM twenty years? Is that right?
Speaker 5 (02:52):
Twenty five in July, believe it or not.
Speaker 3 (02:55):
So you were a kid when you joined.
Speaker 5 (02:56):
I was four.
Speaker 3 (03:00):
I want to contrast present-day Rob and twenty-five-years-ago Rob. When you arrive at IBM, what do you think your job is going to be? Where do you think your career is going? Where do you think the kind of problems you're going to be addressing are?
Speaker 4 (03:17):
Well, it's kind of surreal because I joined IBM
Consulting and I'm coming out of school, and you quickly realize, wait,
the job of a consultant is to tell other companies
what to do. And I was like, I literally know nothing,
and so you're immediately trying to figure out, so how
am I going to be relevant given that I know
absolutely nothing to advise other companies on what they should
(03:39):
be doing. And I remember it well, like we were
sitting in a room. When you're a consultant, you're waiting
for somebody else to find work for you.
Speaker 5 (03:47):
A bunch of us.
Speaker 4 (03:48):
Sitting in a room and somebody walks in and says,
we need somebody that knows Visio.
Speaker 5 (03:53):
Does anybody know Visio? I'd never heard of Visio.
Speaker 4 (03:56):
I don't know if anybody in the room has. So
everybody's like, sitting around looking at their shoes. So finally
I was like, I know it. So I raised my hand.
They're like, great, we got a project for you next week.
So I was like, all right, I have like three
days to figure out what Visio is, and I hope I can actually figure out how to use it. Now, luckily,
(04:17):
it wasn't like a programming language. I mean, it's pretty
much a drag and drop capability. And so I literally
left the office.
Speaker 5 (04:25):
Went to a bookstore, bought the first three books.
Speaker 4 (04:28):
On Visio I could find, spent the whole weekend reading the books, and showed up and got to work on the project. And so it was a bit of a risky move, but I think that's kind of the thing. If you don't take risk, you'll never achieve. And so to some extent, everybody's making everything up all the time. It's like, can you
(04:51):
learn faster than somebody else? That's what the difference is in almost every part of life. And so it was not planned, it was an accident, but it kind
of forced me to figure out that you're going to
have to figure things out.
Speaker 3 (05:04):
You know, we're here to talk about AI, and I'm
curious about the evolution of your understanding or IBM's understanding
of AI. At what point in the last twenty
five years do you begin to think, oh, this is
really going to be at the core of what we
think about and work on at this company.
Speaker 4 (05:24):
The computer scientist John McCarthy, he's the person that's credited with coining the phrase artificial intelligence, back in the fifties, and he made an interesting comment. He said, once it works, it's no longer called AI, and that then became what's called the AI effect,
(05:45):
which is it seems very difficult, very mysterious, but once
it becomes commonplace, it's just no longer what it is.
And so if you put that frame on it, I
think we've always been doing AI at some level. And
I even think back to when I joined IBM in ninety nine. At that point there was work on rules
(06:05):
based engines, analytics, all of this was happening. So it all depends on how you really define that term.
Speaker 5 (06:14):
You could argue that elements of.
Speaker 4 (06:16):
Statistics, probability, it's not exactly AI, but it certainly feeds
into it.
Speaker 5 (06:22):
And so I feel like.
Speaker 4 (06:23):
We've been working on this topic of how do we
deliver better insights, better automation, since IBM was formed. If you read about what Thomas Watson Junior did, that was all about automating tasks. Was that AI? Probably not, certainly not by today's definition, but it's in the same zip code.
Speaker 3 (06:44):
So from your perspective, it feels a lot more like
an evolution than a revolution.
Speaker 4 (06:48):
Is that a fair statement? Yes, and I think most great things in technology tend to happen that way. Many
of the revolutions, if you will, tend to fizzle out.
Speaker 3 (07:00):
Even given that, I guess what I'm asking is,
I'm curious about whether there was a moment in that
evolution when you had to readjust your expectations about what
AI was going to be capable of. I mean, was there,
you know, was there a particular innovation or a particular
problem that was solved that made you think, oh, this
(07:21):
is different than what I thought?
Speaker 4 (07:26):
I would say the moments that caught our attention were certainly Kasparov winning the chess tournament, or Deep Blue beating Kasparov, I should say. Nobody really thought that was possible before that, and then it was Watson winning Jeopardy.
These were moments that said, maybe there's more here than
(07:46):
we even thought was possible. And so I do think
there's points in time where we realized maybe way more could.
Speaker 5 (07:57):
Be done than we had even imagined.
Speaker 4 (08:00):
But I do think it's consistent progress every month and
every year versus some seminal moment. Now, certainly large language
models recently have caught everybody's attention because they have a direct consumer application, but I would almost think of that as what Netscape was for the
(08:21):
web browser. Yeah, it brought the Internet to everybody, but
that didn't become the Internet per se.
Speaker 5 (08:29):
Yeah.
Speaker 3 (08:29):
I have a cousin who worked for IBM for forty
one years. I saw him this weekend, he's in Toronto, by the way. I said, do you work for Rob Thomas? He went like this, he goes, I'm five layers down. So whenever I see my cousin I ask him, can you tell me again what
(08:50):
you do? Because it's always changing, right, I guess this
is a function of working at IBM. So eventually he
just gives up and says, you know, we're just solving problems.
That's what we're doing. Which I sort of took as a kind of frame. And I was curious, what's the coolest
problem you ever worked on? Not biggest, not most important,
but the coolest, the one that's like that sort of
(09:11):
makes you smile when you think back on it.
Speaker 4 (09:13):
Probably when I was in microelectronics, because it was a
world I had no exposure to. I hadn't studied computer science,
and we were building a lot of high performance semiconductor technology,
so just chips that do a really great job of
processing something or other. And we figured out that there
(09:36):
was a market in consumer gaming that was starting to happen,
and we got to the point where we became the
chip inside the Nintendo Wii, the Microsoft Xbox, the Sony PlayStation,
so we basically had the entire gaming market running on
IBM chips.
Speaker 3 (09:56):
And so every parent basically is pointing at you and saying you're the cause.
Speaker 5 (10:03):
Probably. Well, they would have found it from anybody.
Speaker 4 (10:06):
But it was the first time I could explain my
job to my kids, who were quite young at that time,
like what I did. It was more tangible for them than saying we solve problems or, you know, build solutions. It became very tangible for them, and
I think that's, you know, a rewarding part of the
job is when you can help your family actually understand
(10:28):
what you do. Most people can't do that. It's probably
easier for you. They can see the books, but for some of us in the business world, it's not always as obvious. So that was
like one example where the dots really connected.
Speaker 3 (10:42):
There were a couple of examples that struck me. I want to put a little bit of this into the context of AI, because I love the frame of problem solving as a way of understanding what the function of the technology is. So I know that you guys did some work with, I never know how to pronounce it, is it Sevilla? The football club Sevilla in Spain.
(11:05):
Tell me a little bit about that.
What problem were they trying to solve and why did
they call you?
Speaker 4 (11:11):
Every sports franchise is trying to get an advantage, right? Let's just be clear about that. Everybody's asking, how can I use data, analytics, insights, anything that will make us one percent better on the field at some point in the future. And Sevilla reached
(11:34):
out to us because they had.
Speaker 5 (11:36):
Seen some of that.
Speaker 4 (11:37):
We've done some work with the Toronto Raptors in the
past and others, and their thought was maybe there's something
we could do. They'd heard all about generative AI, they'd heard about large language models.
Speaker 5 (11:50):
And the problem, back to your point on.
Speaker 4 (11:53):
Solving problems, was we want to do a way better
job of assessing talent, because really the lifeblood of a
sports franchise is can you continue to cultivate talent? Can
you find talent that others don't find? Can you see
something in somebody that they don't see in themselves or
maybe no other.
Speaker 5 (12:12):
Team sees in them.
Speaker 4 (12:13):
And we ended up building something called Scout Advisor, which is built on watsonx, which basically just ingests
tons and tons of data, and we like to think
of it as finding, you know, the needle in the
haystack of you know, here's three players that aren't being considered.
(12:34):
They're not on the top teams today, and I think working with them together, we found some pretty good insights that have helped them out.
Speaker 3 (12:43):
What was intriguing to me was we're not just talking about quantitative data. We're also talking about qualitative data. That's the puzzle part of the thing that fascinates me. How does it incorporate qualitative analysis? Are you just feeding in scouting reports and things like that?
Speaker 4 (13:02):
I've got to think about how much I can actually disclose. But if you think about it, quantitative is relatively easy.
Speaker 5 (13:12):
Yeah, every team collects that.
Speaker 4 (13:14):
You know, what's the forty-yard dash, although I don't think they use that term, certainly not in Spain. That's all quantitative. Qualitative
is what's happening off the field. It could be diet,
it could be habits, it could be behavior. You can
imagine a range of things that would all feed into
an athlete's performance. And so relationships.
Speaker 5 (13:39):
There's many different aspects, and.
Speaker 4 (13:41):
So it's trying to figure out the right blend of
quantitative and qualitative that gives you a unique insight.
Speaker 3 (13:48):
How transparent is that kind of system? I mean, is
it just saying pick this guy, not this guy? Or is it telling you why it prefers this guy to this guy?
Speaker 4 (13:57):
I think for anything in the realm of AI, you have to answer the why question. Otherwise you've fallen into the trap of the, you know, the proverbial black box. And then it's, wait, I made this decision and I never understood why, and it didn't work out. So you always have to answer the why, without a doubt.
Speaker 3 (14:16):
And how is the why answered?
Speaker 4 (14:20):
Sources of data, the reasoning that went into it, and so it's basically just tracing back the chain of how you got to the answer. In the case of what we do in watsonx, we have IBM models. We also use some other open source models. So it would be which model was used, what was the data set that was fed into that model, how is it
(14:41):
making decisions?
Speaker 5 (14:42):
How is it performing? Is it robust?
Speaker 4 (14:46):
Meaning is it reliable, in terms of if you feed it the same data set twice, do you get the same answer? These are all, you know, the technical aspects of understanding the why.
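(To make the traceability Rob describes concrete, here is a minimal sketch in Python of recording which model and which data set produced an answer, plus the robustness check he mentions, feeding the same data twice and expecting the same result. The function names, fields, and toy scouting rule are illustrative assumptions, not the watsonx API.)

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Provenance:
    model_name: str                                      # which model was used
    dataset_version: str                                 # what data fed into it
    evidence: List[str] = field(default_factory=list)    # inputs behind the answer

def explainable_predict(model: Callable[[dict], str], player: dict,
                        model_name: str, dataset_version: str):
    """Return an answer together with a trace of how it was produced."""
    answer = model(player)
    trace = Provenance(model_name=model_name,
                       dataset_version=dataset_version,
                       evidence=[f"{k}={v}" for k, v in player.items()])
    return answer, trace

def is_robust(model: Callable[[dict], str], player: dict) -> bool:
    """Feed the model the same data twice; a robust system answers the same."""
    return model(player) == model(player)

# Toy stand-in for a scouting model: flag players with heavy minutes and low cost.
toy_model = lambda p: "consider" if p["minutes"] > 2000 and p["cost"] < 5.0 else "pass"

player = {"minutes": 2400, "cost": 3.2}
answer, trace = explainable_predict(toy_model, player, "toy-scout-v1", "2024-02-scouting-set")
print(answer, trace)                      # the answer and the "why" behind it
print("robust:", is_robust(toy_model, player))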
Speaker 3 (14:56):
How quickly do you expect all professional sports franchises to adopt some kind of... or are they already there? If I went out and polled the general managers of the one hundred most valuable sports franchises in the world, how many of them would be using some kind of AI system to assist in their efforts?
Speaker 4 (15:14):
One hundred and twenty percent, meaning that everybody's doing it,
and some think they're doing way more than they probably
actually are. So everybody's doing it. I think what's weird
about sports is everybody's so convinced that what they're doing
is unique that they generally speaking, don't want to work
with a third party to do it because they're afraid
(15:36):
that that would expose them. But in reality, I think
most are doing eighty to ninety percent of the same things.
But without a doubt, everybody's doing it.
Speaker 3 (15:46):
Yeah. Yeah. The other example that I loved was the one about a shipping line, Tricon, on the Mississippi River. Tell me a little bit about that project.
What problem were they trying to solve?
Speaker 4 (16:00):
Think about the problem that I would say everybody noticed if you go back to twenty twenty, which was things getting held up in ports. There was actually an article in the paper this morning kind of tracing the history of what happened in twenty twenty and twenty twenty-one and why ships were basically sitting at sea for months at a time, and at that stage, we had a
(16:23):
massive throughput issue. But moving even beyond the pandemic, you
can see it now with ships getting through like Panama Canal,
there's like a narrow window where you can get through,
and if you don't have your paperwork done, you don't
have the right approvals, you're not going through and it
may cost you a day or two and that's a
(16:44):
lot of money in the shipping industry. And in the Tricon example, it's really just about when you're pulling into a port, if you have the right paperwork done, you can get goods off the ship very quickly. They ship a lot of food, which, since it's not packaged food, is fresh food.
(17:06):
There is an expiration period and so if it takes
them an extra two hours, certainly multiple hours or a day,
they have a massive problem because then you're going to
deal with spoilage and so it's going to set you back.
And what we've worked with them on is using an
assistant that we've built in watsonx called Orchestrate, which
(17:28):
basically is just AI doing digital labor, so we can
replicate nearly any repetitive task and do that with software.
Speaker 5 (17:39):
Instead of humans.
Speaker 4 (17:41):
So as you may imagine, the shipping industry still has a lot of paperwork that goes on, and so being able to take forms that normally would be multiple hours of filling out, oh, this isn't right, send it back. We've basically built that as a digital skill inside of watsonx Orchestrate, and so now it's done in minutes.
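(As a rough sketch of what a "digital skill" for repetitive paperwork can look like in code, purely illustrative and not the watsonx Orchestrate API, here is a small Python example that checks a shipping form for missing information before it gets filed; the field names and rules are invented.)

# Hypothetical form-checking step: flag problems in seconds instead of
# bouncing the paperwork back and forth for hours.
REQUIRED_FIELDS = ["vessel", "port_of_discharge", "cargo_type", "customs_code"]

def validate_form(form):
    """Return a list of problems; an empty list means the form is ready to file."""
    problems = [f"missing field: {name}" for name in REQUIRED_FIELDS if not form.get(name)]
    if form.get("cargo_type") == "fresh food" and not form.get("expiration_date"):
        problems.append("fresh food cargo requires an expiration_date")
    return problems

form = {"vessel": "MV Example", "port_of_discharge": "New Orleans",
        "cargo_type": "fresh food", "customs_code": ""}
print(validate_form(form) or "ready to file")  # flags the blank customs_code and the missing date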
Speaker 3 (18:03):
Did they realize that they could have that kind of efficiency by teaming up with you? Or is that something where you came to them and said, guys, we can do this way better than you think? What's the.
Speaker 4 (18:15):
I'd say it's always both sides coming together
at a moment that for some reason makes sense because
you could say, why didn't this happen like five years ago,
this seems so obvious. Well, the technology wasn't quite ready then, I would say. But they knew they had a need
because I forget what the precise number is, but you know,
(18:36):
reduction of spoilage has massive impact on their bottom line,
and so they knew they had a need.
Speaker 5 (18:45):
We thought we could solve it and the two together.
Speaker 3 (18:48):
Did you guys go to them,
Speaker 5 (18:50):
Though?
Speaker 3 (18:51):
Did they come to you?
Speaker 4 (18:52):
I recall that this one was an inbound, meaning they had reached out to IBM: we'd like to solve this problem. I think it went into one of our digital centers. If I recall, it was literally a phone
Speaker 3 (19:04):
Call. But the reverse is more interesting to me because there seems to be a very, very large
universe of people who have problems that could be solved
this way and they don't realize it. Is there a shining example of this, of someone you just think could benefit so much and isn't
(19:25):
benefiting right now?
Speaker 5 (19:28):
Maybe I'll answer it slightly differently.
Speaker 4 (19:30):
I'm surprised by how many people can benefit that you wouldn't even logically think of first. Let me give you an example. There's a franchiser of hair salons, Sport Clips is the name. My sons used to go there
for haircuts because they have like TVs and you can
(19:51):
watch sports, so they loved that they got entertained while
they would get their haircut. I think the last place
that you would think is using AI today would be
a franchiser of hair salons.
Speaker 5 (20:03):
But just follow it through.
Speaker 4 (20:06):
The biggest part of how they run their business is
can I get people to cut hair? And this is
the high turnover industry because there's a lot of different
places you can work if you want to cut hair.
People actually get injured cutting hair because you're on your
feet all day, that type of thing. And they're using
same technology orchestrate as part of their recruiting process. How
(20:28):
can they automate a lot of people submitting resumes, who
they speak to, how they qualify them for the position.
And so the reason I give that example is the
opportunity for AI, which is unlike other technologies, is truly unlimited.
It will touch every single business. It's not the realm
(20:50):
of the Fortune five hundred or the Fortune one thousand.
Speaker 5 (20:53):
This is the.
Speaker 4 (20:54):
Fortune any size. And I think that may be one
thing that people underestimate about it.
Speaker 3 (21:00):
Yeah, what about, I mean, I was thinking about education as a kind of... I mean, education is a perennial whipping boy for, you know, living in the nineteenth century, right. I'm just curious: if a superintendent of a public school system or the president of a university sat down and had lunch with you and said,
(21:25):
let's do the university first, my costs are out of control, my enrollment is down, my students hate me, and my board is revolting, help. How would you think about helping someone in that situation?
Speaker 5 (21:43):
I spend some time with universities.
Speaker 4 (21:45):
I like to go back and visit my alma maters, where I went to school, and so.
Speaker 5 (21:50):
I do that every year.
Speaker 4 (21:52):
The challenge I have with almost any university is there has to be a will. Yeah, and I'm not sure the incentives are quite right today, because bringing in new technology, let's say we want to go after, we can help you figure out student recruiting or how you automate more of your education, everybody suddenly feels threatened at that university.
Speaker 5 (22:15):
Hold on, that's my job.
Speaker 4 (22:17):
I'm the one that decides that, or I'm the one
that wants to dictate the course. So there has to
be a will. So I think it's very possible, and
I do think over the next decade you will see
some universities that jump all over this and they will
move ahead, and you see others that do not.
Speaker 5 (22:35):
Because it's very possible.
Speaker 3 (22:39):
When you say there has to be a will, is that the kind of thing that people at IBM think about? Like, in this hypothetical conversation you might have with the university president, would you give advice on where the will comes from?
Speaker 4 (22:59):
I don't do that as much in a university context.
I do that every day in a business context, because
if you can find the right person in a business
that wants to focus on growth or the bottom line
or how to create more productivity, yes, it's going
to create a lot of organizational resistance potentially, but you
(23:19):
can find somebody that will figure out how to push
that through. I think for universities, I think that's also possible.
I'm not sure there's a return on investment
for us to do that.
Speaker 3 (23:31):
Yeah, yeah, yeah. Let's define some terms. AI years, a term I'm told you like to use. What does that mean?
Speaker 4 (23:43):
We just started using this term literally in the last
three months, and.
Speaker 5 (23:49):
It was what we observed.
Speaker 4 (23:51):
Internally, which is most technology you build, you say, all right,
what's going to happen in year one, year two, year three,
and it's, you know, largely by a calendar.
Speaker 5 (24:02):
AI years are the idea that what.
Speaker 4 (24:04):
Used to be a year is now like a week,
and that is how fast the technology is moving. To give you an example, we had one client we were working with. They were using one of our Granite models, and the results they were getting were not very good. Accuracy
was not there, their performance was not there. So I
was like scratching my head. I was like, what is
(24:25):
going on? They were financial services, a bank. So I'm scratching my head, like, what is going on? Everybody else is getting this, and these results are horrible. And I said to the team, which version of the model are you using? This was in February. They were like, we're using
the one from October. I was like, all right, now
(24:46):
we know precisely the problem because the model from October
is effectively useless now since we're here in February.
Speaker 3 (24:53):
Seriously? Actually useless, completely useless?
Speaker 4 (24:57):
Yeah, that is how fast this has changed. And so the minute you give them, same use case, same data, the model from late January instead of October, the results are off the charts.
Speaker 3 (25:11):
Yeah. Wait, so what exactly happened between October and January?
Speaker 5 (25:14):
The model got way better.
Speaker 3 (25:16):
Could we dig into that? Like, what do you mean by that?
Speaker 5 (25:18):
We are constant?
Speaker 4 (25:19):
We have built large compute infrastructure where we're doing model training,
and to be clear, model training is the realm of
probably, in the world, my guess is, five to ten companies.
Speaker 5 (25:32):
And so.
Speaker 4 (25:34):
You build a model, you're constantly training it, you're doing fine-tuning, you're doing more training, you're adding data every day, every hour, and it gets better. And so how does it do that? You're feeding it more data, you're feeding it
more live examples. We're using things like synthetic data at
this point, which is we're basically creating data to do
(25:55):
the training as well. All of this feeds into how
useful the model is.
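(A toy illustration of the "creating data to do the training" idea: expanding a couple of seed records into many training examples by filling templates. The records and templates are invented, and this is only the general shape of synthetic data generation, not how IBM's Granite models are actually trained.)

import random

# A couple of hand-written seed records.
seed_records = [
    {"code": "def add(a, b): return a + b", "doc": "Add two numbers."},
    {"code": "def is_even(n): return n % 2 == 0", "doc": "Check whether n is even."},
]

# Prompt templates that each seed record can be slotted into.
templates = [
    "Write a one-line description of this function:\n{code}",
    "Explain what this code does:\n{code}",
]

def make_synthetic(records, prompt_templates, n):
    """Create n synthetic (prompt, target) pairs from the seed records."""
    pairs = []
    for _ in range(n):
        record = random.choice(records)
        prompt = random.choice(prompt_templates).format(code=record["code"])
        pairs.append((prompt, record["doc"]))
    return pairs

batch = make_synthetic(seed_records, templates, n=8)
print(len(batch), "synthetic examples ready to mix with live training data")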
Speaker 5 (26:00):
And so using the.
Speaker 4 (26:02):
October model, those were the results in October, just a fact,
that's how good it was then. But back to the
concept of AI years, two weeks is a long time.
Speaker 3 (26:13):
Does that mean we're in a steep part of the model learning curve, or do you expect this to continue at this pace?
Speaker 5 (26:23):
I think that is the big question, and I don't have an answer yet.
Speaker 4 (26:28):
By definition, at some point you would think it would
have to slow down a bit, but it's not obvious
that that is on the horizon.
Speaker 3 (26:35):
Still speeding up? Yes. How fast can it get?
Speaker 4 (26:41):
We've debated, can you actually have better results in the afternoon than you did in the morning? Really, it's nuts, yeah, I know, but that's why we came up with this term, because I think you also have to think of, like, concepts that
Speaker 5 (26:57):
Get people's attention.
Speaker 3 (26:58):
So it's basically like walking into a bakery, and you're like, the bread from yesterday, you know, you can have it for twenty-five cents. But I mean, you could do preferential pricing. You could say, we'll charge you X for yesterday's model, two X for today's model.
Speaker 4 (27:16):
I think that's dangerous as a merchandising strategy, but I
get your point.
Speaker 3 (27:21):
Yeah, but that's crazy. And by the way, is the same true for almost... You're talking specifically about a model that was created to help some aspect of financial services. So is that kind of model accelerating faster and improving faster than other models for
other kinds of problems?
Speaker 4 (27:39):
So this domain was code. Yeah, so by definition, if
you're feeding in more data, so more code, you get those kinds of results.
Speaker 5 (27:50):
It does depend on the model type.
Speaker 4 (27:52):
Yeah, there's a lot of code in the world, and
so we can find it, we can create it. Like I said, there's other aspects where there's probably fewer inputs available, which means you probably won't get the same level of iteration. But for code, those are certainly the cycle
times that we're seeing.
Speaker 3 (28:09):
Yeah. And how do you know, let's stick with this one example of this model you have, how do you know that your model is better than big company B down the street? The client asks you, why would I go with IBM as opposed to some firm in the Valley that says they have a model on this? How do you frame your advantage?
Speaker 4 (28:32):
Well, we benchmark all of this, and I think the most important metric is price performance, not price, not performance, but the combination of the two. And we're super competitive there. For what we just released, with what we've done in open source, we know that nobody's close to us right now on code. Now, to be clear, that will probably change, yeah,
(28:54):
because it's like leapfrog. People will jump ahead, then we
jump back ahead. But we're very confident that with everything
we've done in the last few months, we've taken a
huge leap forward here.
Speaker 3 (29:05):
Yeah, it's I mean, this goes back to the point
I was making in the beginning, so about the difference
between your twenty something self in ninety nine and yourself today.
But this time compression has to be a crazy adjustment.
So the concept of what you're working on and how
you make decisions internally and things has to undergo this
(29:28):
kind of revolution if you're switching from, I mean, back in the day, a model might be useful for how long? Years?
Speaker 4 (29:36):
Years. I think about, you know, statistical models that sit inside things like SPSS, which is a product that a lot of students.
Speaker 5 (29:45):
Use around the world.
Speaker 4 (29:45):
I mean, those have been the same models for twenty
years and they're still very good at what they do.
And so yes, it's a completely different
moment for how fast this is moving. And I think
it just raises the bar for everybody, whether you're a
technology provider like us, or you're a bank or an
(30:06):
insurance company or a shipping company, to say, how do
you really change your culture to be way more aggressive
than you normally would be.
Speaker 3 (30:18):
Does this mean, it's a weird question, but does this mean a different set of personality or character traits is necessary for a decision maker in tech now than twenty-five years ago?
Speaker 4 (30:33):
There's a book I saw recently, it's called The Geek Way,
which talked about how technology companies have started to operate
in different ways, maybe, than many, you know, traditional companies, and more about being data-driven, more about delegation. Are
(30:55):
you willing to have the smartest person in the room
make decisions as opposed to the highest paid
Speaker 5 (31:00):
Person in the room.
Speaker 4 (31:01):
I think these are all different aspects that every company
is going to face.
Speaker 3 (31:05):
Yeah, yeah. Next term: talk about open. When you use that word, open, what do you mean?
Speaker 4 (31:14):
I think there's really only one definition of open, which, for technology, is open source. And open source means the code is freely available: anybody can see it, access it, contribute to it.
Speaker 3 (31:30):
And tell me about why that's an important principle.
Speaker 4 (31:36):
When you take a topic like AI, I think it
would be really bad for the world if this was
in the hands of one or two companies, or three
or four, doesn't matter the number, some small number. Think about, like, in history, sometime in the early nineteen hundreds, the Interstate
Commerce Commission was created, and the whole idea was to
(32:00):
protect farmers from railroads. Meaning they wanted to allow free trade,
but they knew that, well, there's only so many railroad tracks,
so we need to protect farmers from the shipping costs
that railroads could impose. So good idea, but over time
that got completely overtaken by the railroad lobby, and then
they used that to basically just increase prices, and it
(32:23):
made the lives of farmers way more difficult. I think
you could play the same analogy through with AI. If
you allow a handful of companies to have the technology,
you regulate around the principles of those one or two companies, then.
Speaker 5 (32:38):
You've trapped the entire world. I think that would be very bad. So there's a danger of that happening, for sure.
Speaker 4 (32:46):
I mean, there's companies in Washington every week trying to achieve that outcome. And so the opposite of that is to say it's going to be open source, because nobody can dispute open source, because it's right there,
everybody can see it. And so I'm a strong believer
that open source will win for AI.
Speaker 5 (33:07):
It has to win.
Speaker 4 (33:08):
It's not just important for business, but it's important for humans.
Speaker 3 (33:14):
I'm curious about the list of things you worry about. Actually, before I ask, let
me ask this question very generally. What is the list
of things you worry about? What's your top five business
related worries right now?
Speaker 5 (33:29):
Top five? That's quite a first question. We could be here for hours for me to answer.
Speaker 3 (33:34):
I did say business related. We could leave, you know, your kids' haircuts out of it.
Speaker 4 (33:40):
Number one is always, it's the thing that's probably
always been true, which is just people. Do we have
the right skills? Are we doing a good job of
training our people? Are our people doing a good job
of working with clients? Like, that's number one. Number two is innovation: are we pushing the envelope enough? Are we
(34:06):
staying ahead? Number three, which kind of feeds into the innovation one, is risk taking.
Speaker 5 (34:13):
Are we taking enough risk? Without risk, there is no growth.
Speaker 4 (34:17):
And I think the trap that every larger company inevitably
falls into is conservatism. Things are good enough, and so
it's are we pushing the envelope? Are we taking enough
risk to really have an impact? I'd say those are
probably the top three that I spend time on.
Speaker 3 (34:35):
Last term to define: productivity paradox, something I know you've thought a lot about. What does that mean?
Speaker 4 (34:42):
So I started thinking hard about this because all I
saw and read every day was fear about AI, and
I studied economics, and so I kind of went back
to, like, basic economics, and there's been like a macroeconomic formula, I guess I would say, that's been around
(35:03):
forever that says growth comes from productivity growth plus population
growth plus debt growth. So if those three things are working, you'll.
Speaker 5 (35:18):
Get GDP growth.
Speaker 4 (35:20):
And so then you think about that and you say, well,
debt growth, we're probably not going back to zero percent
interest rates, so to some extent there's going to be
a ceiling on that.
Speaker 5 (35:31):
And then you.
Speaker 4 (35:32):
Look at population growth. There are shockingly few countries or
places in the world that will see population growth over
the next thirty to fifty years. In fact, most places
are not even at replacement rates. And so I'm like,
all right, so population growth is not going to be there.
So that would mean if you just take it to
(35:52):
the extreme, the only chance of continued GDP growth is productivity.
And the best way to solve productivity is AI That's
why I say it's a paradox. On one hand, everybody's
scared after death it's going to take over the world,
(36:14):
take all of our jobs, ruin us. But in reality
maybe it's the other way, which is it's the only
thing that can save us.
Speaker 5 (36:21):
Yeah, and if you believe.
Speaker 4 (36:23):
That economic equation, which I think has proven quite true
over hundreds of years, I do think it's probably the
only thing that can save us.
Speaker 3 (36:31):
I actually looked at the numbers yesterday, for a totally random reason, on population growth in Europe, and so here's a special bonus question. We'll see how smart you are. Which country in Europe, continental Europe, has the highest population growth?
Speaker 4 (36:46):
Is it small? Continental Europe, probably one of the Nordics, I would guess.
Speaker 3 (36:52):
Close. Luxembourg. Okay, something's going on in Luxembourg. I feel like all of us need to investigate. They're at one point four nine, which, by the way, back in the day would be relatively... that's the best performing country. I mean, back in the day, countries routinely had two point something, you know, percent growth in
(37:15):
a given year. Last question: you're writing a book now, we were chatting about it backstage, and I appreciate the paradox of this book, which is, in a universe where a model is better in the afternoon than it is in the morning, how do you write a book that's, like, printed on paper and expect it to be useful?
Speaker 4 (37:37):
This is the challenge. And I am an incredible author
of useless books. I mean, most of what I've spent
time on in the last decade is stuff that's completely useless,
like a year after it's written. And so when we
were talking about it, I was like, I would like
to do something around AI that's timeless.
Speaker 5 (37:56):
Yeah, that would be useful in ten or twenty years.
Speaker 4 (38:00):
No. But then, to your point, how is that even remotely possible if the model is better in the afternoon than in the morning? So that's the challenge in
front of us. But the book is around AI value creation,
so it kind of links to this productivity paradox, and how
do you actually get sustained value out of AI, out
(38:25):
of automation, out of data science. And so the biggest
challenge in front of us is can we make this
relevant past the day that it's published.
Speaker 3 (38:34):
How are you setting out to do that?
Speaker 4 (38:38):
I think you have to, to some extent, level it up to bigger concepts, which is kind of why I go to things like macroeconomics, population, geography, as opposed to
going into the weeds of the technology itself. If you
write about this is how you get better performance out
of a model, we can agree that will be completely
(38:59):
useless two years from now, maybe even two months from now,
and so it will be less in the technical detail
and more about what is sustained value creation for AI, which, if you think on what is hopefully a ten or twenty year period, it's probably... we're kind of substituting AI for technology now, I've realized, because I think this
(39:21):
has always been true for technology. It's just now AI
is the thing that everybody wants to talk about. But
let's see if we can do it. Time will tell.
Speaker 3 (39:31):
Did you get any inkling that this AI years phenomenon, that the pace of change, was going to accelerate so much? Because you had Moore's law, right, you had a model in the technology world for this kind of exponential increase. So were you thinking about that kind of
(39:53):
similar kind of acceleration in.
Speaker 4 (39:55):
The I think anybody had said they expect did what
we're seeing today is probably exaggerating. I think it's way
faster than anybody expected. Yeah, but technologies, back to your
point at More's Law has always accelerated through the years,
(40:15):
So I.
Speaker 5 (40:16):
Wouldn't say it's a shock, but it is surprising.
Speaker 3 (40:19):
Yeah, You've had a kind of extraordinary privileged position to
watch and participate in this revolution, right, I mean, how
many other people have been in that position, have ridden this wave
Speaker 5 (40:33):
Like you have.
Speaker 4 (40:35):
I do wonder is this really that much different or
does it feel different just because we're here?
Speaker 5 (40:41):
I mean, I do think on one level. Yes.
Speaker 4 (40:44):
So in the time I've been at IBM, the internet happened, mobile happened, social networks happened, blockchain happened.
Speaker 5 (40:55):
AI. So a lot has happened.
Speaker 4 (40:56):
But then you go back and say, well, but if
I'd been here between nineteen seventy and ninety five, there were a lot of things that were pretty fundamental then too. So I wonder, almost, do we always exaggerate the timeframe that we're in?
Speaker 5 (41:13):
I don't know.
Speaker 4 (41:14):
Yeah, but it's a good idea though.
Speaker 3 (41:19):
I think ending with the phrase, I don't know, it's a good idea though, that's a great way to wrap this up. Thank you so much. Thank you, Malcolm.
In a field that is evolving as quickly as artificial intelligence,
it was inspiring to see how adaptable Rob has been
(41:39):
over his career. The takeaways from my conversation with Rob
have been echoing in my head ever since. He emphasized
how open source models allow AI technology to be developed
by many players. Openness also allows for transparency. Rob told
me about AI use cases like IBM's collaboration with the Sevilla
(42:01):
football club. That example really brought home for me how
AI technology will touch every industry. Despite the potential benefits
of AI, challenges exist in its widespread adoption. Rob discussed
how resistance to change, concerns about job security and organizational
inertia can slow down implementation of AI solutions. The paradox, though,
(42:26):
according to Rob, is that rather than being afraid of
a world with AI, people should actually be more afraid
of a world without it. AI, he believes, has the
potential to make the world a better place in a
way that no other technology can. Rob painted an optimistic
version of the future, one in which AI technology will
(42:47):
continue to improve at an exponential rate. This will free
up workers to dedicate their energy to more creative tasks.
I, for one, am on board. Smart Talks with IBM is produced by Matt Romano, Joey Fishground, and Jacob Goldstein. We're edited by Lydia Jean Kott. Our engineers are Sarah
(43:09):
Bruguer and Ben Tolliday. Theme song by Gramoscope. Special thanks to the 8 Bar and IBM teams, as well as
the Pushkin marketing team. Smart Talks with IBM is a
production of Pushkin Industries and Ruby Studio at iHeartMedia. To
find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts,
(43:31):
or wherever you listen to podcasts. I'm Malcolm Gladwell. This
is a paid advertisement from IBM. The conversations on this
podcast don't necessarily represent IBM's positions, strategies, or opinions.