Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Welcome, Welcome, Welcome to Smart Talks with IBM.
Speaker 2 (00:10):
Hello, Hello, Welcome to Smart Talks with IBM, a podcast
from Pushkin Industries, iHeartRadio and IBM. I'm Malcolm Gladwell. This season,
we're diving back into the world of artificial intelligence, but
with a focus on the powerful concept of open: its possibilities, implications,
and misconceptions. We'll look at openness from a variety of
(00:32):
angles and explore how the concept is already reshaping industries,
ways of doing business and our very notion of what's possible.
And for the first episode of this season, we're bringing
you a special conversation. I recently sat down with Rob Thomas.
Rob is the senior vice president of Software and chief
Commercial Officer of IBM. I spoke to him in front
(00:55):
of a live audience as part of New York Tech Week.
We discussed how businesses can harness the immense productivity
benefits of AI while implementing it in a responsible and
ethical manner. We also broke down a fascinating concept that
Rob believes about AI, known as the productivity paradox. Okay,
(01:16):
let's get to the conversation. How are we doing? Good.
Speaker 3 (01:26):
Rob?
Speaker 2 (01:26):
This is our second time. We did one of
these in the middle of the pandemic. But it's
all such a blur now that neither of us can figure out
when it was.
Speaker 3 (01:35):
I know, it's hard to tell. Those were like blurry years.
You don't know what happened, right?
Speaker 2 (01:39):
But well, it's good to see you, to meet you again.
I wanted to start by going back. You've been at
IBM twenty years.
Speaker 3 (01:48):
Is that right? Twenty five in July, believe it or not.
Speaker 2 (01:51):
So you were a kid when you joined.
Speaker 3 (01:52):
I was four.
Speaker 2 (01:53):
Yeah. So I want to contrast present-day Rob and
twenty-five-years-ago Rob. When you arrive at IBM,
what do you think your job is going to be?
Speaker 3 (02:07):
Where your career is going?
Speaker 2 (02:08):
Where do you think the kind of problems you're going
to be addressing are?
Speaker 1 (02:13):
Well, it's kind of surreal because I joined IBM Consulting
and I'm coming out of school and you quickly realize
what the job of a consultant is: to tell other
companies what to do. And I was like, I literally
know nothing, and so you're immediately trying to figure out,
so how am I going to be relevant given that
I know absolutely nothing to advise other companies on what
(02:34):
they should be doing. And I remember it well, like
we were sitting in a room. When you're a consultant,
you're waiting for somebody else to find work for you.
A bunch of us sitting in a room, and somebody
walks in and says, we need somebody that knows Visio.
Speaker 3 (02:49):
Does anybody know Visio? I'd never heard of Visio.
Speaker 1 (02:52):
I don't know if anybody in the room has. So
everybody's like sitting around looking at their shoes. So finally
I was like, I know it. So I raised my hand.
They're like, great, we got a project for you next week.
So I was like, all right, I have like three
days to figure out what Visio is, and I hope
I can actually figure out how to use it now.
Speaker 3 (03:12):
Luckily, it wasn't like.
Speaker 1 (03:14):
A programming language. I mean, it's pretty much a drag
and drop capability. And so I literally left the office,
went to a bookstore, bought the first three books on
Visio I could find, spent the whole weekend reading
the books, and showed up and got to work on
the project.
Speaker 3 (03:31):
And so it was a bit of a risky move,
but I think that's kind of you.
Speaker 1 (03:38):
Well, if you don't take risk, you'll never
achieve, and so to some extent, everybody's making
everything up all the time. It's like, can you learn
faster than somebody else? That's what the difference is in
almost every part of life. And so it was not planned,
it was an accident, but it kind of forced
me to figure out that you're gonna have to figure
(03:59):
things out.
Speaker 2 (04:00):
You know, we're here to talk about AI. And I'm
curious about the evolution of your understanding or IBM's understanding
of AI. At what point in the last twenty
five years do you begin to think, oh, this is
really going to be at the core of what we
think about and work on at this company.
Speaker 1 (04:20):
The computer scientist John McCarthy, he's the person
that's credited with coining the phrase artificial intelligence, back
in the fifties, and he made an interesting comment. He said,
once it works, it's no longer called AI,
and that then became what's called the AI effect,
(04:41):
which is it seems very difficult, very mysterious, but once
it becomes commonplace, it's just no longer what it is.
And so if you put that frame on it, I
think we've always been doing AI at some level, and
I even think back to when I joined IBM in
ninety nine.
Speaker 3 (04:57):
At that point there.
Speaker 1 (04:59):
Was work on rules based engines, analytics.
Speaker 3 (05:04):
All of this was happening.
Speaker 1 (05:05):
So it all depends on how you really define that term.
You could argue that elements of statistics, probability, it's not
exactly AI, but it certainly feeds into it. And so
I feel like we've been working on this topic of
how do we deliver better insights better automation since IBM
(05:28):
was formed. If you read about what Thomas Watson Junior did,
that was all about automating tasks. Was that AI? Well, probably not,
certainly not by today's definition, but it's in the same
zip code.
Speaker 2 (05:40):
So from your perspective, it feels a lot more like
an evolution than a revolution.
Speaker 1 (05:44):
Is that a fair statement? Yes, and I think most
great things in technology tend to happen that way. Many
of the revolutions, if you will, tend to fizzle out.
Speaker 2 (05:55):
But even given that, is there... I guess what I'm
asking is, I'm curious about whether there was a
moment in that evolution when you had to readjust your
expectations about what AI was going to be capable of.
I mean, was there, you know, was there a particular
innovation or a particular problem that was solved that made
(06:15):
you think, oh, this is different than what I thought.
Speaker 1 (06:22):
I would say the moments that caught our attention: certainly
Kasparov winning the chess tournament, or Deep Blue
beating Kasparov, I should say. Nobody really thought that
was possible before that, and then it was Watson winning Jeopardy.
These were moments that said, maybe there's more here than
(06:42):
we even thought was possible. And so I do think
there's points in time where we realized maybe way more could.
Speaker 3 (06:53):
Be done than we had even imagined.
Speaker 1 (06:56):
But I do think it's consistent progress every month and
every year versus some seminal moment.
Speaker 3 (07:04):
Now.
Speaker 1 (07:04):
Certainly, large language models as of late have caught everybody's
attention because they have a direct consumer application. But I
would almost think of that as what Netscape was for
the web browser. Yeah, it brought the Internet
to everybody, but that didn't become the Internet per se.
Speaker 3 (07:25):
Yeah.
Speaker 2 (07:25):
I have a cousin who worked for IBM for forty-one
years. I saw him this weekend. He's in Toronto,
by the way. I said, do you work for Rob Thomas?
Speaker 3 (07:34):
He went like this.
Speaker 2 (07:35):
He goes, I'm five layers down. But so
whenever I see my cousin, I ask him,
can you tell me again what you do? Because it's
always changing, right, I guess this is a function of
working at IBM. So eventually he just gives up and says,
you know, we're just solving problems. That's what we're doing,
which I sort of loved as a kind of frame,
(07:58):
And I was curious, what's the coolest problem you
ever worked on? Not biggest, not most important, but the coolest,
the one that's like that sort of makes you smile
when you think back on it.
Speaker 1 (08:09):
Probably when I was in microelectronics, because it was a
world I had no exposure to. I hadn't studied computer science,
and we were building a lot of high performance semiconductor technology,
so just chips that do a really great job of
processing something or other. And we figured out that there
(08:32):
was a market in consumer gaming that was starting to happen,
and we got to the point where we became the
chip inside the Nintendo Wii, the Microsoft Xbox, the Sony PlayStation,
so we basically had the entire gaming market running on
IBM chips and.
Speaker 2 (08:52):
So every parent basically is pointing at you and saying.
Speaker 1 (08:57):
You're the problem. Well, they would have found it from anybody.
But it was the first time I could explain my
job to my kids, who were quite young at that time,
like what I did, Like it was more tangible for
them than saying we solve problems or, you know,
build solutions. Like, it became very tangible for them, and
(09:18):
I think that's, you know, a rewarding part of the
job is when you can help your family actually understand
what you do. Most people can't do that. It's probably
easier for you. They can, they can see the books,
but for some of us in the
business world, it's not always as obvious. So that was
like one example where the dots really connected.
Speaker 2 (09:38):
There were a couple of examples I wanted to ask you about,
a little bit of this in the context of AI,
because I love the frame of problem solving
as a way of understanding what the function of the
technology is. So I know that you guys did
some work with, I never know how to pronounce
it, is it Sevilla? With the football club Sevilla
(10:00):
in Spain. Tell me a little bit
about that. What problem were they trying to solve and
why did they call you in?
Speaker 1 (10:07):
Every sports franchise is trying to get an advantage, right?
Let's just be clear about that. Everybody's asking, how can I use data, analytics, insights,
anything that will make us one percent better on the
field at some point in the future? And Sevilla reached
(10:30):
out to us because they had seen some of the
work we've done with the Toronto Raptors in the
past, and others, and their thought was maybe there's something
we could do. They'd heard all about generative AI, they'd
heard about large language models. And the problem, back to
your point on solving problems, was we want to do
(10:51):
a way better job of assessing talent, because really the
lifeblood of a sports franchise is, can you continue to
cultivate talent? Can you find talent that others don't find?
Can you see something in somebody that they don't see
in themselves or maybe no other.
Speaker 3 (11:08):
Team sees in them.
Speaker 1 (11:09):
And we ended up building something with them called Scout Advisor,
which is built on watsonx, which basically just ingests
tons and tons of data, and we like to think
of it as finding you know, the needle in the
haystack of you know, here's three players that aren't being considered.
(11:30):
They're not on the top teams today, and I think
working with them together we found some pretty good insights
that's helped them out.
Speaker 2 (11:38):
What was intriguing to me was we're not just
talking about quantitative data. We're also talking about qualitative data.
That's the puzzle, the part of the thing that fascinates me.
How does one incorporate qualitative analysis into that? Are
you just feeding in scouting reports and things like that?
Speaker 1 (11:58):
I've got to think about how much I can
actually disclose. But if you think about it,
quantitative is relatively easy. Every team collects that, you know,
what's their forty-yard dash, though they certainly don't use
that term in Spain. That's all quantitative. Qualitative is what's happening
(12:19):
off the field. It could be diet, it could be habits,
it could be behavior. You can imagine a range of
things that would all feed into an athlete's performance and
so relationships.
Speaker 3 (12:35):
There's many different aspects, and.
Speaker 1 (12:37):
So it's trying to figure out the right blend of
quantitative and qualitative that gives you a unique insight.
Speaker 2 (12:44):
How transparent is that kind of system? I mean, is
it just saying pick this guy, not this guy,
or is it telling you why it prefers this guy
to this guy?
Speaker 3 (12:53):
Is that?
Speaker 1 (12:54):
I think for anything in the realm of AI, you
have to answer the why question, otherwise you fall into
the trap of, you know, the proverbial black box,
and then it's, wait, I made this decision and I never understood
why it didn't work out.
Speaker 3 (13:09):
So you always have to answer why without a doubt?
Speaker 2 (13:12):
And how is why answered?
Speaker 1 (13:16):
Sources of data, the reasoning that went into it, and
so it's basically just tracing back the chain of how
you got to the answer. And in the case of
what we do in watsonx, we have IBM models.
We also use some other open source models. So it
would be: which model was used, what was the data
set that was fed into that model, how is it
(13:37):
making decisions?
Speaker 3 (13:38):
How is it performing? Is it robust?
Speaker 1 (13:42):
Meaning, is it reliable, in terms of, if you feed
it the same data set twice, do you get
the same answer? These are all, you know, the
technical aspects of understanding the why.
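To make that robustness check concrete, here is a minimal sketch in Python. The model object and its predict method are hypothetical stand-ins, not an actual IBM or watsonx API; the point is simply the test Rob describes: run the same data through the model more than once and confirm the answers match.

```python
# Minimal sketch of the robustness/reproducibility check described above.
# The model object and its predict() method are hypothetical stand-ins,
# not a real IBM or watsonx interface.

def is_robust(model, dataset, runs=2):
    """Run the same data set through the model several times and
    report whether every run produces identical answers."""
    baseline = [model.predict(example) for example in dataset]
    for _ in range(runs - 1):
        repeat = [model.predict(example) for example in dataset]
        if repeat != baseline:
            return False  # same inputs, different answers: not reliable
    return True

# Usage (hypothetical):
#   model = load_model("some-model")
#   print(is_robust(model, scouting_reports))
```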
Speaker 2 (13:52):
How quickly do you expect all professional sports franchises to
adopt some kind of, are they already there? If I
went out and polled the general managers of the one
hundred most valuable sports franchises in the world, how many
of them would be using some kind of AI system
to assist in their efforts.
Speaker 1 (14:10):
One hundred and twenty percent would, meaning that everybody's doing it,
and some think they're doing way more than they probably
actually are. So everybody's doing it. I think what's weird
about sports is everybody's so convinced that what they're doing
is unique that they generally speaking, don't want to work
with a third party to do it because they're afraid
(14:32):
that that would expose them. But in reality, I think
most are doing eighty to ninety percent of the same things.
Speaker 3 (14:39):
So but without a doubt, everybody's doing it. Yeah.
Speaker 2 (14:43):
Yeah. The other example that I loved was there
was one about a shipping line, Tricon, on the Mississippi River.
Tell me a little bit about that project. What problem
were they trying to solve?
Speaker 1 (14:56):
Think about the problem that I would say everybody
noticed if you go back to twenty twenty, which was things
were getting held up in ports. There was actually
an article in the paper this morning kind of tracing
the history of what happened in twenty twenty and twenty twenty-one and
why ships were basically sitting at sea for months at
a time. And at that stage we just had
(15:19):
a massive throughput issue. But moving even beyond the pandemic,
you can see it now with ships getting through like
Panama Canal, there's like a narrow window where you can
get through, and if you don't have your paperwork done,
you don't have the right approvals, you're not going through
and it may cost you a day or two and
(15:40):
that's a lot of money in the shipping industry. In
the Tricon example, it's really just about when you're pulling
into a port, if you have the right paperwork done,
you can get goods off the ship very quickly. They
ship a lot of food, which by definition, since it's
(16:00):
not packaged food, it's fresh food, there is an expiration
period and so if it takes them an extra two hours,
certainly multiple hours or a day, they have a massive
problem because then you're going to deal with spoilage and
so it's going to set you back. And what we've
worked with them on is using an assistant that we've
(16:21):
built in watsonx called Orchestrate, which basically is just
AI doing digital labor, so we can replicate nearly any
repetitive task and do that with software.
Speaker 3 (16:35):
Instead of humans.
Speaker 1 (16:37):
So, as you may imagine, the shipping industry still has a
lot of paperwork that goes on, and so being able
to take forms that normally would be multiple hours of
filling it out, oh this isn't right, send it back,
we've basically built that as a digital skill inside of
watsonx Orchestrate, and so now it's done in minutes.
Speaker 2 (16:59):
Did they realize that they could have that
kind of efficiency by teaming up with you, or is
that something where you came to them and said, guys, we
can do this way better than you think?
Speaker 3 (17:09):
What's the.
Speaker 1 (17:11):
I'd say it's always, it's always both sides coming together
at a moment that for some reason makes sense because
you could say, why didn't this happen like five years ago,
like it seems so obvious. Well, the technology wasn't quite ready then,
I would say, But they knew they had a need
because I forget what the precise number is, but you know,
(17:32):
reduction of spoilage has massive impact on their bottom line,
and so they knew they had a need, we.
Speaker 3 (17:41):
Thought we could solve it, and the two together.
Speaker 2 (17:44):
So did you guys go to them first? Or did
they come to you?
Speaker 1 (17:48):
I recall that this one was an inbound, meaning they
had reached out to IBM and said we'd like to
solve this problem. I think it went into one of
our digital centers, if I recall, so literally
a call, yeah.
Speaker 2 (18:01):
But the reverse is more interesting to me,
because there seems to be a very, very large universe
of people who have problems that could be solved this
way and they don't realize it.
Speaker 3 (18:13):
What's your.
Speaker 2 (18:15):
Is there a shining example of this, of someone you
just think could benefit so much and
isn't benefiting right now?
Speaker 3 (18:24):
Maybe I'll answer it slightly differently.
Speaker 1 (18:26):
I'm surprised by how many people can benefit that
you wouldn't even logically think of.
Speaker 3 (18:33):
First, let me give you an example.
Speaker 1 (18:35):
There's a franchiser of hair salons, Sport Clips is the name.
My sons used to go there for haircuts because they
have like TVs and you can watch sports, so they
loved that they got entertained while they would get their haircut.
I think the last place that you would think is
using AI today would be a franchiser of hair salons. Yeah,
(18:59):
but just follow it through. The biggest part of how
they run their business is can I get people to
cut hair? And this is a high-turnover industry because
there's a lot of different places you can work if
you want to cut hair. People actually get injured cutting
hair because you're on your feet all day, that type
of thing. And they're using the same technology, Orchestrate, as part
(19:21):
of their recruiting process. How can they automate a lot
of people submitting resumes, who they speak to, how they qualify.
Speaker 3 (19:30):
Them for the position.
Speaker 1 (19:32):
And so the reason I give that example is the
opportunity for AI, which is unlike other technologies, is truly unlimited.
It will touch every single business. It's not the realm
of the Fortune five hundred or the Fortune one thousand.
This is the Fortune any size. And I think that
(19:53):
may be one thing that people underestimate about AI.
Speaker 2 (19:56):
Yeah, what about, I mean, I was thinking about education
as a kind of, I mean, education is a
perennial whipping boy: you guys are living in
the nineteenth century, right? I'm just curious about if a
superintendent of a public school system or the president of
a university sat down and had lunch with you and said,
(20:21):
do the university first: my costs are out of control,
my enrollment is down, my students hate me, and my
board is revolting.
Speaker 3 (20:31):
Help.
Speaker 2 (20:33):
How would you think about helping someone in that situation.
Speaker 3 (20:39):
I spend some time with universities. I like to go
back and there's.
Speaker 1 (20:42):
Alma maters where I went to school, and so I
do that every year. The challenge I have hall of
Seming University is there has to be a will. Yeah,
and I'm not sure the incentives are quite right today
because bringing in new technology, say we want to go
after we can help you figure out student recruiting or
(21:05):
how you automate more of your education, everybody at that
university suddenly feels threatened.
Speaker 3 (21:11):
Hold on, that's my job.
Speaker 1 (21:13):
I'm the one that decides that, or I'm the one
that wants to dictate the course. So there has to
be a will. So I think it's very possible, and
I do think over the next decade you will see
some universities that jump all over this and they will
move ahead, and you see others that do not.
Speaker 3 (21:31):
Because it's very possible.
Speaker 2 (21:35):
When you say there has to be
a will, is that a kind
of thing that people at IBM think about?
Like, in this hypothetical conversation you might
have with the university president, would you give advice on
where the will comes from?
Speaker 1 (21:55):
I don't do that as much in a university context.
I do that every day in a business context, because
if you can find the right person in a business
that wants to focus on growth or the bottom line
or how do you create more productivity. Yes, it's going
to create a lot of organizational resistance potentially, but you
(22:15):
can find somebody that will figure out how to push
that through. I think for universities, I think that's also possible.
I'm not sure there's a return on investment for.
Speaker 3 (22:26):
Us to do that.
Speaker 2 (22:27):
Yeah, yeah, yeah. Good, let's define some terms. AI
years, a term I'm told you like to use. What does that mean?
Speaker 1 (22:39):
We just started using this term literally in the last
three months, and it was what we observed internally,
which is most technology you build, you say, all right,
what's going to happen in year one, year two, year three,
and it's, you know, largely by a calendar. AI
(22:59):
years are the idea that what used to be a
year is now like a week. And that is how
fast the technology is moving.
Speaker 3 (23:07):
And to give you an example, we had one
client we're working with.
Speaker 1 (23:11):
They're using one of our Granite models, and the results
they were getting were not very good. Accuracy was not there,
their performance was not there. So I was like scratching
my head. I was like, what is going on? They
were financial services, the bank, So I'm scratching my head,
like what is going on? Everybody else is getting this
and like these results are horrible. And I said to
(23:33):
the team, which version of the model are you using?
This was in February, Like we're using the one from October.
I was like, all right, now we know precisely the
problem because the model from October is effectively useless now
since we're here in February.
Speaker 2 (23:49):
Seriously? Actually useless, completely useless?
Speaker 1 (23:53):
Yeah, that is how fast this is changing. And so
the minute, same use case, same day, you give them
the model from late January instead of October, the results
are off the charts.
Speaker 3 (24:07):
Yeah.
Speaker 2 (24:07):
Wait, so what exactly happened between October and January?
Speaker 3 (24:10):
The model got way better?
Speaker 2 (24:12):
Can you dig into that? Like, what do you mean, in
what way?
Speaker 3 (24:14):
We are constant.
Speaker 1 (24:15):
We have built large compute infrastructure where we're doing model training.
And to be clear, model training is the realm of
probably in the world my guess is five to ten companies.
Speaker 3 (24:28):
And so.
Speaker 1 (24:30):
You build a model, you're constantly training it, you're doing
fine tuning, you're doing more training, you're adding data every day,
every hour it gets better. And so how does it
do that. You're feeding it more data, you're feeding it
more live examples. We're using things like synthetic data at
this point, which is we're basically creating data to do
(24:51):
the training as well. All of this feeds into how
useful the model is. And so using the October model,
those were the results in October, just a fact, that's
how good it was then. But back to the concept
of AI years, two weeks is a long time.
Speaker 2 (25:10):
Are we in a steep part of the
model learning curve, or do you expect this to continue
at this pace?
Speaker 3 (25:19):
I think that is the big question, and I don't have
an answer yet.
Speaker 1 (25:24):
By definition, at some point you would think it would
have to slow down a bit, but it's not obvious
that that is on the horizon.
Speaker 2 (25:31):
Still speeding up? Yes. How fast can it get?
Speaker 1 (25:37):
We've debated, can you actually have better results in the
afternoon than you did in the morning? Really? It's nuts. Yeah,
I know, but that's why we came up with this term,
because I think you also have to think of like
concepts that.
Speaker 3 (25:53):
Gets people's attention.
Speaker 2 (25:54):
So you're basically turning into a bakery. You're like, the
bread from yesterday, you know, you can have it for
twenty-five cents. But I mean you could do preferential pricing.
You could say, we'll charge you X for yesterday's model,
two X for today's model.
Speaker 1 (26:12):
I think that's dangerous as a merchandising strategy, but I
get your point.
Speaker 2 (26:17):
Yeah, but that's crazy. And this, by the way, is
the same true for almost... You're talking
specifically about a model that was created to help some
aspect of financial services. So is that kind of
model accelerating faster and learning faster than other models for
other kinds of problems?
Speaker 3 (26:35):
So this domain was code.
Speaker 1 (26:38):
Yeah, and so by definition, if you're feeding in
more data, so more code, you get those kinds of results.
It does depend on the model type. There's a lot
of code in the world, and so we can find
it, we can create it. Like I said, there's other
aspects where there's probably less inputs available, which means you
(27:00):
probably won't get the same level of iteration. But for code,
that's certainly the cycle times that we're seeing.
Speaker 2 (27:05):
Yeah, and how do you know that? Let's stick with
this one example of this model you have. How do
you know that your model is better than big company
B down the street? The client asks you, why would
I go with IBM as opposed to some
firm in the Valley that says they have a
model on this? How do you frame your advantage?
Speaker 1 (27:28):
Well, we benchmark all of this, and I think the
most important metric is price performance, not price, not performance,
but the combination of the two.
Speaker 3 (27:38):
And we're super competitive there.
Speaker 1 (27:41):
Well for what we just released, with what we've done
in open source, we know that nobody's close to us
right now on code.
Speaker 3 (27:47):
Now.
Speaker 1 (27:48):
To be clear, that will probably change because it's like leapfrog.
Speaker 3 (27:51):
People will jump ahead, then we jump back ahead.
Speaker 1 (27:54):
But we're very confident that with everything we've done in
the last few months, we've taken a huge leap forward here.
Speaker 2 (28:01):
Yeah, I mean, this goes back to the point
I was making in the beginning, about the difference
between your twenty-something self in ninety-nine and yourself today.
But this time compression has to be a crazy adjustment.
So the concept of what you're working on and how
you make decisions internally and things has to undergo this
(28:24):
kind of revolution. If you're switching from, I mean, back
in the day, a model might be useful for how long?
Speaker 1 (28:31):
Years, years. I think about, you know, statistical models that
sit inside things like SPSS, which is a product that
a lot of.
Speaker 3 (28:40):
Students use around the world.
Speaker 1 (28:41):
I mean, those have been the same models for twenty
years and they're still very good at what they do.
And so yes, it's a completely different
moment for how fast this is moving.
Speaker 3 (28:53):
And I think it just.
Speaker 1 (28:55):
Raises the bar for everybody, whether you're a technology provider
like us, or you're a bank or an insurance company
or a shipping company, to say, how do you really
change your culture to be way more aggressive than you
normally would be?
Speaker 2 (29:14):
Does this mean, it's a weird question, but does this
mean a different set of personality or character
traits is necessary for a decision maker in tech now
than twenty-five years ago?
Speaker 1 (29:29):
There's a book I saw recently, it's called The Geek Way,
which talked about how technology companies have started to operate
in different ways maybe than many traditional companies, and more
about being data driven, more about delegation. Are you willing
(29:51):
to have the smartest person in the room make decisions
as opposed to the highest paid person in the room? I
think these are all different aspects that every company is
going to face.
Speaker 2 (30:01):
Yeah, yeah. Next term: let's talk about open. When you use
that word, open, what do you mean?
Speaker 1 (30:10):
I think there's really only one definition of open, which,
for technology, is open source. And open source means
the code is freely available. Anybody can see it, access it,
contribute to it.
Speaker 2 (30:26):
And what is Tell me about why that's an important principle.
Speaker 1 (30:32):
When you take a topic like AI, I think it
would be really bad for the world if this was
in the hands of one or two companies, or three
or four, doesn't matter the number, some small number. Think
about, like, in history, sometime in the early nineteen hundreds, the Interstate
Commerce Commission was created, and the whole idea was to
(30:56):
protect farmers from railroads, meaning they wanted to allow free trade.
But they knew that well, there's only so many railroad tracks,
So we need to protect farmers from the shipping costs
that railroads could impose. So good idea, but over time
that got completely overtaken by the railroad lobby and then
they used that to basically just increase prices, and it
(31:19):
made the lives of farmers way more difficult. I think
you could play the same analogy through with AI. If
you allow a handful of companies to have the technology,
you regulate around the principles of those one or two companies,
then you've trapped the entire world.
Speaker 3 (31:36):
I think that would be very bad. So the danger
of that is real, for sure.
Speaker 1 (31:42):
I mean there's companies in Washington every week
trying to achieve that outcome.
Speaker 3 (31:49):
And so the.
Speaker 1 (31:50):
Opposite of that is to say it's going to be
in open source, because nobody can dispute open source, because
it's right there, everybody can see it. So I'm a
strong believer that open source will win for AI. It
has to win. It's not just important for business, but
it's important for humans.
Speaker 2 (32:10):
I'm curious about the list of things
you worry about. Actually, before I ask, let
me ask this question very generally: what is the list
of things you worry about? What are your top five
business-related worries right now?
Speaker 3 (32:25):
Top five? That's quite a first question. We could be
here for hours for me to answer.
Speaker 2 (32:30):
I did say business related. We could leave, you know,
your kids' haircuts out of it.
Speaker 1 (32:36):
Number one is always, and it's the thing that's probably
always been true, which is just people. Do we have
the right skills? Are we doing a good job of
training our people? Are our people doing a good job
of working with clients? Like, that's number one. Number two
is innovation: are we pushing the envelope enough? Are
(33:02):
we staying ahead? Number three, which kind of feeds
into the innovation one, is risk taking: are we taking
enough risk? Without risk, there is no growth. And I
think the trap that every larger company inevitably falls into
is conservatism. Things are good enough, and so it's are
(33:24):
we pushing the envelope? Are we taking enough risk to
really have an impact? I'd say those are probably the
top three that I spend time thinking about.
Speaker 2 (33:32):
The next term to define: productivity paradox, something I know
you've thought a lot about. What does that mean?
Speaker 1 (33:39):
So I started thinking hard about this because all I
saw and read every day was fear about AI, and
I studied economics, and so I kind of went back
to like basic economics, and there's been like a macro
investing formula I guess I would say it's been around
(34:00):
forever that says growth comes from productivity growth plus population
growth plus debt growth. So if those three things are working,
you'll get GDP growth. And so then you think about
that and you say, well, debt growth, we're probably not
(34:22):
going back to zero percent interest rates, so to some
extent there's going to be a ceiling on that. And
then you look at population growth. There are shockingly few
countries or places in the world that will see population
growth over the next thirty to fifty years. In fact,
most places are not even at replacement rates. And so
(34:43):
I'm like, all right, so population growth is not going
to be there.
Speaker 3 (34:46):
So that would mean if you just take.
Speaker 1 (34:48):
It to the extreme, the only chance of continued GDP
growth is productivity. And the best way to solve productivity
is AI.
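For reference, the back-of-the-envelope identity Rob is describing can be written out as follows; this is his rough framing, not a formal macroeconomic model:

\[
\Delta\,\text{GDP} \;\approx\; \Delta\,\text{Productivity} \;+\; \Delta\,\text{Population} \;+\; \Delta\,\text{Debt}
\]

With a likely ceiling on debt growth and population growth at or below replacement in most places, productivity is the only term left to move the left-hand side, which is the step that leads him to AI.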
Speaker 3 (35:03):
That's why I say it's a paradox.
Speaker 1 (35:05):
On one hand, everybody's scared to death it's going to
take over the world, take all of our jobs, ruin us,
But in reality, maybe it's the other way, which is
it's the only thing that can save us.
Speaker 3 (35:18):
Yeah, and if you believe.
Speaker 1 (35:20):
That economic equation, which I think has proven quite true
over hundreds of years, I do think it's probably the
only thing that can save us.
Speaker 2 (35:28):
I actually looked at the numbers yesterday, for a totally random reason,
on population growth in Europe, and here's a
special bonus question, see how smart you are: which country
in Europe, continental Europe, has the highest population growth?
Speaker 1 (35:43):
It's small continental Europe, probably one of the Nordics, I
would guess.
Speaker 2 (35:50):
Close. Luxembourg. Okay, something's going on in Luxembourg, I
feel like, that we all need to investigate. They're
at one point four nine, which back in the day, by
the way, would be relatively... that's the best performing country.
I mean, back in the day, countries routinely had
two point something, you know, percent growth in a given year.
(36:13):
Last question: you're writing a book now. We were
chatting about it backstage, and I appreciate the paradox
of this book, which is, you live in a universe where a model is
better in the afternoon than it is in the morning.
How do you write a book that's, like, printed on paper
and expect it to be useful?
Speaker 1 (36:34):
This is the challenge. And I am an incredible author
of useless books. I mean most of what I've spent
time on in the last decade is stuff that's completely useless,
like a year after it's written. And so when we
were talking about it, I was like, I would like
to do something around AI that's timeless. Yeah, that would
(36:54):
be useful ten or twenty years from now. But then
to your point, how is that even remotely possible if
the model is better in the afternoon than in the morning?
Speaker 3 (37:07):
So that's the challenge in front of us.
Speaker 1 (37:09):
But the book is around AI value creation, so kind
of links to this productivity paradox, and how do you
actually get sustained value out of AI, out of automation,
out of data science. And so the biggest challenge in
front of us is can we make this relevant?
Speaker 3 (37:30):
Beyond the day that it's published?
Speaker 2 (37:31):
How are you setting out to do that?
Speaker 1 (37:35):
I think you have to, to some extent, level it
up to bigger concepts, which is kind of why I
go to things like macroeconomics, population geography as opposed to
going into the weeds of the technology itself. If you
write about, this is how you get better performance out
of a model, we can agree that will be completely
(37:56):
useless two years from now, but maybe even two months
from now, and so it will be less in the
technical detail and more of what is sustained value creation
for AI, which if you think on what is hopefully
a ten or twenty year period, it's probably, we're kind
of substituting AI for technology now, I've realized, because I
(38:18):
think this has always been true for technology. It's just
now AI is the thing that everybody wants to talk about.
But let's see if we can do it. Time will tell.
Speaker 2 (38:28):
Did you get any inkling that the pace, that this
AI years phenomenon was gonna, that the pace
of change was going to accelerate so much? Because you
had Moore's law, right? You had a model in the
technology world for this kind of exponential increase. So
were you thinking about that kind of, a
(38:50):
similar kind of acceleration in the.
Speaker 1 (38:55):
I think anybody that said they expected what we're seeing
today is probably exaggerating. I think it's way faster than
anybody expected. Yeah, but technology, back to your point about
Moore's law, has always accelerated through the years, so I
wouldn't say it's a shock, but it is surprising.
Speaker 2 (39:16):
Yeah, you've had a kind of extraordinary privileged position to
watch and participate in this revolution, right, I mean, how
many other people have ridden this
wave like you have?
Speaker 1 (39:32):
I do wonder is this really that much different or
does it feel different just because we're here?
Speaker 3 (39:38):
I mean, I do think on one level.
Speaker 1 (39:40):
Yes. So in the time I've been at IBM, the Internet happened,
mobile happened, social networks happened, blockchain happened.
Speaker 3 (39:51):
AI. So a lot has happened.
Speaker 1 (39:53):
But then you go back and say, well, but if
I'd been here between nineteen seventy and ninety five, there
were a lot of things that were pretty fundamental then too. So
I wonder, almost, do we always exaggerate the.
Speaker 3 (40:06):
Timeframe that we're in. I don't know. Yeah, but it's
a good idea though.
Speaker 2 (40:16):
I think ending with the phrase, I don't know,
it's a good idea though, is a great way to wrap
this up.
Speaker 3 (40:24):
Thank you so much, Thank you, Malcolm.
Speaker 2 (40:29):
In a field that is evolving as quickly as artificial intelligence,
it was inspiring to see how adaptable Rob has been
over his career. The takeaways from my conversation with Rob
had been echoing in my head ever since. He emphasized
how open source models allow AI technology to be developed
by many players. Openness also allows for transparency. Rob told
(40:53):
me about AI use cases like IBM's collaboration with the Sevilla
football club. That example really brought home for me how
AI technology will touch every industry. Despite the potential benefits
of AI, challenges exist in its widespread adoption. Rob discussed
how resistance to change, concerns about job security and organizational
(41:17):
inertia can slow down implementation of AI solutions. The paradox, though,
according to Rob, is that rather than being afraid of
a world with AI, people should actually be more afraid
of a world without it. AI, he believes, has the
potential to make the world a better place in a
way that no other technology can. Rob painted an optimistic
(41:40):
version of the future, one in which AI technology will
continue to improve at an exponential rate. This will free
up workers to dedicate their energy to more creative tasks.
I, for one, am on board. Smart Talks with IBM
is produced by Matt Romano, Joey Fishground, and Jacob Goldstein.
(42:02):
We're edited by Lydia Jean Kott. Our engineers are Sarah
Bruguer and Ben Tolliday. Theme song by Gramscow. Special thanks
to the eight Bar and IBM teams, as well as
the Pushkin marketing team. Smart Talks with IBM is a
production of Pushkin Industries and Ruby Studio at iHeartMedia. To
find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts,
(42:28):
or wherever you listen to podcasts. I'm Malcolm Gladwell. This
is a paid advertisement from IBM. The conversations on this
podcast don't necessarily represent IBM's positions, strategies, or opinions.