
November 13, 2024 48 mins
In this episode of Growth Talks, I sit down with Nigel Toon, author of "How AI Thinks" and founder and CEO of Graphcore, which was recently acquired by SoftBank for $600 million.

With his extensive experience as a board member across multiple tech companies and a reputation as one of the world's leading voices on AI, Nigel provides a unique perspective on the evolving landscape of artificial intelligence.

We delve into the themes of his book, exploring what it really means for AI to "think" and "reason." Nigel offers his unfiltered views on various pressing topics, including why he believes that chasing after Artificial General Intelligence (AGI) might be a misguided goal.

Don't miss this episode packed with thought-provoking insights and eye-opening perspectives.

Tune in for a conversation that will challenge how you think about AI today.

📗 Here you can find Nigel's book:
https://amzn.to/3YGhJ2S

📚 Suggested book:
https://amzn.to/3AMRmjK

👍 Follow our guest here:
https://nigeltoon.com/
https://www.linkedin.com/in/nigeltoon/

📔 Read my book "Growth Talks"
https://amzn.to/3zfqRTu

🧲 Watch my free lead generation course:
https://gaito.link/skillshare

🙏 Subscribe to the channel:
https://gaito.link/subscribe

📚 Download the Reading List:
https://gaito.link/gtbooks

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hello everyone, and welcome back to Growth Talks. I am Raf,
your host. My guest today is Nigel Toon. Hi Nigel, and
thanks for being here. How are you today?

Speaker 2 (00:11):
I'm really good, Raf. How are you? Yeah? All good?

Speaker 1 (00:14):
Yeah, very good, very good. Super excited about this chat.
As I told you, I read your book a while ago
now and I loved it. And then we had the chance
to meet in person here in London a couple of months ago,
so, you know, I was really looking forward to this one.
So we usually start these conversations here on the podcast

(00:36):
with a classic one, with a simple one. So who's Nigel?
What's your story, what's your background? And what
are you doing now?

Speaker 2 (00:46):
So yeah, so I'm an engineer. I've worked most of my
life in semiconductors, building microprocessors and related semiconductor technology.
I was sort of involved in the early days of the Internet
and some of the underlying technologies that went into that.

(01:06):
Then in mobile technologies, particularly on the base station side,
then moved over to the handsets, and since twenty twelve
I've been working in AI, which is kind of a repeat
in some ways, because when I was at university
I was really interested in AI
and all that it might be able to offer, but
that was a long time ago, when nothing was really possible.

(01:29):
So to see the way that it's evolved and developed
since is just amazing. I have written a book, as
you know. I sit on the board of something called UKRI,
which is UK Research and Innovation, a sort of non-governmental
group that funds all of our university research here

(01:51):
in the UK, and, you know, lots of the innovation
stuff through Innovate UK as well. I've sat on the
Prime Minister's Business Council during that really interesting time when
they installed the revolving door in Number Ten Downing Street
and Prime Ministers seemed to change at every meeting. Yeah.

(02:13):
I've invested in a few companies, mainly around the AI space,
and I'm on a couple of other sort of small company boards,
trying to help them to grow. I think investing in
AI, which is really the next economic
wave here, is kind of a really interesting
area too. And I'm CEO of Graphcore, which we recently
sold to SoftBank. We now have sort of very deep

(02:37):
pockets, investing in the next generation of AI hardware. But
all that's quite secret, so I can't really get into
a lot of details around there.

Speaker 1 (02:48):
So we could say that you have been working on this before
it was cool, right? I mean, now AI is cool,
it's the topic of the moment.

Speaker 2 (02:58):
We started thinking about AI and what was possible before
Ilya Sutskever and Alex Krizhevsky came up with AlexNet,
which was probably one of the first deep neural networks
that sort of kicked off the whole process. And, you know,
in Graphcore we had people like Demis Hassabis as a

(03:20):
friends and family investor in the company through the period
that they were doing AlphaGo and all of that sort
of interesting stuff with reinforcement learning, and then the emergence
of large language models. You know, obviously Ilya moved over
to OpenAI, and Ilya is also a friends and

(03:41):
family investor in Graphcore, so we had a sort of
front row seat, seeing all of these large language models emerge.
And so it's really been a very privileged position, to
see a lot of this technology under the hood before
it's out in the wild and, you know, other people

(04:01):
get to see it. You know, it's like, I knew
ChatGPT was coming back at the end of twenty
twenty two, and I was really keen to
get in and use it. Like everybody else, I was just stunned
by how important that has been, and how much it
kind of pushed AI into public consciousness, in much the

(04:24):
same way, interestingly, as AlphaGo did in China. You know,
most people don't realize that two hundred and forty million people
watched that live on television: Lee Sedol losing against
the AlphaGo machine. And that really pushed AI into
public consciousness in China back in twenty sixteen. So, you know,

(04:45):
it's been really interesting to have that sort of front
row seat, seeing AI evolve, and probably also having a
sense of what's coming next.

Speaker 1 (04:53):
And from that front row seat, from your privileged position:
why now? I mean, what happened in the last, let's say,
I don't know, ten years that made, you know, this
revolution possible, this AI moment possible? Why ChatGPT right now?

(05:15):
Why do we have this technology in this particular moment?

Speaker 2 (05:21):
Yeah. Again, it's something I cover in the book, and
it goes back much longer. You know, if you look
back to the birth of computing in the nineteen forties,
people like Alan Turing were, you know, enamored
by the idea that these machines would somehow, you know,
build some form of artificial intelligence. So, you know, this

(05:42):
has been in people's consciousness for a while. We've
been thinking about this for a long time. There was
a big push in the nineteen sixties. You know, the
term artificial intelligence was born at a conference at Dartmouth
College back in the late nineteen fifties.

(06:02):
You know, during the nineteen eighties, there was the whole
sort of push around what were called expert systems. The
problem is that there wasn't enough information. Data... information is
data with context, just to sort of settle that little point.
I always call it information, because it's really the information

(06:23):
that you need. So there wasn't enough information. There
wasn't enough computing power to make any of these artificial
intelligence systems work. And the other problem is that
researchers were sort of following this deterministic compute approach, you know,

(06:44):
the idea that if I've got all the information and
I can follow a simple script, you know, I've got
a program, then I can come up with answers. And
the reality is AI doesn't really work that way. Your
brain doesn't work that way. We follow an induction process, as
Aristotle would describe it, where we distill the information we

(07:06):
have available, and we reason with that to come up with
possible answers. You know, most of what we do in
our daily life is just guessing. You know, hopefully they're
pretty good guesses. But, you know, it's like, you meet
this person and you think you're going to spend the
rest of your life with them. You don't have all
the information you need to make that life changing decision.

(07:29):
You don't have a methodology, a process, a program that
you go through to work that out. And yes, as I
guess some people know, that doesn't always work
out for people, right? You know, we live our life,
you know, making these probabilistic judgments, and now we have

(07:49):
methods that work on computers that are following the same approaches.
And what's really happened is the Internet, you know,
with the discoveries of Claude Shannon, going back to, you know,
the original information theory, that created this digital information, the
way to share digital information that allowed the Internet, and
then mobile networks and the cloud, which have made all of this

(08:12):
information available in a digital form. The methods, most of
which are actually sort of deep learning methods, kind of date
back to the nineteen eighties, with the likes of Geoff
Hinton and, you know, all these researchers who've been recognized,
you know, around the methods that, you know, allowed us
to build these deep learning approaches. But it wasn't until

(08:36):
probably twenty twelve when suddenly researchers
started to use GPUs and, you know, more powerful
processors, to be able to build this AI and
actually make it work. And, you know, the point I'm
making in the book is semiconductors. You know, the first transistor, nineteen

(08:57):
forty seven; the first integrated circuit, which packaged together
four transistors into a single chip, nineteen sixty. Today we
put one hundred billion transistors into a single chip, so
that's a twenty five billion fold improvement over roughly a sixty
year period. You know, if cars had improved by the

(09:20):
same amount, we'd all be going everywhere at two hundred
times the speed we are now. So we live in what
is a science fiction world made possible by semiconductors. You know,
semiconductors let us go to the moon. They, you know,
drive the Internet, they drive mobile phones, you know, all this
stuff I used to watch in science fiction programs,

(09:40):
like, you know, all these communicators; you know, we just
have them. It's just part of our everyday life,
you know. So, you know, semiconductors made that
all possible, and that's what's really made AI
start to be possible as well.
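
As a quick sanity check on the transistor numbers quoted just above, here is the arithmetic in a small Python sketch, using only the figures from the conversation:

```python
transistors_1960 = 4                  # first integrated circuit, as quoted above
transistors_today = 100_000_000_000   # one hundred billion, as quoted above
print(f"{transistors_today / transistors_1960:,.0f}x")  # 25,000,000,000x improvement
```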

Speaker 1 (10:00):
There are a couple of things that I want
to ask you. They are, you know, directly from the book. So,
probably the chapter that I loved the most
was chapter number eight, about intelligence.

Speaker 2 (10:14):
I loved it. Yeah, I love it. I loved it.

Speaker 1 (10:16):
So there is this part here where you start talking about,
you know, the meaning of the word intelligence, what
we mean by intelligence, and so on. And there is
a paragraph here where you say: I don't need
to anthropomorphize their abilities in
any way. These are intelligent, self sufficient animals

(10:40):
that are highly evolved.

Speaker 2 (10:42):
The way we.

Speaker 1 (10:43):
describe intelligence is a human construct that is very much
defined by our own human abilities. So the first thing
that I want to ask you is this: when we
say that now, you know, ChatGPT, or any
of these tools that we use today... we say that they
are intelligent and they think. Are they really intelligent? Do

(11:08):
they really think, somehow? Do we need to, I don't know,
just accept the idea that there are different kinds of
intelligences out there? And, you know, yeah, it's...

Speaker 2 (11:22):
A great question, isn't it? I kind
of named the book How AI Thinks with a bit of
a double meaning, you know. It's like a provocative
title where, you know, I don't want to spoil it
for people, but, you know, I don't believe AI does
think in the way that we think. You know, you
look at what these large language models do. So on

(11:45):
one level, it's absolutely amazing. They have started to understand
the structure of language, you know, to the point
that they can understand and, you know, decode what you
are saying, and extract information from what you're saying or

(12:06):
from what's written in pages, and they can start to
reconstruct that in new forms of writing. Words and language
are really just an encoding scheme that we use to
transfer information from one person to another. You know, so
thoughts in my brain, I convert into language. Hopefully

(12:29):
it's a language that you can understand. You're then able
to hear that language, and you're able to convert it
back into thoughts in your own brain. And the idea
that a machine can start to do the same, can
start to actually understand this encoding scheme that we use
to communicate intelligence between each other, is really amazing. Captain

(12:53):
Cook, when he went to the South Pacific on his
round the world tour, he was given... I can't
remember who it was, the Royal Society, or somebody that
predates the Royal Society, but, you know, one
of these scientific groups, you know, said to him: please discover
whether these people in the South Pacific seas are able

(13:15):
to communicate with each other without speaking, you know,
through thought transfer, you know, which is a really
weird way to describe it. And what they actually meant
was: are they able to write things down and pass
written notes to each other and read? That's what he

(13:35):
was really being asked, you know: do they
know how to read and write? Or is
that a uniquely English thing, you know, back in
the day. So this idea
that a machine can now understand this encoding scheme, can
read and write in the same way that we do,
can start to understand all this written information that we've

(13:58):
put down, that's quite amazing. But on the
other side of it, what's it actually doing? It's just
text prediction. It's just really good text prediction. You give
it a prompt and it is working out what is
the next best word in a sequence, and from that,
you know, building a phrase and then a paragraph and

(14:19):
then a story. It's just text prediction, and it goes
along and it says: oh, I understand language; at this point,
I should insert a quote from somebody famous. And so it
inserts a quote from somebody who it thinks is famous.
But it actually just makes the quote up, and it
makes the person up. It hallucinates. It's trying to do
the right thing, you know, much like a child

(14:40):
might do, you know, in understanding that that's what happens
in language. But it's just predicting text. And so on
one level, it's amazing what it's been able to do.
But on another level, it's not really thinking. But the
next level is: okay, so if I can start to
understand what's written, and I can decode it and extract

(15:02):
information from that, okay, maybe now I can start to
build an agent that will start to reason. You know,
it will be able to look at enough pieces of information,
find connections between different pieces of information, and reason to come
up with answers to difficult questions. And maybe, because it's
a machine, it will be able to do that pattern

(15:24):
recognition and testing on a much broader set of information
than you or I would quickly be able to do,
and so it will be able to help us solve problems
that, you know, are currently out of reach for us.
So I don't think it thinks in the same way
we do. It certainly doesn't, you know, have the emotions

(15:45):
and, you know, all the other things that we do.
It doesn't have that sort of intuition that you might have,
based on your understanding of the full world, because it
doesn't really know about the world, right? You know, I
use the example in the book of "apple". You know
what an apple is. When you think of the word apple, you know,
you think of this object that you can hold in

(16:07):
your hand, that comes from a tree. You know what
it's like to bite into it, you know the weight
of it, you know the taste of it. All this
enormous context about this word apple, or that it might
be a computer. AI doesn't know anything about that.
All it knows is that it's four letters, two of which are repeated,

(16:27):
and it doesn't even really understand that properly either. So,
you know, it doesn't have any of that real world
context that you have, that makes you amazing as a
computing device, an intelligent machine. And that's what we are:
we're intelligent machines.
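
To make the "it's just text prediction" point concrete, here is a toy sketch in Python. It is not how production models work: real LLMs use deep neural networks over subword tokens, and the tiny corpus below is invented for illustration. But the generation loop, predict a likely next word, append it, repeat, has the same shape Nigel describes.

```python
import random
from collections import Counter, defaultdict

# Invented toy corpus; a real model trains on vast amounts of text.
corpus = (
    "the apple falls from the tree . "
    "the apple is red and sweet . "
    "you bite into the apple . "
).split()

# Count which word follows which (a first-order Markov model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_words[word]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# The generation loop: start from a prompt and keep predicting.
word, generated = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

Notice that nothing in this loop "knows" what an apple is; it only knows which words tend to follow which, which is exactly the gap between prediction and understanding discussed above.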

Speaker 1 (16:47):
And I have a follow up on that. Do you
think that we have kind of reached a plateau moment?
Let me explain. I think there is part
of the discussion now in the AI ecosystem where someone says that,
you know, this is what we can get with LLMs.
I mean, it's not getting better than that, because they

(17:08):
are just, you know, text prediction, as you said. And
someone else is thinking that maybe, if we push it
a little bit more, we can use this text prediction
to make them understand the world, you know, understand the context,
the ecosystem, you know, the environment, and do something else.

Speaker 2 (17:25):
Uh.

Speaker 1 (17:26):
And I think a good example of this was the
release of o1 for the first time. That was
a product that, when I saw it, I said: okay,
maybe we can actually do something more with LLMs.
What do you think about that? Did we
reach a plateau, or is there something else that we
can do if we push it a little bit more?

Speaker 2 (17:46):
So I think some of the things that are going
on are using a large language model to look at
other pieces of information, and then
starting to piece those together to come up with better
predictions, and it looks very much like it's

(18:07):
reasoning. One of
the things that's been evolving is this
idea of mixture of experts. You know, so from GPT-3
to GPT-4: GPT-4 is not one big model.
It's actually about eight models that all work together. And

(18:29):
there's a model that has been trained
to understand: oh, when these types of questions get asked,
I use this piece of the model, and for these types
of questions, I use that part of the model. And
so, you know, it's pulling together experts to try and
build a more complete system, and that is
now evolving. You then start to connect it to other

(18:51):
sources of information, so it can go off and it
can retrieve and augment its answers based on other pieces
of information. And then, you know,
maybe instead of just coming up with a
person and a quote, it actually goes off and retrieves,
you know, a real quote from a real person, so
the answers start to look better. And these are all

(19:14):
steps that, you know, we're in the middle of. And
then you wrap around it what's called an agentic system,
where, you know, you pose a question. The agentic system
tries to understand the question, plans how it's going to answer,
maybe goes off and retrieves information that will help it answer,
maybe retrieves specialist forms of models that are going to help

(19:39):
it answer that question. Then it uses that to sort
of reason out an answer, presents those answers to you, and
you can sort of see whether, you know, it's giving
you a well reasoned answer. In the middle of all
of that, we probably need to build in something called
uncertainty quantification, because these systems are making guesses, right? You know,
they're probabilistic answers, and so I would quite like to

(20:01):
know how probable is this answer, you know,
how certain are you about the answer that you're proposing
to me. The other thing I'd probably want
is for a good agent to be like, you know,
talking to your doctor: you don't expect your doctor just
to tell you the answer; you expect to have a

(20:22):
conversation with the doctor where you're going to then, you know, explore: well,
what do you mean? You know, this rash might be,
you know, such and such, and, you know, I've read
on the internet it's, you know, so and so. And
you want to have a conversation and kind of test
the answers that your doctor is giving you, and you
want to test the certainty that they have about the answer.

(20:45):
So again, a good agent with a good interface to
a human is going to be trying to do this.
And the challenge we have is, as you try and
make these systems larger and larger, to be able to
answer more and more questions, the risk is that, you know,
we actually end up heading in slightly the wrong direction.
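
The pieces Nigel lists here, a router over experts, retrieval to ground answers, and an uncertainty score, can be sketched in a few lines of Python. The sketch below is purely illustrative: the keyword router, the two-entry knowledge base, and the confidence formula are all invented stand-ins for what are, in real systems, learned models, vector databases, and calibrated probabilities.

```python
# All data, names, and rules below are invented for illustration.
KNOWLEDGE_BASE = {
    "graphcore": "Graphcore builds processors designed for AI workloads.",
    "alexnet": "AlexNet (2012) was an early, influential deep neural network.",
}

EXPERTS = {
    "hardware": lambda facts: f"Hardware expert: {facts}",
    "general": lambda facts: f"General expert: {facts}",
}

def route(question: str) -> str:
    """Toy 'mixture of experts' router: pick which expert should answer."""
    q = question.lower()
    return "hardware" if ("chip" in q or "graphcore" in q) else "general"

def retrieve(question: str) -> list[str]:
    """Toy retrieval step: fetch snippets so the answer is grounded, not invented."""
    q = question.lower()
    return [text for key, text in KNOWLEDGE_BASE.items() if key in q]

def answer(question: str) -> tuple[str, float]:
    """Route, retrieve, answer, and attach a crude uncertainty estimate."""
    expert = EXPERTS[route(question)]
    facts = retrieve(question)
    # Crude uncertainty quantification: more supporting evidence, more confidence.
    confidence = min(0.95, 0.30 + 0.35 * len(facts))
    return expert(" ".join(facts) if facts else "no supporting facts found"), confidence

reply, confidence = answer("What does Graphcore make?")
print(f"{reply} (confidence: {confidence:.0%})")
```

The point of the confidence figure is exactly the one made in the conversation: a probabilistic system should tell you how sure it is, so that, as with a doctor, you can decide whether to probe further.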

Speaker 1 (21:12):
And what's the right direction? Quick break here. If you
are enjoying the podcast and the topics that we cover,
I highly recommend you check out my book Growth
Talks. It's packed with three hundred and sixty five
reflections crafted for entrepreneurs, managers, freelancers, content creators. Basically a
page a day to keep you inspired all year round,

(21:35):
and the link is in the description below. Of course, back
to the podcast.

Speaker 2 (21:39):
Well, you know, I've got a slight bee
in my bonnet at the moment about what people call AGI.
First of all, it's a very undefined term, you know,
artificial general intelligence. You know, it sort of implies
an AI system that is as clever as a human,
you know, is as capable as a human, can answer
questions that a human would answer, maybe can perform tasks

(22:02):
that humans would perform, and therefore, you know, can be
used as a productivity tool, or, you know, used in
a number of different ways to improve things. But if
you're going to create something that has a general intelligence,
that you're going to expect to answer anything, it's

(22:23):
a bit like me going to the cleverest university professor
on physics and expecting them to know something about biology
or know something about medicine. You know, in my mind,
if you've got a general intelligence system, the risk is
it ends up being a jack of all trades and

(22:44):
a master of none. The other problem is: how do
you test it? You know, the whole point is, you know,
I live in the semiconductor industry. You know, we build
these chips, we send them off to be manufactured. It
costs maybe ten million dollars to manufacture a chip, you know,
the first one you build. If it's wrong, that's ten

(23:04):
million dollars you've wasted. So you verify the chip.
You find all the ways in which you can test it,
and verify that it is going to do exactly what
you wanted it to do, before you send it off
and pay the big bucks to have your first chip built.
And that is a really rigorous process. In software, we've kind of

(23:24):
got used to just kind of throwing
it over the wall. Oh, it breaks; you know, somebody
will send us a bug report, you know, we'll reproduce it,
and we'll fix the bug. Nobody died, you know. And
people are sort of assuming that you can do the
same with AI. But the problem is, if it's general,
there's no way you can test it, and you will

(23:46):
constantly be finding issues with it, because the systems are
so complicated. The AI system will never actually be
giving you good answers, and you won't actually be able
to trust it. So the way you get over that
is you say: we're going to constrain the space over
which we ask the AI to operate.

(24:06):
We're going to try and build an expert in medicine,
but rather than just all medicine, we're going to say
breast cancer, and we're going to make it an expert
in breast cancer detection. And within that field, I can
then actually say: well, these are the outcomes that I'm

(24:28):
looking for, and therefore I can actually start to test
whether it works. And I can hook it up with
maybe breast cancer experts all over the world, who start
using the system, and they feed back into the system to
say: this was a good answer, this was a bad answer.
And you can use a reinforcement learning scheme to actually
cause it to improve over time. So

(24:51):
you can actually test it, you can improve it, because
it's only trying to be an expert in one particular
subdomain. Although it might still be lots of
mixture of experts models combining together to come up with
good answers about breast cancer, the overall model is going

(25:11):
to be much, much smaller. The efficiency of that model
is going to be much better: it will use
less power, use less compute, give you faster responses. And you're going to
create AI that is much more useful, that rather than
just replacing people, actually augments people and allows us

(25:32):
to come up with answers that we might not be
able to come up with on our own. You know,
AI is a bit like an engine, right?
You know, when engines first came out, we were able
to do things that we couldn't on our own, because
our physical strength just wasn't strong enough. You know, we
couldn't dig a big hole, we couldn't move
a mountain to, you know, put a city there. I

(25:54):
go to Boston sometimes, and I look at the Back Bay.
Most of that was, you know, they had a big
mountain, and they used machines to collapse most of that
mountain and fill in the Back Bay to create a
big part of Boston. And it was basically early steam
engines that allowed them to do that. It wasn't, you know,

(26:16):
human muscle. And now we're at the stage where AI
comes along, and rather than just helping us to move
mountains or fly, it's actually improving our intelligence. It can
improve our intelligence, and that allows us to solve problems
that, you know, currently are out of reach. And that
is the application for AI that we really need to
focus on. So I sort of say, you know,

(26:38):
AGI is probably nonsense, and what we really want is
artificial expert intelligence. You know, call it AEI. We
need lots of different AIs. You know, the more expert,
the more focused they are, the better chance they have
of being good. And yes, you can combine a bunch
of them together to come up with quite a broad

(26:59):
set of answers. But at no point is it ever
a general intelligence. That just seems to be the wrong
direction for us to head in.
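
A rough sketch of the feedback loop Nigel describes for a narrow expert system: specialists label the system's answers good or bad, and the system adjusts to agree with them. This toy adjusts a single decision threshold rather than training a real model, and all scores and verdicts are made up, but it shows why a constrained task is testable in a way a "general" one is not: you can state the outcome you want and measure against it.

```python
# Invented example: a narrow detector whose threshold improves from expert feedback.
def detect(risk_score: float, threshold: float) -> bool:
    """Flag a case for review when its risk score crosses the threshold."""
    return risk_score >= threshold

# (risk_score, expert_verdict): did a specialist agree the case should be flagged?
expert_feedback = [(0.90, True), (0.70, True), (0.55, False), (0.30, False)]

def improve(threshold: float, step: float = 0.05) -> float:
    """Nudge the threshold toward agreeing with the experts' verdicts."""
    for score, should_flag in expert_feedback:
        flagged = detect(score, threshold)
        if flagged and not should_flag:
            threshold += step   # too eager: raise the bar
        elif not flagged and should_flag:
            threshold -= step   # too cautious: lower the bar
    return threshold

threshold = 0.50
for _ in range(10):  # each round replays the expert feedback
    threshold = improve(threshold)
print(f"Threshold after expert feedback: {threshold:.2f}")
```

Because the task is narrow, the feedback is checkable; the real-world analogue is the breast cancer specialists above, feeding good-answer or bad-answer labels back into a much larger reinforcement learning scheme.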

Speaker 1 (27:09):
So let me get this right: you don't think it's
a good idea. But do you think it's possible, or both?
So it's not even possible?

Speaker 2 (27:19):
You know, if, over time, you end up
with lots of experts, and you have a way of understanding,
you know, triaging the questions, to know which experts are
going to answer the specific question, then, you know, you'll
have something which is, you know, quite generally intelligent. And
if that all sort of sits in one place somehow,
behind an interface, then, you know, maybe that helps

(27:40):
you do lots of things. But the reality is, it
is made up of lots of independent AI systems, each
of which is an expert in its own specific area.
And the reality is, it will probably get used by experts
in that space to make their answers better.

Speaker 1 (27:58):
Yep. Let me get another one from the book. By
the way, the link is going to be in the description, guys.
So How AI Thinks is going to be linked in
the description.

Speaker 2 (28:09):
So, yeah.

Speaker 1 (28:14):
Another chapter that I loved a lot was chapter number sixteen,
the challenges of AI. And there is this
paragraph here, called frameworks: we will need to establish regulatory
frameworks that help to provide appropriate guidance and controls. Do
you think that's actually possible? Really,

(28:37):
in terms of speed, this is going very, very fast.

Speaker 2 (28:41):
Yeah, I think it's really difficult. It's
probably the area of the book that I
felt most challenged over, because everyone's going to attack my answer,
you know. And regulations around AI are actually really,
really difficult. So I might look at you and say: wow,
I'm going to take your podcast, and I'm

(29:04):
going to make a deepfake of you, and I'm
going to reproduce your voice, and I'm going to reproduce
you. As far as I'm concerned, you know, that one's
pretty black and white: you know, without your permission, my
view is that should just be illegal. By the way,
it's not today, which is kind of a failing. But,
you know, to me, that's pretty black and white.

(29:29):
But let's say I'm inspired by you, because you're a
very inspirational person. Where does inspiration start to turn into replication?
And, you know, we've seen this with, you know, music
artists, where people come along and claim: oh, you stole
my riff, and you stole my song. Maybe they were
inspired by something, or maybe both artists were inspired by

(29:52):
something, and it ends up being similar. But is that
replication, or is it inspiration? And that is a gray
area which is really, really difficult to regulate around. So
until, you know, we've got some good case study around that,
you know, it's going to be very, very hard. And
the reality is, it's going to come down to our values,

(30:15):
you know, our backgrounds, to think about what is appropriate.
You know, I travel to China, probably the safest place
in the world. You know, there's no crime on
the streets in China, because there's cameras everywhere and they're
recognizing the faces, and so people don't do crime in
the street. But where do you cross the line

(30:39):
to become a person of interest in that environment? You know,
what are you culturally happy having in that environment?
And so these things are really difficult. I think perhaps
the people who are doing it best at the moment
are maybe Singapore. Singapore is creating guidelines, and then they're
trying to work with the industry to say: are these guidelines correct?

(31:03):
You know, how do we modify them, how do we
improve them? And can we eventually turn these into regulations
that, you know, are universal and will actually work for us?
You know, you go to Europe, and they're sort of
going in the other direction, where they've come up with a
whole bunch of potential laws which they're going to enact,

(31:24):
and the reality is, what we're going to end up
with is these innovators who are going to be on
the front end of that, where suddenly, you know,
somebody's going to say: oh, you've crossed the line.
And they're going to have to work out: okay, how
do we back off from this? Have we crossed the line?
You know, a whole bunch of case law precedent is
going to end up being created. Unfortunately, it's probably not

(31:45):
going to be by the big tech companies, who will
just say: oh, it's our platform, but we're not responsible
for the end application. It's going to be some small,
you know, innovative European company that gets caught in the
crossfire on that, trying to come up with a useful application,
which somebody points out and says: oh no, you crossed

(32:06):
the line. You didn't protect people, or you didn't protect
their data, or, you know, whatever infringement. And
then we're going to end up with a whole bunch
of case law having to be developed, to actually work
out what really are the lines on these laws. It's
going to be difficult, I think,

(32:27):
to put this stuff in place. The reality is, you know,
and this is my big point:
it's not the machine. Don't blame the machine. It's the people.
It's the people who use the machine. It's the people
who develop the machine. You know, that's where the issue is.
You know, it's like, the UN since twenty twelve has

(32:48):
been trying to work on regulation that would stop AI
autonomously deciding to push the button to kill someone. As
far as I'm concerned, that's one of those black
and white ones that just makes perfect sense. We should
outlaw it; it shouldn't be allowed. Two countries have blocked it:

(33:10):
Russia and America. You know, why is it a good
idea that we have weapons that are going to decide
whether you or I should be killed? The reality is,
as Alan Turing said, if it's going to be intelligent,
it's going to make mistakes. So don't we want a
human in the loop on that, to be able to

(33:30):
hold them accountable? And, you know, you can't say... like,
was it Air Canada, with their chatbot that gave
away free flights? You know, who claimed: oh, it's not
us, it's the chatbot who gave away the free flights. No, no, no, no:
you gave away the free flights; it's your tool, you're responsible.
Somebody has to be responsible.

Speaker 1 (33:48):
Yeah, I'm thinking a lot about this topic.

Speaker 2 (33:52):
You know.

Speaker 1 (33:53):
I remember reading the AI Act the EU developed a
few months back, and then last week, I think, or
two weeks ago, there was the memo from the US government.
And every time I, you know, read this kind of,
you know, document, what I feel is that it's always

(34:13):
very difficult to find a balance between, you know, the
regulation on one side and the innovation on
the other. I think that we need, you know,
something from that point of view, but it's
really hard to do it without, you know, blocking the companies,

(34:36):
the researchers, you know, the innovators out there. How
do we find this balance, if there is a way
to find it?

Speaker 2 (34:44):
Yeah, I think it's hard, isn't it? But what you've
somehow got to focus on is what harm is being
done to people. And that's really where, you know, we
do need to take responsibility, and businesses need to
take responsibility. You know, if your facial detection system

(35:09):
that maybe is being used to process visas and paperwork
to be able to cross a border, you know, if
that doesn't recognize people of color properly, for example, and
pushes them to the back of the queue, you know,

(35:32):
then there's a bias in the system. And it's not
the AI that's biased; it's the fact that it's
been trained badly, it's been implemented badly. It's the people
who put that system into service that are to blame.
But, you know, if somebody is being held back
because of an AI system, you know, maybe loan applications

(35:55):
that look at where you live and make judgments
about you, you know, that have been built into the system,
then those are areas
which we've really got to look at and say: this
is wrong. And who are the people best placed to
do that? Well, of course, the people who are putting
those systems into service, but also the people who are

(36:17):
developing those systems. You know, it's a bit like a doctor.
We trust doctors to look after us. We don't
expect doctors to experiment on us. You know, they take
a Hippocratic oath, where, you know, it's do no harm. You
don't experiment on your patients. You don't try stuff out
on your patients to see whether it's going to work

(36:39):
or not. You try and use your expertise to minimize
harm. And so maybe, with the developers of these systems,
you know, we need to have more ethics in computer
science courses and AI courses, so the experts who actually
understand and are building these systems actually take a
step back and say: okay, you know, I'm really concerned

(37:01):
about how this thing is going to be used. And
then we end up in a whole debate around regulation
and control. But more and more,
you know, we're seeing situations where, even at
a country level, they can't afford to be in this
AI race, and they're becoming dependent upon the US or

(37:24):
China for this technology. And so you end up with
a sort of degree of colonization, technology colonization, which potentially is
going to go on as well. And it's probably at
the root of the whole battle between America and China

(37:44):
at the moment, where America is trying to limit China's
access to some of this technology, so that it is
in the driving seat of this and it has control,
you know, over what's going on. So I think we're
very much at one of those moments where, you know,
do we want a few billionaires to get richer, or do
we want eight billion people to get richer? And I think,

(38:06):
you know, that is something we've really
got to think quite hard about, you know, as AI
continues to roll forward from here.

Speaker 1 (38:15):
Yeah, I totally agree on that. Nigel, from your perspective,
and from your, you know, privileged position, what do you
think is going to happen in the next couple of years?
So, we are at the end
of twenty twenty four. What are the main trends that
we should expect for, you know, twenty twenty five and twenty twenty six,

(38:35):
what's going to happen in the AI world.

Speaker 2 (38:39):
Yeah, well, stealing this from other people: you know, it's
like, the AI you use today is by far
the worst AI you will ever use. And that's pretty clear.
Roy Amara talked about this idea that we tend to
overestimate the impact in the short term and underestimate the impact in
the long term, and I think that's very true. You know,

(39:00):
there's a sort of a level of expectation around AI
in the short term which is probably going to be missed, actually.
You know, there's lots of companies racing to think:
how can I apply AI in my business? And in some
cases it's perhaps too early for them to do it,
or they're going to use systems that aren't actually very useful,
and people will go: yeah, that didn't really work very well.

(39:24):
But then the risk is that they miss how it
might completely change their industry. I think a great example
is electricity. You know, electricity probably had the biggest economic impact,
you know, bigger than steam engines. And the thing that
made the difference was not light bulbs, which is what
electricity was originally rolled out to do, to remove these

(39:47):
smelly gas lights. It was actually electric engines, and the
fact that an electric engine could be put next to
a machine, and you can then put the machine anywhere
in the factory that, you know, it needs to go.
So you could then lay out the factory to match
the manufacturing process, and then, with replaceable parts, you got
mass production, and from mass production you got our

(40:09):
consumer society. And that period created the most economic wealth
of any period of time, because people completely changed the
way the process works. You know, they didn't just automate
in the way they did in the first
phase of the Industrial Revolution, where they just automated the machines,

(40:30):
the weaving machines, and replaced the humans to get a
productivity boost, which pushed the economic outcome, the economic value,
into the owners of the machines, into the owners of
the mills, and away from the workers, you know, who
became Luddites. And, you know, it's kind of frowned
upon as a word now, isn't it? It's kind of seen

(40:50):
as a bad thing. But actually, what they were just
arguing for was, you know: we want to get some
economic value from what's going on here. This second wave
of the Industrial Revolution, with electricity, created a much fairer distribution
of income. And it was interesting. You go back to the point
of the Wall Street crash, when round about twenty six

(41:14):
percent of all the world's wealth was in the hands
of one percent of the people, the top one percent.
By nineteen sixty it was down to, I don't think I
can remember the numbers exactly off the top of my head,
but round about half that amount was in the hands of
the one percent. It had been distributed much more fairly
across people through, you know, the consumer society

(41:39):
that sort of built up, which almost seems counterintuitive in
some ways. But, you know, wages got better, there were
better jobs generally everywhere, you know, wages went up and
more wealth was distributed. It stayed about the same till about
nineteen eighty, and since nineteen eighty it's been going in
the other direction. You know, computers have been used
primarily to automate parts of people's jobs, to reduce, you know,

(42:05):
take people out of the system. It hasn't changed necessarily how
people do their jobs or how the industries work. And
as a result, we're now back to twenty six percent
in the hands of the top one percent. So, you know,
is AI going to make that worse, or is it going
to be like electricity, where it allows us to completely
reshape how we do things, and, you know,

(42:30):
end up in a much fairer place? You know, I
travel a lot, and I go to an
airport, and it just frustrates me that, you know, oh,
there's a machine, you've got to check yourself in. You've
got to tag your own bag and put it on
the belt, and the machine never works, and it's the poor
operators then, you know, having to come over and help you.
You know, they've been automated out of a job, like
at the beginning of the Industrial Revolution. Why can't I

(42:53):
walk into the airport with a tag, you know, like
an AirTag type thing, in my bag? The airport
knows I've arrived; I just put my bag on the conveyor belt,
you know, and I go off and do my shopping. The
person in the shop points out: oh, Mister Toon,
your flight's just about to leave; isn't it time you
went to the gate? The whole point is that you could

(43:13):
change completely the way the process works, rather than automating
the current process. And people would have, you know, in
that environment, people would have much better jobs, people would
have much better experiences, you know. So we've got to
think about how AI can completely change the way that
we do things to, you know, create a fairer environment.

(43:36):
So I think that's a big responsibility, and
in some ways, it's sort of why I wrote
the book: to help people, everybody, understand more about AI,
so they can potentially have some agency to think about: okay,
how's this stuff going to be used? How's it going
to affect me?

Speaker 1 (43:52):
The thing that you describe is more a
change of mindset, I would say, before, you know, the
technological change. It's how we see the technology
applied in our world that has to change, right?

Speaker 2 (44:05):
Yeah. And, you know, we're sort of facing a
couple of really big transitions here, right? You know,
we're now at the phase where, certainly
in the Western countries, our populations are actually declining, people
are aging, and our birth rates are going down. And
that's something we've never seen before. You know, there's no

(44:26):
precedent for that. So what are the economic impacts of that?
How do we adjust to that new reality? The second
thing is, you know, we're throwing all this carbon into
the air, and we've got to stop. You know, you
can't just say: okay, well, let's not do any more
than we're currently doing. No, no, no. This stuff
takes thousands of years to secrete into the oceans, and you

(44:51):
know, it hangs around in the atmosphere for thousands of years.
So we've got to stop throwing it up there. We've got
to literally stop. And that's a massive, massive change that
is going to need technology to help us do it.
And then we've got AI coming along, which, you know,
equally, is like the engines that made us stronger. You know,
AI is going to make us cleverer. So let's make

(45:13):
sure that everybody gets cleverer. Let's make sure that everybody's
life improves as well. You know, we don't just
put this extra intelligence into the hands of a few,
which ends up meaning that, you know, your job is
replaced by somebody who knows more about AI than you.

Speaker 1 (45:31):
Yeah, I totally agree on that. Nigel, we usually end
these conversations here on the podcast with, you know, two
quick questions. So the first one is: do you have
any books that you want to share with my audience?
It can be AI related or not, so feel free
to share anything that's nice or cool or important

(45:51):
to read.

Speaker 2 (45:52):
Yeah, well, the one I read recently, which was really informative,
after Henry Kissinger died, was his book about the opening
up of America-China relations. And, you know, really,
there's a whole bunch of stuff in there that I
didn't know and didn't understand, and I found it, you know,

(46:15):
really interesting, especially in a world where we seem to
be going in the other direction, where, you know, America
and China seem to be going further and further apart,
and that is the potential cause of massive conflict in
the future. So I found that, you know, an
amazing book from an amazing person, who I once sat
next to on an aeroplane in Europe, you know, and

(46:39):
it was just by pure chance, and, you know, what
an inspirational person. That's an amazing
book that I would highly recommend. What was the other question?

Speaker 1 (46:51):
The other one is: do you have any tools that
you love and want to share with us? You
know, we are maybe a nerd community. Or any tools you
use day to day?

Speaker 2 (47:02):
My analog watch. You know, I'm kind of a tech
Luddite from that point of view, you know. I
like stuff that's, you know, mechanical. I like
old cars and mechanical watches and, you know, things that
you can get in and play with, you know. So
I actually like to be able to control things. Maybe
that's the engineer in me. I like control. I

(47:25):
like it. I like it.

Speaker 1 (47:26):
Nigel, any links that you want to share with us
before saying bye? Where can people read you, reach you,
follow you?

Speaker 2 (47:35):
Yeah. I put some stuff out on LinkedIn occasionally, so,
you know, follow my LinkedIn posts. I do have a website,
but, you know, I struggle to keep it up to date.
I must find a better way of doing that, invest
more time in it. Yeah. So, hopefully some exciting things
over the next year or so. So, yeah, it'd be
great to keep people up to date, and, you know,

(47:57):
see what comes next, you know. I think
AI is going to change a lot, you know,
for you and I. I'm sure we'll be doing very
different things in a year's time. Yeah. Yeah.

Speaker 1 (48:07):
I'll leave every link in the description below. Nigel, thank
you very much for your time. This was great.

Speaker 2 (48:13):
I loved it. Thanks, Raf.

Speaker 1 (48:16):
And if you enjoyed this podcast, don't forget to subscribe to the
channel on YouTube and to follow me on Spotify, so
you don't miss any future episodes. Thanks for tuning in,
see you soon.