Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Every episode answers the question "What's the future of...?", with voices and opinions that need to be heard. Your host is international keynote speaker and Actionable Futurist, Andrew Grill.
Speaker 2 (00:14):
Today's guest is Peter Voss, the founder and CEO of Aigo.ai. Aigo.ai created the world's first intelligent cognitive assistant, which currently manages millions of customer service inquiries for household brands. Peter is the world's foremost expert in artificial general intelligence, has been an AI innovator for over 20 years, and helped coin the term AGI.
(00:35):
Welcome, Peter. Thanks for having me, Andrew. Much to cover today on a topic that I'm actually fascinated by, and I'm going to learn a lot from you today, hopefully. Let's learn a bit more about your journey. Tell me more about your background.
Speaker 3 (00:45):
I started off as an electronics engineer and started my own company. Then I fell in love with software and my company turned into a software company. I developed various frameworks, including a programming language, a database system and an ERP software system, and that became quite successful. My company grew very rapidly and we actually did an IPO, so
(01:07):
that was super exciting.
When I was able to exit that company, I had enough time and money on my hands to really pursue something that had been worrying me, or interesting me, for a long time, and that is: how can we make software more intelligent? Because software typically isn't too smart. You program it, and if it hits some condition you didn't think of, it'll just give you an error, a
(01:31):
crash or whatever.
So I really wanted to figure out how we can build intelligent software. I took off five years to really deeply study what intelligence entails, starting with philosophy, epistemology, theory of knowledge: how do we know anything, what is our relationship to reality, what do IQ tests measure?
(01:51):
How do children learn, how does our intelligence differ from animal intelligence, and so on, and then, of course, also finding out what else had been done in the field of AI. After doing this for five years, I basically came up with a design for a cognitive engine, or cognitive architecture, and then started an AI company, hired about 12 people, and we
(02:15):
spent quite a few years just in R&D mode, turning my ideas into various prototypes and seeing what worked and what didn't, and then, over a number of years, we actually developed a commercial product from that, which we initially launched in 2008.
Speaker 2 (02:31):
I'm so glad you spent all that time understanding how we think, because I've always wondered, with artificial general intelligence, whether it's going to be closer to what a human can do, and we'll talk about that in a minute. It has to come from a deep understanding of how we think and act and feel. You helped coin the phrase artificial general intelligence, and I first came across it on the Gartner hype cycle. In the last few years they've started to put it on there; it's on that steep curve where it is a hype.
(02:54):
Some people have said we're 50 years away; I'll get your view on that in a minute. What is general AI, and how does it differ from the AI that we know at the moment?
Speaker 3 (03:07):
Yes, a good question. The term artificial intelligence was coined some 60-plus years ago, and the original intent was really to build thinking machines, systems that can think and reason and learn the way humans do, and that turned out to be really, really hard.
So over the years, over the decades, AI really morphed into
(03:31):
narrow AI, and really what we've seen for the last 50 years has been narrow AI. What I mean by that is: you identify one particular problem that sort of requires some intelligence to solve, but what's really happening is that it's the programmer or the data scientists who use the
(03:53):
intelligence to figure out how they can solve this problem programmatically. So it's kind of the external intelligence, the intelligence of the programmer or the data scientist, that really solves the problem.
One shining example is IBM's Deep Blue, the world chess champion. It was the ingenuity of the engineers that figured out how
(04:17):
they could use a computer to become the world chess champion. Now, in 2001, I got together with some other people who thought the time was ripe for us to actually get back to the original dream of AI, the original vision to build thinking machines, and so we actually wanted to publish a
(04:38):
book on the topic and put our ideas down. We felt that hardware and software had advanced sufficiently to go back to tackling that problem, so we were looking for a term to describe this general intelligence, and that's how we came up with AGI, artificial general intelligence, which is a
(05:00):
system that by itself has the intelligence embedded in it, so that it can learn how to solve different problems, adapt to different circumstances and so on, and that's basically what I've been working on for the last 20-plus years.
Speaker 2 (05:17):
So will we ever get to be as smart as a human, or is that an unfair question?
Speaker 3 (05:22):
Well, of course, in certain narrow things AI is already superhuman, but in terms of general intelligence, yes, absolutely. I see no reason why we couldn't build machines that will have the general thinking, learning and reasoning ability of humans. Absolutely.
Speaker 2 (05:42):
But where does it start from? For a computer to think... When I was at IBM, I'd often say to clients who expected IBM Watson to cure cancer the next day with a credit card that the AI we know about is like a twelve-year-old: you have to teach it. So if it is going to be an oncology expert,
(06:03):
you have to have the world's best oncologists teach it. But does general AI have to be taught by humans, or can it then... I just don't know. It's such a foreign concept for me, being able to think like a human. Where do you have to start differently with general AI versus the AI we're looking at at the moment?
Speaker 3 (06:21):
That's a very good question, and I think one of the reasons we haven't seen a lot of progress in AGI, in really having intelligent machines, is that most of the people working in the field are mathematicians, statisticians, you know, software engineers, and their approach is really that mathematical, logical approach, whereas to solve the
(06:43):
problem of intelligence you really need to start from cognitive psychology. You really need to start from understanding what intelligence is, what it entails, and then figure out how you can build a machine that has those capabilities. Once you build an AGI like that, in principle it could then hit the books and learn things by itself.
(07:04):
Now, it may need clarification. I mean, in the same way, if an intelligent person studies a new field, they might read a lot of books on the topic, but there may be some practical experience that they need, or some insights that are not explained in the books or articles they can find on it. So ultimately, that is how an AGI will learn, but you
(07:27):
kind of have a bootstrapping problem: how do you get it to be intelligent enough to be able to hit the books, to be able to learn by itself? And that is the task we are tackling: to build an initial framework to make it intelligent enough to be able to learn by itself.
Speaker 2 (07:44):
Yeah, that bootstrapping is what I've always worried about. How do you give it that push start, like a bobsled? The bobsled is a very fast piece of machinery which slides down the ice ramp, but you need to push it to get it going. How far away are we from general AI being able to emulate a human of any sort, even if it's a five-year-old or a 10-year-old or a 50-year-old? Are we 50 years away? Are we five years away? And will quantum computing help accelerate that?
(08:04):
Because it's just going to program things faster.
Speaker 3 (08:06):
I usually answer this question not in terms of how much time it will take, but more how much money or effort it will take, because I have seen so little support for it. The main reason, over the last 10 years, is that deep learning, machine learning, has been so successful. But if nobody works on AGI, we'll never get it. You know, people try to just continue with deep learning.
(08:28):
Machine learning will never get to AGI.
So it's really a question of how soon the tide will turn and more people will actually work on other approaches, cognitive architectures, and I can talk more about what others call the third wave of AI.
So it's only when we see moreresources being thrown at that
(08:49):
that we'll start seeing progress.
I think we could have human-level intelligence in less than 10 years if enough effort was put into it. I don't think there are any inherent hardware or software limitations that can't be overcome with, you know, some significant focused effort.
(09:09):
I certainly don't think we need quantum computing to solve this problem. Quantum computers by themselves are still very much... you know, I have a big question mark over them in terms of, ultimately, what will they really be able to do? What kinds of problems will they be able to solve effectively?
Speaker 2 (09:28):
So here's a question, maybe a meta question: why couldn't general AI work out how to make the fastest computer in the world?
Speaker 3 (09:33):
Well, it could, of course. It's like DeepMind says: their mission is to solve intelligence, and once you do that, it can solve all other problems.
Speaker 2 (09:44):
You say that we've got to throw a lot of resources and money at this. Again, having been at IBM, I've seen firsthand the first-rate research teams they have around the world. IBM is looking at this problem, Google, Microsoft. Who's going to win?
Speaker 3 (09:55):
What I see, sort of as a bigger perspective, is that humanity is going to win. But yes, of course there's competition between the enterprises. I don't actually believe that any of the big companies is going to win, is going to produce AGI, and the reason I say this is that they're like big oil tankers: they're not going to
(10:17):
turn around quickly, and all of the big companies are focused on big data, machine learning, deep learning. That's what they have; those are their strengths. They have a lot of computing power, they have a lot of data. But the people they hire, the top management, the whole teams, everyone: they're basically statisticians, logicians, software engineers,
(10:41):
and I don't see that they are going to start using the right approaches to solve AI. I think it's going to come from some startup company, in the same way that who would have ever thought that a little startup, Google, could dominate the search space? Or a little startup, Amazon, online retail?
(11:03):
I mean, there are many examples like that. The existing large companies are often quite blindsided by the changes that are required to open up a new market.
Speaker 2 (11:14):
Well, there are also commercial considerations. I mean, you've got to pay the bills, and if you're just doing research, it's hard to get that to market. Let me just go back to something you said at the beginning that fascinated me: in the five years you took off, you really deeply studied how humans work and think. You've spent the last 15 years studying what intelligence is and how it develops in humans. What have you learned, and what are we getting wrong as a species?
Speaker 3 (11:34):
It's kind of interesting, because I've also hired a lot of smart people on my team over the years, and the ones who ultimately can really help with AGI are those people who can think about the problem both as a cognitive psychologist, from a sort of cognitive psychology perspective, and also understand it from a software engineering point of
(11:56):
view, and put those together. Typically, software engineers aren't that comfortable with cognitive psychology, and vice versa.
It's really a deep understanding of what intelligence entails. What are the essentials of intelligence that you need to engineer into artificial general intelligence? And, you know, there are quite a number of technical things.
(12:18):
I'll just mention two of them.
One of them is the importance of concept formation and exactly what that entails. Humans are able to form concepts, and form concepts of concepts, basically abstract concept formation, and exactly what those concepts need to look like, or how they need to
(12:39):
function, I think is really important.
The second point is metacognition. One of the things I discovered in my work: I spent a year helping to develop a new, not really an IQ test, but a cognitive process profile test, and one of the things I learned there was that metacognition is incredibly important.
So that's basically thinking about thinking, or being
(13:00):
able to use the right cognitive approach for any given problem. Some problems require that you have a very systematic, logical approach. Other problems require that you have a more intuitive, sort of fuzzy view of them; they don't have a specific solution, and so on. So metacognition is really important.
(13:21):
So it's a number of technical things like that that I began to understand much better as I was researching this.
Speaker 2 (13:28):
So, thinking about thinking. One question I've always wondered about is: can machines have empathy, and could general AI learn to love?
Speaker 3 (13:36):
Very interesting question. Certainly they can have empathy, in the sense that a good psychologist can understand other people's emotions very accurately and respond appropriately to them. But the machines won't themselves feel that emotion like we do, in our gut or in our raised heart rate or whatever.
(13:57):
It won't be visceral to them. So they can certainly understand emotions and be empathetic in their responses, but it's not something they will feel, unless we went to a lot of trouble to actually give them a kind of body, or simulate a body, with all of the physiological attributes that we have in our emotional
(14:17):
experience.
Speaker 2 (14:18):
The problem, I think, with a lot of AI at the moment, as you pointed out, is that it's developed by programmers, and so there's a conscious bias that's built in. Where do you stand on ethics and conscious bias when it comes to AI, and will this become more of a problem in general AI?
Speaker 3 (14:31):
No, it will be much less of a problem, because AGI will be able to learn a much broader perspective and the reasons behind certain instructions or business rules that you might give, and be able to help us figure out better ways of being moral, of being ethical. So I think it will be a great help for us to think more
(14:52):
clearly about these things, to apply bias where bias should be applied and not to apply it where it shouldn't be applied. Yeah, AI will help us in really every aspect of life.
Speaker 2 (15:03):
Ultimately, once we get to sort of human-level AI and beyond, can the machines then maybe overrule the humans, to say: you're not being very fair there, Peter, you need to really think, because you've got your own bias there, and I can sense that, and I've looked at all of the other stats and you're not being very fair? Could we be overruled by the machines?
Speaker 3 (15:20):
Whether it's overruled... ultimately, when we design the machines, we'll decide where we want to have the final say or not. But yes, absolutely, an AI should alert us to aspects where we are going against the values that the system has been taught or has learned, or something that is inconsistent. So, absolutely, it will alert us to situations where we are
(15:44):
not being fair or rational.
Speaker 2 (15:46):
Let's just move on to what you're doing at the moment. We've talked about some theory; now let's talk about the practice. Tell me more about Aigo, and what problem are you trying to solve?
Speaker 3 (15:53):
We are trying to sort of bootstrap and get our system smarter and smarter. Obviously, that takes money, and there's actually another good reason for not just doing academic research, and that is that the practical experience you get by actually having a commercial product is invaluable. The first six years or so we spent pretty much in R&D mode,
(16:15):
and you kind of create your own problems that you then solve.
Once you have a commercial product, you have that really fantastic reality check of what the system really needs to be able to solve in reality. So having a commercial company as well as our own development allows us to basically do both.
(16:35):
Now, the commercial product we're focusing on is conversational AI, and there's just a tremendous demand for that in many, many different areas: really anywhere you want some kind of intelligent and/or hyper-personalized conversation. That could be in customer service, whether it's sales or support, whether it's for a
(16:57):
retail company or a financial institution or a cable company or whatever it might be. In all of those kinds of customer support, we can really offer a hyper-personalized experience, where the artificial agent will remember what the previous conversations were and what your preferences are. So you're not just a number, you're not a demographic.
(17:18):
You are an individual that is getting serviced. But there are also many other applications, such as in healthcare, for example, to help people manage diabetes, or to give a brain to a robot. If you have a robot in a hospital or a hotel, you want to be able to talk to it, and you expect it to understand you: go to the pantry, pick up this order and deliver it to
(17:39):
room 43 on the third floor.
Or in a hotel: bring me a shower cap, and tomorrow morning I want two eggs over easy. You want to be able to have those kinds of conversations. There are also applications in gaming and VR and AR; again, anywhere you actually have a natural-language conversation. Those are the markets we are addressing and commercializing.
Speaker 2 (18:01):
Peter, it's a fascinating area. I'm sure we'll hear much more about this and much more about Aigo. How can people find out more about you and your work?
Speaker 3 (18:08):
Our website, aigo.ai. Also, I've written quite a few articles on these topics. You can find me on medium.com; look for my name, Peter Voss, on Medium.
Speaker 2 (18:18):
Peter, thank you so
much for your time and thanks
for being on the show.
Speaker 3 (18:21):
Yeah, thanks for
having me.
It was great.
Speaker 1 (18:23):
Thank you for
listening to the Actionable
Futurist podcast.
You can find all of our previous shows at actionablefuturist.com, and if you like what you've heard on the show, please consider subscribing via your favorite podcast app so you never miss an episode. You can find out more about Andrew and how he helps
(18:43):
corporates navigate a disruptive digital world with keynote speeches and C-suite workshops, delivered in person or virtually, at actionablefuturist.com. Until next time, this has been the Actionable Futurist podcast.