Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jennifer Logue (00:10):
Hello everyone
and welcome to another episode
of Creative Space, a podcast where we explore, learn and grow
in creativity together.
I'm your host, Jennifer Logue, and today we have the pleasure
of chatting with Graham Morehead.
He's the author of The Shape of Thought: Why Your Brain Is So
Different from AI.
He is also the founder and CEO of Pangeon, CTO of Topology and
(00:36):
a professor teaching AI at Gonzaga University.
For over 25 years, Graham has been developing AI and machine
learning solutions to difficult problems.
His research has led to several TEDx talks, several tech
companies and hardware and software patents.
He is currently working to solve problems related to
wildfire, real estate, intellectual property,
(00:59):
linguistics and defense against AI weaponry.
I'm thrilled to have him on the show.
Welcome to Creative Space, Graham.
Graham Morehead (01:08):
Thank you, it's
so great to be here.
Jennifer Logue (01:10):
Oh my gosh, you
have such an incredible
background. First, I've got to ask: where are you calling from
today?
Graham Morehead (01:17):
Washington,
Spokane, Washington, actually within
walking distance of Gonzaga.
Jennifer Logue (01:22):
Oh, that's
wonderful. Cool. Do you spend a
lot of time teaching?
Graham Morehead (01:27):
I teach one
class at a time.
However, this fall I will be sort of teaching two classes.
We have a new program at Gonzaga for graduate-level
remote learning, so this summer I'm doing research with students,
as I always do in the summer, but also preparing for that
class.
Jennifer Logue (01:47):
And I got to ask
where are you from originally?
Graham Morehead (01:50):
Boston, Mass.
Jennifer Logue (01:52):
Nice, okay,
you're an East Coaster.
Graham Morehead (01:53):
Yep.
Born and raised in Boston. I did travel around a bit because
my dad was in the Air Force, so we spent some of my childhood in
England.
Jennifer Logue (02:03):
Oh cool, Very
cool.
So, an international upbringing. And who were your biggest
inspirations in your early years?
Graham Morehead (02:10):
So I was
growing up in a time when Boston
was one of the leaders in technology, the area around
Boston.
There's a highway called 128, and a lot of modern technology
was developed along that highway, and Bedford is smack right on
that.
So we had Hanscom Air Force Base, where a lot of research was
being done, right next to Lincoln Labs, MITRE, Raytheon.
(02:33):
All these different companies were doing really interesting
work, and all their kids would go to the same schools as me, the
public schools around there.
So I was more or less inspired by a lot of my parents' friends
(02:55):
and my friends' parents.
So in both respects, a lot of scientists and mathematicians
were just in the area, and that made it more difficult to be on
the math team, the science team, but also much more inspiring.
I just remember being exposed to a lot of interesting
mathematics early on, and that got me going down that path.
Jennifer Logue (03:15):
Wow, so math was
always a big thing for you.
Graham Morehead (03:16):
Oh yeah, I
think math is the way forward.
People are afraid of math, they hate math, but it really is the
language of the universe.
Jennifer Logue (03:31):
When you put it
that way, it's really
interesting.
Graham Morehead (03:32):
But they didn't
teach it that way, at least not
where I went to school.
No, they don't, and I think it's because it's hard to learn
and hard to teach.
Whenever I took a statistics class, I always felt like I was
learning real truth with a capital T.
Think about your instincts.
We're all afraid of shark attacks, and we're not afraid of
driving down the street to the store.
(03:52):
But way more people get hurt driving down the street to the
store than by shark attacks.
Shark attacks are very, veryrare.
Our fear systems are not based on the actual statistical likelihood
of something happening.
Statistics is a way to cope with the idiosyncrasies of how
our brains work.
(04:12):
We're not great at finding truth.
Statistics is a coping mechanism we use to find truth.
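A back-of-the-envelope calculation makes the shark-versus-driving point concrete. The figures below are an editor's illustration with rough round numbers, not official statistics from the episode:

```python
# Rough, illustrative round numbers only (not official statistics):
# worldwide deaths per year from shark attacks vs. road traffic.
shark_deaths_per_year = 6
road_deaths_per_year = 1_350_000

# Our fear systems treat these as if the order were reversed.
ratio = road_deaths_per_year / shark_deaths_per_year
print(f"Driving kills roughly {ratio:,.0f} times more people than sharks")
```

Even with the numbers off by a factor of two in either direction, the mismatch between fear and likelihood is several orders of magnitude.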
Jennifer Logue (04:19):
Well, that is a
whole other conversation.
That is fascinating.
Oh man, I want to study math.
I think a lot of us need math to cope right now, in this world
we live in, with all these new things to be afraid of, which
maybe we don't need to be so afraid of.
Graham Morehead (04:35):
We don't, but
there are things we should be
somewhat afraid of.
But maybe there are people who spend their lives not afraid, and
they have just as much chance of having trouble befall them.
So maybe it's better to live your life not very afraid, just
a little afraid.
Jennifer Logue (04:56):
Yeah, like
somewhere in the middle.
Graham Morehead (04:58):
Yeah, yeah.
Jennifer Logue (04:59):
Cool.
So when did you first get interested in AI?
Graham Morehead (05:07):
So as a kid, I
watched 2001: A Space Odyssey,
and they had the HAL 9000 computer.
I was fascinated by it.
I watched the movie many times, including the long version that
begins with them playing the entire Also sprach Zarathustra
and just 10 minutes of black screen.
(05:27):
You know, but I loved it, even as a kid.
I took a course in my last year of college.
I was studying physics.
I took an AI course just because I had to fill out my
schedule, and it was so interesting, and I remember
thinking: I thought this was sci-fi, only in the movies.
But this is real.
Turns out, AI started in 1956 as a real direction
(05:54):
of research, and before that it was already being done, but we
hadn't named it yet.
A lot of what we do in AI started with Alan Turing and his
work in the 1930s and 40s.
The Turing test, have you heard of that?
Jennifer Logue (06:12):
Oh yes.
Graham Morehead (06:13):
Yeah.
Jennifer Logue (06:13):
Actually I've
been watching a lot of the
lectures at the Turing Institute.
Graham Morehead (06:18):
Really.
Jennifer Logue (06:19):
Excellent.
Well, I wanted to learn more about AI, and they were really
helpful in giving me a big-picture overview, and I
didn't know that AI has been around for that long.
This isn't a new term.
This isn't a new thing, as yousaid.
Graham Morehead (06:34):
Yeah, the
Turing test is just one of the
many things he did.
He's well-known for that, but the Turing machine is much more
influential.
The Turing machine is a way to idealize what a computer does,
and you can describe it well mathematically.
This is not something we typically use in real life.
(06:55):
It's just a way to study computation in the abstract.
And the Turing machine was this idea that Alan Turing had, and
he was able to talk about computation in ones and zeros, as
a little device that just either reads or writes a one or
(07:18):
a zero in every place on this long tape.
And using this abstract idea, he was able to prove things
about computers, computers that didn't even exist yet, and
because of that work, we knew ahead of time that we could
have computers that do multiple things.
(07:39):
You have more than one program on your computer and your phone
and your everything, and no one was sure that was possible until
Alan Turing did this work.
There's something called the universal Turing machine,
and it's the idea that if you make your computer correctly,
then you can run any software on that one computer.
(08:02):
You just need to get the right software.
But that whole idea, that software and hardware can be
treated separately and you can run anything on one computer,
that's Alan Turing.
That's way back then.
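The tape-and-head device Graham describes can be sketched in a few lines of code. This is an editor's toy simulator, not anything from the episode: the state table maps (state, symbol) to (write, move, next state), and the example machine simply walks right, inverting bits until it hits a blank.

```python
# Minimal Turing machine simulator.  A "program" is just a table:
# (state, symbol) -> (symbol to write, head move "L"/"R", next state).
def run_turing_machine(table, tape, state="start", blank=" "):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = table[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)      # the tape is unbounded to the right
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Example machine: invert every bit, halt at the first blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_turing_machine(invert, "10110"))  # -> 01001
```

Swapping in a different table runs a completely different program on the same machine, which is the universal-machine idea in miniature: hardware fixed, software interchangeable.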
He also, separately (this is another contribution
from him), designed the machine that broke the Enigma,
(08:24):
the German communication device that the Nazis were using in
World War II.
So if it weren't for Alan Turing, you might be speaking
German right now.
Jennifer Logue (08:34):
Oh, wow.
It's amazing the impact one person can have on the future of
humanity, right? And I don't think the average person knows
Turing right now, apart from the Turing test, if they're even
following that at all.
Graham Morehead (08:53):
Yep, so I highly recommend The Imitation Game.
It's a movie
about Alan Turing, and Benedict
Cumberbatch plays him.
Jennifer Logue (09:00):
I've heard of it.
I need to catch up on my movies, but that's definitely on the
list.
Graham Morehead (09:07):
It's a worthy
movie.
Jennifer Logue (09:09):
For the actor
and also for Alan Turing.
Now, for people listening who may not understand AI at all as
a concept, how would you define it at a high level?
Graham Morehead (09:20):
There are
different kinds, and I'm going to
put them in different boxes.
Okay, so one kind of AI, the one that people are the most used
to and have been using the longest time, is the ability to
find a needle in a haystack.
That's what Google Search does.
How many websites exist?
Well, it's hard to count, right?
(09:42):
When you have something you want to know, maybe some website
you want to go to, or some company you want to find, or a
location, you go to Google and you ask, and it finds the
needles in the haystack.
The haystack is bigger than any haystack we can imagine, and
those needles are really hard to find.
Google is so good at finding needles in the haystack, and that
(10:07):
is a kind of AI.
Now, let's say you're going to go the other direction.
You want to generate a haystack from a needle, right?
So generating a whole lot from one small thing, that's another
kind of AI.
We can call that generative AI.
Jennifer Logue (10:23):
That's
generative AI.
What would you call Google?
Graham Morehead (10:26):
Google is the
other one.
I would call that search.
Jennifer Logue (10:29):
Search okay.
Graham Morehead (10:30):
Call it
search. So, going the other
direction, instead of finding one thing out of many, it's going from
one thing to many.
That's generative AI, and OpenAI has a number of products, and
there's a lot of these.
There are thousands of them now, actually, and a lot of them are
open source. With the text-based ones,
(10:53):
you enter a little bit of text and you get a lot more text.
In fact, you can keep going.
You can have it write books for you if you want.
Yep.
Now, I wrote my book myself, as a human.
I wanted a human effort, because I think that's more valuable
than ever: human creativity, human thought.
Images can also be generated by AI now, and they're both
(11:20):
trained in different ways.
So I'll talk about the text first.
The way they make these systems is they train them with the
word-guessing game.
Take a sentence like "Jack and Jill went up the..." Can you
finish that?
Very good.
Had you guessed wrong, I would have told you that was wrong,
(11:43):
and how much wrong.
So the AI has this vocabulary, all of English or whatever
languages it's being trained on, and it has a probability
for each word being the next
word.
And the way you train it is you give it a lot of these
(12:03):
sentences.
"Jack and Jill went up the blank," and it makes a guess and it
gets it wrong.
You tell it: nope, the right answer was "hill."
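The word-guessing game Graham describes can be sketched as a toy counting model. Real systems like ChatGPT use neural networks trained on enormous corpora; this editor's illustration just counts which word follows which, then guesses the most frequent successor:

```python
from collections import Counter, defaultdict

# Toy version of the word-guessing game: learn next-word counts
# from training sentences, then predict the most likely next word.
def train(sentences):
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def guess_next(counts, word):
    if not counts[word]:
        return None                 # never seen this word followed by anything
    return counts[word].most_common(1)[0][0]

model = train([
    "jack and jill went up the hill",
    "jack fell down and broke his crown",
])
print(guess_next(model, "the"))  # -> hill
```

The "how much wrong" part Graham mentions corresponds to treating those counts as probabilities and penalizing the model by how much probability it put on the wrong words.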
Jennifer Logue (12:12):
Does a human
tell it it's wrong?
Graham Morehead (12:15):
It's human text.
It learns on its own from human books and websites and podcasts
and everything.
Oh wow, yeah, in fact, think about this.
The average human is exposed to about 20,000 words a day.
Right?
AI like ChatGPT has been exposed to well over a million
(12:37):
years' worth of that, at 20,000 words a day.
So it's a lot of data.
It's most of the internet.
Wow, and what they did is they gave it full sentences.
They would black out a word and have it guess that word, do that
again and again and again.
It gets really good at guessing the next word.
(13:01):
Now, what nobody expected was that if you train it just on
that, it starts to do really well in general
understanding.
It seems to do well, and it does well.
Until it doesn't.
It has weird breaking points:
(13:23):
it hallucinates and it says weird things.
And the OpenAI company has hundreds, maybe more, maybe even
thousands, but they have hundreds of people working on
looking at the output and making it better, or trying to make it
better.
So they have this system, which they call RLHF, Reinforcement
(13:43):
Learning from Human Feedback, and all it means is they have
their original AI that generates text.
Then they have a bunch of people looking at that text, and
then they grade it.
They say good, bad, ugly, whatever, and then they're
training another model to imitate what those people say.
So there's always going to be a two-model type of system here.
(14:05):
Okay.
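The two-model loop Graham describes can be caricatured in a few lines. This is a heavily simplified editor's sketch: humans grade a few outputs, a "reward model" learns from those grades, and new outputs are then scored with no human in the loop. Real reward models are neural networks; this word-averaging stand-in is purely illustrative.

```python
from collections import defaultdict

# Toy "reward model": learn an average human grade per word,
# then score unseen text by averaging its words' learned grades.
def fit_reward_model(graded_outputs):
    totals, counts = defaultdict(float), defaultdict(int)
    for text, grade in graded_outputs:
        for word in text.lower().split():
            totals[word] += grade
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def score(model, text, default=0.0):
    words = text.lower().split()
    return sum(model.get(w, default) for w in words) / len(words)

# Step 1: human feedback (1.0 = good answer, 0.0 = bad answer).
human_grades = [
    ("the answer is correct and helpful", 1.0),
    ("sorry that is wrong and unhelpful", 0.0),
]
# Step 2: train the second model to imitate the human graders.
reward = fit_reward_model(human_grades)
# Step 3: the reward model now grades new text on its own.
print(score(reward, "correct and helpful") > score(reward, "wrong and unhelpful"))  # -> True
```

The key structural point survives the simplification: the generator and the grader are two separate models, and only the grader ever sees the human labels directly.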
So they train it on the word-guessing game, and it's amazingly
effective.
It knows everything on the internet.
Imagine a person who's maybe very
autistic, but they can remember everything they've read, and
they have read the whole internet. So they make
(14:28):
unexpected mistakes, but they can tell you so much, right? And
it is a little bit interesting when it makes mistakes.
They're very funny, and these are all mistakes that have
been published, so they've been fixed later, right?
Someone asked it: if nine women work together, how fast can they
make a baby?
And it said, well, one month.
(14:48):
And it showed the calculation, a full paragraph explaining why
nine women can each, you know, make one-ninth of the baby, and
you get that done in a month.
So it doesn't really understand it.
It breaks in funny ways, but it's still very useful.
Jennifer Logue (15:07):
I guess, with
the paradigm that AI has, I mean,
if it's starting to reason a bit on its own from different
things it's pulling from the Internet.
It's funny to see some of these hallucinations, you know, but
it makes you think: if someone landed on this planet
from another planet and had no concept of our paradigms, would
they make similar deductions?
Graham Morehead (15:29):
Jennifer,
that's the perfect way to say it.
A lot of people say it's an alien intelligence.
Jennifer Logue (15:34):
Oh really.
Graham Morehead (15:36):
Not actually
from an alien, but like an alien
intelligence.
Interesting. A lot of people have used that same analogy.
Jennifer Logue (15:44):
Oh, interesting.
Okay, yeah, it's so fascinating to me to think about how this
can develop, but that's a great explanation for people who may
not know what AI is.
Graham Morehead (15:59):
I think that
explains it at a really high
level.
So I want to add to that, though, the definition of AI.
Those are two ways to think about it.
Another one is general problem solving.
Jennifer Logue (16:10):
Ah, okay.
Graham Morehead (16:11):
Think about
your Roomba.
Do you have a Roomba or robotvacuum?
Jennifer Logue (16:15):
I do not, but my
roommate did last year.
Graham Morehead (16:19):
So it has a
problem to solve.
It has to cover all the floor area somewhat efficiently before
its battery dies, and make sure the floor is clean.
The way it solves that problem, different brands do
different things.
Some of them have a memory, some of them just have a nice
algorithm to make sure they eventually cover it all.
(16:40):
I have a Neato brand one, and it has a
memory, actually.
It remembers everywhere it's been, and when it's done it will
message you with the floor plan of where it was able to clean.
Wow, and it's sensing how much dust is coming up.
So if there's a lot of dust, it'll stay in an area for
(17:02):
a while.
So it has a problem to solve, and it has a number of steps,
like if-thens, you know: if this, then do that, right, or keep
looping here until something happens.
And coming up with the right algorithm to sweep your floor is
(17:29):
just one of the things that AI can do.
A Tesla doing full self-driving is one of the most advanced
systems that we have now, and even though it's not perfect, I
think it's really quite impressive.
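The robot-vacuum coverage problem Graham describes, visit every tile before the battery dies, can be sketched with the simplest memory-based strategy: a row-by-row (boustrophedon) sweep of a grid room. This is an editor's toy, not any vendor's actual algorithm:

```python
# Toy "cover every tile" strategy: sweep a rows x cols room
# back and forth, remembering every tile visited (the memory
# that lets a robot report a floor plan of what it cleaned).
def sweep(rows, cols):
    visited = []
    for r in range(rows):
        # alternate direction each row so the path never jumps
        cells = range(cols) if r % 2 == 0 else reversed(range(cols))
        for c in cells:
            visited.append((r, c))
    return visited

path = sweep(3, 4)
print(len(path) == 3 * 4)           # -> True: every tile visited
print(len(set(path)) == len(path))  # -> True: no tile repeated
```

The "dusty spot" behavior would be one more if-then on top of this loop: if the dust sensor reads high, revisit the neighborhood before moving on.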
Jennifer Logue (17:38):
Yes, and do you
think we'll have self-driving
cars as like a normal thing in a few years, or do you think
that's really far off?
Graham Morehead (17:47):
I think it's
pretty close, and it's got to be
10 times or 100 times better than humans before we'll let it
drive because, if you look at some statistics, supposedly
right now it's already safer than humans on average.
However, that's not good enough, because it will still make
(18:10):
stupid mistakes that a human won't.
Okay.
And I also think it's hard to know who to sue, and our system
has to know whose neck to choke.
You know, who do you sue, who do you get angry at? And if it's
just a computer program, it's hard for us to handle that.
(18:31):
We need to have a person on which to put the culpability.
Jennifer Logue (18:36):
I didn't realize
that that was part of the
problem with getting them on the road.
It's like, who do you blame when something goes wrong?
The system has to be ready for it.
Graham Morehead (18:46):
Insurance actually gives us a neat look into the mind of AI.
So when insurance gets involved, they need to have assurances;
they need to figure out how you do an audit if something bad
happens.
And there was a famous Tesla accident where a truck had
(19:11):
fallen over sideways on the highway, and a Tesla ran into it
at full highway speed without slowing down.
It was difficult to watch.
Now, it turns out the driver survived fine, no injuries,
even. I think Teslas are very safe.
But still, why didn't it at least pump the brakes, right?
And they went into the computer, and they could see that it
(19:35):
thought it was looking at an overpass.
It thought it was going to go under a bridge, so why stop?
It had never seen a truck sideways across the highway.
So obviously computers don't live lives like we do.
(19:55):
They don't think like we do.
They have to be expressly shown things. Just like with GPT-4,
it had to learn from over a million years' worth of English,
whereas a human can start talking pretty well after a
couple of years.
Right, and that makes our brain different.
Very different.
Jennifer Logue (20:14):
Yeah.
Graham Morehead (20:16):
So these
computers have to learn with a
lot more examples.
So with the Tesla, they train with millions of examples, and if
it hasn't seen something ever, it's going to get confused.
But what was important about that example was Tesla has a
system by which we can peer into the mind of the AI and see the
(20:37):
world that it sees.
And when you're driving down the street in the Tesla, you can
see it right in front of you.
When there's a car to your left, it recognizes that as a car
and knows exactly where it is at every second, and it also tries to
predict its movements.
Every car has momentum, it has a direction, and, yes, it can
(20:59):
stop and speed up and slow down and turn, but there are limits to
what it can do.
And if it recognizes a biker or a pedestrian, there are limits to
what they can do; like, a pedestrian is never going to
catch up to you at 60 miles an hour and pass in front of
you, so it can predict what the next step in time might be.
(21:21):
And it does that in this realm of these discrete objects,
D-I-S-C-R-E-T-E, and it's a certain kind of math.
Math is the way, remember.
So it's discrete: a discrete
(21:43):
person, a discrete bicycle, a discrete car or truck. These are
all discrete, individual things that act as one.
And this is how your vision system works too.
You're looking out into the street and you see, let's say I
see a cat walking by.
I think of that cat as a singular thing.
It's going to move in cat-like ways.
It's not going to turn into five mice and go in five
(22:09):
different directions.
It's not going to do that.
It's a cat.
It has to obey the laws of physics of cats, and a car has
to obey the laws of physics that apply to cars.
So when we resolve the world into discrete things, we can
much more simply understand them.
Let's go back to GPT for a second. When you have words that
(22:32):
come into GPT, they're discrete words at first,
but as soon as they go into
the computer, the computer doesn't read words.
It thinks about things called vectors.
This is some more math, so I'll try to describe it in a simple
way.
You think about coordinates, X, Y, Z, like space, right?
(22:53):
Here's one coordinate, here's another coordinate, here's
another one.
That's how GPT thinks about words: as points in space.
So let's say you have the word for cat, and then you have the
word for dog.
Now, where is tiger going to be?
It's going to be over here, much closer to cat than dog.
(23:14):
And then inside the mind of GPT there's a bunch of points, which
are these words, and it's moving them around and turning
them and squishing them and twisting them, and then in the end
it turns those back into words.
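The cat/dog/tiger picture can be sketched with toy word vectors. The three coordinates per word below are hand-made by the editor purely for illustration; real models learn hundreds of dimensions from data, but the geometry works the same way: nearby points mean related words.

```python
import math

# Hand-made, illustrative 3-D "word vectors" (real models learn
# hundreds of dimensions; these numbers are invented for the demo).
vectors = {
    "cat":   (1.0, 0.9, 0.1),
    "dog":   (0.2, 0.8, 0.9),
    "tiger": (0.9, 1.0, 0.2),
}

# Straight-line (Euclidean) distance between two words' points.
def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(vectors[a], vectors[b])))

# Tiger sits much closer to cat than to dog, as in Graham's example.
print(distance("tiger", "cat") < distance("tiger", "dog"))  # -> True
```

The "squishing and twisting" Graham mentions is what the network's layers do to these points before the final step maps the transformed points back to words.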
Jennifer Logue (23:31):
It's like an
association map almost.
Graham Morehead (23:33):
Yeah, it's a
big, squishy, high-dimensional
space with a bunch of these vectors, and it doesn't think
about words or meanings.
It just thinks about those points in space.
So when it's creative, it's creative in that goopy, fuzzy
(23:57):
word-vector space.
It's not thinking about concepts like you or I.
When we're creative, we're creative in the space of
thoughts, meanings, and ideas.
It's creative in the space of those points in space.
And another analogy I like to make is when you're making
shadows on the wall with your hand. You have
(24:20):
three-dimensional fingers, hands, and wrists, and you're using
that to cast a two-dimensional flat shadow, and you can see the
imperfections.
You can see, okay, I can see the wrist.
I can see that those were human fingers.
That's not a perfect rabbit or a perfect deer or a perfect duck.
(24:41):
It's got imperfections around the edges, because it was this
three-dimensional thing and you're just casting a 2D shadow.
Now, a three-dimensional thing can be viewed from different
directions, so if you cast a shadow on a different wall or in
a different direction, then you're going to see something
different.
(25:01):
You have this multi-dimensional thing going down into a flat
shadow.
Now, GPT lives in a very different kind of space than our
3D fingers.
I think it's actually more like a shadow generator
(25:23):
that's in a flat space already, so it can make a
perfect deer or a rabbit or whatever.
It's not going to have the same imperfections that the human
shadow would
(25:48):
have.
It's different, and because it doesn't have those imperfections,
it's also not going to have some of the same beauty I think
that we find in our ideas, and it's because it doesn't start
off with a thought. So you have a thought in your head, and that
thought is high-dimensional.
That thought has many ways you can look at it.
You could cast it on different walls in different
directions, and you choose one.
Yes.
(26:08):
And then the words.
Well, the flat word shadow, in a sense, that's the whole world
for GPT.
It doesn't have any thoughts
that it's turning into words.
Jennifer Logue (26:19):
It's just
playing the word guessing game,
not starting from ideas. And human creativity is so different, and
(26:47):
we'll get into this in the next set of questions, but for me, as
an artist, it's about the process. Like, it's not about the
end result.
Like, when I create art, I'm in the process, and sometimes the
process takes me down a different route that doesn't
make any sense.
Like, I might get inspired to go to a concert that changes how I
(27:08):
approach a song later, you know, and because I'm able to have
real-world experiences, it's gonna change the output.
And because GPT is feeding, AI is feeding on the
internet, things that have been created before.
It's copying, you know, but as humans, we're able to go out and
(27:30):
have new experiences, and the journey to the art is the most
important thing.
And I feel like a lot of people talking about creativity and AI,
they're looking at creativity as this thing you output.
Like, just because you generated a piece of art on DALL-E in 30
(27:51):
seconds does not make you an artist.
I'm sorry, like, there's something about the craft and
just the human process that I can't explain.
Creativity, human creativity, I don't think any of us can.
I think there are higher elements that we'll never understand.
Because, as you mentioned in your book,
(28:11):
the brain is a very elegant machine, and I mean, we're not.
There's so much we don't understand, and I think it's
hard for some to accept that there is so much we don't
understand.
Graham Morehead (28:25):
So I have a
question for you, Jennifer.
Sure.
When you're writing creatively and doing your best to come up
with something that can impact people or make them feel
something or learn something, what part of your process do you
think is you simulating the thoughts of your readers?
Jennifer Logue (28:45):
Simulating the
thoughts.
Graham Morehead (28:47):
Simulating the
thoughts of your readers.
Do you think that's in your head at all?
That you're trying to simulate how they would receive
this, what they would feel?
Jennifer Logue (28:59):
It depends on
what I'm writing.
Like, if I'm doing work as a
copywriter, that's something where you're thinking about the
audience more.
But when I'm writing as an artist, I'm expressing my truth
and my experience, and my attitude towards that is: if this
resonates with one person, I've done my job.
(29:20):
But my job as an artist is to express truth.
My truth, and everyone's truth is different.
We all have different experiences, but that's because
we're all in this world differently, experiencing this
world differently, and a chatbot can't do that.
Graham Morehead (29:36):
How important
is it to reach that one person?
Or would you do it the same even if it would reach nobody?
Jennifer Logue (29:42):
Even if it reached
nobody, I would still do it. Why?
You know, I don't know.
I wish I did.
It's like there's this pull to create, and if I resist the pull,
I go crazy.
Graham Morehead (30:07):
So that pull is
something computers don't have.
Computers don't desire, they don't want, they don't need.
Jennifer Logue (30:11):
That is an
excellent point.
It's like, where does that pull come from?
I can't explain inspiration.
And there's a wonderful book, I just went through this a
few months ago, I did The Artist's Way, I'm not sure if
you've heard of it, by Julia Cameron. Incredible book.
It's a spiritual path to higher creativity, and it's absolutely
(30:36):
wonderful.
But there's so much we don't understand about existence,
human existence. And yeah, I just find it funny that
so many people are trying to write off
human creativity as this thing that can just be explained in
equations and, like, algorithms.
(30:57):
It's like, no, come on.
Graham Morehead (31:00):
Yeah.
Another reason I think it's so much more is, first of all,
remember that space, X, Y, and Z, that GPT lives in, it's more
dimensions than just three.
It's many, but those dimensions are not inherently meaningful.
There are many points in that space that don't match up with a
(31:20):
word.
It's not really a meaningful space, and when it
explores that space, most of that exploration is not
going to map onto something interesting to humans.
We as humans, when we explore, we're trying out
(31:43):
different thoughts in our head, and every direction we go in is
a meaningful direction.
It's a direction along some axis, like: am I being creative
in the type of word I'm using, or the type of metaphor I'm making,
or in the type of relationship I'm
exploring between characters?
(32:05):
What is the direction I'm exploring? Whereas for AI,
it's just exploring in mostly meaningless directions.
Now, what's very interesting is that we've been able to make
these systems work so well that when they do come out with a
full output, we look at that as a human and we can imbue meaning
(32:29):
onto it, and we can look at it and think, oh, it is meaningful,
and we can feel something. But really it's more like an idiot
savant, that was the old term: people who could say things that sound
really intelligent, but when you further dive into it and
question them, you realize they didn't understand a single word
(32:51):
they said.
But a human who does understand what they say and explores
those ideas,
that's really interesting to us, because you almost feel like
you have a relationship with the writer.
Jennifer Logue (33:10):
Yes, there's
intention. You know, art's about intention.
It's not just, like, throwing stuff at the wall and, like,
hoping something sticks.
And hey, even if you're throwing stuff at the wall as
an artist, there's still some meaning there.
Probably you might be getting some emotion out that you didn't
expect.
But from what you're describing with that space, it just seems
(33:30):
very vacuous and, like, superficial.
You know, AI requires us to attach meaning to it, if any.
Graham Morehead (33:38):
Yeah.
Jennifer Logue (33:39):
There's no
intentionality.
Graham Morehead (33:41):
Yeah, and for
me it's not as interesting,
because when I do absorb some kind of art, literature, photos,
paintings, whatever, if there is art there, I feel something,
because I believe that a conscious human made choices, and
somehow I'm learning from their choices, and somehow it's a marker of
(34:06):
our time.
Think about The Lord of the Rings, a great series of books.
If it were to come out today as a brand-new series of books
that had never been written until now, people would think:
oh, that's nice, it's like a Game of Thrones repeat, good job.
The reason it had such an impact is because it was kind of
(34:31):
beginning a genre.
Now, it wasn't the first.
They stole a lot of stuff from Wagner's Ring, the series of four
operas about, you know, getting
the gold out of the Rhine, and one ring as power and stuff.
But it was the first big impactful thing in the English
language in that genre.
(34:52):
And he was really good at descriptions and world-building,
maybe some of the best world-building that had been done so
far.
If it comes out now, it's just a copy, but if it came out then,
then it's like, wow, this is new, this is different.
So, as an artist, part of your job is to say something that
(35:15):
hasn't been said.
Jennifer Logue (35:17):
Hasn't been said
and AI is not going to do that.
Graham Morehead (35:21):
AI is only
going to say things in between the
things that have already been said. It's a mashup.
Jennifer Logue (35:26):
It's a mashup.
Graham Morehead (35:27):
GPT is a
mash-up of everything
that's on the internet.
It's a mash-up of the whole internet.
Jennifer Logue (35:35):
You're not being
creative, not really.
You're putting things together, you're mish-mashing.
Yeah, mashups are a little creative.
Graham Morehead (35:45):
but that's old,
it can be.
Jennifer Logue (35:47):
But it's nothing
new, so it won't be as
impactful.
So what led you to devote your career to AI?
Do you want to talk about that?
Graham Morehead (35:55):
Yeah, it felt
super interesting.
That's what it was, that's allit was.
It was just super interesting.
I remember, three or four years after I graduated college (I graduated in 1995), I sat down with an advisor or friend who had done grad school
(36:17):
and he was a little bit ahead of me. His name was Mads. He sat me down at a restaurant and said, Graham, AI is dead, it's over. You shouldn't dedicate yourself to this, don't. It's dead, it's over. And this was probably, you know, '99 or '98, something like that.
(36:39):
And I remember thinking, I hear you, but it's too interesting. I still want to do this. And I don't think I responded to him that day, but I just decided, I still want to do this.
I grew up. My father was a neurologist and my mother taught languages
(37:01):
French, English and Spanish, and she studied Arabic too. And my dad was Air Force, so we traveled around. When I was a kid, language and the brain were two things that I was just so fascinated with. How does the brain work? How does language work? And I always knew that those were what interested me. But around '99, a little bit after that
(37:24):
conversation, I decided I want to study language in the brain, but I want to study it by making it happen, by doing it in a virtual brain, in AI.
Interesting, that's what AI is. And Richard Feynman used to say, that which I cannot create, I do not understand.
(37:47):
He said that about physics, like, if I can't generate this kind of particle or this kind of interaction, then I don't understand it. If I can't create something that understands language, then I don't understand language myself. So I'm trying to reverse-engineer language and its
(38:09):
meaning. Not just, can I ramble off words that in retrospect seem to make sense, but can I understand what sense it makes as the words come out? How does the brain
Jennifer Logue (38:21):
Do this? Now, are you doing this kind of work with Parsimon, one of your projects? How do you say it, I'm sorry?
Graham Morehead (38:28):
Parsimon. Parsimon, okay. All right, Parsimon. So Pangeon, our startup studio, is an effort to streamline the process of launching AI startups because, no matter what it is, linguistics, imagery, data analysis, behind the scenes, 75%
(38:53):
of it's the same.
We're trying to systematize that process, and one of those startups we're launching is called Parsimon. Think about it as a cortex layer on top of an LLM. A large language model is really good at generating text. It's not good at understanding. It doesn't quite know what it means.
(39:13):
I want to give you another example from the news. Air Canada put a chatbot on their website. So a customer came and said, you know, how do I get bereavement discount tickets? It gave them instructions. The customer followed those instructions. When they got to the airport, the human said, sorry, that's not our policy. No discount for you. But I followed your instructions.
(39:37):
And the airline said, we're not responsible for what our chatbot says. He sued them. And what do you know? Turns out, you are responsible for what your chatbot says. Yes, these chatbots are mindless, remember. They're very good at coming up with believable and understandable strings of characters, but in a mindless
(39:57):
way. So we want to make a layer that can understand, so it's not mindless.
Okay.
Think again about that Tesla example. They could figure out what happened because they could see the world from the mind of the Tesla, what it was looking at, how it understood the world.
(40:17):
Okay, that's a discrete car, that's a discrete truck, that's a discrete bridge. So, even though it got the wrong answer, at least they could peer inside the mind and see why. There was a why. Well, with Air Canada, I'm sure the executives called the tech people into the room and said, why? Let's look back in the mind of the AI and see what it was
(40:39):
thinking. Why did it give this answer? And the engineer probably had to say something like, heck if I know, it's just a bunch of numbers in there. Remember those points in space?
Yes.
I know, it's just a jumble of points doing their thing. There's no why, there's no true understanding. That jumble of points had jumbled around for a while and
(41:00):
then words spat out. That's why there is no good why, no good answer. So we're trying to be that good answer.
So we're trying to be that goodanswer.
Jennifer Logue (41:09):
Like a black box, almost, for the interaction. For now it's a black box.
Graham Morehead (41:13):
Okay.
We're making something that's not a black box. Oh, okay. So, the same way, when you're driving down the street in your Tesla, you see, okay, there's a discrete car on my left, there's a truck on my right and there's a pedestrian up ahead. We want to do that for words and language, so that as you're moving through a conversation, it will know what things mean
(41:36):
and where they can go.
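The contrast Graham keeps drawing, an opaque jumble of numbers versus discrete, inspectable objects, can be sketched in a few lines of code. This is purely illustrative (hypothetical classes and a toy decision rule, not Parsimon's or Tesla's actual internals): an embedding vector offers no explanation for a decision, while a discrete world model can answer the "why" question.

```python
from dataclasses import dataclass

# Opaque representation: just a point in space. If a decision comes out
# of this, there is nothing to inspect. "Heck if I know."
embedding = [0.12, -1.4, 0.07, 2.3]

# Discrete representation: named objects the system can reason over.
@dataclass
class Obstacle:
    kind: str      # "car", "truck", "pedestrian", ...
    position: str  # "left", "right", "ahead"
    moving: bool

scene = [
    Obstacle("car", "left", True),
    Obstacle("truck", "right", True),
    Obstacle("pedestrian", "ahead", False),
]

def should_brake(scene):
    """Return a decision *and* the reason for it. A reason is possible
    only because the representation is made of discrete parts."""
    for obj in scene:
        if obj.position == "ahead":
            return True, f"{obj.kind} detected ahead"
    return False, "path clear"

decision, why = should_brake(scene)
print(decision, "-", why)  # True - pedestrian detected ahead
```

The same idea carries over to language: if conversational state is held as discrete, typed meanings rather than only as coordinates, the system can report which meaning drove its answer.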
Jennifer Logue (41:37):
Okay, so in our earlier conversation, we were talking about how AI can't imbue meaning onto things. Now, with this, it sounds like it'll be able to. So should we be afraid?
Graham Morehead (41:53):
Not afraid. Okay, it does not feel, it does not want, it does not need. Consciousness is that which feels; consciousness is that which cares. A computer, no matter what it says, doesn't actually care if you turn it off. Well, Blake Lemoine. Oh, the
(42:14):
Google guy, yeah.
So first of all, he doesn't understand the mathematics of what he was talking about. If you got a million people with graph paper and pencils, and they did the math instead of the computer, you would get exactly the same output. Now, would Blake Lemoine say that graph paper and pencils are
(42:34):
conscious? Maybe he would. I disagree. I don't think pencils and paper are conscious, but they would give exactly the same answer as the computer would in his case.
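Graham's point here is that a model's forward pass is deterministic arithmetic: the same weights and the same input always produce the same numbers, whether a GPU computes them or a million people with graph paper do. A minimal sketch with a made-up two-layer network (the weights are arbitrary, and a real LLM differs only in scale):

```python
import math

# Tiny fixed-weight network: 2 inputs -> 2 hidden units -> 1 output.
W1 = [[0.5, -0.25], [0.1, 0.8]]
W2 = [0.3, -0.6]

def forward(x):
    """'Computer' version: compact matrix math."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

def forward_by_hand(x):
    """'Graph paper' version: every multiply and add written out
    explicitly, the way a person with a pencil would do it."""
    h0 = math.tanh(W1[0][0] * x[0] + W1[0][1] * x[1])
    h1 = math.tanh(W1[1][0] * x[0] + W1[1][1] * x[1])
    return W2[0] * h0 + W2[1] * h1

x = [1.0, 2.0]
print(forward(x) == forward_by_hand(x))  # True: same arithmetic, same answer
```

Whatever one concludes about consciousness, the computation itself has no hidden ingredient: it is the same answer either way.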
Now, the other way I can look at it is, if I wanted to get awesome press about my company, you know what I'd do?
(42:55):
I'd go to one of my engineers and tell him, okay, we'll give you a bonus, don't worry about this, but I want you to leak a story telling the world that our AI is conscious, and then we're going to fire you and pretend to have a cover-up. That would be the best marketing ever.
Jennifer Logue (43:14):
I mean, man, it must have worked if that was the case, hypothetically speaking. Yeah. So I wanted to touch more on human creativity and AI. I know we covered it quite a bit, but before we do that, what's your definition of creativity?
Graham Morehead (43:46):
Creativity can be in actual voice or words or images or theater or something else, like, you know, people who do art that's not easily categorizable. But you say something, and that
(44:06):
something is something deeply true to you, and it's never been said before. Maybe no one's even thought it before.
I love that line from La La Land: they show us what colors to see, give us new colors. Like, are there new colors that no one has seen before?
(44:30):
And once it exists, then that becomes a new touchstone, a new point from which we can build on even further. That's a really good point, like increasing your paradigm, like your palette. Yeah, when you see a piece of art, that can, like, expand your perspective.
Jennifer Logue (44:50):
Yes.
So, on that note, I know we talked about this before, but how would you say AI creativity is different from human creativity?
Graham Morehead (45:04):
So AI creativity? First of all, it's not a relationship. One thing that's really amazing about humans is we all have our own take on the world, on existence and what it means to be human, what the universe is, what they are. Everyone has their own take, and you perceive the world in a
(45:26):
totally different way. Like, if I were to actually be inside your brain somehow, magically, perhaps I'd look out at the world like, wait, the sky is red and the trees are blue. Like, what's going on? Why is your system different? Because everybody's experience is totally different.
(45:47):
Like, when you have an interaction buying coffee in the morning, things feel different to you than they would to someone else. When tragedy happens in your family, maybe, things feel different to you than they would to someone else. Everyone has their own unique view of life in the world, so they're able to say something that is expressive of that
(46:10):
uniqueness.
That's what the best of creativity should be. What it often is, is something different. It's often where people are trying to guess what other people would like to pay for, as a TV show or whatever, and they just do what other people are willing to pay for.
(46:32):
And 80, 90% of the content that we get is like that. I might put all the Marvel movies in that, and sadly Star Wars became kind of in that vein, with a few exceptions. But appealing to the broad base is not really creativity, not
(46:53):
at its best.
Real creativity should challenge you, should maybe make you feel things and excite that part of your brain that is only turned on when you see something you've never seen before, when you hear a word or a concept you've never thought about, or see a color you've never seen.
(47:14):
You know, that part of your brain should get turned on, and there's a reason that we all like that. We like hearing a story that we've never been told before, in a sense. Yes, maybe it's a different take on something.
Jennifer Logue (47:30):
Yeah, even being exposed to different colors. Like, I'm learning jazz piano right now, because I've always been a vocalist, and I've always played piano, but I never really dug into jazz chords and stuff. And as I'm playing with these chords, every time I learn a new one I'm like, oh my God, these colors, it's indescribable. And I listen to the old records and I'm like, there's so
(47:54):
much richness there. So, yeah, that's human creativity, the way we put those interesting things together. But AI creativity, as you were saying earlier, it's more flat.
Graham Morehead (48:11):
Yeah, it's flat. There's no drive behind it, there's no consciousness behind it. Now, just by chance, it will sometimes end up generating things that make us feel things. That says more about us. Mashups of what other humans have done will make humans feel
(48:31):
things, great, but that's not what art should be for.
What is art for? Art is about someone out there in the world deciding to dedicate themselves to art. Now, we already know everyone has a unique take on what it means to be alive. Some of those people, a small set, are going to make that their profession. They're going to be artists, or they're going to spend some of
(48:53):
their time on art. And those people who choose this for themselves, they're going to say, you know what, this unique take that I have on the world, this is worth telling others about, because I've seen something they haven't, and I want them to see it.
Jennifer Logue (49:10):
Yes. Now, something you said earlier made me think about this. In the algorithm-driven world we live in now, with social media, so many artists, you know, I get questions written into the podcast, like, you know, do I keep doing this? No one likes my posts on social media. And they're good artists.
(49:31):
Yeah.
And I mean, it's like the algorithms are almost deciding for us.
Graham Morehead (49:39):
Yeah, and that's messed up. And my biggest advice to everyone, and I'm trying to follow this myself, is figure out what you want to say, feel it strongly, and just say and do it. And it's not going to work most of the time, and that's okay, just keep at it. I like one thing that I remember Joe Rogan saying years
(50:00):
ago. He was starting out his podcast and it was like three and a half hours long, three hours long, and no one was watching it back then. And all of his advisors and friends would tell him, stop this, you've got to make it shorter, you've got to edit it down, no one's going to watch it. And his response was, well, fine, let no one watch it.
Jennifer Logue (50:19):
I love it.
Graham Morehead (50:21):
And because of that, eventually, the people who were going to watch it found him. I don't know all the parts of this whole puzzle, but I do feel strongly, especially in a world where good content can be generated at will, that what I'm going to seek out, and what maybe other humans are
(50:42):
going to seek out, are those interesting voices that actually say something they feel.
Jennifer Logue (50:49):
Yes, and you don't know when it's going to hit. You know, no one can. It may never hit. It may never hit.
Graham Morehead (50:56):
Even if you reach no one, you still want to say that thing you want to say, right?
Jennifer Logue (51:00):
Yes, because we have that drive to. We have to. It's like, the result is none of our business. Like, we can't be focused on results as artists, as creators of any kind. I mean, unless you're running a startup. I mean, then there's business involved and, you know, that's another conversation. Yeah. I wanted to talk about your book a
(51:21):
little bit. We have a few minutes now to talk about it.
Graham Morehead (51:25):
The book touches on a lot of the concepts that we've discussed today. How does AI work internally? Why is it so different from your brain? Some of the examples I gave today are in the book, but I discuss them in a lot more detail. But it's not technical. There's no code, just stories and thoughts and some pictures.
(51:49):
Oh cool, so it makes it really accessible to everybody. But it's all about the shape of your thoughts, and shape is what makes human thought different from AI thought. Oh, that's interesting. The mathematical shape.
Jennifer Logue (52:06):
Cool. Well, Graham, thank you so much for appearing on Creative Space. I learned so much from having you on the show, and I'm really looking forward to checking out your book. For more on Graham Moorhead, visit grahammoorhead.com. And thank you so much for tuning in and growing in creativity with us. I'd love to know what you thought of today's episode, what
(52:26):
you found most interesting, what you found most helpful. You can reach out to me on social media at Jennifer Logue, or leave a review for Creative Space on Apple Podcasts so more people can discover it. I appreciate you so much for being here. My name is Jennifer Logue, and thanks for listening to this episode of Creative Space. Until next time.