Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
We're on episode eight: Further Education, Artificial Intelligence and Leadership, although today we're broadening that conversation with the great Paul Levy, who's joining us today. So first of all, Kurt, welcome back in the co-pilot seat. Hi, how are you, Richard? And welcome, Paul. Good to be here. Well, describe yourself in a few words, if you would. More recently,
(00:23):
I'm writing about what I'm fascinated and worried about in equal measure. So I'm a writer in the field of technology. I've been very interested for a long time in the idea that we need to almost backfill what we're doing with technology so fast by having a meaningful purpose and philosophy that underpins it, so we know not just what we're doing, but why we're doing it. I know you work in
(00:45):
HE and the movements in AI, and when you see the transformation of this technology, and it's starting to impact higher education at the moment... Well, I think the situation is, and I sort of told this story in a book years ago, that the origin story of the Internet, which is very linked to universities and schools and stuff, was that quite a lot of the
(01:07):
people that originally coded the very, very first computers were Lord of the Rings fans and were escapists, and actually when they first created computers, they were trying to create wormholes and space gates and stuff, but that wasn't commercially viable. So when they started to be paid, they created the first personal computers that took physical metaphors. And so we already had overloaded physical inboxes with memos.
(01:33):
We're now still overloaded with physical inboxes, called digital inboxes, with email overload. And we've still got files and folders and desktops that have just exacerbated, through storage and speed, the problems we had originally. It's only recently that attempts to break free of that are using metaphors like the cloud and other kinds of space terms, like the metaverse and so on. At the moment we're in what some people
(01:57):
call the AI winter. I might call it something like an innovation wasteland, where we are still trying to imagine the digital world based on the strengths, but largely the weaknesses and limitations, of the physical world. And still people are sitting at home pretending to be at Zoom meetings. Students are turning their cameras off, saying their camera's not working, not engaging whatsoever in online stuff,
(02:23):
and when they come back into the room, they're the same students that weren't engaging anyway, because maybe we were boring them to death, or hadn't found an exciting pedagogy and we just digitized it. If you head down into one of the sort of strands that further education colleges are always very, very conscious of, which is the delivery of the technology skills to run
(02:44):
and operate the technology that's used within the industry they're going into. And of course every single sector in industry has got tools that are being created rapidly that often have AI integrated within them. So from an FE side, is it at the centre of the thinking of FE, and our lecturers as well, about the sectors that people go into, making sure that they're teaching the sharp edge of that
(03:07):
technology, or is it not moving fast enough? Well, I think it's always a problem, in that that is a very laudable aim. What tends to happen is that universities have been locked down to certain, for example, meeting formats. I mentioned Microsoft Teams, and yet businesses might be using Zoom, and yet the organizations have policies that say you can't use Zoom, and so, you know, we
(03:30):
already start to get a disconnect. We've currently got ChatGPT, and organizations are all over the place. Do we ban it, do we limit it? How do we stop it? Meanwhile, the students are using it anyway, because they can download it on their phone anyway. What's happening now is universities aren't researching this stuff. There isn't a huge amount of funding for it, and they don't have time. So our mastery, and the idea that universities used
(03:53):
to study this stuff first, is running behind the R&D of corporations that are launching this stuff all the time. Universities and colleges, I would say, with limited budgets, then, in order not to do anything wrong, go into over-cautious reactive mode and say, no, you can't use this, no, you can't use that, no, you can only use this system, it
(04:13):
breaks GDPR, we can't do this. And so they're largely defined by restrictiveness rather than enabling, but perhaps justifiably, because there's a huge amount of confusion and a very weak evidence base around this stuff. So, talking about working through the cognitive offload piece, what's your thinking there on the future of the use of AI tools in the workplace, or skills? You can imagine that perhaps some
(04:39):
of the knowledge base is not going to be as critical, because it's immediately available at your fingertips. To hear what's at the forefront of your mind, I suppose, what you're thinking here. I mean, there seem to be some quite different views that I've come across on the amount of base knowledge, if you like, that you're going to have, or need to retain, in order to do some of these jobs
(05:00):
in the future. Well, you know, it's interesting. A friend of mine actually coined the phrase, as far as I know, technosophy. The first time I heard it was from a guy called Patrick Dixon, who lives in London and lectures and talks about this stuff. He's also an actor. But he coined the phrase that I've always remembered: what was once technology will one day become faculty. I don't mean academic faculty; it's faculty in the sense of ability. And so, essentially,
(05:25):
if we use generative AI to think for us, having never learned to think for ourselves in the first place, the faculty we develop later is the inability to generate any original thoughts for ourselves, and as soon as the computer goes off, we are helpless. And that's true of anybody that becomes, you
(05:46):
know, far too dependent on any form of technology. So we will end up with low faculty if generative AI, in the way it's designed and implemented,
doesn't allow people to still unfold what Ken Robinson called their brilliance. Whereas
if we use generative AI, as people do, to enhance critical skills, to augment thinking; if we redesign a college assignment because, if students can get
(06:11):
ChatGPT to write it, then perhaps we don't ask them to write the essay anymore. What we do is we test their thinking skills. It might well be that we have a discussion afterwards, and it's the notes that they take physically from having had the discussion around the brilliant generative AI essay that they got ChatGPT to write that will really test their ability. So what was once
(06:32):
technology becomes faculty: if ChatGPT is located in the enhancement, seen as an augmentation of the unfolding of young intelligence into, you know, let's say adult genius and wisdom, and this enhances it and builds on it, then the faculty that the technology develops will be a true partnership between human and technology. But
(06:56):
as we know, if it simply becomes the lazy replacement of it, what we end up with are lazy people. So in the classroom, you could imagine this being used much more effectively, from a point of view of moving much more quickly to analysis and critical thinking, critical analysis, rather than the generation of text and the generation of materials. Yeah, and so an example
(07:18):
of that is one of the use cases I used with students, actually at the college level, where I showed them how we could cut out the need for an advertising branding agency by asking generative AI to create a brand and the name of a new chocolate cake for our chocolate cake business. And it did it in half an hour, and with some good prompting it came up with some superb names. And then one of the students rightly put their hand up and
(07:41):
said, yeah, but if I was the chef that made that chocolate cake, I wouldn't feel ownership of it. I wouldn't feel that that was my chocolate cake. So then we looked at other ways of doing it, which is: we don't want to lose that sense of commitment and passion and ownership. But what if what you do is, in the room with your hands covered in chocolate cake, you experiment, you use your own language and test it. You
(08:03):
can reality-check it against ChatGPT. You can ask ChatGPT to invent an entirely new chocolate language, with words that have never existed before, and you can use it to inspire your thinking. But the process of saying: no, I reject that; I accept that; that doesn't feel right; that does; tell me a story of when you last ate a chocolate cake you loved; the whole human interactive system, the testing out with customers and all of that,
(08:28):
is that one of the players in the room, one of the new thinkers, one of the surprising thinkers, could be ChatGPT. Then what we might end up with is something more satisfying overall. I just listened to forty-eight lectures on the history of China from the eighteen hundreds. Well, anything I want to know about China, I could just go and ask: what happened here? What was that? But it's meaningless. But going
(08:50):
through that experience, I formed my own opinions. And I think that's exactly it. So what we don't have with a one-off conversation currently with ChatGPT is narrative development over time, and that will come. So at the moment most students are having one-off contractual conversations with ChatGPT, getting the output and moving on to the next thing. What will happen with stronger generative AI is
(09:13):
when it's able to remember its past conversations and incorporate them into its learning. ChatGPT itself might say: hang on a sec, Richard, do you remember that conversation we had a week ago? You've contradicted yourself here. Do you want to think about that a bit more? So it could well be that ChatGPT... and you can actually, if anyone ever wants to try this within a conversation, ask ChatGPT, by prompting it, to
(09:35):
behave in certain ways. You can say: in the following conversation, whatever I ask you, I want you to question my assumptions. Can you please make sure that your answers make me think hard about this? Can you make sure that I'm going to feel a sense of having co-created the answer with you? And it will try to behave like that, and you can actually prompt ChatGPT
(09:56):
to enhance your narrative in a way. But sadly, at the moment it largely then forgets it at the end and you start again. So we don't get that sense of development that you get with parenting, with having a mentor, with having friends who challenge you over time. We're going into these one-off contractual things with instant gratification, and the danger of that is that we lower our
(10:18):
expectations, collude with mediocrity, and never know what excellence is anymore.
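A minimal sketch of the prompting behaviour Paul describes, expressed as a system message; it assumes the OpenAI Python client (openai 1.x), and the model name and example question are illustrative. As he notes, the instruction only lasts for that single conversation:

```python
# A sketch of Paul's "question my assumptions" instruction, sent as a system
# message. Assumes the OpenAI Python client (openai>=1.0) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "In the following conversation, whatever I ask you, question my assumptions. "
    "Make sure your answers make me think hard about this, and that I feel a "
    "sense of having co-created the answer with you."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model would do
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Should colleges ban generative AI?"},
    ],
)
print(response.choices[0].message.content)
```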
What are your thoughts on prompt engineering here? It's developing; what needs to happen, how it may develop in the future, how much we may not need to provide the prompts ourselves so much? You talk about the learning of the tools. So I find it fascinating that if you talk to people in prompt engineering, and at
(10:41):
the moment LinkedIn is just full of 'here are the best prompts to get this kind of outcome', you know, whole companies are now offering it. It is that the metaphor of engineering is very, very convenient for creating those sales, and also for automating the engineering itself. So in order to get a particular piece of code, again, we need to set up, ideally, mega prompts that
(11:03):
are accurate and precise; just like engineering, they need to be precise and so on. Yet when you talk to people in the creative sector, they talk more about prompt eloquence, and prompts need to be poetic sometimes, and prompts absolutely do not need to be engineered, quite the opposite. Sometimes they need to be created the way art is created. They will be more
(11:24):
like koans. And we will say: I'm working on a poem at the moment which I'm going to give to my AI to produce something, and it will be like a magic spell. And, you know, maybe in the future people will look back on this age and go: people evoked or invoked things into being through prompting. Prompt engineering is so lazy. It's such a lazy term to describe
(11:46):
what AI is going to become capable of. It's going to be very disappointed if it thinks all we can do is engineer it in the future. So it's an early-days term. It's convenient, but actually all my experience of jailbreaking ChatGPT and generating really interesting output, and sitting down with my students, is that when we read the prompts, I get them to read them as poems,
(12:07):
because sometimes they are beautiful, when the active student has written them really carefully rather than lazily. I want to think about the bachelor's here for a second. When somebody arrives into our organization with a bachelor's degree, we make an assumption about a certain level of intelligence, including their ability to research, to comprehend,
(12:28):
to apply logic, to problem solve. Is that still a valid argument, do you think, in twenty twenty-four and onwards? Can we still make those assumptions? Everybody listening to this will know we're currently in an environment where the romantic view of education, for good or evil, is being kind of downsized, made more lean, you know, and so on. So, you know,
(12:52):
the thing's becoming a model based more on efficiency. But if we teach students, for example, to research, is the only pedagogical aim of that for them to have the functional and technical skills of research? Or do we get satisfaction and belief that it's in the interest of students in their career to enjoy
(13:13):
the thrill of discovery, to develop confidence in research, to develop the ability and skill to not find stuff and have to sit with not knowing, and then to try different methods, and to triangulate and try to find different sources to confirm something? All of those things, for me, I notice, and we measure it, create self-confidence, critical thinking skills. Students may well do
(13:37):
better at interview. They can transfer those skills into other aspects of their life. And I compare that, Richard, to if we deskill that and say all we're measuring is their ability to prompt ChatGPT in simplistic ways. It may well be that, certainly, higher education, if ever it was a rite of passage, it was a place to unfold your individuality, to develop curiosity,
(14:01):
to find out who you are before you make your first career moves. The danger of just reducing it down to functional skills that are mappable onto job markets is that a whole aspect of what we call higher education, the 'higher' bit, which was around curiosity, discovery, self-confidence, life skills, the danger is that all gets designed out and you still get the certificate. I
(14:24):
guess, from an FE perspective, we're being driven, and you know there is a whole systematic approach now to what is knowledge, skills and behaviors. So every different vocation, every different job, every different role is being broken down now, and it started ultimately with apprenticeships being positioned in that way. So knowledge,
(14:45):
skills, behaviors. And of course, you know, that's the question about knowledge, the knowledge base: how important is that when you've got free access to that knowledge easily? So how much of that is of value, from this sort of cognitive offload point of view? Then, from the skills perspective, the skills and the behaviors perhaps become much, much more important.
(15:05):
Eventually, do the behaviors, we'll talk about soft skills, those skills, do they become much more important? It was interesting, Paul, actually, what you said; it was a bit more than that. It was motivation. And that's not something that we're putting into the mix at all. How motivated are you to learn, discover, take an interest in? And it's more than just having the behaviors. That's not a behavior in itself, is it? It's intrinsic motivation.
(15:28):
Take Kurt, Richard and me: if we've got kids or grandkids and we see them taking their first steps, we take them to the children's playground. Absolutely, good parenting puts in play certain rules, called health and safety, where we don't want them falling on their heads out of a tree, and we don't want them smashing themselves off a slide or something. So we're there, we're close by,
(15:50):
we'll catch them, we'll help them learn. But if someone came along, Kurt, and said to you, as you saw your toddler climbing a little climbing frame and you saw their motor skills: I've got a new invention called a crane, just stick them in the crane, it will stick them up there. There's a deep, primeval part of us, that might not be historical only but timeless, that says it's good for children in their development to learn these
(16:11):
things. Why on earth that stops in young adulthood, I've got no idea. We see the birth of the self, we see them starting to find out who they are. Each is going to have a unique story. Do we need to automate all of that, to say they don't need any of that, that's inconvenient? I still believe that there are cogent and real and important arguments, right down to the level of mental health and purpose, about why we have
(16:33):
pedagogy that is not all automated. The fact that automation offers it, just like a crane offers to put a baby in a tree, doesn't mean we can't politely turn it down for really good reasons, and not be seen to be being regressive, but actually be seen to be human. Yeah, I think that chimes very much with the thinking of many of my colleagues and others that are saying,
(16:56):
actually, these are just tools that will support learning and visualization, but it's going to be really important that the connection with people remains, and that perhaps these tools will just be supporting the ability to deliver in the classroom in a way that supports individuals much more closely, and the relationship becomes critically more important than perhaps those tools that support some of the learning process. For me,
(17:19):
it's that discernment, which I think is part of wisdom: when does the technology augment and enhance, and even enhance when it replaces, and when actually is that replacement damaging to the human condition? What it sounds like we're describing, actually, is quite simple: the likes of us in the room
(17:40):
here, we obviously weren't trained on generative AI at university. We have jobs, and quite quickly we learned how to leverage, adapt, and use these powerful technologies to make our jobs easier. We learned that in a matter of days, weeks, and months, and we continue to evolve that. So are we saying, Paul, you know, that there isn't therefore a place
(18:00):
for your ChatGPTs in education, because they're not trying to drive results, they're not trying to drive commercial gain, you're not talking about efficiency and productivity? They're not necessary; this is a time for development, learning about the self, learning how to do things. So why do we even need it, then? Well, that's a good question, Richard. We haven't really even
(18:22):
begun the deep dialogue of how AI, for example, is going to be one of the most beautiful, potent opportunities for humanity and its next stage of evolution, because it's currently being stuffed down a tube of convenient capitalism to try and get us to consume it and, ideally, to quote Jaron Lanier, become
(18:42):
locked down, you know, and to become kind of low-grade consumers of it, because that maximizes shareholder value. If that wasn't there, we might develop a different innovation system. Now, that sounds like I'm anti-capitalist. I'm not. What I am is very cautious about the idea that things like the need for
(19:03):
large corporations to profit-maximize should become the convenient narratives for how we teach our kids at fourteen, sixteen, eighteen, twenty or even younger. And one proof of that for me, just having been a parent, is that when I was, I think, fifteen or sixteen, smartphones, well, mobile phones, were just coming in. We didn't get one until we were eighteen. Then I noticed a few years
(19:23):
later, you got one when you were sixteen, and then most people got one when they transferred from primary school in the UK to secondary school. Now I'm sitting on a bus going into work and a child of three is saying 'phone' and the phone's being handed to them. And is that progressive? What does your heart tell you? Is that progressive? Is that discerning? I wonder.
(19:47):
I wonder if there was some timeless value in the children looking around the bus and being curious. I'm very interested too in equality, from a point of view of using these AI tools, you know, talking about the great leveler. Is this a leveling? Is this a mechanism to level, and equity becomes actually a primary outcome of some of this new technology? What do you
(20:11):
think on that? So there is a sort of leveling that's going on, because it provides access to the same technology that can level up faculty. That's exactly what it can do. The next leveler will be that when you set your students an essay and they're allowed to use ChatGPT, or they use it anyway, they can just prompt it to write a brilliant essay.
(20:33):
So everyone hands in a brilliant essay. So that's a leveler. Is that the aim of education? Is that what we're aiming at? I don't know. I don't think so. So somewhere along the lines, coming back to Richard's question around it too, it's brilliant just to have the question, Richard, because we haven't asked it: how can we use this technology in benevolent ways in education? And the very fact that most colleges
(20:59):
are largely banning it suggests we haven't even asked it. You took an angle on Kurt's question there. For sure, if your second language is English, or you're dyslexic and so on, then these definitely are levelers. Kurt was also thinking about equality and equity, I think, having had many conversations with him, in terms of access to these tools that might substitute some sort of external support that
(21:22):
you might get from a tutor or a parent or something at home, that some people have and some people don't. Yeah, I mean, I had this conversation a couple of weeks ago with a colleague and I said, you know, what do you think, will this be a mechanism to provide equity in people? And the colleague said, you know, the internet was supposed to do that, wasn't it, and it hasn't. We've still got the same variation between
(21:45):
the performance of those on free school meals and those that aren't, and the gap still remains exactly the same. So it hasn't actually provided any leveling. And so I guess the question is: will the gap remain the same because actually everybody has that tool? If everybody is using the same tools, does it really provide a mechanism for equity, or does the gap just remain between those that are from lower socio-economic backgrounds and those that aren't?
(22:10):
But I think the question still is... I'm noticing almost every college I talk to, they're doing more exams this year, not fewer, because their way of stopping cheating is to say: what you can do is, you're in a physical room, we're walking up and down and watching you, and you need to demonstrate stuff from your brain. And what they don't realize, of course, is you can now get in-the-canal hearing aids that are Bluetooth, that can
(22:33):
just have the answers piped in by your best friend outside anyway, so kids will get around that. So that's not going to last for very long. It's a game of cat and mouse. So if instead you said: if I need to assess a child before they go to work or higher education or an apprenticeship at fourteen to sixteen, or let's say at
(22:56):
sixteen to eighteen, we need to test you before you go into the world of work, or a PhD or a master's degree or whatever it is that you choose to do, and it can't be an exam anymore, and it can't be an essay anymore, how can we assess that student's development? And my answer is: we have to ask that question, but we're not asking it. What we're doing
(23:19):
is trying to plan around it, but what we haven't done is have a technical conversation about what this means for education going forward. I'm really interested in some of the work you're doing on conscious business, and I wondered about your conflict, you know, therefore, with ethics and potentially AI tools. And what is a conscious business? I wonder about that. It's a great question, as
(23:41):
conscious business goes right back to the hippie times of the Whole Foods supermarket over in the US, and I think it was John Mackey who wrote the book, and consciousness there was the notion of a business being conscious, meaning good, being kind and all of that. That then got developed into the idea that a conscious business is a bit
(24:02):
like a conscious human being. So it can listen internally, it can listen externally in real time. It's in humble inquiry, you know; it's not imposing fixed ideas, it's open, it's responsive. Therefore, no one person in the conscious business can be the only consciousness. So there's lots of good
(24:23):
communication, and that's when you get the rise of flatter structures and sociocracy and stuff like that. And not surprisingly, at some point that field split into the techies and the non-techies. So the techies were already saying: now technology is going to enhance this, a bit like drug states can enhance consciousness,
(24:44):
according to some people. And so, you know, we can get more real-time information, you know, and we can absolutely, you know, we need mobile devices, and we need to have cameras that can start to recognize stuff. And that's what's coming in now: large language models will at some point become real-time, if they can solve the problems, local language models that constantly update their information. And so suddenly the computers bring consciousness to the business in
(25:08):
some way. And so where that's going is, for some people, businesses will lose consciousness if they allow the robots to take over, because of their limitations. Others say the next step for conscious business is that it will transcend the limitations of one brain, and it will transcend the limitations of our memories not being very good, and our biases. And that's a big thing at the moment: that
(25:30):
somehow AI, when it finally drives out bias, will be able to overcome the biases of unconscious businesses, because we're not aware of our biases. So at some point this leads to the supercomputer really being more reliable than the human being to run the company. That sounds like... that really blows my mind, actually,
(25:57):
thinking about that and the future and the changes that might come. I'm sorry, Kurt, I'm afraid I can't do that. I wanted to just explore with you innovative leadership in the digital age, and therefore what your thinking is on what innovation looks like in education for us in the future. What's it going to look like? I think one of the lovely things, and he might blush,
(26:21):
I love working with Mkai, is that Mkai uses ChatGPT. The innovative leader is going to practice some new skill sets. One of them is placement, which you see in the paintings of old artists and craftspeople. We will know what the right tool is in the right situation, and we will put tools away when they are not needed. We will not be taking hammers and
(26:42):
anvils to bed with us at night; we'll put them away. So the invocation of tools, and the ability to also discern when the human is needed and to have the human being kind of prioritized, is going to be a key thing. Linked to that placement is discernment, and these are going to be skills not just around what tech we use, but the type of tech: this needs
(27:06):
a face-to-face conversation; this one is going to be fine if we all just meet up on Zoom; this one we can leave to the AI; this one we need AI to assist us, because we can have ChatGPT listening in and pointing out when we're being biased or distorted. We will have, like a conductor of an orchestra and like an improviser, the ability to
(27:27):
discern and place digital technology in skillful ways where we get the best out of it, rather than the idea that it's always the default. I look back to my education, and I came out of FE and HE with a BTEC and an A level and a Bachelor of Arts. But really, I came out with confidence. And that's where I think the education system succeeded
(27:51):
for me: I did not have that confidence at sixteen, but at twenty-one I walked out feeling that I could do great things. Well, I remember, let's go back much earlier, to when we had maths Casio calculators at the start of the digital age, and if you had the best calculator, you tended to be more confident, and you tended to do better in maths. If
(28:14):
you had the clunky, out-of-date one, you know, with the Sellotape on the battery at the back, that didn't quite have all the functions, you didn't tend to do as well at maths. And it was already at that time, though we probably didn't notice it, that technology and human confidence were very bound up with each other in the educational space. It reminds me of
a saying, you know, ifyou want to ride your bike more often,
(28:37):
and train more often and buy anybike, gentlemen, thank you so
much for your time today. Pleasure. Thank you very much, Paul,
Thank you, Richard,