Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Episode ten. I was about to say further education, artificial intelligence and leadership, as per my usual introduction, but today we're broaching higher education as well. Welcome. Thanks very much, Richard. Welcome, Graham. Hi, Graham Bell from Cranfield School of Management. It's great to have you here. Thanks, thanks, great to be here. You'd better start by telling us a little bit
(00:21):
about you and the fine institution that you work for. Okay, yeah, no, great. Yeah, Graham Bell. I'm the Director of Digital Education at Cranfield School of Management, really helping faculty and students get the most out of the digital technologies that are either with us presently or emerging, as we'll no doubt talk a little bit about today. My team and I work on developing programs, largely in the online and digital space. It's
(00:44):
really about making sure faculty get the best support and the students get the best outcomes. That's what we're very much focused on. In terms of Cranfield, for those that don't know, it's a postgraduate-only university. We don't have any undergraduates at Cranfield, which makes life very different, I think, to many other kinds of university organizations. It's very much focused on management and technology.
(01:04):
Our core areas are obviously business and management, which is an area I come from. We're very much into aerospace and aerospace technologies, well known for that. And the other thing I should say is Cranfield, I think I'm right in saying, though happy to be corrected, is probably the only university, certainly in the UK, that has its own airfield. We have a working airport at Cranfield, which
(01:25):
again makes life really interesting, because there's a whole host of weird and wonderful things that often go on at Cranfield that you wouldn't normally see. We're into, as I said, technology management, but also things like autonomous vehicles. Space, we're quite heavily involved in space. Energy is another big sector for us. Water science and soil science. Typically what you find at Cranfield is that
(01:47):
we're involved in a lot of the grand challenges that are troubling us at the moment, climate change as well as the technology side, stuff that we'll no doubt talk about. I'm really keen to hear about the technology front. And then, in relation to AI, I know that you've talked about those things that are front and center to you as an organization, but how long have you been involved in AI and the features of AI and the integration of
(02:10):
the technology within the organization? And is it advancing rapidly? I suppose that's a key question. Cranfield typically, I think, is at the leading edge of some of these technologies as they begin to develop. Certainly the people that are involved in it, I've known many of them. I've been at Cranfield myself for nearly thirty years and I've known many of them for a long time. We get involved in these things as they emerge, really. And some
(02:35):
of the really exciting stuff that goes on at Cranfield is a bit tricky for me, because the Business School is not quite at the leading edge of technology in the same way. So we don't have the kind of autonomous vehicle test track in the School of Management that we have on the site, or even the Digital Air Traffic Control Centre that's now at Cranfield, which is again a really neat innovation, a really great facility that's been
(02:57):
implemented, and AI is really central to some of those research developments that are going on. So in terms of things like the air traffic control, for example, they're looking at how AI can be used to help the air traffic controllers make sense of the data that they have to process, and the speed at which they have to process that data. And of course with the autonomous vehicle test track that we've got, we're constantly
(03:19):
testing vehicles and working to see what might happen in terms of various different scenarios and how the AI responds to it. So there's a whole load of work going on in that space as well. And in the School of Management, where is it going there in relation to the technology? Yeah, obviously a big area of interest for me. Specifically, there are two different things I think happening. So
(03:42):
at the one end, you've got the senior leaders, top of the organization, that kind of thing, really worried about what it's going to mean for their organization. How do I lead in this kind of environment? What are the opportunities and what are the dangers involved? And just trying to get their head around actually how quickly this stuff is moving. Because somebody showed me an interesting graph the other day about learning as you go through your career,
(04:04):
and kind of learning is at itspeak when you're new into your career,
but then as you get further inthe amount of learning you undertake drops off
and drops off until you get towardsthe end of the career and it's virtually
zeros. I think sometimes people getvery concerned about how this technologies are going
to affect them and their organization,so it's trying to help them manage that.
And then at the other end,we've got students obviously coming in who
(04:25):
are probably, well, almost certainly, a bit earlier in their career, and who are saying to us, how do I make the most of this? What are the opportunities not just for me but for my career, and what should I be worried about? What we've got really is a bit of a squeeze in terms of the two ends of that. So different things for different people, but we're certainly hearing that AI is one of the biggest challenges that learning
(04:48):
and development people are identifying within their organization as hitting them over the next year or two. You talked about those who'd be younger in their careers thinking about the innovation. Are they thinking about how they might use it in everyday practice, about the opportunities for business? Yeah, it's a wide range of things, to be honest, and I think yes,
(05:10):
there's definitely the question of what's it going to mean to me as an individual as the technology grows. You know, to what end is it going to help us to achieve some kind of digital freedom against the kind of trappings that we might expect some technology, or have seen some technologies, give us over the years? And I think we're seeing some of that from people. So people are interested in what can I get out of it, how can I utilize it,
(05:32):
how do I use it within my daily practice, and what does good practice look like? I think that's something that we know. We're hearing a lot around the ethics and the philosophy of using AI, and that's becoming more embedded in our programs. But I think we also know people want to understand how they can make use of it in the longer term. And it's not just about day-to-day use, it's about how
(05:56):
they can make use of it within the organizations that they are currently working for, or likely to be working for in the future. I guess it's horizon scanning, isn't it? What does the future look like? We hadn't really dealt with too much of it, I think, in some of these podcasts that we've been doing. But that is a really interesting point about, as you say, chief executives and others thinking about: so what happens in five years and ten years, and what are all the things potentially that may come up
(06:18):
and be utilized, and how might we harness them? Absolutely. If I was the CEO of an organization at the moment, trying to predict five years out, ten years out, it's almost impossible, really, given the speed at which these things change. I think Richard and I were talking at one point not so long ago about how quickly ChatGPT burst onto the scene and suddenly became the
(06:40):
dominant thing that everybody was talking about, and in fact blew the metaverse out of the water, really, in some respects. Everybody was talking about the metaverse, and then all of a sudden that had gone and we were all talking about ChatGPT. But I think if you're a CEO, you have to have some plans around this. You have to be thinking, and you've used the term horizon scanning, about what this looks like in the future. I also think you have to think about your organization
(07:03):
in a different way. One of the things that I think is really important is that different people in the organization have a knowledge and awareness of how these different technologies evolve and move and are used and utilized. If you rely on the same people to give you your information all the time, then I think the chances are you'll either be left behind or you won't be understanding the full impact.
(07:25):
Taking it back to the metaverse as an example, I think if you expect your technology or IT people to tell you exactly where the metaverse is going to go, you might not be looking in the right place in your organization for all of that information. I think there are probably other people, maybe younger people, people interested in gaming or whatever it might be, that can give you a different steer. And it's not to say
(07:46):
you just rely on those people, of course, but I think there are probably different ways that this technology leads, or will lead, organizational leaders to need to behave. Tell me about upskilling. I mean, I guess, isn't it the fear of missing out? There'll be a lot of people in that situation right now, I'm sure, thinking, oh my gosh, what don't I know? So just running through upskilling, and thinking about senior leaders and how
(08:09):
they might upskill themselves and what they're considering. We are seeing from senior leaders at the moment this kind of need to almost consume as much information as they possibly can. It's difficult to know which is the right information, what to rely on, what to believe in. Particularly, again, if you set the two technology projects we've talked about, the metaverse and AI, against
(08:30):
each other, how much effort should be put into either of those? If I was a senior leader at the minute, I would be putting virtually nothing into the metaverse and everything into AI, for example. But who knows where that goes. I guess it's the sort of augmentation versus substitution question. What jobs may go completely, what may be augmented, and how might that be done? Whether actually things like the productivity of the individuals doing it doubles, or
(08:54):
their abilities widen significantly. I think we've talked about this before: if you don't really need strong maths skills to be an engineer, how many more engineers might there be, and how much better might they be at certain tasks as a result of it? And how many more creative people, perhaps not as good at maths, could suddenly fall into that space? And I suppose the cognitive load that you're removing from certain tasks could be very interesting, if you've thought
(09:18):
about that at all. I'd just be quite interested in your thoughts on augmentation versus substitution, because I guess as senior leaders we're really thinking about that. What we're looking at here is a situation where there's a huge need for senior leaders really to get a grip on what in the organization is likely to be affected. One of our faculty has spoken recently about Elon Musk and some of
(09:39):
Elon Musk's comments, for example, about the loss of jobs, and how perhaps it's not particularly helpful at this point in time to see it that way. As you say, it's about augmentation, really: how can we make sure that AI is really useful to people? And then, linking it back to perhaps the point you were making about FOMO, how do we make sure people aren't
(10:01):
scared of the technology? How do we make sure that people actually feel able to engage with it without it being problematic for them, or without them perhaps seeing themselves as being replaced by it somehow because they find that they do become more productive? What happens if I, in my role, suddenly start finding that I can do my work in half the time,
(10:22):
for example? How does that make me feel personally? But also, how is that viewed within the organization? Am I suddenly expected to double my targets or output twice the amount of work? Or actually, have I gained something, because I can now spend a bit more time researching something that I might be more interested in? Yeah, I'll give you a great example. In colleges, we have so much development work that we're doing on the systems that
(10:46):
we have. We have developers that we employ full time. We don't have enough of them. We simply cannot afford, within the funding parameters, really the bubble that we operate within, to have enough people to go at the speed we'd really like to, to be honest. If I had my developers working twice as productively, or even three times, or even one and a half times as productively, it would be the same as being able to employ twice as
(11:07):
many of them, which we simply can't afford to do now. That would be absolutely fantastic and amazing for the speed at which the organization can develop and improve. So that's potentially what it can do, isn't it? Rather than replacing jobs or losing jobs, the hope would be that the organization becomes that much more responsive, capable, and able to improve faster. Absolutely. And then it
(11:31):
comes into things like the responsibility of the organization to support people and to encourage people to see it as a good thing and to treat it as a good thing as well. I feel that there's almost an overhanging threat sometimes with these technologies that kind of sits there above people's heads: is it going to replace me, or isn't it? And it almost stifles, to some extent, the uptake. I think we can do more to
(11:56):
encourage people to understand that these are good things, things that people can use and get more out of. To the point about what more we could do with it, one of the things I'm really interested in, to support our programs, is how we might develop AI to be like a learning coach, some kind of focused, almost like a member of faculty, but a learning assistant, that kind of thing, where it
(12:18):
knows what our curriculum looks like, it understands the material that we deliver in a particular module, and it will allow the student then to interact with that content specifically and have a conversation about that content. I always use the example that I really struggled with economics when I started my MBA. I'd never studied it before. It was not something I'd
(12:41):
had a particular interest in before, but I really struggled just to get the basic concepts around it into my head, and I turned to podcasts, funnily enough, and that's how I really learned, outside of faculty. Some of our faculty members might not be very pleased if I said that they didn't teach me. They did, they were great, but I was really struggling. You needed the individualized, that personalized thing,
(13:07):
right? Yeah, to take personalized again: our academic faculty were great. They provided lots of online material for us to study, but it just wasn't working. I turned to podcasts, and actually, in the end, the podcasts really helped me to get to grips with it, which was fantastic, really great, and I enjoyed the depth of the material and stuff like that. But I think if you've
got that kind of learning assistant, that learning coach that you can fire questions
(13:30):
at, essentially have a conversation with, that's one of the great things about these GPTs: you can build on those questions. There's also another pressure, of course, which is that a lot of education now, on things like FutureLearn and Coursera, is provided free anyway, so there is that pressure. We've tried to find the middle ground of that. What we've created is a really high-quality product, and we've tried to find a way to make that accessible, essentially. And
(13:54):
you mentioned Coursera; then there's LinkedIn Learning, of course. They're all no-cost options that may or may not become more of a competitor to what you offer. But AI itself, of course, becomes a competitor. People are building business plans now with ChatGPT, marketing plans, the list goes on. What's the value of a low-cost course with Cranfield versus actually just going straight
(14:16):
in yourself and planning and chatting with an AI model? That's a really interesting point. I suspect the truth of that answer is in the fact that a well-planned and well-constructed learning product guides people through a process. It helps them to look at the material that's relevant, takes them through a process which has
(14:37):
got a stated outcome. And that's not to say that you couldn't personally do that with an AI, for example. But I think that's probably a lot more challenging for individuals: to think, what do I need to ask it? What are the outcomes that I need to get out of it? And therefore, how do I plan that backwards? There's quite a lot of thought process that probably has to go into putting the right prompts in to get the right outcomes. I
(15:01):
think probably the value in it is that you've got something that's been constructed professionally for you, that you can rely on, that you can trust. One of the great things about being a university, still, is that we have that level of trust from people. People understand that a university comes with almost like a quality-education badge on it. And yes, you could, no doubt you could do it personally. I use Chat
(15:24):
GPT sometimes just to do some basic research and stuff for my PhD. But it definitely doesn't replace some of the really interesting conversations I have with my supervisors, for example. There's an opportunity there, I think, and we can probably do more to help people to utilize that, but I don't think it's a replacement. We've talked about AI
(15:46):
being used in a number of ways. One, we've talked about it as a competitor to what you're offering, and then we talked about it as a coach that would go alongside your courses. Now, to have an effective coach would mean uploading a lot of your personal Cranfield data into those platforms to be able to have this coach operate effectively. My question really is about what level
(16:08):
of risk do you see to Cranfield going forward? If you wanted to have that personalized coach, you'd need to put a lot of work in there: a lot of marketing, a lot of the pedagogy, you'd need to put in interviews with lecturers and all sorts of things so it actually knew how to coach in the style and at the level and quality of Cranfield the institution. And that means putting a lot
(16:30):
of data into this. Do you think Cranfield would have the appetite to do that? Yeah, there are so many questions around privacy and data and that kind of thing. I think it's probably way too early for us to be thinking about how we might do that, but we are starting to have more conversations around what tools we might want out of this, or to look at in the future.
(16:52):
If there were to be, let's say, a big company like a Microsoft to create something where you could trust where the data is going to be housed and how it's going to work, I can see us definitely stepping into that space to utilize it. But in terms of building something ourselves, I think you're right. There's definitely a kind of concern around where we use data and how data is put into some of these systems. We certainly hear
(17:17):
that from some of our clients, that they are concerned about how much AI might be being used for certain things, and those conversations would need to take place with those individual clients. But just to come back to you for a second: how much is the sector at risk if it does this? We know a lot of artists are at risk. Sites like Shutterstock are at risk, although they are now signing deals with OpenAI, because the large tech companies,
(17:41):
we will see whether it's legal or not, harvest all this data and then produce models that can recreate it. What do you think could happen if universities started really putting in a lot of the proprietary information about how universities function and run, and then, whether it was hacked or whether big tech decided to utilize it, suddenly you've got an alternative that's been built out of the data that you provided? Interesting concept. I think if
(18:03):
somebody is capable of telling me how universities operate and work, well, thirty years working in one and I still struggle to understand it a little bit myself sometimes. Yeah, again, I think you're right. I would say we're not risk averse in the sense
(18:25):
that we would only go in at the back end of all of these things, but we're certainly going to be very cautious about how we approach putting data into, and utilizing, some of these tools. We want to make sure that our students get a great experience; that's first and foremost in our minds. But we also want to make sure that we operate efficiently. I think those are the areas where we're likely to target, use, and learn from these things.
(18:48):
So some of the more dreamy things, like career coaches and things like that, are probably much further down the line, and may well be solved by other people before we even get to join in. But that's not to say that we wouldn't still consider our own AI projects. For example, if something came up that we thought would be really important, I think we would definitely consider it. But I think we're much
(19:11):
more likely to pick up on the mainstream university products than to find Cranfield forging its own path down something, I think. I guess it's the business-sensitive information that's going into these LLMs, and people are considering that, and educational organizations are thinking about that and protecting some of it. But actually the students aren't worried about that, and I wonder how much it's going
(19:33):
in accidentally, actually, because of course students are taking that material and using it in those models to inform their work, just as you said: I'm looking to go into detail about some particular area, which they perhaps are feeding into these things. So I wonder how much of it actually is happening by default in the background that we hadn't really even considered, and by the time we have considered
it, too much may already be in there. Yeah. I'm not sure how true this is, but I'll recount it as I was told, and somebody might be able to tell me whether it's one hundred percent true or not. But one of the things I heard had happened was that a consultant had been asked to work on a bid for a particular piece of work, and had quite a bit of corporate information that had been given to them
(20:18):
to be able to prepare for the bid, fed it into ChatGPT without really thinking about it, and then essentially one of the corporate company's competitors, in doing some competitive research, came across a lot of the sensitive corporate information, which alerted them to the fact that there was a bit of a problem. That might be one of those urban myths that flies around and people hear about. But I think corporates are aware that there
(20:38):
is a danger, and certainly I guess that's the same problem with student work as well: if students put things in, that obviously causes a potential issue. But again, one of the things that we're trying to do at Cranfield is to raise people's awareness of the dangers, I'd say dangers, but really the potential problems with it, while trying to do so in a way that doesn't frighten people but makes people aware of the challenges. And again,
(21:00):
I think that's part of our role in the education sector: to try and help people to understand those issues. Sure. I'm keen on just one more question, really, which is this: from the point of view of senior leaders in the education sector right now, what would you say is the most important thing for them to consider, right now, in relation to
(21:22):
AI, of all your thinking? Engage with it, that's the big thing. I think one of the things that I'm really pleased about at Cranfield is that we've not taken a negative approach to it; we've seen it as an opportunity, as something that we need to engage with, that we need to have some kind of guidance on, but also something we need to find ways to support our faculty in particular, and students, to make use of. I really think it's a
(21:45):
case of trying to find good use examples, things that are positive, that improve the student experience, perhaps. That's another thing, I think, in the university space: if you can do something that the students think, yes, this is a good thing and it's really helping us, then it quite often brings people along with you. I think that engagement piece is a big thing.
(22:07):
It's certainly where we're going to be spending our time over the course of the months ahead, really: trying to engage people in what's happening and in what kind of positive use AI can be. Overall, it sounds like your institution's very positive, yeah, about seeing this as an enhancement to what's possible with the faculty
(22:32):
pedagogy. Absolutely. And I think one of the great things is there are some faculty in the business school (again, there'll be people outside the business school doing things that I'm not aware of) that are doing some really interesting things with AI. We're working with it already to help us to build learning programs and things like that. I've got one member of faculty in the Business School who is using it to help support
(22:52):
a debate. He uses a debate as a way of helping people to understand resilience, organizational resilience, and they use AI productively to help them to prepare for those debates. They remove AI when they get to the debate, but they use it to help them structure their arguments around the debate. There are some really interesting uses out there. Yeah, we're totally positive about
(23:15):
the opportunities, obviously very aware of some of the challenges around it as well, but we want to try and help make sure that, again, our students and our faculty are able to make use of the tools and the technologies that are there. Fantastic. I wish you every success with that. Indeed, thanks both. It's been great to talk to you both. Thanks very much, Graham, it's been great having you today.