
February 1, 2024 • 35 mins

This time my guest is an old friend, Ray Fleming from InnovateGPT, who is co-host of the AI in Education podcast. And this is a joint podcast where we had a chat about the future of education in the age of AI.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello and welcome to another episode of the Data Revolution podcast and welcome to my

(00:17):
guest today, Ray Fleming.
Well thank you for the welcome but I actually want to say welcome to the AI in Education
podcast and welcome to my guest, Kate Carruthers.
We've known each other for a long time and I would love us to spend some time talking
about both data and AI.
That sounds fabulous.
Thank you for having me on your show.
Well Kate, tell us a little bit about you.

(00:38):
So I'm the Chief Data and Insights Officer at the University of New South Wales in Sydney,
Australia and I'm also the Head of Business Intelligence for the UNSW AI Institute which
is a research institute.
Wow, so using data in the business and using data for research.
Yep.
Wow.
So everybody on the AI in Education podcast probably knows me way too well, but for your

(01:01):
listeners I am the Chief Education Officer at a company called InnovateGPT.
Dan Bowen and I host the AI in Education podcast, which we started about four years
ago before it was trendy.
It was way before it was cool.
And we really focus on the use of AI in education.
My background is I've always been in education technology and now I'm in a startup learning

(01:25):
how we turn generative AI into successful businesses in Australia.
It is a very interesting area and very topical, and my podcast is really just about anything
about data, because I genuinely think that data is in a revolutionary phase at the moment,
and every single change that's happening in the world of technology is being

(01:47):
driven by data.
How very dare you.
That's what AI is doing.
That's what I tell everybody that AI is doing.
But I guess I put it into the context of the potential for it to change businesses and
business processes and industries.
But it's all driven by data.
Okay.
Do you think data is different in an AI world?
I think that the data that drives AI is a lot of the same data that's driven our traditional

(02:13):
operational silos that's been unlocked and set free by the technology.
So one of the things I've seen is that I've worked on a lot of education AI projects where
a huge amount of the effort and resources has gone into cleaning the data, making it tidy,
making sure it's accurate, you know, removing all the 9999s from the student postcodes.

(02:35):
That's because I've told you this.
So universities were told that where they didn't know the postcode for a student, to put
it down as 9999.
And then a few years ago, they discovered that 9999 is an official Australian postcode
for the North Pole.
So I'm imagining all these letters to students going to Santa at the North Pole every year.
Well, I don't think we're sending paper letters to students nowadays anyway.

(02:58):
But it is about, you know, making sense of the data. Step one seems to be, oh well,
let's get it clean and make sure it's entirely accurate before we can do anything with it.
A lot of places seem to start there.
But for us, we've already had our data in a cloud-based data platform for a number of
years.
So we've kind of already gotten through all of that.
We're doing the interesting end of that piece of work.

(03:21):
We're doing a whole lot of AI-related proofs of concept at the moment, which don't have
that problem.
And do you think it's because you spent your time cleaning and tidying your data that
allows you to do it?
Or is it a cultural way of thinking about it?
We've done a lot of focus on getting the data right at the point of entry.

(03:41):
What's important is, rather than just constantly fixing data, we've actually
worked with the student people to say, let's get address validation in place.
Let's have the right postcode from day one.
We've done a lot of that work.
And, you know, that's over a long period of time.
But the other thing that we've done is get our data into one place so that we can mash

(04:04):
up different bits of it.
We've kind of broken down those data silos that live in the different enterprise systems
and we've been able to bring them together.
I don't think education is any different from other big organisations, in that the data belongs
to lots of different people at different times.
Yes.
And so how have you coped with that problem of different people owning their own data?

(04:28):
We've got a well-established data governance program and we've got the role of data controller.
We don't call it a data owner.
One of our law professors came and argued against having data owners, so we changed
them to data controllers.
But we've got people who are responsible for the data, who get to say what it can and
can't be used for and have to agree to the purposes for which people want to use it.

(04:51):
That's a pretty well-established way of working here at UNSW.
The interesting thing I find is, if I think back on projects I've worked on in education
where it's been about data and AI, often a large part of the project budget is being
spent on collecting and cleaning the data.
And often, sad to say, people run out of energy or budget by the time they get to apply it.

(05:13):
Often, I've seen scenarios where they build a fantastic model that's predictive of something.
Student dropout, for example.
But there's only one person in the organisation that understands it and only one person that
runs that model and they can only afford to run it once a month because they've run out
of budget.
I mean, it sounds like your focus has been pulling all that data together, and that's

(05:35):
great because you're ready for the AI wave.
And we've been working with our colleagues who do a lot of that modelling and getting
them into the cloud and getting them familiar with how to run it in the cloud.
So they used to run Excel models where they'd press the button on a machine and then go
away for a day and hopefully it wouldn't die.

(05:58):
And that was the old world.
And now it's in the cloud where we scale up, run it, and it's a nice world for them to
be in.
But it took a lot of socialising and chitchats to get them to trust that they could do this
stuff.
What about somebody who's coming into it afresh now?
We'll talk about that AI perspective on that in a minute.

(06:18):
But if somebody's listening to this and they haven't been able to bring all those things
together already, do they have to go through the long journey that you've gone through?
It depends whether they actually want to do it at scale and across an organisation.
If you want to do it at scale, you've got to put in the hard work about getting the
decision-making rights agreed and all of that.

(06:40):
Because I actually started with data governance.
We had four separate consultants' reports that all said, you know, data warehouses are fine.
Why don't you do data governance?
So I did data governance first.
And it turned out to accidentally be the right thing to do because we had all the discussions
about decision-making rights.
And by the time we came to build our cloud data platform, we'd already had all the fights.

(07:02):
It was a very easy implementation and a lot of people are trying to do the two at once.
And that's a lot more challenging, I suspect.
I think one of the reasons I'm so excited about what's happening with generative AI is
that I feel there's a different definition of data that we're now thinking about much more.

(07:22):
First of all, unstructured data.
But the second part of it is the data that we can infer from unstructured data.
You know, I'm pretty comfortable to give ChatGPT my resume and ask it to tell me what
soft skills I've got.
And each time it gives me a pretty good answer.
I will say it's never consistent.
I don't ever get the same answer twice, but the answer is correct because I suspect if

(07:47):
I gave five humans my resume and said, can you infer my soft skills?
I'd get five different answers.
And so I don't see that the job that AI is doing is any different to the job that a bunch
of humans would do.
And that's probably good enough.
It was really funny.
I was talking to someone about this earlier today and I was like, look, it's just guessing.

(08:08):
If I ask you to guess the next word, you're just guessing too.
But AI and us, we're all just guessing the next word with generative AI.
That's all we're doing.
So that's why I think that the potential with generative AI is very different from the potential
with what I think of as the computing AI, the machine learning, you know, all of that

(08:29):
binary stuff.
This is much more dealing with fluffy clouds of amorphous data that you can't pin down
in the same way.
You can't turn it into binary.
You bring some inferences out of it.
But I see it then as giving us much more potential to change the way we do things inside an
organization.
Well, I really see generative AI becoming like the interface layer where we will literally

(08:52):
end up talking to the machines.
And then we're going to have stitched together both the generative AI, large language models,
and a bunch of other things like cognitive services and machine learning in the background
feeding into that.
And it'll be opaque to people.
They won't even understand that it's happening.
So I see generative AI becoming the interface of the future.

(09:17):
And I absolutely agree with that.
And I see that then the other side of it is using it in the background to change the way
that we do processes.
The cross-silo working in an organization.
There's a lot of fuss around co-pilots at the moment.
It's going to be everything to everybody.
But I look at that and go, we're solving a problem in a silo.
In fact, we're solving a problem in a silo of one, which is me as a user needing to read

(09:42):
or reply to emails or create a new email and bring me the information to make it easy.
That's a great task for generative AI, but I'm not sure it's going to actually lead
to humans being more productive because I rather suspect I will pay less attention to
the emails I write because it'll write them for me.
And I suspect I'll pay less attention to the emails I receive because it will summarize

(10:02):
them into a couple of bullet points.
And so I'll...
But there's the other piece of it.
There's a great video of a brand we both know and love with Copilot that's going to
let ordinary people create their visualizations by just interacting with an interface.
Now I've shown that to my team and said, so any of you who do visualizations, you probably

(10:23):
need to think about what your other skills are.
But that's assuming that visualizations are what we need in the future.
That's not what we need in the future.
And the future I see, and this is why I see generative AI as the interface, is that
what we want to be able to drive off our underlying data pipelines is analytics and BI, AI and

(10:45):
ML, low-code apps and data integrations and APIs.
And increasingly what we want to be able to do is deliver that nugget of information
to a human being while they're in the workflow process at the exact time that they need it
and automate that and actually even more to automate that decision-making process.

(11:08):
So take the humans out of the loop completely.
So if we take that as what we want to do, then visualization becomes something that's
not that important anymore.
It's the whole world for some people right now.
But in that new world, it's the nugget of right information at the right time in the
right process.
Yeah, because nobody ever wants to look at the dashboard.

(11:29):
They just want to know what's different on it this week from last week.
I once...it's far enough back in my career now that I can admit to this.
I once wasn't clear who was using the BI reports that I was circulating to the organization.
So I stopped the team producing them.
And it took six months before one of the directors came back to me and said, I haven't seen that

(11:50):
email in a while, which told me everything I wanted to know about the information that
was going out there.
We were doing it because somebody had said, well, we must have this.
Oh, no, no.
I was part of a project at Citibank many years ago where we saved millions of dollars by
just turning off all the reports.
We used to have a whole team that printed out reports and put them in little post boxes

(12:10):
and people would come and collect them and we'd just stop putting them in.
And then when people came and asked for them, we started putting theirs back in.
We saved millions.
Gosh, this is going to get scary because then when we start thinking about organizations
using AI, are people going to start creating things that nobody wants?
Yes, yes, absolutely.
And I can conceive a future where people are writing emails using AI that AIs are reading

(12:33):
and nobody's actually reading the emails because we're solving last century's problem because
AI is now doing email, which is last century's technology.
There's a brilliant meme that I saw last week, which was two users in two different
offices, one saying, this is brilliant.
It's taken my five bullet points and turned it into a long email, and the user on the other

(12:54):
end is going, this is brilliant.
It's taken this long email and turned it into five bullet points.
Yep.
And nobody's reading either thing.
So the AIs are just doing it themselves.
So where I want to take you then is I believe that we can change.
We should, we must change the way we run processes in organizations on the back of what is now

(13:16):
possible with generative AI.
And there's some things that'll have to change before we can do that.
But think of a process. Look, I'll talk about an education example because that's the world
I know best.
I did a six-month professional certificate in project management.
Don't tell anybody, I'm not at Google anymore.
So it's fair.
I can now admit it.
I was a program manager, which is managing multiple projects, but I'd never formally

(13:38):
been taught project management.
So I thought I'd better go and do some training in it.
So I did a professional certificate.
It took me about 10 minutes to sign up, which was: find the course, make sure it's the course
that I wanted to do, and mentally go through the 'am I prepared
to sign up to this for six months?' moment.
And then I signed up through Coursera in exchange for my credit card number.
It cost me, I don't know, $300 or $400 to do the course over the course of three or

(14:02):
four months.
And I chose to do it on my own pace.
But if I had gone to a local Australian university, it would have taken me about three months
to sign up for that course because I would have had to go through a whole recognition
of prior learning process to decide if I was a fit and suitable person to take the course.
I would have had to wait until the course started.

(14:24):
I would have also had to understand all of the academic language that was on the promotion
of the course and who this course is suitable for, etc.
And to me, I look at that and go, well, there's the business opportunity with AI, which is,
well, can we wipe away that whole process?
Can the system go and read my LinkedIn resume and give me an instant answer about, yes,

(14:47):
you'll get into this course.
Can the delivery of the course be done in such a flexible way that it doesn't have to be
run on a specific timetable?
Well, now universities are actually doing stuff like this.
So there are on-demand online courses that you can do at universities.
So they're actually responding to that kind of demand.
But on the whole, universities are still in last century.

(15:09):
So they're thinking in big courses with set prerequisites and stuff like that.
And they really haven't considered the notion of disruption from outside, because the Courseras
of the world have been around for a while and haven't eaten their lunch yet.
But I suspect that the kids that are coming through now are less likely to be interested

(15:34):
in a qualification from a prestigious institution and more interested in the skills they'll
get.
So I think the world's starting to shift underneath the higher education institutions.
And I'm pretty sure that they haven't quite realized it yet.
And it's probably because most of the people who are in the business of setting up the

(15:56):
interface are not actually meeting the students.
So I tried with ChatGPT, just with the consumer version of it.
I tried a couple of things.
One is I fed it my resume and said, tell me what my skills gaps are, tell me what my next
career move could be.
Now look at this catalogue and recommend the courses for me.
It was very good at doing that.

(16:17):
The other thing I did was take the whole end to end problem of, hey, I'm a year 11 child
or a year 10 kid and I want to be an astronaut, what should I do?
And it started to make academic recommendations.
It started to make sports and social recommendations.
It started to make hobby recommendations.
And I just advanced it.
It's the only bit of time travel I've ever done, where I said to it, okay, now I'm in

(16:39):
year 12, which university should I choose?
Which courses should I choose?
And then I advanced it a step further and said, okay, now I'm in year two of my university
degree and I've discovered I suffer motion sickness.
So I'm not going to be an astronaut anymore.
Taking what I've learned so far, how do I then apply that to something new, and what else
could I do?
Now, to me, thinking as a consumer and thinking as an individual, ignoring all the structures

(17:02):
of the organization and all of the people that are there doing parts of that problem
and just treating it as a whole.
I think that's the opportunity for business.
I think that's why so many people think that generative AI is going to change the game in
many industries.
Well, that's just the most obvious thing.
That's only possible because there is a large corpus of information about degrees and courses

(17:26):
and programs online, which generative AI has access to.
But the really interesting thing that generative AI can do is it can help us to reimagine processes
and it can help us remove friction from processes.
But at the moment, we actually need skills to do that.
So we can't use generative AI to build those kinds of generative AI solutions just yet.

(17:50):
I think it's coming.
That's the thing is you actually need people with skills.
So prompt engineering skills, getting the underlying data ready so that you surface the answers
your business wants, not the ones that the generative AI surfaces.
Because that's the secret sauce for a business.
You want to guide people down a particular path.
You don't want them to go down wherever ChatGPT is taking them.

(18:14):
What you were doing was taking just whatever ChatGPT gave you, but as a business, we might not
want to do that.
We might want it to go down a certain path.
We might want it to go down a more profitable path or a higher business value path, whatever.
Yeah.
So I guess to rework that scenario in a UNSW mindset, you'd want the bot on the website

(18:34):
that says, tell me your dreams and we'll get you there.
And it can be a year 11, year 12 advisor that leads you down the university route.
And it knows the catalog for UNSW.
And it knows the prompts because we're not assuming that a year 10 student knows the
right questions to ask about their future studies or their future career aspirations.
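Guiding users down a particular path, as described here, often comes down to wrapping a general-purpose model with a system prompt built from the institution's own catalogue. A minimal sketch, with an invented catalogue and no actual API call, might be:

```python
# Sketch of "guiding the path": instead of letting a general-purpose model
# answer from the whole internet, the business pins answers to its own
# catalogue via the system prompt. The course names below are invented.

CATALOGUE = [
    "Bachelor of Aerospace Engineering",
    "Bachelor of Science (Physics)",
    "Master of Space Systems",
]

def build_advisor_prompt(catalogue: list[str]) -> str:
    """Compose a system prompt restricting recommendations to the catalogue."""
    listing = "\n".join(f"- {course}" for course in catalogue)
    return (
        "You are a course advisor for prospective students.\n"
        "Only recommend courses from this catalogue, and say so if none fit:\n"
        f"{listing}"
    )

system_prompt = build_advisor_prompt(CATALOGUE)
# In a real deployment this string would be sent as the system message of a
# chat-completion call, ahead of the student's question about being an astronaut.
```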
So one of the things that the old world of AI was about, I can't believe I'm describing

(18:59):
it as the old world of AI because it's not that old.
But what I think of as the computing, binary world of AI was: we must get the data accurate,
it must be 100% correct before we can do anything with it, otherwise we'll be making mistakes.
And in this new world, well, everybody says part of the reason they're frightened
of using generative AI is because it makes mistakes.

(19:21):
So we're on a bit of a learning journey as individuals.
But one of the key things I think when I'm talking to people is to reset the baseline
to say what we should be looking at is: is this better than humans are currently achieving,
rather than is this perfect, the old IT way of thinking about things.
Is this better? And better could be a higher quality result.

(19:42):
It could be a faster result, or it could be a higher volume result.
Take the example of 'read my resume, find the course for me'.
I could do that a thousand times.
I could do that 10,000 times over in a day.
So in that case, it's probably a higher quality result.
It's probably a faster result and it's probably a higher volume.

(20:04):
But that isn't the way that we've traditionally thought about data projects.
Well, the thing is though that that's a very consumer oriented view.
Thinking about it from a business perspective, you want to drive consumers to consume your
products and you have a hierarchy of your products that you want to consume.
So the business imperative will be always trying to focus in on that kind of thing rather

(20:27):
than just the whole piece of whatever the consumer wants.
That's the thing we need to keep in mind.
AI for business is going to want to drive specific consumer outcomes that are in the
business's best interest.
And we always need to keep that in mind because at the moment, generative AI is just a thing
that's out there on the internet and everybody's playing with it.

(20:49):
And we're not assuming that somebody is choosing their path through that.
Let's continue with that student who wants to be an astronaut.
The first iteration of that journey would be we're going to build that for a particular
university and the answer is always going to be that university's courses.
If you want to be an astronaut, then go to Astronaut University.
Yes.

(21:09):
But I suspect then we'll have people coming in from the outside saying, well, actually
we can solve this for every student and kind of what career services do for students.
We can just solve this for every student and we can have the catalog for all of the education
institutions across Australia.
And then why stop there?
Let's do this for the US and why stop there?
Let's do this for online and how do you become a software astronaut?

(21:31):
And all of those things.
But the underlying principle then is we don't wait until everything is perfect.
We start the journey and learn things along the way.
Well, realistically, we've always started the journey because the data is never perfect.
So as for your characterization of waiting till things are perfect, they never have, because

(21:51):
perfection is not easy to get.
And there's not a lot of value to the organization in sorting out data quality.
That's why everyone's data quality is not the best.
But a lot of organizations have been working to solve their data at point of entry.
So reduce the quality issues that way.
But the issue is always going to be getting collaborations.

(22:14):
If you wanted to provide a career service, you would either be doing it outside of
the university sector, and many people would choose not to cooperate. Would
they put no in their robots.txt to stop you scraping their data?
There's not a lot of incentive for people to cooperate with that kind of proposal.
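The robots.txt opt-out mentioned here is just a plain-text convention served from a site's root. A university that didn't want a hypothetical career-service crawler scraping its catalogue (the crawler name below is invented) could publish something like:

```text
# robots.txt at the site root; "CareerBot" is an invented crawler name
User-agent: CareerBot
Disallow: /

User-agent: *
Allow: /
```

Compliance is voluntary, which is exactly the incentive problem being described: the file only works if the scraper chooses to honour it.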

(22:36):
I think really the issue is going to be how do you do that if you want to do that as a
business and get people's buy-in, get the business buy-in.
Yes.
And so that's then a long human centered process of getting people to buy into something that
they may not necessarily see to start with.

(22:56):
Well hilariously in Australia, we had this thing called My eQuals, which is a centralised
repository for your qualifications.
And all the universities signed up to do this many years ago now.
And if you've studied and you get a login when you finish your studies and you can log
in and you can see your testamurs, you can see your transcripts, that was supposed to

(23:19):
be some whiz bang blockchain solution originally.
But no one could agree.
So we've got this website that has static hosting of content.
That's the kind of problem that you get, that getting an industry to align on something
is problematic because it's a whole bunch of people.

(23:39):
And this is the element that we haven't talked about.
And I think it might be interesting to dive into because we talk about data as plumbing
and then we talk about AI as a thing on its own, as a special entity.
But behind it all is people.
People who give you their data and trust you with it and people who in the business who
use the data.

(23:59):
So what about that angle?
Yeah, well that's something I think is going to hold back the use of generative AI.
Switch over to a parallel example and then we'll come back to generative AI.
So the parallel example I think about all the time is self-driving cars.
Statistically self-driving cars are safer than human driven cars.
And we know that because we've got the data of injuries and deaths per mile.

(24:23):
So they're safer.
But every time a self-driving car is involved in a serious accident, it's reported on the
front page of the newspapers not just in the town it happened, not just in the country
it happened, but sometimes around the world.
And there's a reluctance of people to hand over control to a self-driving car if it's
not going to be perfect.
So 85% of Americans wouldn't trust a self-driving car, even though statistically it's safer.

(24:47):
I think part of the problem is at least 85% of Americans would say they're above average
drivers.
So they're probably happy to put other people into self-driving cars, but not themselves.
And I kind of think we're going to see the same with AI stuff, which is, well, sure,
I'm happy for AI to do that thing to that person, but I don't want it replacing me because

(25:07):
I'm an above average person.
I think I have a slightly different perspective on that.
I think that once people have access to the affordances of, say, a self-driving vehicle,
so if we gave everybody a Tesla, they would start using that because it's just easier.
And convenience trumps any other consideration all of the time.

(25:30):
Trumps privacy, trumps trust, trumps driving, because it's just easier.
Do you think there's an agency thing there as well, though, which is that self-driving
car has got to have a steering wheel in it?
Because I want to feel that I'm going to take control, but we know from stories in the US
that people don't touch it.
There was that case where the guy was driving on the freeway and he wasn't driving.

(25:51):
He was letting the car do the driving and he was watching a movie.
But he would have absolutely insisted this car's got to have a steering wheel in it.
As long as there's a steering wheel and there's the possibility of exercising agency, people
will let it happen.
And I think exactly the same thing will happen with AI, specifically with Generative AI,

(26:12):
where I won't want that, but I'll let it write emails for me.
Or in the university sector, I'll let it mark an assignment as long as I get to review the
marking.
The interesting thing is we know that AI is a better marker than humans, more consistent,
and achieves more accuracy in marking, but most professional academics wouldn't accept

(26:38):
that.
Well, the dirty secret of all marking is that most of it happens for a bottle of red wine.
And depending where your assignment comes in, the bottle of red wine probably has a big
influence on where it happens.
I mean, I am joking, but academics have a lot of marking and they have it all

(26:58):
at the same time, so they have an avalanche of marking and it is a slog to get through
it all.
And so there is a lot of variability, I would suspect, in the marking that people do.
And if we could map the time that people start and the time that they finish and the marks
that they give, we'd probably see some really interesting variations in that.
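The marking-drift analysis Kate imagines could be sketched by batching marks in the order they were graded and comparing batch averages. The numbers below are synthetic, purely for illustration:

```python
# Sketch of a "marking drift" check: group marks by the order they were
# marked in and compare batch averages. All data here is made up.

marks = [
    # (position in the marking pile, mark out of 100)
    (1, 78), (2, 75), (3, 80), (4, 77),      # first batch: fresh marker
    (5, 71), (6, 69), (7, 73), (8, 70),      # second batch
    (9, 66), (10, 64), (11, 68), (12, 65),   # third batch: fatigue sets in
]

def batch_means(scored: list[tuple[int, int]], batch_size: int = 4) -> list[float]:
    """Average mark per consecutive batch of `batch_size` assignments."""
    batches = [scored[i:i + batch_size] for i in range(0, len(scored), batch_size)]
    return [sum(mark for _, mark in b) / len(b) for b in batches]

means = batch_means(marks)
drift = means[0] - means[-1]  # positive = earlier papers marked more generously
```

On this synthetic data the batch means fall from 77.5 to 65.75, the pattern the research Ray mentions would predict.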

(27:20):
But we trust human beings.
Probably sometimes, wrongly.
Not trust, but faith.
We put too much faith in them.
You joke about the marking and the red wine. I read some research recently where I'm hoping
it isn't related to the red wine, but if an academic is marking papers, you
want to be in the first batch of papers they mark, because they give more generously than

(27:41):
they do to the second batch and the third batch.
So basically the scores go down.
We know that AI wouldn't do that.
Well, we hope that AI wouldn't do that.
It would be consistent throughout.
But there is that bit about professional judgment that goes into it.
Look, I'm going to admit to my human failings here.
There are times when I don't respond to an email straight away because I don't know what
to say and it can hang on and hang on and hang on because I don't quite have the full

(28:06):
answer.
Whereas it would be much more beneficial if I just gave that email to AI and said, okay,
well, here's the best answer I can give and got it to write the email for me than prevaricating
for a week.
I have two response times for emails, five seconds or five days, nothing in the middle.
And so having this conversation is making me realise it's easy to talk about the gap between

(28:26):
humans and technology, but I am as guilty of it as anybody else.
The difference, I think, for me is I accept that we shouldn't be looking for a definition
of perfect and that therefore I need to learn by doing.
So I try to do as much as I possibly can using AI and then learn from my mistakes.

(28:49):
I think we need to also just keep in mind that generative AI is in the popular mind
a year old.
So it's still terribly new and a lot of people have adopted it, but a lot of people haven't
and there's a pretty standard sort of change adoption curve.
You and I are probably among the first people to adopt it, but there are going

(29:11):
to be people who are in the Laggards group who will not adopt this for another five years.
So we're still really early in the adoption cycle of this as a technology and we have
to remember that and there are people who have not even tried it in the world.
I know that's hard to believe, but many people haven't tried it.
I ask at every event I talk at who is using it.

(29:36):
Most of the time the numbers are quite high.
I saw some academic research from the University of Melbourne, I think it was, where the answer
was that 80% of students are using it, 18% of academics and the student number had gone
up dramatically between term one and term two.
The academic number had gone from 16 to 18%.

(29:58):
So yeah, there are different cohorts with different views.
I suspect students would be one of the first groups to be using it because of the obvious
advantages to them and they are supposed to be expanding their minds.
So they'll try new things that haven't been tried by other people.
Also, if you just consider that students are learning how to write and now they won't have

(30:18):
to learn how to write because they can get AI to do it for them.
This takes us back to one of the fundamental questions of education.
Like, what are we here to do?
Because do we want people to learn to write or do we want them to generate their ideas?
Because that's a pretty fundamental question for educators to work out.

(30:39):
It's funny because whenever anybody says how busy the curriculum is, I always suggest
we drop handwriting because the main reason we teach handwriting is so people can fill
out government forms and they've finally all gone digital.
But actually, it's only a joke because it's important to hand write things, to be able
to encode ideas and to remember things.
Yeah, it's actually important neurologically for us to learn this and we lose a lot by

(31:01):
not learning it.
No, my handwriting's gone to hell in recent times.
But the thing that we need to think about is what is the fundamental nature of education?
And I think this is the digital transformation that we're on the brink of in education is
all about what are we educating for?

(31:21):
I can pretty much imagine that back in the time of Socrates when they used to transmit
knowledge orally, when they started writing stuff down, people are like, oh, we'll be
ruined, we'll break everybody's minds, they'll be useless.
And I think we're having the same sort of moment now.
So here's a thought to finish on that you're going to have some views on as well.

(31:45):
The example I mentioned about, you know, I want to be an astronaut, guide me on that, that's
like creating the Google Maps or the Apple Maps for education: I am here, I want to
go there, find me the route to get there.
And the nice thing is along the way I can change my mind.
I can say avoid toll roads, you know, I can do a bunch of things.
That's great.
If we could create that for education, wouldn't that be awesome?

(32:07):
But the thing I think about that links into what you've just said is I never used
to get lost, because I would look at a map.
I had all these spatial skills.
I would remember vaguely the map as I was going on the journey, and I would know if
I was going wrong.
And then when I got in car navigation, I found myself getting lost because I switched my

(32:29):
mind off and let the navigation do everything for me.
And since that time, I have learned to navigate in a different way that I'm more aware of
where I want to go.
If we create some of these things with generative AI that make life easy, are we going to let
people switch their minds off?
I feel we're going to end up dumber in five years time.
Well, they're the fundamental questions that we as educators need to grapple with because

(32:54):
my father was of the generation that used log tables and slide rules.
And he was deeply suspicious of my calculator and was convinced that it would rot my mind.
But here I am.
And so we need to distill what are the fundamental pieces of knowledge that we need to transmit
because education is about transmitting knowledge.

(33:15):
And we don't need to do it ourselves.
We've moved on from the oral transmission of knowledge in ancient times to written
word.
And now we're moving into the AI digital world and what are the essential elements of knowledge
that we need to transfer?
And that's the fundamental question that we all need to answer.

(33:35):
And I don't think we're quite grappling with it just yet as a higher education industry,
but this is what we need to work on.
So for me, the lesson out of this conversation is thinking about just huge
piles of unstructured data being the thing that is feeding the engine of generative AI.

(33:55):
I think what you've taught me is we've got to think about the structured way that that
unstructured data might get used.
It's not just a free for all.
We want to think carefully about it, but there's going to be a lot more unstructured data
being used to achieve an end goal maybe in the future.
Well, the thing is, there is so much unstructured data out there in the world.

(34:18):
We finally got a tool that helps us access it.
We've never had a tool that can do that before.
We've always needed the data to be structured into databases and applications so that we
could do stuff with it.
And now we don't need that.
And that's part of the brave new world that we face.
And I'm thinking from an AI point of view for people listening to this interview on
your podcast, the thing that I feel really strongly about is we've got to change the

(34:42):
metrics: not "is this great, is this perfect?"
But "is this better than humans can achieve?"
Faster, better quality, higher volume, that's a different outcome from the outcome
we've often been thinking about in the past.
Yep.
And I think the essential takeaway for my podcast, and yours, is that we need to reconceptualise

(35:03):
what we're doing with education in the new world of AI because the fundamentals will
drive what we do with AI in the education context.
Wow.
There's some big things for us to be thinking about.
That's brilliant.
Hey, Kate.
Thank you for coming on our podcast.
Thank you for coming on my podcast.
Brilliant.
Thank you.
And that is it for another episode of the Data Revolution podcast.

(35:25):
I'm Kate Carruthers.
Thank you so much for listening.
Please don't forget to give the show a nice review and a like on your podcast app of choice.
See you next time.