Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello and welcome to another episode of the Data Revolution podcast.
(00:20):
Today, I've got the lovely Linda McIver as my guest and I'm going to ask her to introduce
herself because that's easier for me, I think.
Hello, Linda.
Hi, Kate.
Thanks for having me.
My name is Linda McIver.
I run the Australian Data Science Education Institute, which is a charity I created to
(00:41):
build the data literacy and critical thinking and STEM skills of kids around Australia and
ultimately around the world.
I love that big mission that you've got.
So tell me how you got into this.
Why did you come to start running this?
It was a series of happy accidents, really.
(01:01):
I used to be a computer science academic and I loved that, but I particularly loved
the teaching part and I always felt like the research that I was doing, which was in computer
science education, wasn't really having an impact.
It wasn't translating to the classroom and I wanted to have more impact.
I wanted to have more contact with the students.
(01:23):
I wanted to feel kind of more invested and engaged.
So I left academia when my second child was due because it just wasn't meeting my needs
and they were offering packages and I was having a baby and it all kind of fell into place
quite nicely.
And then I did a bunch of things for a while that wound up with me being a secondary teacher.
(01:49):
I did freelance writing.
I did communications work.
I did all kinds of things.
But I wound up, as I say, a secondary teacher and that was great in that I felt like I had
much closer contact with the students than you have in a tertiary setting and I really
felt like I was having an impact and making a difference.
But I was doing two things.
(02:11):
So one was I was teaching a year 11 subject of my own devising, which I put together with
a co-teacher by the name of Victor Rajewski and we were able to do anything we wanted
in that course and we created a subject where we taught what we thought were really interesting
parts of computer science and then we got the kids to solve a real problem.
(02:33):
And we did that by connecting with academics and getting the kids to solve data problems.
So we connected with scientists who had data needs but not quite the computational skills
to solve them and got the kids to solve the problems.
And of course, school projects, some of them did work, some of them didn't.
You expect that and we would warn the scientists up front but some of them were really powerful.
(02:55):
I had kids in that very first class of year 11s doing cancer research, and that code is still
being used, and that was in 2011.
So really, some of those projects were amazing and just blew the academics away, and
it was so satisfying.
(03:16):
At the same time, I was teaching someone else's subject in year 10 and we were trying to engage
the students with code using toys.
They were making robots push each other out of circles or follow lines.
They were drawing pretty pictures in Scratch-based interfaces and they hated it and they
(03:40):
were so unmotivated and disengaged and it was just awful.
I have to admit I hated it too.
Eventually, I persuaded the head of that subject to let me do a data unit where they were learning
the same coding skills but they were learning it in the context of real data.
So we used election data, we used data about microbats, we used all kinds of different
(04:01):
data sets.
The change in motivation was amazing.
They were so into it and it didn't matter whether it was a topic of particular interest
to them, it mattered that it was real.
And so I thought this is great.
Now I know how to engage kids with code who didn't want to code before.
I'm going to quit because I wanted everyone to be able to do it, not just the kids in
(04:24):
my classes.
So part of the reason I could do that was because I was half time so I could use my
own time to find the data sets and build the projects and all that kind of stuff.
Teachers don't have that kind of time.
And also, for the most part, don't have those skills.
So I thought, well, if I can build the projects and find the data sets and annotate the data
(04:47):
sets and train the teachers to do this stuff, maybe I can kind of take the hard work out
of it and make it possible for more teachers to engage kids with these kinds of projects.
So that's why I started ADSEI with the goal of doing that.
Really, at the start, it was to get kids to code, but it's become more and more obvious
to me that the critical thinking and the data literacy are actually the important parts.
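To give a concrete sense of the level of coding involved in a unit like the election data one described here, below is a minimal sketch of the kind of exercise students might work through. It is purely illustrative: the file name election_results.csv and its "party" and "votes" columns are hypothetical, not taken from the actual course materials.

```python
# Purely illustrative sketch of a "real data" classroom exercise.
# Assumes a hypothetical CSV of election results with "party" and "votes"
# columns; the file name and columns are not from the actual unit.
import pandas as pd
import matplotlib.pyplot as plt

results = pd.read_csv("election_results.csv")

# Total the votes for each party and sort from most to least popular.
votes_by_party = (
    results.groupby("party")["votes"]
    .sum()
    .sort_values(ascending=False)
)
print(votes_by_party)

# A simple bar chart so students can compare the parties at a glance.
votes_by_party.plot(kind="bar", title="Votes by party")
plt.tight_layout()
plt.show()
```

The point is less the specific chart than that a few lines of ordinary code, applied to data that is real, is enough to carry the kind of engagement described above.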
(05:12):
Yeah, that's really interesting.
There's nothing more soul-destroying than teaching someone else's super boring course.
It's bad for the students and it's bad for you.
I've been there too.
But so when you're looking at the world now with especially the high school students,
(05:32):
what are you seeing in terms of their technology capability?
We call these kids digital natives because they've grown up texting and messaging and
on Instagram and on TikTok and all the rest of it.
But they're not really digitally skilled.
They're not really digitally literate.
They are consumers of the technology.
(05:56):
They might create in terms of creating TikTok videos but they're not creators in terms of
creating the technology.
They don't make it.
They don't understand how the technology works.
They don't understand the risks of the technology.
Yeah, they're perhaps digital artists but they're just not data literate and they're
(06:21):
not really technology literate.
And part of the reason for that is the courses that we run at schools are awful.
My own kids, one in particular, was very keen on tech and then was kind of pushed through
the sieve of a bunch of courses where learning technology was about learning to use PowerPoint
(06:45):
and about making Photoshop produce the identical image to the one in the textbook and he would
rather poke his eyes out with a fork than do that stuff.
He hates it now and I was just, you know, I watched that happening and I was like, oh,
no, stop.
We could be getting into real stuff.
But I mean, to be slightly polemical here, I am not a power user of electricity.
(07:07):
Like I just know how to turn the light switch on.
That's not a bad thing.
There are power engineers out there who know stuff about power, you know, and we
function quite well.
Are we kind of facing a world where we don't need so many people to have the making skills?
(07:29):
So I agree with you to a point.
I don't think everyone needs to code.
I used to, you know, I was a computer science academic.
I wanted everyone to learn to code.
I couldn't care less whether you can code or not now.
What I care about is that we understand enough about technology that we can have a say in
the direction that technology is taking us.
(07:50):
And at the moment, we are not there.
And you can see that in the response to ChatGPT, where I was talking to a bunch of teachers
at an event a while ago, and it was months after ChatGPT had hit the headlines, and
they were all very excited about it, using it every day.
And they did not know that ChatGPT did not tell the truth.
(08:13):
And so I showed them some examples.
You know, I went through it.
I had done.
Oh, the ever famous hallucinations.
Yeah, I'd asked ChatGPT to write my bio eight times and got eight different results,
with eight different places where I did my PhD and eight different PhD topics.
None of them correct.
And they were like.
(08:34):
It did the same for me too.
It invented two jobs, one with a very worthwhile Indigenous charity that I'd never
heard of.
And the other with the Red Cross.
It's just ridiculous.
So it's not good with truth.
No, exactly.
And what worries me about that is people don't know, you know, so the real thing that we
(08:57):
need to build here is we need to build enough technological literacy and enough data literacy
that people can think about these things critically and that people can go, well, hang on a minute.
How do I know this is accurate? And actually challenge these kinds of things, because at
the moment we are very susceptible to the hype from the people creating these systems.
(09:22):
And when we have things like a bunch of people who are senior in the industry going, oh,
the real risk is that they become sentient and kill us all, which is very much a "look
over there" distraction tactic to avoid us thinking about the very real and present harms
of AI, which are in the bias and misinformation kind of spheres.
(09:50):
We all need to know enough that we can have that conversation intelligently and that we
can go, hang on a minute.
You are spinning us a marketing line here.
Can we look at the actual reality of the situation?
Because at the moment we're not equipped to do that.
Yes.
AI is a little way off from being sentient.
Just a touch.
(10:10):
Yeah.
So, I mean, it's a fascinating area though, thinking about how we're going to get kids
the right skills.
So, I mean, you used to think that everyone had to code.
How has your thinking evolved on the kinds of skills that high school kids in particular
(10:34):
need?
Above all, they need critical thinking and they need creativity and at the moment our
curriculum is actually forcing that out of them.
You know, there's this research that shows that kids, sort of before kindergarten, are
incredibly creative.
(10:56):
They are great problem solvers.
They think outside the box.
They're fabulous.
And progressively, through the education system, we squish them into the box.
We cut off.
Into the next plan box.
It don't fit.
Yeah.
We just, we force them into this narrow idea of getting the right answer and, you know,
that solving the problem means getting 100% on the exam.
(11:20):
And so to get them instead to do projects where they have to critically evaluate their
own work, they have to say what's good about it, but also what's bad about it.
And who their solutions affect, so if they're solving real problems, then their solutions will help
some people, but they will probably harm others.
So actually being able to evaluate that and recognize that.
(11:41):
I mean, imagine, imagine if our governments did that.
Okay.
That would be different, wouldn't it?
Well, it's not that our governments really want to hurt people.
That's really interesting because to me, that kind of speaks to using really, really simple
tools, things like the Open Data Institute's Data Ethics Canvas, when you're thinking about
(12:05):
doing data projects, which I'm a huge fan of, and we'll share a link to that in the
show notes.
But that kind of thing, where you have a one-pager with a bunch of questions about your project,
I think is really powerful.
Yeah.
Super important to be able to do those kinds of things and also know that those kinds of
(12:25):
things should be done.
So Laura Summers has a wonderful device called the Ethics Litmus Test.
I don't know if you've seen it, but it's like a bunch of sort of cards that you get that
ask you questions.
(12:46):
It's a bit like a card game.
Rather than the big question of, is this ethical, the Ethics Litmus Test has a bunch
of smaller questions that just get you thinking along the lines of, well, is this the
right thing to do?
Really simple, small things that just get you asking the question.
(13:08):
Ethics is difficult and it's complicated and it's huge, but we can actually approach
it in a fairly simple way and achieve a lot.
But you have to start there.
You have to intend to build ethics in and that's not built into a lot of our systems
to even think about the ethics from the start.
(13:30):
Yeah.
Well, you know, I think that the whole space of ethics, everybody is like, oh, it's so
hard.
And that's why I like these tools like the cards that you just mentioned or the one pager
where it just asks you a series of simple questions.
And if you just think about them for a minute, it really can stop you from doing some quite
(13:54):
egregious things.
Yes, exactly.
And so building that into education where you've got kids solving real problems and thinking
about the impact of their solutions and actually measuring the impact of their solutions, that's
immensely powerful.
And it's something that you can't look up the answer for in the back of the textbook.
It's something you can't get 100% for.
(14:17):
But it's a way of thinking about the world that actually has the potential to help these
kids become creative, critical thinking, rational problem solvers.
Yeah, that's an excellent thing to hope for.
(14:39):
It really is.
It's an interesting thing too.
I keep saying interesting.
You know, it's my most favourite word.
I did a semantic analysis of all my blog posts ever and it is my most used word.
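Incidentally, a word-frequency check like the one mentioned here takes only a few lines. The sketch below is a hedged illustration, not the actual analysis: it assumes the blog posts are saved as plain-text files in a local posts/ folder, which is a hypothetical layout.

```python
# Illustrative sketch of a simple word-frequency analysis over blog posts.
# Assumes the posts are plain-text files in a local "posts/" directory
# (a hypothetical layout, not how the original analysis was done).
import re
from collections import Counter
from pathlib import Path

counts = Counter()
for post in Path("posts").glob("*.txt"):
    text = post.read_text(encoding="utf-8").lower()
    counts.update(re.findall(r"[a-z']+", text))

# Print the twenty most frequently used words.
for word, n in counts.most_common(20):
    print(f"{word}: {n}")
```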
But it is truly interesting how we do education for engineers now: it's a lot of group
(15:00):
work and we get them to peer assess each other.
So you know, that critically evaluating not only your own work, but other people's work
because that's how we work in the workplace anyway.
So a lot of what we're doing in the education space now is teaching them to work as part
of teams and small T teams, not large T teams, you know, collaboratively working in groups
(15:25):
because that's the way you work in the real world.
And increasingly, you know, we're kind of moving away from the old closed book exams,
which were just good tests of memory into a world where, you know, if you've got an
open book exam and you don't know where the answer is in that pile of textbooks you've
got on the table in front of you, you're not going to find it during the exam anyway.
(15:48):
So you have to know stuff even with an open book exam.
So we're starting to shift how we assess people's work and we're trying to
make it more aligned to what they'll find in the real world once they graduate and get
a job.
I think that's really important.
We know that exams are terrible ways to assess people anyway.
(16:12):
They are relatively fast and relatively easy to create and to mark, and relatively robust.
I've had some, yeah. I'm not a fan myself.
I don't miss it.
But they're simple, you know, they're straightforward and they're to some extent what we've always
(16:34):
done and we know how to do them.
But they, you know, when I see a doctor, I don't care whether she's got 100% on her
exam.
I care whether, first of all, whether she's going to listen to me, whether she's going
to take me seriously, which is a big problem, especially for non-male patients.
(16:56):
And whether she's a good diagnostician, those things are almost impossible to test in an
exam setting.
Yeah, and increasingly, you know, like we're doing a joint precinct with
the health department and the local area health service and another university and it's a
(17:17):
combined clinical teaching and treatment facility.
And one of the things that we're trying to do with that is think about how we can use
technology to enhance the students' learning in that situation.
So it's really quite challenging to work out how you can use the technology to help teach
(17:42):
as well as treat people in a clinical space.
Yeah, you know, that's an interesting problem and one that we haven't really solved.
I went to a supercomputing conference, actually the first one I went to, I think in 2012,
and I heard a talk from someone at Intel who had shadowed a cardiologist for a day.
(18:03):
And the cardiologist saw six patients while she was shadowing him and four of them had
the same issue.
And the fifth one, to her eyes, looked different, but the cardiologist was kind of in the
pattern for the diagnosis that he had just given four times and diagnosed the same thing.
(18:25):
And she said, I wonder what would have been different if we had a personalized AI that
was just listening in on the consult and was able to say, did you check this?
Did you measure this thing, or did you ask the patient about this issue? You know, just
kind of to throw in the bits that the doctor might have missed.
You know, what other medication is the patient on or, you know, just those kinds of questions
(18:49):
because, though a lot of doctors bristle at the implied criticism, especially specialists
that I've met, who would not take kindly to any advice from anyone else, not even an AI.
You know, it's a sociological problem rather than a technical one.
(19:10):
But it could really make a big difference to pick up those things because, you know, we
are human; even medical specialists are human on some level.
And they, you know, make mistakes and they fall into thinking traps like we
all do.
And one of the things that's happening in the medical space right now is, you know, there's
(19:40):
a lot of people who are working on private, closed generative AI and other AI applications
for the health space.
And they're very conscious it's all got to be locked down and very secure.
And increasingly, it's going to end up being four or five big companies globally.
(20:01):
A lot of consolidation.
But then you get the benefit of the aggregation of the data and improved diagnostic capability,
which is going to really power a lot of advances.
And we're already seeing huge advances in medical research, things like vaccines, chemical
(20:25):
research, you know, new chemicals.
So AI is going to really reshape the world in quite compelling ways that can
be both good and bad.
Right.
And that's why I think it's so important that everyone has enough technological literacy
(20:46):
to kind of help steer it, you know, that we can go, hang on a minute.
I'm not sure that this is safe.
How can you build safeguards into this system?
Things like that, because at the moment the tech industry is just running away with it
and not subject to much in the way of restraint or ethics, really.
(21:14):
So I think a lot more conversation is going to be needed in the ethics space
and probably a lot more conversation in front of students where they can start to discuss
the issues and start to understand some of the implications of bad decisions based on
(21:34):
data.
Yeah, absolutely.
And we used to have those, you know, my year 11 class was largely discussion-based and
we would just talk through the issues and think about, you know, the implications.
And a lot of the time we didn't come to any good conclusions, you know, some of these
problems are not open to easy solutions, but to even have the conversation would be a step
(21:59):
forward over what we have now, which is technology being implemented on aspects of our lives
where we have no idea and no control.
Yeah.
So what's next for you and your institute?
Well, the big goal is changing the curriculum and changing the way teachers are taught to
(22:24):
teach, and not just in data and digital technologies, but right across the board, so that
we're actually building these real, authentic problems and experiences into the way we teach,
but also that we rethink what we define as the basics.
(22:49):
So, keep an eye out for that.
I'm not ready to say too much more about that yet, but it's a work in progress.
You know, I want to really change things so that we're engaging kids, and kids know
why they're learning stuff and can see the point and actually want to learn.
Well, that sounds fabulous.
So I'm looking forward to hearing more about that.
(23:11):
So where can people find you online?
So the Australian Data Science Education Institute has a website.
It's adsei.org, A D S E I.
Very easy to type, but not so easy to say.
I did not think that through.
We'll also put it in the show notes.
So don't you worry.
It'll be in the show notes if you want to find it.
(23:33):
So thank you very much, Linda.
It's been great having you.
It's been lovely to be here.
Thanks, Kate.
And that is it for another episode of the Data Revolution podcast.
I'm Kate Carruthers.
Thank you so much for listening.
Please don't forget to give the show a nice review and a like on your podcast app of choice.
(23:54):
See you next time.