
September 4, 2024 • 35 mins

My guest for this episode is Lynn Gribble. She is an Associate Professor at UNSW Sydney Business School who is passionate about embracing new technology to help in the education process. Lynn has been at the forefront of new technology in education, from pioneering voice feedback to innovating with technology for grading and feedback. She is multi-awarded and a thought leader in her field, even presenting to a Parliamentary inquiry on the impacts of Artificial Intelligence in education. Lynn is also a cat lover, ice skater, YouTuber and an all-around nice human being.

We chatted about how we need to start rethinking everything now, including how we assess students in the age of AI. I am definitely asking Lynn back on the podcast, as we only covered a small fraction of the stuff I wanted to chat about.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to this episode of the Data Revolution podcast.

(00:18):
The Data Revolution podcast is about exploring the intersections between society, culture,
data, AI, privacy and security.
Today my guest is Associate Professor Lynn Gribble from UNSW Sydney and we are talking
about how education will change in the age of AI.
She's a remarkable person.

(00:39):
She's a nationally and internationally awarded and recognised education focused academic
who is known for her work in digital innovation and teaching.
So her teaching focus is on engagement, belonging and personalisation of students'
learning experiences.
But she's been in the avant-garde with online learning and pioneered voice feedback and

(01:02):
the use of technology in grading and feedback.
So she's always been on the leading edge of change in the education space.
So it would be really great to talk to Lynn today.
Hi and welcome to another episode of the Data Revolution podcast.
Today my guest is Associate Professor Lynn Gribble who's a colleague at the University

(01:26):
of NSW in Sydney, Australia.
And we're going to have a chat about how AI is being used in education.
Welcome, Lynn.
Thanks so much Kate.
It's lovely to be chatting to you.
Yeah, I was really interested.
So you co-chair I think the community of practice around AI in the education space.

(01:46):
And there's been some really interesting conversations happening in that group over the last couple
of months.
So I really thought it'd be interesting to get you on and have a chat about that.
I think that, you know, in terms of AI, I just want to say I'm largely
talking about generative AI.
We tend to use AI as a shortcut and there are so many different AIs.

(02:09):
But if we look at generative AI, it hit the market November 30, 2022.
And everybody thought the world was going to stop.
And I think we're now in 2024 and we need to say, okay, understanding what AI is, understanding
the tools, understanding what generative AI means for education has been some really

(02:34):
interesting conversations.
And our community of practice is really focused on cutting through the noise.
You know, every day there's a new tool, there's a did you know, you know, and people are just
feeling overwhelmed by that.
And along with that, there's some really important conversations about understanding how AI works

(02:59):
and how it is, you know, programmed and what it means.
And just being a little bit calmer because I think there's been a lot of science fiction
brought into the conversation and that's just been not useful, I think.
I think the whole world sort of went mad, in my opinion, and almost got hysterical.

(03:23):
And I think they did the whole Gartner Hype Cycle where they went up in complete hysteria
and were about to come down and now all the real work will start for the next couple of
years of working on how to really use it in the real world in practical, meaningful ways
for human beings.
I would agree with that.
And at the time when it came out, before there was the hype, there was, I had a conversation

(03:48):
with somebody and they said to me, you know, it will never be able to write as well as
I can.
And I just looked at them, I said, you're not really telling me that, are you?
Because it does and it can.
And it won't write the same way you do.
But it's, there's already, AI was already replacing journalists at places, I don't want

(04:10):
to sort of name names because that wouldn't be fair.
But you know, in mainstream media and for very large tech companies.
And I think that the really important conversation with AI, particularly generative AI, is where
is the human in the loop?
H-I-T-L.
If I had a dollar for every time I write that on something, where's the H-I-T-L?

(04:37):
And if you can think about that when you think about AI, then I think everything changes
because you're able to think about the ethics, you're able to think about privacy, you're
able to think about data, you're able to think about how you might use it.
So I start with always where am I or where is the human in the loop and then move from

(04:59):
there.
And I think the challenge also when you talk about that Gartner hype cycle is, you know,
I worked at Optus during the startup phase and I always look at people who haven't moved to a smartphone.
And I like to use that as a good example because if you haven't moved to one
and you move to it, you've got a lot of learning to do.

(05:21):
You've got to first of all understand what it is, then understand how you might use it,
then you've got to change how you use a phone and then you can go on with that.
I think it's the same now for the late majority and the laggards in the cycle.
They're yet to work out how it works, you know, how they might use it.
And that's challenging because there's a whole lot of people who've run ahead and they're

(05:45):
already using it or talking about it or having a bigger conversation.
So I think the conversation is very broad and it's hard to know who you're talking to
in that conversation when you start.
So probably my first question for you is where are the kids?
Where are the people that you're teaching?
Are they part of the group who've run ahead or do they have the same adoption curve that

(06:07):
the rest of us all have?
The adoption curve is very similar.
So when I presented some work to the parliamentary inquiry into artificial intelligence and education
in January of this year, we put a timeframe on that of 13 years.
And we said that was from the get-go.
So go back to November 2022 and take 13 years.

(06:31):
So some people would be raised in a generation where this was truly integrated to everything
that they did and they would learn it and be on board with it and it would be comfortable.
And then for 13 years, we're going to be a little bit unclear on where we are in the
catch-up model because when we talk about the students that are in my class, there are

(06:54):
some that are so nervous, like I don't want to use it.
And I say, you're already using it.
You just don't know you're using it.
That's a different problem again.
So I think that when we look at this, if I look at my daughter and her peers that are
in their early 20s, they're very clued onto the fact that it doesn't sound like a human.

(07:21):
You still have to do your own work.
Otherwise, when you present it, it won't come from your heart.
Imagine if I was reading this today, or you were reading,
we'd be having a very different conversation.
So there's that.
She tutors young people who might be as young as five or six and could go all the way through

(07:42):
to HSC and she talks about their learning curve.
And I think that's interesting because there are some of them that are really on board.
But for our students in a university, what we're seeing is really important.
There is still the fear.
There is still the unawareness.
And then there are those that are using it who believe that they know more about it than

(08:04):
say you or I might, right?
Because they go, oh, well, I think I can get away with this or I think I can get away with
that.
And because universities, like the rest of the world, were caught out in November 2022, or let's call
it December for ease.
And in Australia, of course, we all go on leave.
And so that meant that by January, we were in catch up mode.

(08:27):
I think there's still a bit of that catch up mode.
What does it mean for assessments or what does it mean for how I might teach?
And students are still catching up on that.
And with UNSW just, and I say just, introducing Copilot as a product for us to use on a daily
basis.
I've been using Gen AI since November 2022, but there would be people who haven't spent

(08:52):
that money or invested that time.
And the same for our students.
They may not know that the AI is there.
So a great example would be, I spoke to a student, I said, this writing reads like a
machine has written it.
How did you write this?
And the student talked to me for a little while.
And in the end, it was this really moment of confusion on the student's behalf.

(09:17):
And I was going, I don't think they understand the question.
So when you sat down at your computer, talk to me about what happened.
And they said, well, I wrote the first word and it gave me another word.
So I pressed enter.
And I went, uh-huh.
And they said, then it gave me another word.
I pressed enter.
And I said, that's not writing your own work.
And they went, isn't it?

(09:37):
And I genuinely believe that it had broken the student's mind.
Yes.
You know, that we're just accepting the next word and they didn't see that that was not
your agency.
It wasn't.
But one of the things I wonder, you know, as a neurodivergent person is how some detectors

(10:00):
of people not writing their own work will flag neurodivergent work because, you know,
we tend to use words in different ways to neurotypical people, you know, so we use big
words a lot.
You know, so there's things like that.
There's some real equity issues emerging that are really interesting to think about.

(10:21):
It depends on what the assessment is measuring.
I'm not worried about the use of the big word.
So if I'm talking to you and you use a word that I wouldn't normally use, I get a sense
that that's part of your conversation and how you speak.
That's fine.

(10:42):
It's when a student uses a word that, like, I would have to go look up the word.
And I think to myself, this is not plain business speak.
And then when you find that it's not about the word, remember, it's about things like
burstiness.
So for people who are not familiar with burstiness, they'd hear today as we're talking, there's

(11:03):
a cadence to how we speak, you know, sentences vary in length and I'll move between tenses
because I'm a human, right?
So I have to go back and fix that.
And I won't get it 100% even with a grammar checker.
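The burstiness Lynn describes here is measurable: human prose mixes short and long sentences, while flat, generated-sounding text is more uniform. As a rough illustration only, with invented sample texts and no claim about how any real AI detector works, the variation in sentence length can be sketched in a few lines of Python:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence length, in words.

    Human prose mixes short and long sentences, so it scores higher;
    flat, uniform text scores near zero. An illustrative proxy only,
    not a reconstruction of any real detector.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

human = ("It works. But only sometimes, and only when the data is clean "
         "enough to trust. Why? Nobody really knows.")
uniform = "The model is good. The data is big. The test is done. The work is fine."

print(burstiness(human) > burstiness(uniform))  # True: the human sample varies more
```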
But when I ask ChatGPT, which is apparently a noun now, right, like Kleenex,

(11:23):
I love that, that it's got its own agency.
But it doesn't write the same way you and I will write.
So it loses that burstiness.
It loses any depth.
And it doesn't matter how much I try and program it because remember that a GPT is just predicting

(11:44):
the next word in the sentence.
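Lynn's point that a GPT is "just predicting the next word" can be made concrete with a toy model. The sketch below is a deliberately tiny bigram predictor over an invented corpus, orders of magnitude simpler than a real LLM, but the core move, choosing the next word from counts of what came before, is the same idea:

```python
from collections import Counter, defaultdict

# A deliberately tiny "guessing engine": learn which word most often
# follows each word in a toy corpus, then generate by always taking
# the most likely next word. Real LLMs condition on far longer
# contexts and vastly larger corpora, but the principle is the same.
corpus = ("the cat sat on the mat and the cat saw the dog "
          "and the dog sat on the mat").split()

next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start: str, n: int = 5) -> str:
    """Greedily extend `start` by up to n words using bigram counts."""
    words = [start]
    for _ in range(n):
        options = next_words.get(words[-1])
        if not options:
            break  # the model never saw this word followed by anything
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-looking, but purely statistical
```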
Well, interesting, like autocorrect.
I've downloaded my Twitter archive back to 2007 and I'm training my own LLM on it.
So I will see what happens there.
But one thing I wanted to ask you about is how people are using it in a classroom context?

(12:06):
Like what sort of things are you seeing with people using this generative AI technology
in a classroom setting?
I'll start with the education side and then I'll go to the student side.
So from an education side, really, really helpful.
It can reduce my admin.
I can run a poll in class, get the key themes, talk to those all within nanoseconds that

(12:32):
would have taken me much longer to be able to get a pulse of the class and what that
class might need or what the questions might be.
And so using it in the classroom means that I can personalise at scale of 400 plus students
in a lecture hall right here, right now based on what's going on.

(12:52):
That's enormous.
I introduced all new assessments at the beginning of this year.
I had been working internationally at the back end of last year and that had given me
time on trains, planes and automobiles to really rethink assessment.
And then I sat down and I went, wow, you've written eight new assessments.

(13:14):
That's going to mean eight new rubrics, eight new lots of instructions.
So I wrote my instructions.
I have some visual processing issues that make that sometimes tricky to get right.
I was able to ask the AI to help me, to support me in doing that.
And we were done and dusted in two and a half hours.

(13:36):
That would normally be a day per assessment work.
So that's a huge productivity.
Huge productivity gain.
So then for students, it's about getting them to recognise where it is, like that student
who is just so used to autocorrect.
Right.
It's about knowing what that means as part of your learning.

(14:01):
So if you're going to take a very surface approach to learning, then of course it undermines
that because you go, that sounds really good.
But for somebody who might be a discipline expert, they'll go, that's, you know, not
right or it's a little bit light on or.
So I think that's a real trap for students.
And then because I teach in the business school, being able to communicate in a written form

(14:23):
or in a verbal form that actually has that authenticity in it is a core program learning
outcome and also a core course learning outcome.
So we have assignments where we say, actually, this shouldn't have any AI near it.
Don't correct your grammar.
Don't, don't do anything like that.
We just want to hear your natural voice.

(14:45):
We want to hear the ums, the ahs, if you're talking, or we want to see it written in a way that's really natural and from your
heart.
And so that can bring students unstuck.
We're very clear with students that you can't translate a full document.
That's simply because if I came to you Kate with a degree from France, right, and the

(15:05):
testamur was in French, you would assume that I can speak French.
So it's passing off an ability that I may not have if I'm writing in one language and
translating it.
So we've had to do a lot of instruction, a lot of explanation and help students understand
that.
And I think that it must be complex.

(15:27):
I speak enough Chinese to get by, but I don't write in Mandarin.
I write in Pinyin, which is the romanised form.
So I can imagine how complex that might be if I was trying to change, you know, to write
in a different way.
So I think when we look at this, we've got a long way to go.

(15:48):
But if you can reimagine your teaching, your learning, how your students are coming into
the classroom and think about what it's going to look like in their jobs, because in medicine
now, AI is doing a whole lot of stuff.
And then we come back to the human in the loop to tell us, is that a good or a bad diagnosis,

(16:09):
for example?
Or so it's the same in business.
It can do the hack work, but I need to have that discipline knowledge.
I can't say to it write an HR policy because it's not going to get the true depth of philosophy
underneath that.
So it's really about keeping personal agency in the process.
How did you reshape your assessments?

(16:30):
Can you talk a bit about that?
Because I'm really curious about how we need to reimagine assessment in the age of AI.
So a couple of things have happened, and I've been very fortunate
to spend a large part of this year actually travelling the globe talking about moving to
assessing process rather than outcome.

(16:53):
So use AI, tell us about the prompt you used, and we want to see it in draft
form, so that is showing us how you're changing it, not only what you've changed.
So we don't want you to do an edit job.
What I'm really after is why you changed it.
So writing the meta comment of this lacks depth, or it was too wordy, or it

(17:18):
wasn't personal enough or it didn't take in this.
Critiquing the outputs of AI.
Which is a skill in the world now.
Yes, so there's been a lot of talk about critiquing, and I tend to go not just at critiquing,
because critiquing assumes that you have a deep ability to analyze, and not every student

(17:39):
is going to have that.
So fantastic if they can.
I can be asking them for things like what are the assumptions that you've had or the
AI's had.
Tell me why you changed it.
I don't want you to just tell me why it's wrong.
I just want you to tell me why you changed it and give me what is better.

(17:59):
Because if you can make it better, then that's the value that you're bringing to a business
or to an organization or to what you're doing.
So critiquing alone, well, I can ask the AI to critique itself.
So I've tried to move away from anything that the AI can do to try and take that to a situation

(18:21):
of as a human when I look at this, what would I do?
How am I going to bring value to my employer?
That's the real thing that universities need to focus on.
Because if what you're doing is what a machine can do, then why would I pay you?
I can pay somebody to just program the machine.

(18:42):
So the value comes from the conversations, the value comes from being able to engage
with that, going and doing more research, fact checking, bringing your humanness, your
ethics, your values, understanding things that right now only humans can do.

(19:06):
And that is that you and I can be having this conversation today.
And as we're talking, I'll think, oh, that relates to something completely unrelated
to what we're talking about.
And I can say, have you thought about joining this or what if we look down that?
So you and I can do that as humans conversing going back and forward.

(19:26):
Remember that generative AI is still programmed and it can only do what it's programmed to
do.
It's not sentient.
No, it's a guessing engine.
I like that term, a guessing engine.
The other thing I often remind people of is that it's a seeking not a searching tool.
So when you use Google, it goes searching, it goes out, says, what can I find?

(19:48):
Because of the LLM, GPTs are, they're pre-trained, right?
That's their 101.
So they're just collecting all the things.
I often use the example of my desk.
It's just looking around on my desk.
It can't go off my desk.
And if I say go off my desk, it can go a little bit beyond that, but it will run out of parameters

(20:09):
to go outside.
You and I don't have that limit, and our parameters are a bit more endless, a bit more boundaryless,
because we might learn something or add something or so.
And remember that Google is updating every day.
It still has some limitations, but remember because of the LLM, even if it's programmed

(20:36):
to go out and look at Google, the programming behind it, and I'm not, you're a data scientist,
Kate.
I'm the, I shouldn't be having this conversation with you.
But I think for our listeners, the challenge then becomes to understand that it still can
only search on the table.
It doesn't know, so that's part of the challenge.

(20:59):
So that's why I was really interested in how you might approach it, because we did a proof
of concept with another colleague in chemical engineering earlier this year, and it was
like, could we ask an AI to mark assignments?
And it actually does a reasonably good job, because the marking rubric for this hasn't

(21:22):
changed, this particular set of assignments hasn't changed.
The question's the same for every student.
This is an engineering thing.
And we've got a corpus of like seven years of assignments.
So it was super easy to train the AI, and it did a reasonably good job to mark it, because
it was just a pass-fail mark.
But it was really interesting to start to think of how might we ask AI to do routine

(21:48):
stuff?
And so we can free up the teachers to do more value-add stuff.
So how can they personalise their feedback more if we can take the bulk?
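A minimal sketch of the kind of pass/fail marker this proof of concept describes, with no claim about how the actual pilot was built: a naive bag-of-words classifier trained on past marked submissions. The example texts below are invented; a real pilot would train on the years of actual graded assignments mentioned above, and keep a human in the loop over every mark.

```python
import math
from collections import Counter

# Invented training data standing in for past marked assignments.
past = [
    ("mass balance conserved at steady state across the reactor", "pass"),
    ("the reactor achieves steady state so inlet equals outlet flow", "pass"),
    ("i think the answer is big because chemicals", "fail"),
    ("stuff goes in and stuff comes out maybe", "fail"),
]

word_counts = {"pass": Counter(), "fail": Counter()}
for text, label in past:
    word_counts[label].update(text.split())

def mark(text: str) -> str:
    """Pick the label whose past submissions best explain the words
    (naive Bayes with add-one smoothing on word likelihoods)."""
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        vocab = len(counts)
        scores[label] = sum(
            math.log((counts[w] + 1) / (total + vocab))
            for w in text.split()
        )
    return max(scores, key=scores.get)

print(mark("outlet flow equals inlet at steady state"))  # pass
```

The point of the sketch is the one Kate makes: when the question and rubric stay fixed for years, even very simple statistics can handle the routine pass/fail calls, freeing teachers for the substantive feedback.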
I hate marking, so I've been on a jihad against marking for a very long time.
The challenge is that students do want to know that we've looked at their work.

(22:11):
They think that is part of the reciprocity of the product of education, that a human
will look over their work.
There have been lots of good tools that have used data for a long time.
For example, in Moodle, we have personalised learning designer.
And personalised learning designer allows us to do a whole lot based on the back end

(22:32):
of what the student's doing on Moodle.
As you know, some of MacIntyre's groups have been working on the learning analytics, and
that allows us to do a whole lot for students because we know what they're doing, what they're
not doing.
So, we have feedback all the time.
The challenge with assignments becomes this as we keep changing assignments, and we should

(22:54):
be changing assignments regularly.
And assignments should not just be busy work.
So if students aren't getting dialogic feedback, so we can do corrective and directive feedback,
absolutely with machines.
And that's where I think the value is, is, you know, fix the grammar, tell them what
the right grammar is, use machines.

(23:17):
That's a complete waste of the teacher's time.
But really substantive, meaningful feedback is probably better coming from a human rather
than a machine.
Well, with the change now to the assessments I'm doing, where there's process assessments,
there is no comment on grammar.
There's no comment on format.

(23:38):
It's just the format: you either did it perfectly, largely did it perfectly, or you didn't pay
enough attention to it.
And the grammar is either great, needs help, or you need some serious help.
And Turnitin has had QuickMarks for years.
So that's a really quick thing to do, drag and drop.
And it's really shifted.

(23:59):
So I teach in very large courses.
I might have 20, 25 tutors.
And as much as I will say to them, please don't spend your time doing corrective and
directive feedback, you're not paid to mark up their assessment like they're a PhD.
Your job is to engage the student in how to think differently.
Moving to the process really changed that.

(24:21):
So it seems to be a bit about changing teacher's mindset as well as educating the students.
Absolutely, absolutely.
There is great value in being able to do the component parts of a report, for example.
But I realised long ago a lot of my students are English as a second language.

(24:45):
And so therefore getting them to write an introduction and a conclusion is often not
good use of word count.
And they're very bland and they've been taught generic ways to write it.
And so there's not a lot of value add in that.
So the value add might be to get them to reflect on what it was like doing this and can they actually

(25:06):
apply X model to X situation and to what level of depth can they do so?
Because you have to really understand the problem to be able to use that model to do
that.
And AI can sort of do it.
It just doesn't do a very good job.
So the other thing was to move the...
It might kick-start their thinking, but they can't use it to complete the task.

(25:29):
Well, the other thing is that if you are surface learning, you will go in thinking that's
enough.
But then the rubric has to say, well, actually it's not enough, that's unsatisfactory.
So there's a double-edged sword here that needs real thinking about what is the work, mapping
that work and being really clear on if I have a student who leaves my course and they meet

(25:52):
a colleague of mine and they're working for them.
I want to be proud of that student being able to do the work.
So if that can't happen that makes me nervous.
It seems to me though that we're going to be facing a real disconnect between those teachers
like you who are thinking really constructively and carefully about how to evolve assessment

(26:16):
and those who are just going to stick to the old ways.
And I just remember back to when I was studying law and you get your first year law subjects
and you just Google it and there were written assessments because they didn't change the
assessments, they've been the same for eons, and you could find the answers pre-written
for the questions that were given.

(26:38):
And that's a discipline that...
And they love their closed book exams and all of this stuff.
So there's a discipline that are not really wanting to change and then you've got people
like yourself who are using AI thinking about how to change their assessments and you're
going to have students here because we've got...
We're the home of the double degree, where they're going to be experiencing both kinds.

(27:00):
It'll be kind of interesting to see how the students respond to it.
I think that's what the workplace looks like though.
You know you work for one manager and they want you to work this way.
You work for another manager.
They want you to work that way.
And I have a law degree.
As somebody who holds a Masters of Labour Law and Relations, can I write a piece of advice

(27:21):
from scratch?
Absolutely.
That is a core skill.
Can I look up legislation?
Absolutely that's a core skill.
Can I look up a recent hearing or proceedings or a finding?
Can I pick holes in the finding?
They're all important things.
Can I go to a mock trial or go to the commissioner and front a commissioner and say blah blah

(27:46):
blah?
Of course.
If I can't do that, I haven't done the work.
So I think that when we talk about the answer that's online, Turnitin will pick that up.
But I often talk to students about there's that difference of when you really know something.
So you've got a law degree.
So let's talk law.
I know when I dropped out of law.
I was like, this is not for me.

(28:10):
But imagine if I was reading something, let's say on a piece of law that I had no exposure
to, I could read it and understand it.
I have a good standard of law knowledge.
I understand contracts, et cetera, et cetera, torts.
So if I'm reading something though, and I think, yeah, I understand that.
And I sit down to write about it and all of a sudden I go, oh, what's the word again?

(28:33):
That's the tip-off that I don't really understand it, because I can't use the language.
So sometimes people will talk about, oh, we shouldn't be jargonistic.
And we're not jargonistic because in academe, we talk about the discourse.
That's the language of the subject that we're studying.
So for in psychology, we talk one way, we're business another way, law another way, et cetera,

(28:56):
et cetera.
So often students don't know what they don't know till they face that problem.
And in the past, that problem would have turned up as a Turnitin match with something
that had been insufficiently paraphrased.
Today we see it turning up as possibly AI-generated content.

(29:16):
And then what it tells us is that the student can't talk about it.
So then we need to move to things like viva voces.
We're saying, talk to us about this, because if a student can talk about it, even students
with varying ranges of neurodiversity have to be able to communicate in some form or
other about it.

(29:37):
And in my experience of people with, say, ADHD or autism, they're very comfortable
telling you about anything you ask them to talk about,
you know, nearly ad nauseam, right?
So and I, you know, it's I'm, I say this with great love, it's actually a better process

(30:00):
for them to have that chat.
So in one of my areas, I've actually now got an interactive oral, so they do two component
parts of a report, and then they come in for a chat.
It's a 10 minute chat.
It's a conversation a little bit like the one we're having today.
And if I asked them, you know, can you tell me this?

(30:21):
And they can't, they're, they're allowed to bring any piece of paper they'd like to, but
no screens.
Right.
So not a phone.
All the books or whatever.
Yeah.
So you can have a book.
You can lay it all out.
You're not allowed to start with a presentation.
Sometimes people go, I've prepared.
I get, no, no, put that down.
Use it as notes.
We're going to have a conversation because this is how the workplace works.

(30:42):
And what we found doing those was really interesting.
The students who possibly would have done well were able to be exemplary in the sessions;
they really knew their stuff.
They were not going to be able to write a paper that scored a hundred, but they could
do a conversation that scored a hundred.

(31:03):
No problems.
And the students that were struggling, not one of them, when I looked at them and said,
I think we both know that you're struggling to answer this.
Every time the student went, uh-huh.
I said, so would you like me to ask you a different question, see if we can get to the
knowledge point in a different way.
But at the end of every conversation, after that time, when I sat down to mark, I knew

(31:28):
that those students knew how they'd gone.
And that's a big difference.
With 75 students, we had no remarks, no requests of, I thought that my
mark wasn't this or that.
In fact, a lot of students came back and said, I really enjoyed the process.
I felt I had the opportunity to really show you what I knew, myself and the tutor group.

(31:50):
And those that didn't do so well when I said to them, look, I think we both know that today
has been difficult.
They went, yeah.
I said, well, you know, and they were only being asked to have a conversation
about what they'd already submitted.
This wasn't.
Yeah, so you weren't asking them anything new?
Nothing new.
If you'd used the tools in the previous two assignments, then you should have been away

(32:14):
and running to be able to know what those were.
And of course, I just had both assignments open on my screen and was just sort of saying,
oh, you talked about this, can you tell me a bit more?
Every student spoke about their own work
and what they knew.
Yeah, it sounds like a really good process.

(32:37):
It's really interesting because, you know, you're in business and I'm just automatically
thinking about how it might translate to other disciplines and stuff.
It's a really interesting area of how we're going to have to evolve assessment in the
modern world.
It's really such a fascinating area.
Thank you so much for your time today, Lynn.
I really do appreciate it.

(32:58):
It's been great to have a chat and I'll catch up again a bit later.
Anytime, happy to chat AI until the cows come home, as they say, because it is ever evolving.
What I have found is that the last two years have challenged me as an educator.
What can you do differently?

(33:18):
How could you do that differently?
This is the new world order.
Let's get excited about that.
Let's play.
Let's explore.
Let's experiment, because I'm old enough to remember when mobile phones weren't around,
and, you know, I don't want to throw anybody else under an age bus.

(33:38):
Now I can't imagine not having my mobile phone.
Yeah, I can't imagine life without mobile phones, but I'm from the time of rotary
dial phones.
Right.
So therefore, imagine if you and I had sat back and said, I'm not having one of those
mobile phone things.
They're terrible.
And what that would mean for our lives.
So this is the same with AI.

(34:00):
Think about the fact now that everybody, the person who struggles most with literacy, can
now write.
The person who struggles most as neurodivergent can now engage.
They can practice.
We can have things that we never thought were possible to support education, 24/7.
And that's a new world order that is so exciting.

(34:22):
So rather than being terrified, it's time to say, let's look, let's play.
Let's be excited and also come back to the what's the work when we have a graduate.
What does this piece of paper with this imprint say they can do?
And let's make sure they can do it.
Let's help them to do it.
And let's help them to understand where they are as the human in the loop because that

(34:46):
changes everything.
That's a fabulous approach.
Thanks so much, Lynn.
And that is it for another episode of the Data Revolution podcast.
I'm Kate Carruthers.
Thank you so much for listening.
Please don't forget to give the show a nice review and a like on your podcast app of choice.
See you next time.