Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Fonz Mendoza (00:30):
Hello everybody and welcome to another great episode of my EdTech Life. Thank you so much for joining us on this wonderful day and, wherever it is that you're joining us from around the world, thank you, as always, for all of your support. We appreciate all the likes, the shares, the follows. Thank you so much for interacting with our content. As you know, we do what we do for you, to bring you some amazing conversations so our education space can continue to
(00:53):
grow and we continue to amplify many voices and many perspectives. So I wanna give a big shout out to our sponsors. I wanna give a shout out to Book Creator. Thank you so much for your support, Eduaide and Yellowdig as well, for believing in our mission of bringing you these conversations week in and week out.
(01:13):
So thank you for all that you do. And today I'm very excited to have a two-time guest. And you may be saying, well, Fonz, you already had a four-time guest, you've got two-time guests, and so on. Well, it's because that's the way the show works. It's like, sometimes, you know, I want to catch up with my previous guests, and especially now in the age of AI in education, and I want to get their perspectives so they can
(01:35):
bring in their expertise and, just, you know, amplify their voices and give them a platform to share their knowledge and their practices and their perspectives. So I would love to welcome to the show Jen Manly. Thank you, Jen, for joining me here today. How are you this evening?
I'm great.
Jen Manly (01:53):
I'm excited. I know you said two-time guest, but we're having a really different conversation, so I think it's going to be great.
Fonz Mendoza (01:59):
Yes, absolutely, and a very different conversation, for sure. But, Jen, for our guests that are watching right now, or listeners that may not be familiar with the first show that we did, as far as that topic is concerned, can you give us a little brief introduction of what your context is in the education space? Totally.
Jen Manly (02:19):
Yeah. So my name is Jen Manly. I have been in education for, I don't know, 13 years now, something like that. It feels like it has not been that long. I started as a middle school computer science teacher. I taught high school computer science and now I teach a course at the University of Maryland every semester. The current course I'm teaching is gender, race and computing,
(02:41):
so it's a really interesting class. My context in education: I create content to help teachers work less without sacrificing their effectiveness. I believe in keeping great teachers in the profession, and the way that I kind of attack that problem is by thinking about how we can apply productivity science to the work
(03:02):
that we do, and then also, like, setting boundaries around our time and viewing teaching as, you know, teachers as professionals. But my context for this episode and what we're going to be talking about is, I have been teaching about the ethics of AI as a computer science teacher. I've taught it to middle, high school and now college students for the last eight years.
(03:23):
Eight years, seven years, 2018. So about four years before ChatGPT was released to the general public. And something I'm really passionate about is helping educators use AI and view AI from, I would say, like, an ethical lens, but really more being critical about when we're using
(03:45):
it, understanding that it is a tool, but also that it's not a net neutral tool. So I'm excited to talk with you about it today.
Fonz Mendoza (03:53):
Yeah, and I'm really excited about it too as well, because one of the reasons that we're talking about it, just prior to recording the show, is really a particular post that really stuck out, and I know that we'll get into it because, just like you mentioned right now, you definitely share a lot of great content as to how to be very critical of AI and when to use it and when it has its place, and maybe some, you know, when
(04:15):
you just could do a basic Google search to, you know, do something like that as far as research or finding something out, and so we'll get into that. But I want to ask you: you know, I know that you've been doing this since 2018, as far as teaching computer science, the ethics of AI, working with middle school, high school and then, of course, higher ed. I want to ask you, let's go back to
(04:36):
November of 2022, when the news broke out. It's like, hey, ChatGPT is available. What was your initial reaction and your initial thoughts as you heard the news? And, of course, this is being released.
Jen Manly (04:50):
Yeah, so I would say my initial thoughts. Well, I guess I can't, like, pinpoint exactly when it was released, but I want to look at, like, the first two months right after. Something that was, I was really concerned. And the reason that I was really concerned was because it came out and there was this mass, um, accepting of it, especially
(05:14):
in the education space, without any context or consideration of a lot of the critical components of AI, um, that we in the computer science space have been talking about for years. And I think back to, you know, maybe like December, January, so December of 2022, January of 2023. And I'm watching ed tech experts, people who have been in
(05:37):
ed tech for a very long time, you know, starting to come out and publish books on using AI in education. And I just remember feeling like, I have been teaching this for four years at this point and I would not consider myself an expert. You know, I'm not somebody who is at, you know, at the time, I now feel pretty confident in my own prompt engineering, right, but at that time I was not using it
(06:03):
and programming it in the way that, you know, all of these experts at Google, Amazon, like people in the tech space, had been using AI for much longer than that. Lots of people had been very critical, and so it was surprising and also concerning, when ChatGPT was released publicly, that in the education space particularly, it came off
(06:30):
as a mass acceptance. And to me it was surprising because I don't feel like that is the energy that we have for most things in education. Like, most things in education, it takes us time to fully accept, right? Like, I think about, I was on a national curriculum writing team. We wrote that curriculum for a year. We then piloted it for a year with teachers in a classroom
(06:54):
before we then released an updated version of that curriculum, you know, for the masses. And so I think it's a mix of concern and surprise, and then also understanding that this is something our students immediately had access to, and so what are you doing to make sure that students are still
(07:19):
understanding how to use it responsibly and that it's not detracting from their overall education experience? I know that was a lot.
Fonz Mendoza (07:27):
No, no, no, it was actually perfect, because it was actually very similar views and, I don't know, it's very weird. And I had Rob Nelson on the show a couple of episodes back and we're talking a little bit about the AI Fight Club, where you've got really two sides. You've got those that are all in and gung-ho, and then some that are a little bit more, you know, cautious.
(07:48):
Well, maybe I consider myself more kind of trying to be in the middle, obviously, because I love to bring both sides of the conversation to the table. But then there's also the other side where it's like, no, no, no, like we're going too fast, which is something that I truly believe in too as well, that this whole move fast and break things doesn't really work. And, you know, we've seen a lot of things that have kind of
(08:09):
failed. We've seen some things that, you know, show some promise, but at the same time, it's like, who's really to decide? Like, yes, this is going to be very effective, and this is going to be the most, this is going to be the solution to education's problems. Because I always go back and I always say, well, you know, there really isn't anything new under the sun. I remember people feeling the same way about the Internet.
(08:30):
You know, people feeling the same way about iPads in the classroom and then Chromebooks, and these are going to be the things that are going to, you know, drive up test scores and are going to revolutionize education. And it just seems like, yes, at the very beginning there was a huge acceptance, which was very scary, because everybody just started jumping in and diving in and not being very cautious to the other side.
(08:52):
As far as, I always focus on the data privacy side. I always focus on, do parents know? I know that parents may be familiar with this, but do parents know that this is being used in schools? And this needs to go past just the tech form that you sign at
(09:12):
the beginning of the year and say, hey, you know, you can... At a time with, you know, dealing with burnout, dealing with poor teacher retention, and then everybody jumps on the boat saying, yep, this is going to be it.
(09:33):
This is going to personalize the learning, this is going to figure out exactly what is wrong with each student and give them exactly what they need to be able to, you know, succeed. And that's really what teachers, we want our students to succeed, but at what cost? And is it really going to be effective? And so I know, with the last conversation that we had, we
(09:55):
were talking and dealing about how can teachers, you know, fight burnout? And you talk about agile. You know, what is it? Agile? What's the word? It's okay because we can cut this. Yeah, what is it?
Jen Manly (10:08):
It's called Scrum, but it's like project management. Yeah, okay, perfect, let me go back into that.
Fonz Mendoza (10:12):
I know in our previous episode we talked about, you know, project management, like kind of like Scrum Masters, you know, using agile to do that. And so what I see here is people that are really saying, like, hey, we can just easily put a student on a computer, it's going to figure out exactly what they need and we can all take it from there, and that's it.
(10:34):
And then, of course, now we've got the other side, like, no, no, we need that human connection. We need all of this human connectivity, as far as making sure that the teacher is still present, they're active, they're engaging students. So we're seeing so many different perspectives. But I want to ask you, because I know the previous show we talked about, you know, project management, and it just seems
(10:56):
like, hey, this product, it's going to help you just manage your classroom and it's going to manage your workload, and you really just come in and you need a worksheet. Hey, just prompt it and I can make you a thousand worksheets in 30 seconds. So I want to ask you about that. Now, in your perspective and in your classroom experience, and maybe what you have seen, you know, both at the middle school,
(11:19):
high school and even higher ed level, what is it that you're seeing now as far as teachers' acceptance of this? Is it really making their lives a lot easier? Is it really making them productive?
Jen Manly (11:32):
Yeah, I think it's a really good question, and I actually spoke about this at iTech Iowa in November of last year, October of last year, and one of the things that I think, like, a good first question for teachers is: is this actually going to make you faster? Right, like, there are certain tasks that teachers are
(11:56):
outsourcing to AI where it is faster to outsource it to AI and the product that AI is giving you is something that is going to be, you know, usable in your classroom, that's going to make sense for your students. So a good example of this is, like, creating a rubric. Right, you have created an assignment. You want it to create a rubric for you that you're going to use
(12:18):
to assess students. That is a very easy task for a Claude or ChatGPT, right, and we're not dealing with data privacy. So that might be a situation where a teacher, you know, I think about when I was teaching middle school, I had four preps, and for none of them did I have anybody else teaching them, right? So I had 230 students, four unique preps, two of them didn't
(12:42):
have curriculum, and I was a first and second year teacher. That was all on my own, because there were only three other teachers in the entire district that were teaching any of the classes that I was teaching, right. And I think back to where I was then, being able to outsource creating the rubric would have actually been extremely helpful. Right, being able to outsource potentially a worksheet, right,
(13:04):
like, I want it to make me a note catcher. That is something that is very helpful and is much quicker than how I could do it for myself.
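For readers who want to see what the rubric example above might look like in practice, here is a minimal sketch, assuming the OpenAI Python SDK; the model name, prompt wording, and sample assignment are illustrative placeholders, not anything described in the episode. Note that, as Jen points out, no student work or personal data goes into the prompt, only the assignment description.

```python
# Minimal sketch: outsourcing rubric creation to an LLM, per the example above.
# Assumes `pip install openai` and an OPENAI_API_KEY set in the environment.
# The model name, prompt wording, and assignment text are illustrative only.
from openai import OpenAI

client = OpenAI()

assignment = (
    "Students plan a budget for a week-long class project and justify "
    "each spending decision in a short written summary."
)

prompt = (
    "Create a four-criteria rubric with levels Exceeds, Meets, Approaching, "
    "and Beginning for this middle school assignment. Return it as a table.\n\n"
    f"Assignment: {assignment}"
)

# No student work or personal data is sent -- only the assignment description.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```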
The challenge is that a lot of teachers, I guess I shouldn't say a lot, some teachers are looking at AI and they're saying, well, if it can help me with this thing, then I want to use
(13:24):
it for everything. And that is not, like, A, and this is, I think, the post that you saw from me, right, where I was saying don't use AI for things that you can Google, and there's lots of reasons for that. But to me, the number one reason is it's actually less efficient. Like, if you're trying to save time, but maybe you're not great at prompt engineering, you're going to go back and forth with
(13:47):
ChatGPT or Claude or whatever, you know, system you're going to use. Let's actually be critical of how much time using this tool is taking us, and maybe there are things, and certainly for teachers there are a lot of things, that you can do faster yourself. Right, you can do faster because you already have a template that you've used for other units and you're just going to reuse
(14:10):
it, right. And so I think, like, that's number one: there are absolutely things that AI can help teachers be more productive, help teachers be more efficient, do it more quickly, and then there are other things where it's less efficient to use AI. It's not giving you the output that you want.
(14:31):
It's not creating output that's friendly for kids or, like you talked about, and this is actually a big concern of mine, it doesn't actually
(14:57):
consider the fact that we're, you know, over-relying on it.
Fonz Mendoza (15:02):
You mentioned something like, you know, if I can use it for this, well then that means I can use it for this and this and this. And, you know, essentially the way that I saw AI, and playing around with it even before the release of 2022, with, you know, there was Writer, there was Jarvis, there were, you know, other programs and so on. To me they seemed more like they were just, like you
(15:25):
mentioned, productivity tools and tools that can help you kind of do some, like, copy, you know, make it a little bit faster, and so on, especially if you're doing marketing. But it just seems like there's always something in education that we kind of try and just put those, you know, square pegs in the round holes of education and just make it fit in. Like, we
(15:46):
just got to do it because it's going to help out, because if it helps out on the outside, it's definitely going to help me out with everything here. And I do agree with you that there are some things that it will definitely make things a lot easier, possibly like some things that you may need to translate, some things that you would just need to... You know, especially with the translation portion,
(16:06):
I believe that that is definitely very helpful. Or, I know I had Paul Matthews here on the show stating just different reading levels or Lexile levels, just with the same passages and things of that sort. So those things. But just to want to do everything on it, it can be a lot more waste of time, especially, like you mentioned, if you are not putting in the right input to give you that output that you want. You can spend too many minutes or too many hours just trying to
(16:28):
get certain things. But then I know that there are platforms that already pre-prompt for you, per se, and they have a myriad, a plethora of tools that are there that you just click and say, hey, I need a rubric, and I just tell it, like, this is what it's going to do, and then it's going to go ahead and produce it for me, and of course it's not going to be for free. Most of these things, they have the freemium, and so those are
(16:50):
some of the things, too. That, talking about the adoption within many districts, is something that, to me, also kind of worries me and concerns me, because with mass adoption and not knowing really where the industry may be going, or that uncertainty that, as ChatGPT continues to change, to grow, it definitely wants to be more
(17:13):
profitable. So then, as those platforms that connect to those APIs, then those prices start going up too as well. And then, of course, accessibility to teachers, to districts, to schools. You know, now you've got, you know, upper echelon districts that have access to this, and then, of course, smaller districts may be out of luck. But also the changes in that information, you know, as far as
(17:37):
the knowledge cutoff date, like we were talking about. So going to Google and maybe finding something that is, you know, right on par with, you know, 2025, the news, or something that's most recent, makes more sense than to just say, hey, type it into Claude and type it into ChatGPT and tell me what it gives you. And even some of these platforms, in their privacy or
(17:57):
terms of service, it'll say knowledge cutoff date 2023. And I believe it was July 2023 for a lot of them. So we're not actually getting, you know... Or is it, when they say they can search the web? But is it really?
Jen Manly (18:11):
searching the web.
Fonz Mendoza (18:12):
Yeah, so I want to ask you now on that, because I know you talk a little bit about the ethics and the data privacy with your students. So I want to ask you, as far as some of these pitfalls, you know, how is it that you address this with your students as you're teaching them about AI?
Jen Manly (18:32):
Yeah, totally. So, okay, so I guess I'll go through how I actually teach this with students. So I'm really fortunate that I now teach at a university that allows us to have these types of conversations, right, these critical tech conversations. So the number one way, the place I start from, is that AI
(18:54):
is not a net neutral tool. Right, like, I think sometimes we look at it, especially through the lens of, you know, well, it's a computer that's making decisions, and so it's neutral, because computers can't be biased. And it's like, well, the people who program them have implicit biases that they may not even be
(19:16):
aware of, right. And so no tool that is programmed, like AI, is net neutral or not biased. That's the starting point. And then the next piece of this is understanding all of the different ways that AI is
(19:38):
non-neutral, right. So, a couple of places that we can talk about this. The first is understanding the environmental impact of AI usage. Right, huge demand on water resources. So, ultimately, when we think about AI and the environmental
(20:09):
impacts of AI, we're talking about the data centers that are needed in order to run, you know, all of these different searches that we want to have. And when you think about it as one search, right, or one prompt, it's not that much. But the problem with generative AI is that it's not just you
(20:30):
using it, right. We've done this mass adoption of generative AI, and so you are part of this bigger usage that is contributing to increased water usage, which is a problem because lots of people lack access to clean water. That's contributing to lots of energy use, burning of fossil
(20:50):
fuels. These are not neutral things. And if you don't care about that, we can also talk about the hidden labor of AI, right. So there was this study that came out right after ChatGPT, not study, the article that came out from Forbes right after ChatGPT was released to the general public, and the way that ChatGPT trained out racism, misogyny, sexism, right,
(21:13):
all of these things that exist, because ChatGPT's knowledge base is the entire internet and the internet is problematic. They paid African workers $2 an hour to be exposed to these extremely traumatic and problematic things, and it's hidden labor, right? We think, well, AI is a computer, it's a robot, but it takes
(21:35):
people to be able to do that work. And so, you know, we can think about the environmental impacts. We can think about the hidden labor. We can also think about, you know, what groups are further marginalized by AI usage, right? So this is something that I talk about with students: using
(21:57):
AI for grading. I think using AI for grading is incredibly problematic because, number one, your K-12 students can't consent to their data being used. But also, when we think about biases and how they manifest, people are like, well, AI can't tell that this is coming from,
(22:17):
you know, a Black student or a Brown student or a female student. But there are ways that AI is biased against certain groups that are not explicit. So, for example, one of the stories I talk about is Amazon used to have a hiring algorithm that was secret. They didn't tell anybody they were using it until they decided
(22:37):
not to use it anymore. And the reason they decided not to use it anymore is because they found that the hiring algorithm was discriminating against female candidates, because the knowledge base for that hiring algorithm was successful Amazon engineers, who were predominantly men, right? So certain characteristics that were coming up in female resumes were not being accepted or seen as qualified, simply
(23:02):
because the knowledge base consisted mostly of men.
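To make that mechanism concrete, here is a small illustrative sketch with synthetic data only; it is not Amazon's actual system or dataset, just the general pattern Jen describes: a model trained on historical hiring decisions that disadvantaged one group learns to penalize a feature that merely correlates with that group.

```python
# Toy illustration of how skewed training data produces a biased model.
# Synthetic data only; this is not any real company's system, just the mechanism.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 0: years of experience (genuinely relevant to the job).
experience = rng.normal(5, 2, n)

# Feature 1: a proxy signal on the resume that says nothing about job
# performance but correlates with gender (e.g. a club or activity).
proxy = rng.binomial(1, 0.5, n)

# Historical "hired" labels mirror a workforce that was predominantly men:
# candidates carrying the proxy signal were hired less often, regardless of skill.
hired = ((experience + rng.normal(0, 1, n) > 5) & (rng.random(n) > proxy * 0.6)).astype(int)

model = LogisticRegression().fit(np.column_stack([experience, proxy]), hired)

# The model learns a strong negative weight on the proxy feature, so equally
# experienced candidates who carry that signal are scored as "less qualified".
print("experience weight:", round(model.coef_[0][0], 2))
print("proxy-feature weight:", round(model.coef_[0][1], 2))
```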
And so there's all of these ways that AI is non-neutral, but we don't talk about it, right? And I think that's the first piece, is understanding that you can make an informed decision
(23:22):
about when you're going to use AI personally, and if you really think about it, it's probably not as often as you're currently using it.
Fonz Mendoza (23:33):
And I want to highlight a couple of things, like you're talking about, especially that energy usage, because over the weekend I saw somebody post, and it was part of a thread where, you know, we're 2025 now, and they were, actually April 2025, and they just, they posted like, oh my gosh, I just found out about how much
(23:53):
energy is being used and so on. And of course, everybody using, with the Ghibli trends and the action figure trends and all of that, they don't think about those things. You know, it's just like, hey, I want to fit in, I want to do what everybody's doing, and we just follow suit and follow along. So I followed this post on this thread, and this is somebody that is very well known, but this was what they posted, and it
(24:17):
says, it says here, here's one thing that might help though: it uses massive amounts of energy, but compared to the energy we use on meat production, it is very small. But it actually has the ability to solve this problem, which gives me hope. Eating one less hamburger a day would have a far bigger impact than using AI less.
(24:40):
And I was thinking to myself, I was like, okay, very interesting, which, that sparked a huge conversation on LinkedIn and everybody was just, like, posting and everything, talking about this. And, you know, obviously it's coming more to the forefront as far as that is concerned, but there's still so much hype around it, where you just had, you know, companies
(25:02):
doing their big annual conference and showing, like, hey, here's our new AI library, and now we've got this and we've got that, and really all that hype covers all what we talked about. Because I remember seeing that 60 Minutes interview, too, about the data workers and getting paid that very, very
(25:22):
small wage of $2 a day and the horrific things that they were seeing. It just really blew my mind. The other thing that I wanted to talk about, too, is as far as the use of AI, we were talking a little bit about, you talked about grading. Obviously, we'll talk about some plagiarism detectors, as we know that they don't work.
(25:45):
But I saw a recent post too as well, on TikTok, by somebody that I follow, stating that, for example, Turnitin, we all know, is a plagiarism detection platform, that now they're kind of relabeling or rebranding in a way, because I guess there's so much hype that these detectors don't work. Now they're saying, oh, we are an integrity checker. So now
(26:08):
it's kind of like, well, let's flip it around. And to me I'm thinking, you're still doing the exact same thing. Now you're just relabeling for profit, and that's really what it is, and that's the way that I see it. So what are your thoughts on plagiarism detectors? And maybe, now that you hear a little bit about, because I've
(26:29):
not just heard it from Turnitin, but there's some other articles that have come up from higher ed stating, oh, it's the integrity of education, so we can still use AI, but we want to show them how to use it with integrity.
Jen Manly (26:48):
So I know it's a two-part question. There might be a lot there, but go for it. Yeah, let's do it. So, you know, let's talk about original Turnitin, right, the original Turnitin. That's not doing AI detection, that's just doing plagiarism detection. I think it would be an irresponsible use of Turnitin to just base your interpretation of plagiarism just by looking at the percent, right. Like, the percent is really, like,
(27:08):
if Turnitin flags it, that is you going in and looking at what's flagged, right. That's really an indicator to you as an educator: something about this is a little bit off. And maybe it's that the student didn't cite their sources correctly, maybe it's that they pulled entire quotes and, like, that's not really great, but they did cite it, right. Like, it's really a flag for you to actually then go in and look
(27:32):
deeply at every paper. It's a helper tool, right. Like, if you have, and like I said, when I taught middle school I had 230 students. Now teaching college, I often have 120, 130 students a class. Like, I don't have the ability to read every single paper that a student turns in through the lens of, is this plagiarism or not? So Turnitin for plagiarism, we'll talk about AI separately, but for plagiarism it is a helpful
(27:56):
tool insofar as this is a flag for me to then be able to say, this paper looks questionable, let me look at this a little bit more deeply, right? I still think it would be irresponsible to accuse a student of plagiarism without doing that deep work yourself, right? And so the problem with AI plagiarism detectors is that,
(28:19):
since the advent, they have said, there's been that disclaimer that, like, we're not 100% accurate at detecting AI. And what they have found in researching a lot of these AI detectors is that they tend to be biased. They tend to ping more regularly, incorrectly, for
(28:40):
neurodivergent students and for students whose first language is not English, right? Like, there's a big thing that came out recently that was talking about how the em dash is an indicator of AI. I've always used the em dash, I love the em dash, it's one of my favorite pieces of punctuation. And so for me, again, as somebody who's understood AI, I would never use an AI detector as an
(29:03):
indicator of anything, because they say on the front end, this actually is not accurate. So we know that, and we know again that it's biased towards certain student groups. What I think is interesting for educators is that when you start receiving student work that is written with AI, you can see it. Like, there are certain qualities that I pick up
(29:26):
on, where this doesn't necessarily sound like something a student already turned in that they wrote in class. There's an excessive use of bullet points, and all of the bullet points are formatted exactly the same way, right, with a few different words switched out. We were talking before we
(29:48):
started recording about how sources, right, like, a good place to check, if you're like, I kind of think that this might be AI, is checking to see if the sources exist, because a lot of times they don't, right. They're made up sources, made up resources, and so that's
(30:08):
like a good first step. But I think, you know, ultimately, especially as these tools get better, they get more human. Students learn how to prompt, right. There's no guarantee that it's not written by AI if you give them an assignment and they can go do it at home, or they can do it on a computer where they have access to those tools. And so,
(30:31):
for me, like, even at the college level, I look at having conversations around AI usage with students as just that, as starts of conversations, right? So if I suspect that a student uses AI, I'll say that. I'll say, hey, this doesn't sound like you, potentially, right, these cards sound like they were written by AI, right? And give them an opportunity to be upfront about
(30:53):
it and to be honest. Like, that's how I approach a lot of conversations about plagiarism anyway, right, this place of, what caused the student to need to cheat? Right, because a lot of times, students don't want to cheat, but they're short on time, or they don't understand what they're doing, and they make a decision to plagiarize or to use AI in ways that are not necessarily acceptable.
(31:17):
But I think, like, as we're navigating this, it really, we have to stop looking at it as, we're trying to find when students are using it and it's like a gotcha moment. It's really opening a conversation, especially if you are an educator that is using AI. Right, if you're using AI in certain components of your
(31:39):
classroom, but you expect your students not to be using it at all, it's a little bit of a disconnect. So, and then I think the last thing I'll say, and then I would love, I think this is a bigger conversation, right, but, like, I think it forces us to be creative about how we design assignments and assessments so that it's either not possible to
(32:02):
use AI or it's disadvantageous for students to use AI, that it actually is better for them to not use it at all.
Fonz Mendoza (32:10):
Yeah, and you hit on a lot of great things there. The one that I really want to highlight too, like you mentioned, the original intent of, like, Turnitin, and, like you said, it's a flag. It's for me, as a teacher, to go and say, okay, I'm using this as a tool to help me give proper feedback. But I think it just goes back now to just that over-reliance of, well,
(32:31):
this is what it gave me. You cheated, you know, that's it. I'm just going to give you the zero, because this is all AI. And it's just amazing how quickly that just turned into that. And now one of the things that I see, too, is just a lot of platforms stating, hey, let's work on your students' writing, so, as they input their own writing, in their own words,
(32:52):
this platform is going to give them that feedback immediately that they need. And I'm thinking, okay. So I know that, as a teacher, you know, it can be difficult to give feedback to maybe up to 30 kids, and maybe in some schools, like, you know, just depending on the class size, but there still has to be that human component of at least
(33:12):
checking it and saying, okay, this is the feedback that it's giving. But now let me go and do a once-through. And because that over-reliance, that is what scares me, and I've said this from the very beginning too, because as soon as this came in, teachers using it, and then all of a sudden it's like, hey, like this output, like the very first output, it's like, oh, this is truth, this is gospel.
(33:34):
Here you go, guys, here's your handout. And especially, like, with history or science, and I'm thinking, well, whose history are you sharing? You know, whose science are you sharing? Because it's that confidence that this platform has my best interest in mind, and all I got to do is just plug in the standard, and it has all the information from the Internet
(33:55):
here, and it's going to be an accurate output that I'm going to be able to share, and it's not. But it's really scary that a lot of teachers really see that platforms that have these pre-prompts already, that you go in there, and we were mentioning and talking about it in the pre-chat, you know, thinking that this is all completely true,
(34:16):
when their knowledge cutoff date stops at 2023, and maybe it might've moved up a couple of months, just depending on OpenAI. But people think, like, hey, this is truth. And so my question always to those platforms is, well, since you are up to date, you know, my state got new standards. Does your platform have new standards?
(34:38):
And they're kind of like, well, yeah, I mean, you can put those in and everything should be fine. I was like, no, it's not. I need to know that you are giving proper information, because then you're going to be teaching or creating something that the students don't need to learn, or maybe it's not presented that way when it comes time to learning it and, of course, for state testing.
(35:01):
Then another thing that I saw is, when ChatGPT was doing the images, people were, like, they showed a picture of a water cycle and somebody said, oh wow, this is the most amazing thing, look at what ChatGPT can do. And that whole thing was wrong. The water cycle, you know, image was correct, incorrect. And somebody said, well, it's just a little incorrect.
(35:22):
I was like, no, no, there's no little incorrect. It's either incorrect or it's not. And this one was incorrect too. And I'm thinking to myself, you're posting this with just, here you go, guys, look what I just created. I'm thinking to myself, that's just an example of what I mean, not checking that output. And, like you said, many times, it's going to take you a lot longer to just say, okay, look, my book
(35:44):
already has this, or guess what, my content coordinator already created this handout and all I need to do is just tweak it and, you know, make it a little bit more engaging, add a hook to it, and I'm done. But it's almost something like, oh, I did it. You know, OpenAI or ChatGPT or this platform did it for me and I'm good to go and I'm ready. And that's kind of just so scary
(36:07):
for me, you know. So let's talk about AI literacy for all, all right. So I want to ask you, you know, everybody talks about AI literacy. We've been talking about, the, well, the Internet, interwebs, talking about AI literacy, and they talk about it so, so much. Many people, they're still considered to be behind, and if you're not using
(36:28):
AI in your classroom, you're doing a disservice to your kids. So I want to ask you, as a practitioner, as an educator in the classroom, do you feel that your, maybe, small use of AI is going to be hurting your kids in their future jobs? Or what are your thoughts on that?
Jen Manly (36:47):
Yeah. So I actually, like, I think it's a really interesting conversation, because I think the way that most people are interpreting AI literacy is that every student everywhere should be putting things in to a computer, should be, like, going in and prompting ChatGPT or Claude or the edu-approved,
(37:09):
you know, version of it, and I think, like, that's one of the biggest mistakes. So I think about how we teach AI and computer science, and, you know, for anybody who's watching and you're like, how do I bring AI into my classroom? Computer science teachers and computer science curriculum providers have been teaching about AI since before the advent
(37:31):
of mass use of generative AI, and there are so many good unplugged lessons that help students to understand the ethics of it, for sure, but also the tech of it. Understanding, like, how does AI actually work? What are you doing when you put a prompt in? Where is that information coming from? Right? Like, what is the basic version of an LLM, a large language
(37:54):
model? How does that actually, like, generate information? Which I think is a really good piece of background for everybody, right?
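For anyone who wants the gist of that "basic version of an LLM" question, here is a deliberately tiny sketch of the generate-one-token-at-a-time idea. Real large language models use trained neural networks over enormous text corpora, not a word-pair count like this, but the sampling loop has the same shape: predict the next token from what came before, append it, repeat.

```python
# A deliberately tiny sketch of the idea behind an LLM: predict the next token
# from what came before, append it, and repeat. Real models use trained neural
# networks; this toy version just counts which word tends to follow which.
import random
from collections import defaultdict

training_text = (
    "the water cycle moves water from the ocean to the sky and "
    "from the sky back to the ocean"
)

# "Train": count word pairs (a bigram table).
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1

def next_word(current):
    """Sample the next word in proportion to how often it followed `current`."""
    options = counts[current]
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

# "Generate": start from a prompt word and keep predicting one word at a time.
word = "the"
output = [word]
for _ in range(10):
    if word not in counts:
        break
    word = next_word(word)
    output.append(word)

print(" ".join(output))
```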
Like, when we talk about AI literacy, I think a lot of people are taking that to mean students prompting AI, and I just think, like, number one, for certain age groups, that's super irresponsible.
(38:15):
And number two, it's not actually AI literacy, right? Like, AI literacy is understanding the how, understanding the drawbacks of it, right, potentially when you should and shouldn't use it, and then getting to the point where we now can write our prompts. We now can use it for, you know, research, or using or creating outlines, or however you want to use it, helping you study, right,
(38:38):
whatever tool or way you're using AI with your student, and so, like, I think that's a major missing component that, again, computer science teachers have been doing for a very long time, and there is a lot of great free curriculum that exists that
(39:01):
covers that, that covers AI unplugged, that covers the why, covers the how, covers these ethical considerations that we should have. Okay, so that's number one. Number two, again, I think we need to be very thoughtful about when we're actually having students use AI tools, right. So, for example, if I'm teaching, you know, college
(39:24):
students, they're adults, right. Like, they are of age, they can consent to their data being used, they can consent to their work being input into a GPT or whatever else, and so I feel really good about having structured ways for them to use AI and understand what AI can and cannot do, right. And I
(39:49):
might feel OK about that with high school students, right, but even then, they're children, they can't consent, they don't fully understand what information they're sharing, how that information is being used. I think a big question for me when I think about a lot of these ed tech platforms is, okay, yeah, you can grade my students' work, but you're using that work to further build your
(40:12):
product, to develop your product, to make more money. My kids can't consent to that, right. And so, you know, I think about, let's say, we want students to start learning how to prompt, to start practicing how to prompt. That might be something where students are writing those prompts and the teacher is modeling it. Right, it's on the board, on the, you know, whatever board you're using, and you, as the teacher, are actually going
(40:35):
through how to create prompts, how to use that to get information, ways that you might, you know, tweak it. Which, again, like, another positive of that is that we now have limit, we're now limiting the number of prompts that we're putting in, because we're looking at it through a full class length,
(40:57):
versus 30 kids in a class all inputting into a machine at the same time. So I think, like, it's widening our understanding of what AI literacy is. And also, you know, and here's the thing, like, this is not new to education, right. In math, we teach kids how to do things on paper before we let them use a calculator.
(41:19):
We teach kids to write a draft, you know, handwritten, before they start learning how to put it into a computer. Right, they do an outline before they start writing the full draft. Way back when I took my first coding class, Mr Rose, shout out to Fearless Day. Apparently he's still a teacher. But Mr Rose would not let me get on a computer to write Java
(41:41):
unless I could handwrite it and not have errors. So we see this a lot in education, where you have to understand the basics before you can use the tool. And I think a lot of education is looking at it backwards. We're saying, put the kids on the tool and that is AI literacy. That's not AI literacy. That's using a tool without fully understanding how or why
(42:04):
we're using it. You know.
Fonz Mendoza (42:06):
No, I agree with
you a hundred percent, because
it's something that I've seenfrom friends, you know, and it's
just because it's like theydon't know what they don't know
yet and sometimes maybe they do,but it's just like you know
what, it doesn't matter.
This makes my job a lot easierand they're just willing to just
(42:28):
, you know, kind of just takethat, let that take over, and
say, hey, I'm being moreproductive, I'm finishing this,
I'm doing this, but later onit's like they don't see the
cost or the dangers of thatuntil it's too late.
And for me, it's just reallybeing very cautious with how
you're using it, when you'reusing it and, like you said, I
always feel like there's a timeand a place to be able to use
that tool effectively.
So, kind of, as we startwrapping up, I want to ask you
(42:50):
something that kind of is a nicesegue into that, because we
talked about how AI can becomean over-reliance for even
students as well, like throughyour experience and being able
to see even that in higher ed,and just kind of questioning
things and saying like, hey, youknow, and giving them the
opportunity to say, ok, let mego ahead and rewrite, or giving
them a second chance.
But I want to ask you, in yourexperience and maybe your
(43:12):
thought, with so many tools thatare out there, you know, how
can educators still encouragethe students to maintain agency
in their learning process andreally use those critical
thinking skills when they caneasily just jump on?
Like I said, I was looking fora citation or just some
reference and somebody just said, well, put it in and it'll tell
(43:34):
you how to do it.
I was like, but it's notaccurate.
So how would?
What are some suggestions fromyou and your experience on how
students can still maintaintheir critical thinking skills
and not be over-reliant on thetech?
Jen Manly (43:46):
Yeah. So I think, and you know, again, when I think a lot about AI, I think about a lot of things related to AI, and it goes back to what we already do with good teaching, right, which is, even before AI came onto the scene, right, we were really critical about how many different pieces
(44:09):
of tech we were introducing to students, right, and we were being thoughtful about, like, okay, well, if I, you know, we were talking about the citations, right, if I were going to introduce a citation machine, which existed before mass use of generative AI, right, where it would just, you put in your information, output a nice little formatted bibliography for you, right, I'm going to introduce my students to one tool that
(44:34):
they can use for that, right. And before they use that tool, I'm going to teach them how to do it manually. Right, they need to understand the why before they're using the tech that's going to help them. And so I think, teachers thinking about, well, how do we give students agency but still build their critical thinking? One of my favorite people to follow who has been kind of
(44:56):
critical about AI is Sinead Bond. I don't know if you know her. Hey, Mrs Bond. She's very active on Twitter, but one of the things she talks about is, you know, I'm just going to have more handwritten assignments, right. Like, she's an English teacher and she's like, instead of putting students in a position where, potentially, they're going to use an AI tool
(45:20):
in a way that I don't want them to, I'm going to be very critical about how I structure assignments so that it's actually not advantageous for them to use AI. It's more advantageous for them to do it the way that, you know, that I want them to do it, which is either handwritten or, you know, they're just doing it on a Google Doc, right, like normal. So I think, for teachers, number one is understanding that
(45:43):
student agency existed before we had the mass advent of generative AI. I was teaching student agency. I was teaching teachers student agency and student voice and student choice in 2017, right, with no tech. Like, you can build in student agency totally on paper. Okay, so student agency and AI are not synonymous.
(46:07):
Certainly, AI seems to make student agency easier, and does in some ways, but you can create a classroom where students have voice and choice and control over their learning without any tech at all. Right, and so I think, like, that's number one, is that those things are not synonymous. And then number two is, like, modeling what it looks like to be thoughtful, right.
(46:28):
So when we're thinking about what tools we're going to introduce in our classroom, that when we introduce that tool, students already know how to do that task without the tool. The tool is going to add value, it's going to make it easier, it's going to make, you know, their output more robust, whatever it is, but you've been thoughtful about why you're introducing this particular tool, and it's very clear to the
(46:51):
students, right? They're not being given, here's 10 different tools you can choose from. You as the teacher, you as the adults in the room, have exercised that, you know, caution and thoughtfulness in order to make it easier for your students, because they don't, like, student agency doesn't mean that they can choose between 10 tools. They do not have the capacity to do that at this point.
(47:14):
That is a very advanced skill. That is still your job as a teacher. So I think, like, if you're choosing to bring in any tool into your classroom, knowing that part of your job is to be the curator of those tools, and that we are being very intentional about communicating why we're introducing a tool into our classroom, whether it's AI or not.
Fonz Mendoza (47:35):
I love it. I love it. Well, Jen, it's been an amazing, amazing chat. Thank you so much for just your insight and your experience that you brought into this conversation. Obviously, you know, being a middle school, high school and then higher ed teacher, and then your background in computer science. Thank you so much. I really appreciate it, and, you know, just a lot of gems, a lot
(47:55):
of great things to digest, and I know that our listeners will definitely find a little something there that they can sprinkle onto what they are already doing great. So please make sure, guys, all of you listening, make sure you follow Jen, especially on TikTok. She is huge on TikTok, so there's Strategic Classroom, at Strategic Classroom, so follow Jen right there. She'll share some great videos, time hacks, great things for a
(48:18):
lot of you as educators, just to help you and just be more productive and more efficient. And she, it's amazing: one hundred and thirty-nine thousand plus followers, five million likes, it's amazing. And she was a guest here on my EdTech Life. I feel honored to have a big TikTok star here on our show, and she's fantastic, and just the way that you see her, she's very
(48:39):
genuine, very authentic, and that's the way that, that's the content that you'll see. So nice. Yes, please make sure you follow her for sure. I promise you you're going to love everything that she puts out. So, Jen, thank you so much. But before we wrap up, Jen, I always love to end the show with the last three questions, so hopefully you're ready to go. So I want to ask you: as we know, every superhero has a weakness
(49:01):
or a pain point, and for Superman, kryptonite was that weakness and pain point. So I want to ask you, and since we're talking about AI, well, I'm going to flip it on that and just say, in the current state of education, what would you say is your current AI kryptonite?
Jen Manly (49:19):
I think it's grading. I have a lot of big feelings when it comes to using AI for grading, and I sometimes get a little bit too invested. So I will say using AI for grading is the thing that kind of sets me off. I get a little bit passionate.
Fonz Mendoza (49:40):
There you go, great answer, all right. Question number two: if you could have a billboard with anything on it.
Jen Manly (49:49):
What would it be and why? The best quote that I think every teacher needs to hear is that your worth is not defined by your productivity. So a lot of times, you know, we are just trying to get it all done, and it can make us feel like we're not good teachers if we don't get through everything, and who you are as a teacher is
(50:11):
so much more than what you get done or what you don't get done. And so I actually, for a very long time, I had that as my phone background, because I just needed to be reminded of it all the time.
Fonz Mendoza (50:20):
Nice, excellent message. Love it. All right. And the last question: if you can trade places with one person for a single day, who would that be and why?
Jen Manly (50:31):
I'm going to pick my almost four-year-old, Jack. He is three and a half, almost four, and I just would love to go back to, I watch him play and just get so excited about everything. Right, he's very into magnet tiles and baseball right now, and I wish I could go back to that, like, constant state of discovery.
(50:53):
It's very cool to watch, and I bet it's very fun to be in.
Fonz Mendoza (50:57):
So I love that. That is a great answer. Well, Jen, thank you so much again for just sharing your voice here. As you know, we do what we do for all our amazing guests that listen, just to amplify creator, educator, professional voices, and your voice is definitely a great voice within our space. Please make sure you visit our website at myedtechlife, where
(51:23):
you can check out this amazing episode and the other 322 episodes. So we're excited. It's five years in the making, a lot of great episodes, and I promise you you're definitely going to find some little nuggets there that you can sprinkle on to what you're already doing great. So thank you so much. And if you're not following us on socials, what are you waiting for? Please make sure that you follow us on all major social
(51:45):
platforms at my EdTech Life, and if you haven't done so yet, please subscribe to our YouTube channel, give us a thumbs up and share the content, because we would love for all those wonderful AI algorithms to get our content out to many more