
October 16, 2024 | 21 mins

Dr. Laina Lockett, the STEM Education Specialist in the Stearns Center for Teaching and Learning, joins us to talk about the “big thing” right now in education: artificial intelligence-based text generators. We explore actionable strategies for continuing impactful and engaging teaching in this new educational context. 

Resources:
Your host Dr. Rachel Yoho's publication on inclusive teaching now that we have AI text generators: Yoho, R. (2023). No, Let's Not Go Back to Handwritten Activities: Inclusive Teaching Strategies in the Context of ChatGPT. The National Teaching & Learning Forum, 32(6), 1-4. https://onlinelibrary.wiley.com/doi/pdf/10.1002/ntlf.30379
Stearns Center for Teaching and Learning at George Mason University recommendations for teaching with AI text generators: https://stearnscenter.gmu.edu/knowledge-center/ai-text-generators/
Inside Higher Ed article mentioned in the episode: https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/07/23/new-report-finds-recent-grads-want-ai-be

Check out our website!


Episode Transcript

Rachel (00:08):
Hello and welcome to the Keystone Concepts in Teaching Podcast, a higher education podcast from the Stearns Center for Teaching and Learning, where we share impactful and evidence-based teaching practices to support all students and faculty. I'm your host, Rachel Yoho. In this episode, we're going to be discussing how we keep teaching with meaning and impact and to support all students and

(00:30):
faculty, of course, as well, with the emergence of these new artificial intelligence text generators like ChatGPT. I'm joined by this episode's guest, Dr. Laina Lockett. Dr. Lockett has teaching experience at the college level from being an adjunct instructor at the Pratt Institute in Brooklyn and

(00:50):
being a teaching assistant at Rutgers. Dr. Lockett served as a graduate fellow with some summer research programs at Rutgers, where she led workshops about writing and presentation skills. She also has experience working with faculty and teaching assistants from the Rutgers Academy for the Scholarship of Teaching and Learning. And Dr. Lockett has a PhD in Ecology and Evolution from Rutgers

(01:12):
University and a Master's in Environmental Science from Towson University. So thank you so much for joining us for this episode, Dr. Lockett.

Laina (01:21):
Thank you so much for allowing me to be here today.

Rachel (01:25):
So as we get started, I know we have lots of thoughts, there's lots of concerns out there, lots of people writing articles and all these things about ChatGPT and all of these other AI text generators. But can you tell us a little bit about some of the major concerns instructors have right now about the emergence of these new AI text generators like ChatGPT?

Laina (01:48):
Of course. I personally think that there may be three main concerns that faculty seem to have, and you might guess that the number one concern might be cheating. I think also some faculty, or a good number of faculty, are concerned that relying too heavily on these types of technologies will lead to less skilled students.

(02:10):
And I think a third concern, actually, is that we are seeing, and continuing to see, the spread of misinformation.

Rachel (02:17):
These are excellent points, and I think they encapsulate some of the major concerns really well. And I agree with you that a lot of the initial concerns are around things like cheating: that this isn't really doing the assignment, or this isn't really doing the activity or learning in the field or profession.

(02:38):
And so let's talk a little bit about some of the quick, sort of gut reactions that many instructors might be having right now about how to essentially "ChatGPT-proof" their assignments. So can you tell me a little bit more about this, related to some of these themes you were talking about?

Laina (02:56):
Certainly. I actually had to chuckle a little bit, because I think that gut instinct to ChatGPT-proof or AI-proof your assignment is not necessarily going to go the way that a faculty member may want it to go. So, for example, if we're thinking about the idea of cheating, it might feel like we should lean on those AI detection software systems that are out there, but that's

(03:17):
actually currently not a great, reliable resource to use. I've actually tried it myself. And when I was, you know, putting in different text examples, it was just as likely to tell me that what I actually wrote was written by AI and what was written by AI was actually written by a human. So you don't want to rely too heavily on those types of

(03:38):
things.

Rachel (03:39):
Yeah, those don't sound good. As we're thinking about it, we certainly don't want to be in those types of academic integrity hearings if it's not very reliable, let's say. And so what else can we do? What else do we want to be thinking about right now?

Laina (03:54):
I think another thing that we could be thinking about is being transparent. So if we're afraid that our students might be less skilled moving forward, we might think about how we can give them information about why it's important to learn the skills that we're teaching them. And so, I'm a fan of Duolingo; I have lots of different languages

(04:17):
open, but I have not gotten very far in my learning tree for many of them. So if I were to ask AI to write an essay in Swahili, where I've only done, you know, several lessons, I wouldn't be able to actually check the accuracy of the AI's output. And so I think that applies whether it's languages or different content.

(04:38):
You have to have foundational knowledge, because you're going to need to vet these types of software, because especially when we're thinking about text generation, it's going off of what's the most probable next word, not what's the accurate next word.
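To make that idea concrete, here is a minimal, hypothetical Python sketch (not something discussed in the episode) of what "most probable next word" means: a language model scores candidate words and samples from those scores, so fluent output is not necessarily accurate output. The words and probabilities below are invented purely for illustration.

```python
import random

def next_word(candidates):
    """Sample the next word from a model's (hypothetical) probability estimates."""
    words = list(candidates.keys())
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

# Invented scores a model might assign after the prompt "The largest moon of Saturn is".
# Each option sounds plausible, but only "Titan" is actually Saturn's largest moon.
candidates = {"Titan": 0.7, "Europa": 0.2, "Triton": 0.1}
print(next_word(candidates))  # usually right, sometimes a confident-sounding wrong answer
```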

Rachel (04:52):
That's an interesting point, because one of the things that I've seen, one of the recommendations we might consider, would be having students do basically side-by-side assignments. So one side, or the first part, would be them doing the activity, say, by hand. Whatever by hand looks like in their field or discipline, whether that's written out, whether that's calculations,

(05:13):
programming, whatever that might be. And the second would be having ChatGPT or another AI text generator do the assignment and then comparing them using some of those skills. So is that the type of activity you're talking about, or what else might we be considering here?

Laina (05:30):
So I think that's a great assignment that you could do with your students. I think it can be a great basis for having discussions. I think there are some other ways that we can incorporate AI into things that we may already be doing. So something like a think-pair-share activity, where you have students answer a question independently, then they work with a partner or a small group and reanswer the question.

(05:52):
You could think about, during that deliberation step, having students turn to AI to try to help them if they're, you know, having this tie or disagreement of ideas. That could be a place. I also think a place that I've used it personally is in an assignment that I already had that was scaffolded. So when I've been working as an adjunct at Pratt, I teach STEM

(06:13):
writing courses. And so part of that writing process is peer review. And that's always been an area that my students have struggled with, because they like being nice and no one wants to tell their peers, like, this could be better. Right. So I still have them do the peer review, and we usually focus on a particular concept. So maybe we're thinking about how to be more concise.

(06:35):
But then, after they do the initial review, I have them use AI to also give a similar comparison on that same kind of area of focus, and have them assess what the AI says should be fixed. And then they can use that as part of that process to give their peer feedback. And so then they don't have to feel like the bad guy, but it also helps them think critically about analyzing writing as well.

Rachel (06:59):
That's really interesting, because when we think about this, I mean, teaching peer review is exceptionally hard. It doesn't sound like a thing that should be, but I think we probably need an episode in the future just talking about how to do meaningful peer review. But as we think about providing feedback, as we think about how we design the assignments, can we go back a little bit to

(07:21):
what you were talking about with scaffolding, and can you tell us just a little bit more about what that means to you in this context and what that could look like for our faculty, our instructors who are listening?

Laina (07:34):
Sure. So for me, scaffolding, I use that as a strategy for larger projects that students work on over the course of the semester. So instead of giving them a set of instructions at the beginning and hoping they do what I want them to do by the end of the semester, I break down the different steps of the project and have them turn those separate steps in.

(07:56):
And that gives them an opportunity to get feedback before that final part is submitted, the part that's going to be worth the larger portion of the grade. And so different steps might look different depending on the final product, but I do this kind of blended project where they do a scientific research paper, and then they turn a scientific concept from their paper into an art installation.

(08:20):
And so we have several steps where they work on research, so finding peer-reviewed articles to support their scientific ideas and questions that they have. We have several rounds of writing and thinking about, you know, outlining and building that out based off of the information that they found. Then we even have a draft of what you're going to create.

(08:43):
So they'll do a sketch. So a step in the scaffolding project doesn't always have to be a written thing that your student turns in. And then I also have them do a draft budget before they submit the final thing. So the final product is a grant proposal, and then they make the thing that they proposed, essentially.

Rachel (08:59):
This sounds like it translates really well across disciplines, because not only are you talking about the science side, the writing side, the art aspects, and creation there, but I think here what we're looking at is how scaffolding can be a great way to, unfortunately, "ChatGPT-proof," or really look at how we still teach with meaning and

(09:20):
with impact and supporting all of our students in the context of these AI text generators. Because when we don't wait until the end, or we don't have one big project, when we look at small deliverables, not only is that a great and very inclusive way to create a learning experience, but we're also looking at ways that we are essentially checking to see if this is our students' work as we

(09:43):
go along, and how we have these assignments build over time. So I think that's particularly compelling. And so, to build on that, as we're talking about some of the concerns or some of the gut reactions that we've been talking about, one of the things that often comes up is instructors who want to have students just do everything by handwriting now.

(10:04):
We're gonna set all the computers aside, put them back in the bags, not bring them to class. We're forgetting about technology. And so can you tell me a little bit about your reaction to that, or what we can do when we have that sort of initial gut reaction to ChatGPT or some of these other AI generators?

Laina (10:24):
I think it can be helpful to pause and do a little bit of self-reflection. So I think that there are places where doing handwritten assignments, or even something like an oral presentation, has a place. But I think that if we're just having students do those activities as a way to try to get around these new

(10:46):
technologies, that kind of misses the point. So when we're thinking about what we have our students do in our class, we should always be linking everything back to the learning outcomes and making our decisions based on those. And so, for example, in one of the classes I teach at George Mason, I want my students to practice oral communication as

(11:06):
scientists. I think it's really important that they be able to have a command of the concepts and be able to also relay them to a lay audience, because when they go to their careers, they're going to have to talk about scientific concepts. So I do have a space for that, but I don't do it because I'm trying to get around AI. I think if we're trying to make our classrooms inclusive of

(11:27):
everyone, we want to make sure that we're making choices that aren't going to disadvantage students just because we're fearful of a certain outcome.

Rachel (11:35):
I like that statement right there. And it's really about the fear. One of the things that we might be thinking about is how we are approaching our assignments and our activities. And that's really what we're talking about here: whether that's coming from a place of learning, or from a place of fear, like you were just mentioning, Laina. I think this is a great way, as we're thinking about some more

(11:57):
proactive and more inclusive strategies that we might be considering, to really consider what the motivation is. You know, are we wanting students to just look at pens or pencils just because we're afraid of the text generators? Because, well, unfortunately, I hate to tell everyone this, but you could do this stuff with the text generators and then

(12:18):
handwrite it in a lot of cases. But even so, is that the best way to do it? Is that inclusive? I mean, I can type a whole lot more in the same amount of time than I can handwrite on a page. And I personally don't want to go back to reading handwriting or trying to guess what people are writing anymore. As we've gotten away from that in teaching and grading, it's

(12:39):
been a great improvement. And so what are some of the other, say, more proactive and more inclusive teaching strategies we might be considering in this new educational context?

Laina (12:52):
So I think we've actually already talked about some of these things, because I think they're just foundational to teaching, such as thinking about incorporating scaffolded assignments, if we aren't already doing that. Again, I think being clear with expectations is going to be really helpful as we move forward with trying to be inclusive and also incorporating AI into our courses.

(13:14):
And I think also taking some time to update assignments, that might be a good place to go as well. But I think one that maybe isn't thought about as much is the idea of being mindful about consent. So while it's great to use these technologies, and I think it's going to be really important for our students to learn how to do

(13:36):
so, specifically in relationship to their own fields, a lot of these programs do require you to set up accounts, and I think it is perfectly reasonable for our students to not want to do that, because there are still a lot of unknowns about these programs and how they're working. And so I think that, as we revise our courses to help our students

(13:59):
develop these digital literacy skills of using generative AI, we do need to make sure there are spaces for them to still learn this without forcing them to do something that they're not comfortable with.

Rachel (14:11):
That's a really important point, to be thinking about privacy and consent and all of these things. We might be thinking about this less often in our day-to-day, because so many of the different educational technologies, the plugins, the polling systems, whatever we're using in our teaching, these are so highly vetted by the institutions.

(14:32):
And so with these other things, like the AI text generators, we might be thinking about those concerns, or potential concerns, less often. So among the ways that we've heard (there are many recommendations out there, certainly), we might consider having example prompts and responses from an AI text generator. So instead of asking our students to do those interactions, having those already, providing that as part

(14:55):
of the assignment, part of the instructions, and say, okay, now take this, now use this, now do the side-by-side comparison like we were talking about there. But certainly, these are things that we can model. You know, we might be thinking about not just the logistics, not just prompts or responses, but we might be putting our students into, for instance, scenarios.

(15:16):
As a practicing professional in whatever our field is, we might be thinking about, well, how would I, or how would I not, perhaps, use AI text generators or any of these other related tools as starting points, or not, for our work and our practice, and really think about what that could look like then

(15:36):
in the classroom as learners, as individuals developing into that professional practice. Here we're really looking at how we can be creative and sensitive to a number of different issues, not only trying to, say, prevent cheating, but also include our students and some of their potential concerns. Because I think a lot of us have a lot of those same sorts of

(15:56):
concerns as well. And so, as we're expanding our conversation a little bit to talk about some of these bigger-picture things, how might we, for instance, design course policies for our syllabus around some of these AI text generators? What would we consider including, or perhaps not including?

Laina (16:17):
So I think when it comes to AI policy for your class, there are a couple of things. Currently, George Mason doesn't have a universal policy, so we can't just, you know, look that up and slide that into our syllabus. I would say, talk to your department chair or your course coordinator, if you have that, to make sure that you're in line with the context in which your course falls. But I think, again,

(16:40):
it's really going to tie back into your learning outcomes. So I don't think that every class necessarily should have the same policies, because you want to make sure that you're setting your students up to do things where you can properly assess the outcome. So Bloom's taxonomy, it's a hierarchy of different things that we might ask our students to do. So at the bottom of that pyramid, we have what you might

(17:03):
consider more basic skills, like recalling information, right? So if your course learning outcomes focus on something that's more "basic," so to speak (you can't see my air quotes, but hopefully you all get the point I'm going for), these simpler things that we might ask our students to do that AI can do really well, then you might not want to have a lot of AI use in

(17:28):
your course, because how are you going to tease out what your student actually knows versus what the AI is generating? But if you're working with maybe more senior students, and your outcomes allow for more creativity and developing new ideas, things like that, so that would be kind of in that higher level of Bloom's taxonomy, it might make more sense to allow

(17:49):
more freedom for students. The Stearns Center has, on their website, a table that gives some sample language. And so I think what you'll see, if you take a look at that website, is that it doesn't have to be black or white. I've heard someone use a traffic light analogy, right? So we can have classes where it doesn't make sense to use it. And we can have classes where it's a free-for-all, but your

(18:11):
students have to be responsible for what they submit. Right. But then there are definitely going to be classes where it's okay some of the time, but not all of the time. And so that might be our yellow-light kind of classroom.

Rachel (18:25):
I like that comparison quite a bit, because that gives us some very tangible things in our transparency and our communication with the students. If we're using the "yes, it's okay to use it here," or "use it with caution in these spaces," or "here's how to use it in these spaces," or "not at all." You know, any of these types of things that increase

(18:45):
transparency are always really useful here. And so, as Laina mentioned, we have some great recommendations, some guidelines, on the Stearns Center website, and we'll provide some additional information and links in the show notes for the podcast episode. But as we wrap up for today, it sounds like this concept really represents a keystone concept in teaching, because we're really

(19:06):
looking at how we continue teaching, how we have meaning, how we're not just being replaced, let's say, by these AI text generators, but we're having meaning and teaching to support all of our students in their professional preparation. So can you reflect on that as we wrap up this conversation?

Laina (19:25):
Absolutely. So I think that there is kind of one takeaway, I would say, that I'd like everyone to walk away with, and that is that we can do it one step at a time, right? So there are lots of options of things that we might think about doing to revise our class, but we can do it one step at a time. And especially since the technology is going to keep

(19:47):
evolving, it might feel overwhelming to think about all the things that you could do, but I'd encourage you to pick, you know, maybe just one thing. So maybe it's something like updating your slides on digital literacy to include some information about generative AI and how that fits into it. Or you may consider having a discussion with your students about the ethics of generative AI in your field, right?

(20:10):
So you can start someplace small and go from there. But I do think it's important that we do start to think about these things, because there is an article that came out in Inside Higher Ed, and the point was that employers are starting to expect students to be familiar with these tools. And so I think it would be a disservice if we don't incorporate AI anywhere in the curriculum.

(20:31):
The article went on to say that 70 percent of recent grads think that AI needs to be part of the undergraduate curriculum, and I would have to agree with them.

Rachel (20:39):
We're learning in a new context, and being able to use the tools that students will be using in their future professions is, I think, essential, no matter what that tool might be. If we ignore it, that's certainly not only to our students' detriment, but also to ours, to the institution, and to their learning. AI text generators, ChatGPT, all of this is certainly a big topic, so I'm sure we'll be revisiting this in the future.

(21:01):
But I appreciate your time, and we can't wait to share our next episode of Keystone Concepts in Teaching with you, so please come back for our next episode as well. So thank you so much, Dr. Lockett.

Laina (21:12):
Thank you.