
October 1, 2025 · 46 mins

Faculty adoption and use of AI in higher education varies widely. In this episode, three colleagues from the University of Mississippi, Josh Eyler, Emily Pitts Donahoe, and Marc Watkins, provide their perspectives on AI use in higher education. Josh is the Senior Director of the Center for Excellence in Teaching and Learning and Assistant Professor of Teacher Education, Emily is the Associate Director of Instructional Support in the Center for Excellence in Teaching and Learning and Lecturer of Writing and Rhetoric, and Marc is a Lecturer in Composition and Rhetoric and Assistant Director of Academic Innovation.

A transcript of this episode and show notes may be found at http://teaforteaching.com.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Faculty adoption and use of AI in higher education varies widely. In this
episode three colleagues at one institution explore different perspectives on AI.
Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective

(00:23):
practices in teaching and learning. This podcast series is hosted by
John Kane, an economist... ...and Rebecca Mushtare, a graphic designer...
...and features guests doing important research and advocacy work to make higher education more
inclusive and supportive of all learners. Our guests today are Josh Eyler, Emily Pitts

(00:50):
Donahoe, and Marc Watkins. Josh is the Senior Director of the Center for Excellence in Teaching
and Learning and Assistant Professor of Teacher Education, Emily is the Associate Director of
Instructional Support in the Center for Excellence in Teaching and Learning and
Lecturer of Writing and Rhetoric, and Marc is a Lecturer in Composition and Rhetoric and

(01:11):
Assistant Director of Academic Innovation. Welcome back, Josh, Emily, and Marc.
Thank you. Thank you for having us. Thank you.
Thanks, great to be here. Today's teas are:... Josh,
are you drinking tea? Well, you know that I never drink tea,
but I have a delightful water here. I will continue to say that that's the
foundation of tea. It is.

(01:33):
How about you, Emily? I'm appalled that so many people come on
this podcast and don't drink tea. I am drinking a Scottish morn tea, one of my favorites.
Very good choice. It aligns with mine this morning. And how about you, Marc?
I am drinking a tea called Red Dragon that my wife got for me in Portland,
and it's quite good. I really recommend it. It came from Willow Tea House.

(01:54):
That's what I like to hear. Very nice. That sounds like legit tea.
And I am drinking a black raspberry green tea. That's a good choice. There's no cherries
involved. I almost did.
Oh yeah, I know. And I have a Scottish morning tea.
We've invited you here today to discuss the impact of generative AI on teaching and learning

(02:14):
in higher ed. We see a pretty broad spectrum of opinion from faculty. Some faculty oppose
all student use of AI because of concerns it will impede student skill development. Other
faculty have fully embraced student use of AI and argue that we need to prepare students for their
futures in a world in which generative AI tools are ubiquitous. Most faculty,
though, are somewhere in between those extremes. To start this discussion, could

(02:38):
each of you provide an overview of your general position on AI use in higher ed by students?
Oh, boy, that's an interesting conversation to begin with. I don't really know about my own
personal stance, if that's even something that I have much control over, because so much of
this isn't being directed by faculty members, it's being directed by students bringing these products

(03:00):
into the actual conversation. I think that changed a little bit last year when a new dimension came in,
where we had a lot of this technology starting to be embedded in existing university systems. So I
try to provide my students guidance. I don't advocate for them to use AI in assignments.
I don't make them use AI in assignments, but I do make it clear that if they are going to use them,

(03:21):
they have certain responsibilities, and they also should have certain expectations
from me as a teacher to be a little bit more informed about this, and also from their peers,
if their peers are using AI with them in some way, shape or form, too. So really, it's about
just kind of guiding them in certain ways. How this plays out depends on what style of
class I teach. If it's a small, low-enrollment class, I'll have a values-based statement

(03:42):
when I talk about… I invite them to discuss how they'll use or not use AI, versus a large class
that might be a writing-intensive course, like our first-year writing classes, where I use a stoplight
approach, which Emily has actually helped develop, with red light, yellow light,
and green light levels for students. I thought a lot about what I was going to say
today, because I want to not seem like I'm the physical manifestation of that Simpsons’ meme

(04:07):
of “old man shakes fist at clouds.” So what I'd like to lead with is that, as a faculty member who
believes very strongly in shared governance and academic freedom, I completely support any of my
colleagues who want to experiment with it, build it into their courses, support students, all the
things that we'll hear about in some shape or form today, but as an individual faculty member for my

(04:31):
own courses, I will never build intentional use, and I want to clarify what that means in a second,
intentional use of AI into my courses, and I don't want students to be using it either. And what
I mean by intentional, because I know Marc very rightly says people are using AI even if they're
not aware that they're using AI, right? If I'm using Google and I forget to phrase it correctly,

(04:54):
AI is giving me things like that. So when I say intentional use of AI, what I mean is me
as the instructor intentionally building it into courses or assignments or activities, and students
intentionally going to AI to do the work of the course or engage cognitively with it. So there are

(05:16):
labels for these things. I don't put a label on myself, “refuser,” “resister,” things like that,
but that is my own position pedagogically. Yeah, I think I agree with a lot of what Josh and
Marc have already said. I think on a curricular level, students should be exposed to generative
AI at some point during their college careers, because they're going to encounter it as students

(05:36):
and kind of beyond school in their professional and personal lives. So I think it's important
for them to know what it is, how it works, and to develop I think what some people have called
critical AI literacy, understanding the ethical implications of AI use, understanding
who benefits from AI and who benefits from the narratives that we have around AI. And so I think

(05:59):
all of that is important. There are some classes where it's really appropriate to incorporate AI
and think about how it might be used in a specific field, and some classes where I think it's totally
inappropriate. In my own classes, we do talk about AI a lot, and I give students the choice
of whether or not they want to engage with it. And I think it's interesting that Josh brought up the

(06:20):
word intentional, because I was going to say what I really want for my students is for them to use
AI intentionally if they use it, or to refuse to use AI intentionally, if they want to refuse it.
And by that, I mean they should know kind of where it is appearing in their lives. And they should be
able to say, “I want to use AI and I want to use it for this specific purpose, and here's why,” or

(06:42):
“I don't want to use AI for any of these purposes, and here's why.” So one of the things that I think
I want to work on most in my own classes is developing that sense of intentional use,
instead of just like, “uh, I guess I'll use it because I'm late on this assignment, or I guess
I'll use it because everybody else is using it. Or I guess I'll use it in this way just ‘cause.”
I think all of you have kind of mentioned this idea of intentionality around student use. Can

(07:08):
you talk a little bit more about why the frame of intentionality… I mean, I think we can kind of
make some guesses… in relationship to maybe some of your general concerns of students using AI.
Sure. I think it's deeply important that everyone in the process is intentional about AI, whether
you're a teacher bringing it up in the class, setting up a course policy too, or thinking about

(07:28):
designing certain assignments where students might be navigating the technology or might be using
tools to bring in assessments that you would think you don't want them using it for too. So you have
to look at this from the basic framework of every single person that's going to be involved in that

(07:43):
sort of process: student, teacher, in some cases the librarians who are actually helping them
navigate resource materials that are available to them, maybe even administration. We want everyone
within this process to be intentional about the technology. Just the past month alone, we've had
actual reports of police officers using it to write crime reports, lawyers using it for criminal

(08:04):
and civil cases, all undisclosed. We even had a report of a judge using it. We want to really be
thoughtful about how this technology is playing out in broader society, too, and start thinking
about our roles as educators, and how we can start talking with students about what is ethical,
what's appropriate, and what is actually a good, helpful use case of this tool. I'm very concerned

(08:25):
that we've been looking at usage in terms of augmentation, collaboration, automation. I think
we can handle the first two in some ways. They all have their own ethical issues with that, too.
But I'm really getting more and more concerned with the automation aspect. We are starting to
see students use it to do entire assignments. We're also seeing these new tools, like AI agents, with
promises of being able to completely automate the entire coursework for students and also for

(08:48):
faculty too, and actually grade and assess student writing and other types of student assessments.
Yeah. I mean, for me, intentionality means a couple of things, but I think awareness is an
important pillar of intentionality, knowing all of the different questions that one must consider
before engaging in intentional use of AI. So what are the ethical dilemmas? What are some of the key

(09:13):
questions? And for myself, the ethics are divided into two categories. There's the outside of the
classroom ethical debate and the inside the classroom ethical dilemmas. So how are people
aware of the choices that they could potentially make, and then a critical thinking process for
working through the actual decision to use AI. I think for me, the reason that I'm concerned a

(09:35):
lot about intentionality is because I'm really concerned about student agency. And my biggest
concern about AI is the way that it can take away our agency. Agency is something I really highly
value in my teaching… I think, a guiding principle for my classes, as well as one of my most desired
outcomes for students. I want students to leave my class with more agency in the world

(09:59):
and the skills and knowledge to make better, more informed decisions for their lives. And so I have
a lot of concerns about how AI, but also just our general technological developments in this moment
are making it really easy for people to give up their agency. So I'm thinking of things like in
the book Filterworld by Kyle Chayka, and I'm not sure I'm pronouncing that name correctly,

(10:19):
but they talk about how algorithms control our tastes and our consumption and our lives. It
kind of used to be the case that we would go out and seek out the media that we wanted to consume,
and now a lot of that is kind of determined for us, and so there's this, I think,
huge loss of agency. And you could say the same about generative AI. When you allow AI to

(10:40):
tell you what something means, or what you think about something, or what to say about something,
you're giving up some of your agency, and you know a little bit less about what you think, or
what you want to say, because you haven't really kind of wrestled with that. So I think giving up
your agency in that way can be really seductive for students who are under a lot of pressures
and when the tasks that we're asking them to do are very difficult cognitively, but it's

(11:02):
also really important and rewarding to go through those processes and develop that sense of agency,
and I don't want students to give that up just because they think they need an A in the class,
or because they don't have the time to finish an assignment. So intentionality is so important
for me because it's a huge part of agency. So there are a lot of concerns that you've all
expressed about students using AI to reduce their agency and to develop their skills,

(11:26):
but are there some ways in which student use of AI might perhaps support student learning,
or might also reduce some of the equity gaps that we observe because our students come in
with very different backgrounds, and it's hard to design courses that will address
all the needs of all the students in classes. Yeah, I think we're seeing some really interesting
experimental research designs coming out of California, where they have something called

(11:49):
PAIRR, which is peer review from a human being, followed by AI review, followed by review by the instructor,
and then actually the reflection and review all put together. So it's an augmentation
type of process in that way too, where you get a standardized peer review and a writing sort
of session too, and you can add AI on top of it. But what they're finding is that the students that

(12:10):
use the AI can actually go through multiple rounds of peer review, then the actual teacher can review
both the peer’s comments, the AI's interactions with them, and ask the student to reflect on the
entire process holistically. So it's able to do a lot more in terms of giving feedback and rounds of
feedback than it is just from one human being or a series of human beings. We're also seeing lots

(12:33):
of other different examples come out here about AI being used in research, actually looking
for different types of breakthroughs, but also being used to support students and different
student learning outcomes by going through a process of coming up and attaching different
research points to student learning processes throughout this. Sometimes this can be as simple

(12:53):
as a static sort of like summary that you give to students. Sometimes this could be as wild as
an AI voice, which I have some problems with, talking with the student too, and interacting
with them. So from every sort of response, there's a lot of caveats about what we're seeing from the
research studies and databases popping up here with AI in them. But it does look like, if

(13:14):
it is intentionally designed, there are some benefits for students and student learning.
I feel like I should jump in here to build on what Marc is saying, because I suspect that
Josh is going to have a totally different take. So in my view, I think that the question about,
are there ways that students can use AI to increase learning is really less about the kind of
particular usages and more about student attitudes toward AI, or the kind of orientation they take to

(13:39):
using AI. So I'll give an example to illustrate kind of what I mean. A couple of years ago,
I surveyed students at the end of the semester in my writing class about their uses of AI,
and I had one student say, “I used AI to correct my grammar, and so I learned a lot from that.” And
then I had another student say, “I only used AI to correct my grammar, so I didn't learn very much.”
And so the difference is not about the usage. They were both using AI in the same ways, but it was

(14:02):
about how they were approaching that use and what they were trying to get out of it. One student was
approaching it with the idea that AI could help them learn something here, and the other one was
approaching it with the orientation of AI can kind of correct a product that I've created. So I think
that the key is that when students approach AI, they need to be approaching it with a learning
orientation, or they're not gonna get much out of it. I mean, this is just like anything. And I

(14:26):
think the student you interviewed on this podcast recently, Kaija, was a great example of that,
and one of the things that she said is that she really wants to figure out things for herself,
and she uses AI in ways that support her in figuring things out for herself. I think the trick
here is that students don't always know where the line is, and have trouble determining when am I

(14:47):
actually learning, because we don't often ask them to think about that. So I think one of the things
that we need to be doing right now is working with students to develop their metacognitive abilities.
If they're going to use AI, they need to know what's going on in their brain when they're
using it, what specifically they're getting out of it. So, what uses of AI can be productive? I
think lots of them as long as students are able to understand and articulate what they are getting

(15:12):
from their use of AI. I think the question about equity gaps is maybe a little different,
and I feel like I need to think more about this, but I'm a little suspicious, to be honest, of a
lot of discussion about using AI to reduce equity gaps, because I have a kind of suspicion that AI
might just be a band aid here on a larger problem, and we should think about how to fix the larger

(15:33):
problem, like if students are not educationally prepared, or if they are going into a workforce
situation where they're being judged unfairly by, for example, their proficiency in English, then I
think we need to change the thing that creates the equity gap in the first place, rather than trying
to paper over that with AI. I’m not saying that AI can't be used productively to reduce equity

(15:54):
gaps. But I'm a little suspicious of some of the ways that have been proposed for that.
Right, that's a really great point. So I'm deeply skeptical that either of those two things that you
said, John, will ever be true. I think the major issue right now is that, as Marc was indicating,
the research is so nascent, it's just developing. So for every study that someone holds up in

(16:16):
support of AI and learning, I can find one that says the opposite. We're just at the early stage
where it's more like a tit-for-tat kind of process than it is revealing anything that I
think we can draw on to make truly evidence-based claims about learning. So I think at this moment,
we are embedded in true academic argumentation. We're bringing to bear research that we know from

(16:40):
other fields and other areas and other aspects of the cognitive processes of learning, and applying
it to the AI question right now. Now that will change over the next few years. We'll get more
research. We'll be able to, I think, draw together claims that rest on a more solid foundation,
but I would need someone to be showing me evidence that using AI is more effective for learning than

(17:02):
other strategies that we have. It's really as simple as that. I think Emily has a great point
about the equity gaps here. So I think we learned a lot during the pandemic about access equity,
specifically with technology. In Mississippi, especially, we were learning a lot about what
access students had to technology and how that was unequally distributed. And so that's one aspect of

(17:27):
it. Another aspect is, when we talk about equity gaps, we're talking about systemic problems and
systemic solutions. So what systemic problem is AI supposed to be addressing? And how is it
a systemic solution? And so I think that it's hard for me to see how AI is a better answer,
or a more productive answer than other things that we have been talking about and looking at

(17:52):
that themselves have been incomplete answers to the equity gap problem, but that are drawn from
a wider body of research at this moment. Just to respond to what you said, Josh,
you're making me think that in the same way that I was arguing AI can paper over or be a band aid
for pre-existing equity gaps, I think it can also be a band aid for problems in the learning space,

(18:16):
in that a lot of people are turning to student uses of AI, or even instructor uses of AI, because
it has become unmanageable to teach so many students at once, for example. Students need AI
feedback because the instructor can't provide the volume of feedback that's needed. And so, I mean,
I do think that insofar as that's happening, we should be looking at what are the problems in the

(18:40):
learning space that we can and should fix. Can we get smaller class sizes? Can we get more support
and time for instructors in that kind of thing, instead of turning to AI? And we see a lot of
universities making these huge investments in AI tools or access to certain kinds of AI platforms,
when they could be using that money to support the kinds of learning interventions and teaching

(19:03):
support et cetera that we already know can improve student learning. So I do take your
point about that, and I agree that there are a lot of problems. I think ideally AI can be used by
students in learning focused ways. But the trick is that it's actually hard to do in practice.
In terms of equity gaps and fixing the underlying problems, that ultimately is what I think we

(19:24):
should be doing. But those problems start back in our elementary school system and go all the
way through with inequitable support and funding and quality of education, and we do have students
who are the product of that system, and we do want to do the best that we can to address the students
we have. And one of the things we do know is an effective way of doing this is small classes,
one-on-one interaction, and providing more tutoring support and so forth. But we also know

(19:49):
that first-gen students and students who perhaps face some degree of imposter syndrome are less
likely to be going in for extra help, less likely to visit office hours. And if we could break that
down, then we may not need as much support. But one nice thing about having Socratic tutor AI
systems, potentially, is that they're available 24 hours a day, seven days a week, and they're

(20:13):
available to everyone without any fear of being embarrassed by admitting to someone that you don't
understand something. My personal view is, I think there may be a role for some of those things.
John's in the debate now. Well, I just built a Socratic
tutor for my class of 270 students. So here I am, John, shaking my fist at
the cloud, because when I hear you say that, my mind immediately goes to Justin Shaffer's

(20:38):
high structure course design (he was on campus presenting a lot of different workshops), which
is a pedagogical strategy to handle exactly what you were just talking about in a way that doesn't
involve AI. I mean, there are analogs to this question; Emily and I think a lot about equity
gaps related to alternative grading practices. And as soon as you begin to peel one layer of

(21:01):
the onion, what you see is that every aspect of our educational systems in America is wrapped up
in the question of the problem of grades, and I won't go on that train, we've been there before,
but I'm saying that there are analogs to the question of AI and equity gaps, and that
Emily's metaphor here of band aids is, I think, really an instructive one for thinking through

(21:28):
the ability of AI to address those gaps. We'll put a link to that podcast that discusses
those learning assessments in the show notes. As well as a discussion of Justin Shaffer’s
work. And I do use some very similar techniques, but even then, students,
when they're working outside of class, sometimes get stuck on things. And it might be nice to have
some sort of tutor, if it's designed in a way that doesn't directly just provide answers,

(21:52):
and that's hard to do, because most systems do break down when they're repetitively queried.
And I'll add, to carry the metaphor of band aids, sometimes a band aid is all you've got, right?
Systemically, we need something else; but on an individual level, if what you have is a band aid,
and the band aid is helping, like, I don't judge anybody for that,
right? We all have to use band aids sometimes. Yeah, I think that we should be thoughtful

(22:16):
about this in terms of not just students and interacting with them, but John did mention
K through 12 and how this is actually becoming a major issue within their landscape. In fact,
the Walton Family Foundation report that was done through Gallup came out about a month and a
half ago. It indicates that teachers in K through 12 are using it to address the material conditions
of their labor, saving between six and seven hours of labor a week, which translates to about six weeks of

(22:38):
labor a year. That's going to have a huge effect on our world coming into higher education too,
because students are going to have certain expectations about when their assignments are
going to be returned. And if teachers are using AI in K through 12 to grade student assignments,
they're going to be expecting us to hand that back to them very quickly. And that deeply concerns me.
I was just talking with some of my students here during the first week of class, and one of them

(23:01):
said that her mother is a 10th grade English teacher and uses it to grade essays. We need
to start really thinking about this in terms of how we are approaching equity within our landscape
of teachers here at the university level, and how this is going to be addressed on campuses. As Josh
mentioned, academic freedom is the sort of law of the land in terms of technology usage here. That

(23:22):
door swings both ways, in terms of someone who wants to not use this tool, but then someone who
wants to go overboard and start just automating everything. We need very clear guidance,
and we really need to start talking as communities about where we think this technology belongs,
in terms of that space for our own labor. I think Marc just opened up the door to a
policy discussion around AI use to some extent, because institutions are making some decisions

(23:48):
about the availability of tools, putting guardrails in some places and not others,
and the entire university system includes a mix of faculty, staff, and students who may
or may not be using AI. So can you talk a little bit about the role faculty might want to play in
policy development around this topic, or even what it might look like at the individual level

(24:11):
in a class when you're trying to negotiate the fact that some students are using AI,
and maybe the learning outcomes that you have or the policies that you have in place. Like Josh,
I'm thinking about if you're not allowing the use of AI, but obviously some students are going to
use AI, how do you navigate those situations? Right, and a lot of that's tied to my grading

(24:31):
strategies of collaborative grading, and so removing some of the incentives to use AI in
the first place is a part of that. And Marc has done a lot more at our university in thinking
through kind of institutional policy. I've been on a committee. Marc, do you want to talk
a little bit about this? I mean, my own view is that faculty should absolutely be deeply
involved in creating a policy at an institution. But Marc, do you want to say some things?

(24:57):
We are doing a pretty good job of involving faculty in the actual conversation. It does feel
like we are kind of in this endless loop, though, that the main guidance that we've established, and
also, I think, other universities have established too, is that it all goes back to faculty freedom.
We're giving individual faculty members wide leeway in deciding what is appropriate and
what's not appropriate inside their classroom, what's appropriate in their discipline. We don't

(25:20):
really know how certain faculty are going to experience using these tools to completely
automate certain things, which I have trouble with because of my values, like, I would never
want to automate completely feedback. I would never want to automate grading, or emails even,
or letters of recommendation. Those are all things that I value, being a human being with
my students and the relationship that we have. It is becoming apparent, though, from some different

(25:44):
research that we've had too and some conversations, that people other than faculty think very
differently about that, and I think we're gonna have some problems with this. I don't really know.
I know, John, your background is in economics. I don't really know many economic examples where
some members of the workplace choose to automate the majority of their labor while others don't,
and how that works out long term. So I do think we need some very firm guidance about automation that

(26:09):
keeps the role of AI into words of collaboration and augmentation. And really does have that to be
a little bit of a hard line about using that to be incorporated so heavily that it starts impacting a
person's day-to-day job, their day-to-day life.The only thing I'll add is that I think students
should be a part of this conversation as well, like both on an institutional level and on the

(26:30):
individual classroom level. I try to involve students as much as possible in creating my
AI policies and letting them choose kind of the pathways that they want to take, especially when
they’ve got all these different instructors doing different things in their different classrooms,
they don't always know what's going on across the university, but students do. Students have a good

(26:51):
eye about what's happening in their class on this side of campus versus the class on the other side
of campus, and I think can give us some really good insights. So I want to advocate for involving
students in policy discussions as well. Yeah, completely agree. Great point,
Emily. I was having a discussion with my students on our classroom compact yesterday,
and this was one of the discussion questions: technology and AI use in education more broadly,

(27:14):
but in our class, specifically. From the faculty perspective, one of
the things we hear most often is concern about assessing student work when it may not necessarily
be their work. What are some strategies we might use to encourage students to behave ethically,
or perhaps to design assignments that are less subject to inappropriate or unethical AI use?

(27:36):
And I'll add interventions that might need to occur when unethical behavior has occurred,
because I think that's one of the things that faculty are deeply struggling with.
So one of the things I keep repeating about AI use these days is that we don't actually have an AI
problem in higher ed. We have a crisis of purpose that AI has exacerbated, and AI has made this
whole kind of higher education project untenable given the crisis of purpose that we have. So I

(28:01):
mean, I think that school is increasingly viewed by students as a transaction. I think a lot of
faculty have noticed this, and they're doing work to get grades, to get a degree, to get a job,
and of course, that's not students’ fault, because that's what we've told them college is for since
they were kids. But I think maybe we're just starting to wake up to all the ways in which
we as teachers have inadvertently encouraged that transactional mindset. So I think a good response

(28:24):
to AI would be thinking about how we get that out of our classrooms. And so grading systems,
I think, are the main way that transactional approach shows up. So if you take grades away,
you take away a really important token in that transactional economy, and then what's left,
right? If you take the grades away, well, like hopefully learning is left, and it takes students

(28:49):
a little while to adjust to that and to understand what the new purpose is when
there aren't grades. So that's what I'm always trying to do in my classes. And we could talk
forever and ever about alternative grading and different approaches to grading systems that help
students move out of that transactional mindset, but if they're not in that mindset, if the point

(29:09):
is not just to create a product to get a grade, if the point is to go through a learning process,
and if we incentivize going through a learning process as opposed to just generating a product, then,
in my classes at least, I've had good success, seeing less AI use or misuse than one might expect. But the
other thing, I think, is just talking to students explicitly about purpose. Why are we here? What

(29:31):
are we doing? And one of the things that I really like to cite when I talk about this, Josh and
Marc will have heard me talk about this, and maybe tired of it already, but there's a study that came
out about 10 years ago by David Yeager and a whole host of co-authors called “Boring But Important:
A Self-Transcendent Purpose for Learning Fosters Academic Self-Regulation.” And one of the things

(29:52):
that they found in this study, or that they show in this piece, is that giving students what they
call a self-transcendent purpose for learning, so that's a purpose that's about doing something good
in the world or helping someone besides yourself, giving students that kind of purpose can help them
better persist through tedious or challenging tasks. So I think one of the mistakes we've made

(30:12):
when we're talking to students about purpose is to focus exclusively on their self interest. How is
this going to help you pass the next test or get into the next class, or succeed in the next class,
or succeed in your job or whatever? And I really think we should lean into kind of bigger questions
about the purpose of our discipline. What kind of good does the work that we do here do in the

(30:33):
world, and how can you use the knowledge and skills that you are developing here
to do good things, to help other people? Because I think, you know, none of us are here for purely
self-interested motives, right? We don't get paid enough for that. And I think the same is true of
students. We all want to do good things in the world, and so one of the things that I'm really
trying to lean into as a way to motivate students and kind of mitigate the temptation to misuse AI,

(30:58):
is to talk to them a lot more about purpose and help them buy into that, create structures that
incentivize their engagement with the process. We'll put in a plug for David Yeager's book, 10 to 25,
because it contains a summary of that and much other research done on student motivation. And
it's a superb book to address those issues. I believe we have a podcast episode on

(31:19):
that, which we can link in the show notes. I would just add two things to what Emily said.
I completely agree about the motivation structure that grades bring to courses;
we've incentivized the use of things like AI when we put all the emphasis on extrinsic motivators.
So there was a piece in Inside Higher Ed today that was basically a recap of survey results,

(31:40):
and it gave the two top motivations for students to commit acts of academic misconduct or cheating.
One was the pressure to get good grades, that was the top, and the second one was being short on time. And
so there are pedagogical innovations to address both of those things. Emily addressed grades. I
would say that there are some alternative grading practices that address the second one as well,

(32:04):
the time. But there are other things, like flexible deadlines. There are ways to address the
time issue as well that take those top two reasons that students might turn to AI off the table. So
that's one thing. The second thing is, we have a lot of great work from Cate Denial, Peter Felten,
and many other people who are pointing to relationship building and trust building with students. Now,

(32:26):
there's this great facet of human nature that when we feel like we are trusted, we want to be
more trustworthy, and so creating classroom environments that really build those kinds
of connections and relationships, I continue to feel, will mitigate some of this use as well.
I think one of the things that we see faculty doing, out of desperation really, is

(32:48):
turning to AI checkers and things to evaluate or find a way to identify whether or not it's
the student's real work. And obviously a strategy is to try to mitigate the desire to cheat in the
first place, but some people will still do it. Rather than going to an AI checker or something,
what are some other ways to maybe address the issues that arise when students do cheat?

(33:14):
Well, I think cheating is a very loaded term here, too. This is what becomes very complicated,
because some faculty have policies that are banning AI in every aspect of learning, which I've
told them is not really appropriate or acceptable, because you can't possibly enforce this. And we're
starting to see a lot of students using this on their own time outside of class to work through
problems, to research. So we really have to define what academic misconduct using AI means in different

(33:38):
terms. We also have to be aware of what the actual technology can do, too. And now multi-modal AI is
here, and it's free: the voice in ChatGPT, the ability to turn on the camera so it can actually start scanning
things around you. That adds more dimension to this than just text, and so we have to be really
aware of that, and also the fact that AI agents can now automate quite a few different things in

(34:00):
the research process, and all of this adds to the challenge. So to me, it's always going to come
back down to a conversation with a student that goes back to openly disclosing how an AI tool or
process was used, and bringing that up. And that works really well in a small classroom where I'm
supported and have time to do this. It does not work well when I'm teaching a large lecture class,

(34:21):
too. If I have 80 or 100 students, that's where this completely falls apart, and people turn to
AI detection services or tools. And there are a lot of them out there right now. In fact, I think
Turnitin just released a humanizer detector, because a lot of people have been using AI to
humanize the writing, to make it sound more human. And Turnitin now has a system that supposedly
is going to detect when you've humanized it, and their other system that they just released too is

(34:44):
called Clarity, which is a process tracker that allows you to view students' writing long term for
days, even weeks at a time. And I'm really against surveillance in this sort of sense,
because, again, it's not really a pro teaching sort of strategy. It also doesn't really work
very well. At the same point in time, I'm very aware that a lot of faculty are sort of struggling
with this. And what I tell them is, if you do turn to AI detection and you're starting to bring your

(35:07):
students up on academic misconduct charges, you need to be aware of the fact too that there's very
little procedural fairness involved in this. And I do flip that around on them. I say, “Look, if
you're going in for promotion and your dossier got run through an AI detector and somehow came up as
positive, what defense are you able to offer?” And I'll give you a hint, there really isn't one,
unless you've been keeping track of all your documents from five or six years ago and how you wrote them.

(35:31):
So we need to be really cautious about how we're dealing with this. And I think the best thing
to do is obviously to bring them in and have that conversation. The University of Texas has a really great restorative
justice program for academic misconduct. Penn has been piloting one for 10 years now; in fact,
the second most likely outcome they have for an academic misconduct case is that students go
through restorative practice to see if they can't mend the trust that was broken in the classroom

(35:54):
before taking a student up on charges. Yeah, I appreciate a lot of what Marc said,
and I appreciate the question Rebecca, because I feel like sometimes, especially when Josh and I
start talking about this, people think that we're a little like out of touch or something. Because
when we're talking about student motivation and engagement et cetera, some people are like, “Yeah,
yeah, yeah. Like, that's fine, but that won't solve the problem.” But obviously, as you say,

(36:15):
there's no kind of perfect solutions to this. I think that we should lean into that student
motivation and engagement piece, because really, I haven't seen any kind of viable solutions
for mitigating AI use outside of that that I'm comfortable with, but I do think there is room
for some kind of assessment security measures, or kind of, however you want to phrase that, as

(36:39):
long as we're varying the kinds of ways we assess students, with occasional in-class work or assessments.
I think oral assessments are tricky in a number of ways, but may have a place. And I think in terms
of what to do when students have misused AI when you're following up, I think Marc is absolutely
right that conversations are the way to go when you can do that. I'm also like Marc, pretty

(37:01):
uncomfortable with a lot of process-tracking measures. I tried to do this on my own. I
recorded myself writing a blog post, and that totally changed the way I wrote. And I did not
have a good time when I was writing, knowing that somebody would see that later. And so I have a lot
of mixed feelings about it. One of the things that I do is ask students to write in Google Docs.

(37:22):
I do tell them about version history and show them how version history works. We talk about this. We
set these guidelines together at the beginning of the semester. If I have concerns about your work
based on my own reading of it, I might go into the version history to like, look at how your piece
came together, and I will ask you questions about why did you write this? What was your process

(37:45):
like? Can you tell me what you meant when you said whatever? And again, that's something you can do
with smaller classes. It's a lot harder in larger classes. So there's no perfect answer here. And I
have a lot of mixed feelings about all of it. That's for sure. There is no perfect answer. And
when I hear Marc talk about all the twisting and turning that ed tech companies are doing to try

(38:05):
and pile band aid on top of band aid, I guess I just immediately think, if only we used this
much time and money to solve other problems in our world of education, that's ultimately what
I think. Continuing my theme for the day, when I was a graduate student and first started teaching
writing, it was before Google launched, and so the way we were taught to handle plagiarism

(38:29):
was to have a conversation with students, to ask them about their ideas, to say, “Oh,
this is an interesting word. Could you help me understand why you used it in this context?”
And part of what would emerge from those conversations would be either an admission, “Okay,
you're right. I didn't write this” or it would be a really interesting intellectual conversation

(38:50):
about their work, “I used it because of this, and here's why I…” you know. So I still think that,
rather than using machines that may or may not have various degrees of fallibility in this,
I think those conversations are ultimately a productive, pedagogical way to handle it.
Marc's right. It doesn't scale very well, though, so large classes present a problem. In my own

(39:13):
classes, I use collaborative grading, which means that I don't give any grades. Students do a lot
of self assessment. I give a lot of feedback. They propose their final grade at the end. I
know Emily does this as well. I take grades out of the equation. I take deadline pressures out
of the equation by having best-by dates that they can petition to change for themselves if

(39:34):
they need to for some reason. If ultimately… I realize I'm an outlier here… if ultimately,
even within that context, a student is going to use AI, that's going to be their decision.
If that's what they want out of it, then I'm gonna teach the way I feel like I need to teach,
and I'm gonna work with the students in the way that I think will be as meaningful as possible.

(39:55):
And people, ultimately, Rebecca, to your point, are going to continue to be people.
I just wanna note that if there's a tool that's detecting humanization,
I wanna know, like, is the opposite then dehumanization? It's an interesting word
choice. I just want to put that out there. But I was just mostly sitting here thinking that's
really, like, an interesting word choice. It is. I do think there's something for both Emily

(40:17):
and Josh that might be helpful to talk about… Grammarly just launched a new product with
AI agents that supposedly can grade the way a teacher would, based off of a rubric that the student
supplies from the teacher, and it also includes sort of a scan of the public persona
of who that teacher is, and gives its best guess. I have some very personal feelings

(40:38):
about this that are not very positive, too, but I wanted to get you guys' take on this about how
we're seeing that sort of transactional framework being taken up by other companies as well, too.
Yeah, I'm not a fan. I mean, there's so many reasons. The main thing for me is that it keeps
students tethered to that grade motivation, right? Like, if you've got these Grammarly tools that are

(41:00):
gonna guarantee for you, I can help you write an A paper, and I can grade it for you and tell
you that it's gonna be an A, then students are writing to conform to whatever Grammarly tells
them an A is. And of course, I think the idea that it would scan the Internet to determine what your
instructor might think based on their like Rate My Professor comments or whatever, is incredibly

(41:22):
disturbing. And the only good thing that might come out of it is that instructors might get
a sense of what it is like to have AI reading you in a way that might make them more cautious
about putting their students' work into an AI. Yeah, I wrote a little bit about this on LinkedIn,
and I just want to thank Grammarly for helping Emily and me make the case that grades are

(41:43):
truly subjective and kind of pointless. If the machine can gather some of these random bits of
information and suggest what might possibly be a grade that I myself would give, I think that that
just explodes the notion that a grade is something that meaningfully measures learning. It's absurd
to me, but I think that that has been a through line here, of a lot of my comments, but that

(42:07):
especially, I think, takes it to a new level. I guarantee Grammarly does not grade the
way that I would. My priorities are not Grammarly's priorities.
I think the big picture we're seeing, too, is that there's the performance versus the reality,
and that's what we keep coming back to in the overall concept here
that we're talking about. Teachers are human beings. We're still flesh and bone for now,

(42:30):
hopefully that's going to remain for this future too. Yet there's still a huge desire to have that
performance. I'm sure Grammarly too thought about this. I'm sure they beta tested it too, because
it's what students want: I want to know what my teacher thinks about this essay that
I'm writing at midnight. So I think we need to start thinking about how we can communicate our value
and what we provide to students too, in a way that is better than what a company or a product

(42:56):
or just AI in general can offer. And that's going to be incredibly challenging, because right now
in higher education, we are on the receiving end of a fire hose of these different products.
We are reaching the end of the time we've scheduled. So we always end with the question:
What's next? …which, in anything dealing with AI, is something we're

(43:17):
all asking on a daily basis, it seems. Well, this semester, at the Center
for Excellence in Teaching and Learning, we are doing a reading group on the book The Opposite of Cheating:
Teaching for Integrity in the Age of AI, and Marc is also doing a number of AI workshops
with our academic innovations group. And to go back to the grading conversation,

(43:38):
I think what's next is I'm continuing to work on a book on collaborative grading,
and Josh and I are working on another book on alternative grading, and so we're not going to
explicitly talk about AI too much in those books, but I think they have a real bearing on some of
the AI stuff that we're seeing right now. I think an important message to share about what's
next is to keep doing the work. We're going to keep doing the work of understanding, learning,

(44:02):
and working with our students. I think certainly we will continue to have conversations about AI,
but for my what's next, the message that I am continually promoting
to administrators here on campus is that AI is one thing among many things that are still at issue in
the world of teaching and learning, and so we need to still be addressing those as well.

(44:24):
Yeah, I think that's a really great point to end on, too. For me,
I'm focusing much more on how multimodal AI is affecting our mental health, in some ways,
especially teen mental health. We're seeing a lot of use cases for therapy and companionship,
for these tools that you can talk with. I think that's going to be something that explodes as a
topic over this next year. I also think it is going to be imperative for us to have

(44:47):
conversations with our students about how they're using these tools, because very few people are
really talking to them about it and asking them, if you are using it, do you think you're talking
to a person, or to a machine run by a private company that you're giving all your private information
to? So, more or less, caution is where I'm headed with AI. I think we've got far too
many different products and different use cases out on the market to really keep track of, so I

(45:12):
would hope we're going to slow down, but I don't think that's going to happen anytime soon.
Well, thanks for a really great conversation. We always enjoy talking to all of you, and it
was great to have a diversity of perspectives and a more nuanced look at this topic today.
Yes, thank you. It's great talking to each of you, and we're looking forward to many more
future conversations. And I'd also like to put in another plug for The Opposite

(45:34):
of Cheating. We're going to be doing a reading group on that here as well this semester.
Thanks, John and Rebecca. Thanks, everyone.
Thank you guys. If you've enjoyed this podcast,
please subscribe and leave a review on Apple Podcasts or your favorite podcast service.

(45:56):
To continue the conversation, join us on our Tea for Teaching Facebook page.
You can find show notes, transcripts and other materials on teaforteaching.com.
Music by Michael Gary Brewer. Editing assistance provided by Madison Lee.