Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Generative AI can increase efficiency and support student learning; however, students can also use it as a substitute for learning. In this episode, we explore ways generative AI tools can improve course design and ways to encourage students to use AI tools ethically and responsibly.
(00:25):
Thanks for joining us for Tea for Teaching, an informal discussion of
innovative and effective practices in teaching and learning.
This podcast series is hosted by John Kane, an economist…
…and Rebecca Mushtare, a graphic designer…
…and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.
(00:56):
Our guest today is Nathan Pritts. He is a Professor and Program Chair for First-Year Writing at the University of Arizona Global Campus. Nathan’s recent work has been focused on the relationship between AI and human teaching. Welcome, Nate.
Thanks for having me, Rebecca, John. Good to talk to you. Our teas today are…? Nate, are you drinking tea by any chance? I am indeed. I've got this Paris tea
(01:20):
by Harney and Sons. It's a fruity black tea with bergamot oil. It's very nice.
We have some of that in our office. It's a favorite of the current Associate Director of
the teaching center here. So we keep that stock pretty regularly. And Rebecca?
I have Golden Monkey today, John. Is it golden?
It has golden tips, yes. Okay. And it’s a black tea?
(01:43):
And it's good. It's a black tea. Was it one of the monkey-picked teas?
I don't know. Maybe it was AI picked. And I have a spring cherry green
tea. One of your favorites. Oh, yeah, definitely my favorite,
with those cherries. Nice.
It's an ongoing thing. Yeah, I hate it. Oh, okay,
(02:03):
you actually don't like it. No, I actually super hate it.
I did end up with a couple of bags of whole leaf tea because she bought some because it smelled so nice. It smells so good, but I just cannot stand the taste of it. Easy to get taken in by some of these.
It's true, the wafts are so good. Alright, well, we invited you here today not to discuss tea but
(02:24):
the impact of AI on the design of online courses. In your March 19, 2025 article in Faculty Focus,
you discuss some of the ways AI tools can be helpful in augmenting our work while still
maintaining human connection in our courses. Can you describe some of the ways in which AI
can help us create more effective courses? I can talk about some of the ways that have
(02:47):
worked for me. I mean, I think one of the things to realize in course design is that some faculty will have a lot of people supporting them. They'll have colleagues to bounce ideas around with,
but at other times, a subject matter expert won't have anybody there to support their build. And I
feel like that's where AI can really help support the course development. It sort of works as just
(03:09):
another pair of eyes on the work that the faculty member is doing. It works to bounce ideas around.
And I feel like there are some very targeted ways that that can come into play. Always,
I'm trying to think about AI as a support for the human intuition, the human experience, what we
bring to the table. Ethan Mollick, who's a pretty well known AI researcher, talks about the human
(03:33):
in the loop, which is the person who's helping AI accomplish goals. I like to think of it the
other way around, as having an AI in my pocket. So I'm the human. I'm always going to be the
human doing the work, but I can turn to an AI tool to help me do things that I might not be able to
conceptualize myself. I kind of break curriculum work down into two different categories. This is
(03:56):
kind of what we came up with internally to help us think about some of the different ways that AI can
support course design to make effective courses. We think of them in terms of accelerators. That is
an AI that's basically helping a faculty member get started, move faster, maybe streamline some
of the processes. If a faculty member is staring at a blank page, an accelerator might help them. We've
(04:20):
developed some prompts that will do that. But then there's also what we think of as multipliers. This
is basically to help enhance the quality of the work that the human, the SME, the faculty member,
might have already come up with. And that's what that article, Rebecca, that you referred to,
that's what that really talks about. The stress tester for assignments is basically a multiplier.
(04:42):
It's assuming that a faculty member would have developed on their own an assignment
for a particular class. And then will use AI to essentially test it, run it through some
battery of tests just to see if there are ways to improve it. One of the ways AI can do that in this
particular case is that it can simulate student perspectives. So you can feed the assignment
(05:06):
that you developed, or the discussion prompt, whatever it might be, into AI, and you can ask
it to simulate a variety of student responses. And you can give it some parameters. You can say,
I want some A-level responses. I want some C-level responses. You can ask it to mimic what your student population might be if you're teaching first-year versus if you're teaching graduate
(05:26):
level. And I think that can kind of help to show maybe where the prompt could be strengthened,
maybe students might misinterpret a particular aspect of it, or maybe they struggle with some
part of it. We've all had that experience where we develop an assignment, we give it to students,
and they just find ways to break it right out of the gate. We never thought to tell them not to do
(05:48):
a certain thing, or we didn't make it as clear as we needed to. This gives us an opportunity,
working with AI to potentially catch those stumbling blocks before we put it in front
of students. So we refine the prompt, we find blind spots, and hopefully this is a way of
making that assignment prompt that we've developed a little bit more ready for the wild. And let me
(06:12):
just say I get that an AI-simulated response to a prompt is very different from what a student
is going to say. Students are going to find new and exciting ways to respond to our prompts. The
AI doesn't work like that, but if we ask it to generate, say, 50 or 100 responses to a prompt,
(06:34):
don't share them with me, just generate them, and then tell me thematically what you're coming up
with. It still gives us a baseline. It gives us a sense of where our language might be unclear.
Maybe we need an extra bullet point to clarify that we don't only want a thesis statement,
but we also need an entire introductory paragraph. Maybe we need an extra explanation for what
(06:55):
essay structure needs to be used in this prompt. We can't just assume the student's gonna know. We
gotta make sure to put that in. So this is just one of the ways I think that AI can really help
in the course design process. Again, I mean, I feel like you've got that human developing
the material, but then you've got the AI as that second pair of eyes, that different perspective,
something outside of our own head, that can lead us into areas we might not have gone
(07:19):
into on our own. I feel like we've all had that experience too: you're developing a class,
you come up with an assignment, and you just love the way this assignment works. You've used it for
5 or 10 years. You think it's fantastic. We fall in love with these things, but students change,
and the way that we interact with the content or the subject matter or the outcome might have
(07:41):
changed within the course. It might have changed institutionally. And we need to be aware of those
things, and we need someone to help us fall out of love with our assignments and really
test them. And again, I think in the absence of a colleague who might help us do that, for people who are working in silos or people who might have tough deadlines, AI can really help with that.
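(To make this concrete, here is a minimal sketch of the student-simulation pattern described above, assuming the OpenAI Python client; the model name, prompt wording, and parameters are illustrative placeholders, not the actual prompts developed for this work.)

```python
# Minimal sketch of the "simulate student responses" multiplier pattern.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name and prompt wording
# are illustrative, not the actual tool discussed in this episode.
from openai import OpenAI

client = OpenAI()

assignment = """Write a personal narrative about a defining experience
that led to your choice of major or career field."""

prompt = f"""You are simulating a first-year college writing class.
Silently generate 50 plausible student responses to the assignment
below: half A-level, half C-level. Do NOT show the responses.
Instead, report (1) recurring themes, (2) places where students seem
to misinterpret the instructions, and (3) concrete wording changes
that would make the assignment clearer.

ASSIGNMENT:
{assignment}"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```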
(08:01):
In that article that you just referred to, you also provide an example prompt, and we'll include
a link to the article, and then our listeners can go and take a look at the prompt themselves. And I
think you provided an example of using a personal narrative assignment in class, and you also talked
a little bit about how you can use the AI tool to refine your prompt based on the responses
(08:24):
that it has provided. Could you talk a little bit about that example and how you might use AI to
directly suggest improvements to the prompts? Yeah. So I teach mostly first-year courses,
first-year students. I'm the Program Chair for our comp sequence, so I oversee students who are
coming through both of our composition classes. In our writing courses, one of the ways that we
(08:47):
handle the fact that these are students coming in with a large degree of fear, a lack of confidence,
is we try to meet them where they are in terms of their own skill set, of course, but also in terms
of their own interests and ideas. So yeah, it's a narrative prompt that we try to work on in these
courses. We try to get students to look at their past experience and to reframe it in a way that's
(09:08):
going to help them see their academic goals, their lifelong learning goals, let's say. So we come
up with this prompt, and it's a prompt that asks students to talk about an experience they've had.
Now I can ask AI to run some simulations, and what might be revealed… in fact, when I tried this,
one of the things that was revealed… was that the way that I had worded the prompt was allowing
(09:31):
students to talk about emotional experiences, but not those that had any real direct correlation
to their choice of major, their choice of career field, and that's what I wanted. I wanted students
to find ways to talk about, essentially, their ethos, how they've developed this in a way that
helps to see their chosen career field in a different light. My prompt wasn't doing that.
(09:56):
It was allowing students to just talk about very traumatic or sad or happy experiences they had at
any age. There was no way of framing that into what I wanted. I assumed students would do it,
but they might not. So it helped me to really understand that, okay, if I'm going to ask
students to talk about a personal experience or a defining experience, I'm going to have to
(10:16):
add material to the prompt that clarifies that I'm not just looking for a personal experience,
I'm not just looking for a particularly emotional personal experience, I'm looking for a defining
one that led to their choice of major or career field. Most of my students are non-traditional
in the sense that they're already working, so they already have a chosen career field. And this idea
(10:37):
of major versus career field, we typically think of it in terms of career field. But my point is
that the AI really helped me to see that one of the ways in which I could better this prompt
was to put a little more scaffolding into it and slant it toward what I wanted. Of course, I was
worried, as the classroom teacher, when you're teaching comp, when you're teaching any class,
it's not about right answers. You kind of want to be surprised. You don't want only one type of
(11:03):
paper or assignment to come to you as a teacher. So you're trying to write a prompt that leaves
a lot of leeway, let's say, for students to interpret and to give you unique responses,
but what I had done was left it so wide open that they weren't even meeting the guidelines,
really weren't even doing what I wanted, and so AI was able to help me nip that. Now I could have
just put that in the class, and after a section, after two sections, I would have figured that out,
(11:28):
but I think that that would have been challenging for those students. Let's say this is a week two
assignment. Well, when week five rolls around, suddenly they don't have that stable week two
material to build upon. And so we're really looking at helping improve course outcomes.
We're looking at making sure students are learning what they need to learn in the class. And again,
(11:50):
AI is an avenue to do that. I hope that kind of explains what I'm talking about. I think
there are a lot of different ways to do it in a lot of different classes, but that's
just one sort of avenue I tried there. It's a really good example of how the AI tool
can do analysis of the assignment. Can you give some other examples of the kinds of holes that AI
might be able to identify for faculty? One thing I've been working on recently is
(12:12):
universal design for learning. So I teach at an all-online school. Our courses are asynchronous,
and as a result, access and equitability, and of course, UDL principles are important to all of us,
but they're very much a part of how we design our classes. But so many subject matter experts are
not curriculum designers. They're maybe not even teachers. They're experts in their field. And so
(12:36):
the model now for a lot of universities is just to have these experts, these subject matter experts,
design courses. And they've got a lot of great ideas, but they don't understand some of the
basics of instructional design. So we were able to develop an AI-based prompt that helped ensure that
UDL principles were baked into materials of the course, while at the same time explaining those
(13:02):
principles to the subject matter expert that's working on them. I feel like so many AI tools are kind of like a black box: you ask one a question, and it gives you an answer. And I think that's
one of the problems we see with student use. They have an essay prompt, and they tell AI to write
it for them, and AI does, and then they've got a product. They didn't learn anything. It's the
(13:23):
same thing in course design. We want faculty to understand some of these underlying principles.
And so back to this idea of UDL principles, we can create a prompt that says to any AI tool, “Okay,
here are the main principles of UDL, and here's some background information.” We could even give
the AI some background research studies. Maybe your AI tool has access to the internet and can
(13:47):
find some of these things on its own. You can then check and make sure its understanding of UDL is,
in fact, correct, and that it's applying it appropriately, through some refinement
and testing. But what you get then is a tool that allows a faculty member to say, “Okay, hey, look,
I've got a discussion board prompt, and I want to talk about essay structure. And here's how I've
typically done it:
(14:09):
I ask students to identify
aspects of essay structure and talk about how
meaningful they are, but I want to think of some new ways to do it, and I want to make sure that
I integrate UDL principles into this.” And so now the AI can turn back to the faculty member with not only some different ways to approach that course content element in that format… you've
(14:31):
identified the course content you wanted, which is essay structure, you've identified the format,
which is discussion board… and so now the AI tool will give you some options that you might not have
considered otherwise, but it can ensure that it's foregrounding discussion prompts that emphasize Universal Design for Learning principles, and it will explain those connections.
(14:54):
So the faculty member will then have an example discussion board, and it will say the reason why
this adheres to UDL principles is because of these reasons. So the faculty is learning right along
with it, and they're coming up with interesting material for their course design. Now they might
not use that. They might use that as a baseline to spring off and develop their own material,
but maybe they'll internalize some aspect of that, and they'll learn while they're doing it.
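(As an illustration of the kind of UDL-aware prompt described here, a minimal sketch follows; the UDL summary, wording, and function names are invented for illustration and are not the actual internal prompt.)

```python
# Minimal sketch of a reusable "UDL-aware" prompt template like the one
# described above. The UDL summary and all wording are illustrative
# assumptions, not the actual internal prompt.
UDL_PRINCIPLES = """Universal Design for Learning, in brief:
1. Multiple means of engagement (varied ways to motivate learners).
2. Multiple means of representation (varied ways to present content).
3. Multiple means of action and expression (varied ways to show learning)."""

def build_udl_prompt(content: str, fmt: str, current_approach: str) -> str:
    """Assemble a prompt that asks an AI tool for redesign options and
    requires it to explain each option's connection to UDL, so the
    subject matter expert learns the principles alongside the output."""
    return f"""{UDL_PRINCIPLES}

I am designing a {fmt} about: {content}.
Here is how I've typically done it: {current_approach}

Suggest three alternative designs. For each, explain explicitly which
UDL principles it emphasizes and why, so I can learn the principles
while reviewing the options."""

print(build_udl_prompt(
    content="essay structure",
    fmt="discussion board prompt",
    current_approach="I ask students to identify aspects of essay "
                     "structure and talk about how meaningful they are.",
))
```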
(15:17):
I think it's a really interesting moment. We talk about how students need to learn how to use AI. I
think faculty are in the same boat. There's a lot of fear, there's a lot of uncertainty,
but the more everybody interacts with it, the more they can learn about the strengths and
also the limitations, and maybe they can teach themselves a few things along the way that they
might not have been able to pick up otherwise. Sounds like a good opportunity to model how to
(15:41):
use it as a learning tool, which might be really good for faculty to see.
And I think that's how it translates into students as well. I mean, again, if we look at some of the popular sites that allow students to upload their essays for automatic grammar checks… I'm not going to name any names, but one of them starts with grammar… I feel like what happens is that
(16:04):
tool just changes the student's work, and the student doesn't really understand why or how what
they wrote was wrong or less than correct… I don't want to say wrong… because there's no upskilling
involved. And so I feel like, whether it's a student application, or whether you're using
it internally for faculty development, baking in some kind of upskilling is what really is going
(16:27):
to make this seem more valuable to people, even those people who are sort of anti-AI. I think
it can show the efficacy of the process, even if you're still skeptical of the product.
You mentioned that a lot of students will just submit work based on the prompts that
they receive. And in a September 2023 Faculty Focus article, you talk a little bit about how
(16:52):
some faculty may choose to provide AI tools with rubrics or other evaluative criteria from their
courses and use that to provide some feedback on student work. And there's a lot of concern among
faculty about whether we should be using it to grade student work, the possibility that we may
have students using AI tools to submit work that we then use AI tools to grade, and there's very
(17:15):
little thinking on either side. And you suggest in this article, we should focus on the uniquely
human attributes that faculty bring to their classes. Could you talk a little bit about
how we can bring uniquely human elements into our teaching, particularly online teaching?
We all hear this. There's a lot of talk about the things that AI can do well and the things
(17:37):
that it doesn't do so well. And I think one of the traps we catch ourselves in is AI is really
efficient and it's really fast. And so a lot of people, faculty members, are thinking, “Well,
I could never be as fast as AI, I could never be as efficient as AI.” And so they start to feel
pretty disheartened about all of that. And it seems like every day there's more and more that
(17:58):
AI is doing well, maybe it's not doing it well, maybe it's just doing it well enough. And so it's
pretty daunting. I think it's a little scary at times, and I feel like a lot of what I was
talking about in that article, and a lot of what I still believe, and when I talk with faculty,
this is how I try to bring it up, is just that I feel like we do need to lean into these elements
that are uniquely human, the things that only we can do in the classroom. For me as a teacher,
(18:25):
my attention, my attentiveness in the classroom to the student work. Those are things that AI can't
hack, but AI can hack feedback. It can give really good feedback. So how is my feedback different or
better than what the AI might provide? AI can provide its feedback 24 hours a day,
(18:46):
whenever a student wants it. I can't do that. I'm gonna check my email three or four times
a day. I'm gonna get back to students within 12 hours, but I'm not right there with them
all the time. I can't be faster, I can't be better, I can't be always on. So what can I
do? And I feel like that's where the idea I talked about in that article comes in: trying to find those
(19:06):
human handprints. This is something that Kevin Roose talks about, and those handprints, I think,
are those elements in the course, those elements of interaction with students that are something
unique that we could do. And I feel like that's something that each faculty member, each teacher,
needs to kind of come to grips with on their own. As I shared in that article, it was a pretty
(19:27):
harrowing process for me, because every time I came across something in my class that I was like,
“Oh, this is purely me. Nobody could copy this,” suddenly, AI can do that. I write snappy headlines
for my announcements and I write amazingly emotive and snazzy emails. AI can do those things. So it
was this moment of realizing, like, okay, all these things that I thought were conveying my
(19:50):
attention and my attentiveness in the classroom to my students, those things are something that
AI can do. So what's left? And the thing I settled on, and it's a pretty basic type of thing, is to
interact with students using video feedback for their essays. I think what might get lost in the
article, or maybe what I only came to realize more recently, is that the feedback I'm giving is more
(20:16):
developmental. It's experiential. It's not meant to say, “Hey, your thesis was wrong, and I'm just
going to use my face and voice to tell you that.” Really what I wanted was for them to see my face
and hear my voice as I read through their papers. They got to see how I reacted to sentences that
they wrote. They could see if my brow furrowed or if I kind of stifled a laugh or whatever. They
(20:39):
could see when I was stuck. They could see when I was excited. They could register all that in
my sort of running commentary on their essay. And I feel like that was one of those moments
where I realized, okay, this is it. This element of connection is what I can uniquely contribute
(21:02):
to the class. This is what I'm doing as a teacher in an online, asynchronous environment. It's very
different if you're in front of a classroom of living, breathing students, but in my environment,
that, to me, seemed like a moment of connection, and that became kind of a baseline for me as I
looked throughout other aspects of the course. What are some things I can do that only I can
(21:23):
do? Or that by me doing them, they become more meaningful than if AI was doing them. There's an
example in our courses. It's an online class, and so we have pretty complex analytics and dashboards
that show us how students are performing in the class. You can look at a discussion board and you
(21:43):
can see that a student forgot to turn it in. But you can also look at this dashboard that will say,
“Hey, this student has been late on three of their previous discussion boards. This is a pattern.”
So that's great. AI, some type of algorithmic tool, has identified that. We probably could even create
an automatic email that lets the student know, “Hey, you didn't do this thing.” But is that email
(22:04):
timed better than mine would be? Is it written in a more personalized manner? Does it take that
individual student and their entire record of work in the course with me into account? And
I feel like again, maybe someday, yes, AI can do that, but right now, it can't encompass the full
range of my experience as an instructor and my experience of that student's work in the course.
(22:31):
I know when a student needs to be challenged, I know when a student needs to be applauded. I don't
know that AI can figure that out based solely on the product that the student either did or
didn't turn in. And so I feel like, again, on one level, this is kind of a personal process for each
faculty member to sort of go through their course and think, “Okay, what's me in this class? What
(22:53):
is uniquely me? Where can I bring more of me and my experience to it?” Because really, as much as
this was about my students and trying to connect with them, it was really for me. It helped me to
reconnect and re-engage with my teaching. Faculty worry that AI is going to take their jobs or do
certain tasks better than they can, and I think really the productive way to deal with that is to
(23:17):
just keep doing things as humanly, as uniquely, as messily as we normally do them. Let's say you're
sitting down on a Friday to grade 20 essays. We've all been there. We've all done that,
and we kind of can shift into autopilot at times. Don't do that anymore, because AI can do that.
AI can provide rote feedback and bland email messages. So it's almost like a wake-up call,
(23:39):
in a way, for all of us to stop teaching on autopilot and really reinvigorate our practice.
I don't want to say we're competing against AI, but I do think it's a moment that's helping us
reassess what's important in teaching, what's important in our own disciplines,
what's important in the content of the class, and try to kind of inspire us to get at it, to uncover
(24:01):
those things in a different way. So yeah, it's about reconnecting to that joy in teaching.
One of the things that it sounds like I'm hearing you advocate for is being a bit nimble in how
you're functioning as a teacher, as the technology shifts and changes, and as its abilities to do certain things free up our time to maybe prioritize or put our energy into other things,
(24:23):
so we can utilize the technology to do things that are kind of rote in nature, and then the places where we can use our human creativity might be where we'd better invest
our time. Can you talk a little bit about how we need to shift? For example, you talked
about video feedback. There may be a day with new technologies as they're evolving, like the HeyGen
(24:44):
tool, right, where maybe some of that feedback can be generated, and it could be in video.
Yeah, it's terrifying. If you stop and think about it for even a minute,
this stuff can overwhelm you. So I'm an educator, but I also have a background in marketing,
corporate training, more of the business side of things. And I think what you're talking about,
this idea of being nimble in thinking, it's maybe not necessarily how a lot of academics usually
(25:09):
think about things. You learn your subject matter, you get engrossed in it, and then you teach it,
and you have that expertise. I feel like maybe it's just me, or maybe it's my training,
or maybe it's the different roles I've held over the years. I never think I've got it figured out.
I'm always worried that I'm five steps behind. I'm always trying to think of new ways to do things,
not just to make them better, it's not about necessarily always making things better,
(25:33):
but it's about experimenting and exploring. And I feel like that mindset, that true growth mindset
that we talk a lot about as educators, I think that's part of what we need to apply to this field
itself, to this field of teaching. So many faculty members, so many teachers, got into teaching not by training to be a teacher, but because they got their master's and PhD in a particular field knowing
(25:56):
that they would be a teacher, but without any actual explicit training in it. So again, yeah,
it's this opportunity to stay open, to be humble, to listen, and to explore widely and see what out
there might resonate with us in our own practice. So, I mean, Rebecca, you're asking for particular
examples. And again, I guess I might just kind of default and say that those examples are going
(26:19):
to be different for so many people. I think there's a lot of faculty work that can be,
and maybe will be, automated. There's a lot of elements of record keeping, data tracking,
things that we do and spend a lot of time on that we maybe don't need to spend that much time
on anymore with AI able to crunch some of these numbers for us or develop data visualizations for
(26:41):
us. I mean, I can pull dashboards… I don't want to scare any of my adjuncts who are listening in my
program… but I can pull a dashboard and I can see how often they're in the class. I can see
what they're doing. I can see their persistence numbers, their retention numbers. I can see all
these different things, and I can compare them to other faculty. So I know that Professor X
tends to have students who do a really good job understanding course learning outcome 3,
(27:06):
but Professor Y is stronger on course learning outcome 4, but not on course learning outcome
3. In order to get all that information, though, I have to look at three different dashboards. I have
to go into the class. I have to look at all these different things. And what does that give me?
It gives me some element of information that helps me to understand how I might interact with these
professors to provide professional development opportunities that might improve their practice
(27:30):
in the classroom. Maybe AI can automate all that. Maybe AI can automate some of that data crunching,
and then it can just tell me, “Hey, go look in Professor X's class. Look at week three. See what
they're doing. Use your own human eyes. But I'm telling you where to go look.” We have tools that
do this for student work. I think I'm just trying to sort of talk my way through an example of how
(27:52):
some degree of faculty work, like in any career, is kind of just a slog. You wake up every morning,
you've got 30 emails, and it's just this digital slush that you have to work through to get to the
good stuff, interacting with students, working on a new research article you're trying to tame,
maybe these tools can help us get through some of that, while freeing up our time to be more
(28:16):
creative, more human, to do those things that we've designated are important for us to do as
people. And again, maybe that metric shifts for some people. I'm teaching freshman comp. Feedback
is really important to me, I don't want to automate that, but if you're teaching a 300-level
class or 400-level class, and you want to be able to give some students feedback on grammar, maybe
(28:40):
that is the kind of thing that AI could help you to provide to a student, freeing you up to deal
more with the complex arguments they're making in their capstone paper. For me, I live in those
details, so I need to do that. But maybe that shifts, again, depending on what your role is,
where you're teaching, what you're teaching, and again, yeah, like you were saying, just
(29:01):
to kind of free us up to do things that are more meaningful when we are the one that does them.
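(To sketch the data-crunching step imagined here: assuming per-instructor outcome averages already exported to a CSV, a few lines of pandas could flag where a human should go look. The file name, column names, and threshold are invented for illustration.)

```python
# Minimal sketch of the dashboard data-crunching described above:
# combine per-instructor outcome data and flag where a human should
# go look. Assumes a CSV export with columns "instructor", "outcome",
# and "avg_score"; all names and the 0.7 threshold are illustrative.
import pandas as pd

scores = pd.read_csv("outcome_scores.csv")

# Average each instructor's performance on each course learning outcome.
pivot = scores.pivot_table(index="instructor", columns="outcome",
                           values="avg_score", aggfunc="mean")

# Flag instructor/outcome pairs below threshold for human follow-up.
for instructor, row in pivot.iterrows():
    weak = row[row < 0.7].index.tolist()
    if weak:
        print(f"Go look at {instructor}'s class. "
              f"Outcomes needing attention: {', '.join(weak)}")
```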
We want to help our students learn how to use AI ethically and responsibly to prepare them
for their lives beyond our classes, but we're also concerned that they actually learn some
basic skills. What are some strategies that you've used or you recommend to encourage
(29:23):
students to use AI in a productive manner, but not as a substitute for learning?
I love this because I feel like so much of the discourse that we hear about is how to
ban AI usage, or coming up with these really Byzantine ways of getting around AI usage,
AI-proofing an assignment, and I feel like that's going to have its place, because, as
(29:45):
we've been talking about, eventually AI is going to beat past all of this. So the crux of this is,
how do we find ways to use this in the classroom? For me, the key is to normalize and openly discuss
AI use with your students, rather than trying to ban it. Students are already using the tools. We
know this. There's an article published every 10 minutes that gives us new data about how we know
(30:07):
students are using AI in classes. So we're not going to get rid of it. Even if your university
or your department bans AI usage or limits it in certain ways, students are still going to
do it. So by inviting it in, we can try to come up with a way of getting students to understand
better ways to use it, rather than just like you were saying, John, rather than just outsourcing
(30:32):
and coming up with products where they don't learn anything, we'll find ways to integrate it into the
classroom. I did some research recently, and building on a few other ideas I found, I came up with a framework for balanced AI integration in course design. The framework is just four parts. It's pretty simple, actually. It starts with setting clear guidelines for AI usage,
(30:56):
telling students what they can and can't use, what tools they can and can't use, where they
can and can't use them in a particular assignment. From there, it means that the assessment itself
needs to emphasize the learning process over the product. The third stage is to ensure that you're
encouraging critical thinking, and the fourth stage is to build in feedback and reflection. So
(31:19):
what that looks like is that you're going to talk to your students about AI as a helpful tool for,
let's say, brainstorming or outlining or maybe checking your finished work. It's not a shortcut,
but it's a way to use the AI in a productive manner, telling them they can use it in a
certain prescribed way. Maybe they'll use it only in that way. But from there, we do need to make
(31:43):
sure that we're modeling that appropriate usage. We're showing students how we use AI in our own
work. We're talking about it. We're discussing it. And we continue to position AI as an assistant,
not an author, not somebody who is going to generate this material for us, but a way that
it can help us get to certain aspects of whatever the assignment might be, emphasizing that process
(32:09):
over the product. We're so used to grading essays. Maybe the essay isn't what we grade anymore. Maybe
we grade the entire process that leads to the essay. It's not about the thesis statement,
it's about how the student documents their process of developing the thesis statement.
Maybe that's part of how we do this. But I also think we need to make sure that we're building
(32:30):
in reflective tasks where the students can really analyze the AI-generated content. They can talk
about its limitations, they can talk about the biases. They can talk about how they might have
done something different, and analyze what that means, try to figure out what the difference is
between human-generated output, writing, versus AI-generated output. Having that reflection built
(32:53):
in is something that I think can really help students with this idea of authenticity and
authorship. So again, just making sure that you're explicit in your guidelines, making sure that
you're showing where AI can and can't be used, and kind of talking about that usage, I think that's
one way that faculty can really bring it into the classroom. It can't be worse than it already is,
(33:14):
right? We're already dealing with half of our papers being generated fully by AI. I know it
sounds almost counterintuitive to say to your students, “Okay, look, you can use AI for this,
but you gotta use it in this way,” but I have to believe that if you set up those guardrails,
if you work with students, I think they're going to get to that point where they're using it as a
(33:34):
partner, as a tool, rather than something that's just wholesale generating their content. I mean,
again, this might work differently in a math class than it might in a humanities class,
but I do think that there's generalizability here. There's a way to make this work in a
lot of different environments. Can you share some examples of how you've made it work in your writing courses? I can. For example, we're talking about the
(33:57):
idea of generating. One thing I developed is a tool I call a thesis accelerator, as opposed to a thesis generator, right? So you see these thesis generators online, and what a thesis generator
does is a student plugs in a certain amount of information and it just gives them a thesis
statement. Again, there's no learning. The student hasn't learned anything about what a thesis is,
(34:18):
and they certainly didn't have any real, let's say, agency or authorship of the thesis statement
that's been developed automatically through one of these generators. And you can go to
an AI tool and you can say, “Hey, I'm working on an essay about why I want to be a nurse,
and it kind of stems from the fact that one of my parents was really ill when I was a kid,
(34:38):
and so I just want a thesis statement that does all that. Make it sound good,” and the AI will
do that. So this thesis accelerator tool that I've used in a few of my courses is a way for students
to talk about their thesis statement without letting the thesis statement be written for them.
The AI has been prompted explicitly not to write any material for the student to use in
(35:03):
their essay, but to work with them to continue to test their assumptions and ideas, to get them to
form this thesis statement. So it asks questions. It asks what the student is interested in, what
they want to write about, what they're thinking about, asking if it really aligns to the topic.
It's asking very probing questions that are pretty explicit in terms of trying to get this thesis
(35:26):
developed, but it won't write it. It will keep working on it that way. But the assignment is
not about interacting with the AI. The assignment is not about the thesis statement. The assignment
is about the student reflecting on “Okay, I walk into this process. Here's what I wanted
to do. Here's what the AI wanted me to do. Where does that intersect? What did I come up with?
(35:49):
Is that different from what I would have come up with on my own?” and to just really authentically
kind of assess, “Okay, this thesis statement that I've got, some of these are my words. Some
of these aren't. Some of this idea is mine. Maybe this idea isn't,” and I feel like that's one way
to get students to understand those limitations, to get students to understand that this is a tool
meant to potentially help them, but not supplant them. And I found, as with any assignment,
(36:14):
sometimes it works, sometimes it doesn't. Some students really approach that strongly,
and they get a lot out of it. Other students find ways to work around it. But I feel like
creating a tool like that creates a path for students to interact with AI, rather than simply using it to get information or output. It's in its infancy. We're still figuring this out,
(36:36):
but I think that's the way forward to get to this idea of personalized learning that AI seems to be promising us, this idea that we can have all these personalized learning tools. How
do we get there? And I think part of it comes from interaction and conversation. So teaching
students in that way, truly dialogic, I think that's one of the ways we can get there.
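(A rough sketch of how the accelerator's "question, don't write" constraint might be expressed as a system prompt, again assuming the OpenAI Python client; the wording is invented for illustration and is not the actual tool.)

```python
# Minimal sketch of the "thesis accelerator" pattern: a system prompt
# that forbids the model from writing the thesis and keeps it in a
# questioning role. All wording is illustrative, not the actual tool.
from openai import OpenAI

client = OpenAI()

SYSTEM = """You are a thesis accelerator for first-year writing students.
You must NEVER write, rewrite, or complete a thesis statement or any
sentence for the student's essay. Instead, ask one probing question at
a time: what they want to write about, whether their idea aligns with
the assigned topic, and what assumptions they are making. Keep the
student doing the writing."""

history = [{"role": "system", "content": SYSTEM}]

def turn(student_message: str) -> str:
    """One round of dialogue: append the student's message, get the
    accelerator's next probing question, and keep the running history."""
    history.append({"role": "user", "content": student_message})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(turn("I want to write about why I want to be a nurse."))
```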
(36:59):
So one of the things I think you're suggesting is that faculty have to engage with AI to be
able to work with it effectively. And I think throughout academia, there are a lot of faculty
who are reluctant to even consider the use of AI and just want to ban its use by students.
What sort of strategies would you encourage institutions to adopt in terms of making faculty
(37:21):
more aware of what AI is capable of and how perhaps their students might be using AI tools?
I think partly it starts with normalizing the use of AI, something we were talking about in terms of
students in the classroom with faculty. I think we just need to make sure that there are forums for
experimentation. Faculty need to be able to talk about the AI and the ways that they're using it,
(37:43):
they need to be able to share that with their leaders. They need to be able to feel a sense
of trust that nobody's judging them based on things that they're doing, as long as
they're doing things responsibly, ethically, experimenting. I think that's how we learn. And
having forums for that, whether it's a community of practice, whether it's discussion circles,
(38:03):
whether it's a Friday open office hour, some way to share and talk about the gains,
the wins, the losses. I think that's a first step of taking an organization, an institution,
a university, and helping it kind of take those steps toward AI use in higher ed.
But I think that an institution can say, “Okay, we've got this really forward-thinking process
(38:26):
with AI, and we think it's okay to use it in X, Y, and Z ways, and we'd like you to experiment and explore.” There are still faculty who won't do that. A lot of faculty just aren't
going to want to learn this. And so I'm not entirely sure of the answer to that question.
I think that's the underlying problem with so much of the work that's being done with AI and higher
(38:47):
education is the fact that you need to have willing partners. You need to have supportive
institutional governance. You need to have supportive infrastructure in place, but you also
need faculty to truly engage with it with an open mind. And I'm not saying that that's bad or good.
Everyone's allowed to do what they want to do. I know a lot of faculty will never want to use this.
(39:08):
The sort of pervasive idea that we see with AI is that we should ban it, that it shouldn't be used in the classroom at all. So many of my colleagues fall on that side of the spectrum, and that's fine,
but then I guess it's a process that's going to take time. I can only imagine what it was
like for teachers when a principal walked into the office, the bullpen office one day and said,
(39:30):
“Hey, you know what, tomorrow for your quizzes and tests, I want you to use the Scantron form.
It's weird. It's different. It's totally new. I want you to try this Scantron out,” or the
day somebody first put an overhead projector on a teacher's desk and was like, “Use this to teach.
Give me your chalk. Start using this thing.” I feel like it's fundamentally changing how we can
(39:51):
interact in the classroom with our students, with each other. And if you don't want to engage with that, to think about that, I guess that's your prerogative. Famous story in my family: my dad retired from his
job the day they put a computer on his desk. He walked out the door that day. I think it's going
to be a slow process, but I think eventually we're going to get to a point where faculty are going to
(40:13):
be able to see the benefits of this. We're going to show the benefits to them. We're going to have
to run workshops. We're going to have to run dedicated professional development tracks that
explain the small wins and the large wins of using AI, some of the upskilling that we were talking
about earlier in the conversation. If we can show faculty that this isn't just about generating content, but about learning while we're doing it… that it's a tool for reflection, an aid for reflection…
(40:38):
I think the more we can develop and share use cases that show that, the more we're going to win people over. We're not going to win everybody over, and I think that's
fine. There are probably a lot of people who still won't use Scantrons, but I feel like it's one of
those moments where getting to the hearts and minds of faculty is what's really crucial here,
(41:01):
and a lot of the conversation around this doesn't always seem to be productive for that. We're
talking about limiting AI or banning AI. I think we need to have more conversations like what we're
having today:
(41:10):
ways to productively include AI, and
then let's talk about them, let's delineate them,
let's share them, and let's see if it resonates. Maybe it will for some, and then it sort of spreads. And we should note that in the podcast released a
week before this one, we addressed some of those issues in terms of what's being done at several
(41:31):
campuses in SUNY, and we'll include a link to that in the show notes. But one of the things
we've noticed here is that many of the faculty were really resistant to the use of AI. Once
they were in a workshop and got to see how they could use AI to help improve their teaching,
they became much more aware of possibilities, and their attitude changed really dramatically
(41:53):
within even just a few hours of that professional development. But the key is getting people to that
professional development work, and it's sometimes hard to get people started with that, especially
if they have this fear or this serious concern about ethical issues associated with AI.
I think that's why finding ways to show how AI could help do work you might already be doing
(42:17):
is one of the ways in for faculty, especially. I mean, what you're talking about is the same kind
of thing we see in studies. People don't want to use AI until they try it; once they do, they see how it works, they start to understand, okay, this can be a powerful tool. They may still sort of ethically make a choice not to use it, I suppose, but I feel like just seeing how it works can
(42:39):
be the gateway for so many people. So what's that gateway for faculty, but to show them how it might
help influence their work, how it might help free up time to do the creative things we were talking
about earlier. I feel like the more we can give examples of that, I think the more we can lead
people to it. And it means starting small. It means looking at what we're already doing and
(42:59):
seeing if there are ways this can be made to assist in doing it a little bit better.
Along the same lines, one of the things that I've observed in the way that institutions are handling
AI is typically, right now it's voluntary, like professional development around AI is voluntary.
And so you're getting folks that are at least curious or completely resistant because they
want to fight against it, so they're showing up to things, like there's some motivation to show up,
(43:23):
and it's their motivation to show up. But if we start meeting faculty where they're already at… in department meetings and other places where they're already kind of required to be… to at least raise the issues or the conversations, that'll continue involving some additional people.
That's the hope. One of the things we're going to start piloting with the new academic year: we have a roster of course revisions that are
(43:46):
going to happen over the course of the year, and we're going to sit down with faculty experts,
and we're going to say, “Okay, you can follow the old 12-week model of revising your course, where
you're going to meet with people, you're going to develop the content, or you can follow this
new model. It's nine weeks long. It's a little shorter. It's going to use AI to help assist
you. Which way do you want to try? And if you pick the nine-week way, we're going to document that,
(44:09):
we're going to share that out as a kind of story, good and bad, whatever happens.” And I feel like
that's, Rebecca, kind of at the heart of what you're talking about, this idea of finding ways to get
faculty to connect with it that are kind of low lift. You're going to do this thing anyway. Do you
want to try this other possibly easier, possibly faster way of doing things, just to see what
(44:30):
happens? You want to help us figure this out? And I think that might help. I love being an academic.
I love teaching university. I love my students, but you read these emails from Microsoft or Asana,
or I mentioned Ethan Mollick earlier. He has a substack, and you just see the massive amounts
of money that businesses are pouring into AI training. They're mandating it. They're
(44:54):
throwing everyone into day-long workshops. They're testing and piloting. They're opening up tools, and
universities and institutions just aren't doing that. We're not as well funded, and maybe we're
a little stodgy, maybe we like to do things at our own pace. And I respect that and appreciate that,
and that's one of the reasons why I'm an academic and not a CEO, one of the many reasons I'm not a
CEO. But I feel like sometimes I read those things and I just think, “Boy, how nice would it be if we
(45:19):
could just mandate everybody an AI exploration day,” you know, just play with a new tool and
see what you come up with. Because professional development, educational development in higher ed,
is so different from what I experienced as a corporate trainer, where people were thrilled.
They might make fun of it. They might think the outcomes weren't going to be worth it,
but they were there and they were active and they were engaged, and as we're intimating here,
(45:44):
it's tough to get people to show up, it's tough to get them to engage, it's tough to track if they
learned anything, and it's tough to check in six months later and see if they're still doing it. So
what does that look like? That's the challenge of educational development overall, probably.
But maybe it's small wins. Maybe it's small communities that catch fire. I don't know.
I wanna underscore one word that you said, which was play. I think that's something that we don't
(46:06):
often allocate time to: playing around with new tools and technology. We're putting our time resources towards something else, but if we put time resources towards play and experimentation, we might find something that's useful.
I think we just have to engage with that
growth mindset. We have to be like our students. We have to be learners. We have to listen. We have
to just explore, instead of always jumping in with the information, with the answer. We have to sit
(46:30):
back and maybe that play comes from it. One quick example, my colleagues are probably not listening,
so I'll share this. I woke up one day and I had an email that had kicked off the night
before. There were like six or seven of us on the email, and it was probably at that point, like,
16 or 17 emails long. I clock off, I go to bed, wake up the next morning, and there it is, staring
(46:52):
at me. And I was like, boy, this is exactly what people talk about: “Hey, I can help you with your email.” So I grabbed all of that, put it into a protected AI tool that's not going to train
itself on my data. And I just said, “Is my name mentioned here? Are they talking about comp? Like,
why am I involved in this?” And it spit back some really good, useful information. And I was like,
(47:12):
“Okay, great.” And then I had 30 minutes to play, to just do something else, something different,
and instead of engaging in the back and forth politics of a gigantic email chain, I got to
cut through the noise and do something that was more meaningful for me in that moment, and that
was that play of exploration, trying something new. If we can get back to that and can enkindle
(47:33):
that, I think we're going to get there. I did enable the Apple Intelligence on my phone,
which is only a few months old, and it does give me email summaries automatically. They're often
somewhat distorted. In fact, they're often so scary it forces me to look at my email sooner
than I would have otherwise, because it does look like there's something serious that I have
to address right away. But it's getting better. Those types of summaries can be really useful.
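(A minimal sketch of the email-triage pattern described a moment ago, assuming a "protected" model behind an OpenAI-compatible endpoint that won't train on your data; the URL, model name, file name, and questions are illustrative placeholders, not a specific product.)

```python
# Minimal sketch of the email-triage pattern: feed a long thread to a
# protected model and ask targeted questions. The base_url, model name,
# and questions are illustrative assumptions, not a specific product.
from openai import OpenAI

client = OpenAI(base_url="https://protected-ai.example.edu/v1")

with open("thread.txt") as f:
    thread = f.read()

questions = """Is my name mentioned anywhere in this thread?
Is the composition program discussed? Why am I included?
Summarize only what requires action from me."""

reply = client.chat.completions.create(
    model="internal-model",
    messages=[{"role": "user",
               "content": f"{questions}\n\nTHREAD:\n{thread}"}],
)
print(reply.choices[0].message.content)
```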
(47:57):
I remember when I first turned on AI summary on Zoom, we do a lot of work on Zoom as an online
institution. I first turned on an AI summary, I was in a meeting, and afterwards, I'm looking at
the summary, and I'm like, these aren't the things we talked about at all. You got everyone's names
wrong, and this is really bizarre. And not even three months later, I was looking at it again,
and I was like, this is actually now really good. And so along with playing, I think it's that sense
(48:19):
of continuing to grow and to learn to keep coming back, don't play with it and then write it off,
but continue to explore, continue to play, and to also take what's there to be taken and don't over
rely on it. We don't want our students to just over rely on this. You don't want me to never
read my emails again, just reading summaries of emails. But I do think that there are ways we can
(48:41):
develop that mindset, really a deep sense of AI literacy as an important outcome in and of itself,
where we realize how to interact with these tools in a way that's going to help us free up that time to do the better work more, as opposed to the slush work. And we get buried in our emails. We get
buried in grading. We get buried in so many different things. There are ways to shortcut
(49:03):
some of that while still doing the meaningful stuff. I'd like to figure out what that is.
Sounds like a great note to get to our last question, which is we always wrap
up by asking:
(49:12):
What's next?
Well, what's next for me is
probably grading papers. It's… Wait. No, that's the slush.
For me, that's important work. We've been talking about this a little bit. For me, I think one of
the things that's really important is to continue to learn alongside other people. That is how I've
learned the most. I'm a teacher, but I'm also a teacher of teachers, and one of the ways I learn
(49:36):
how to teach my students better is by working with them. One of the ways I learn how to interact with
my other faculty members, the faculty members that I oversee, is by being a teacher myself, seeing
how they do things, working with them, helping to figure out what those elements are that can be
honed or refined or developed. And so to me, it's always just about learning. It's about putting
(49:57):
myself out there, trying to figure out what can be figured out, and being curious, listening,
finding ways to think strategically, to take the small thing I might be doing and think about it
in terms of how that might scale to other classes, other teachers. I'm playing around with AI on my
computer in the morning. I'm trying to figure out different ways of doing things. While I'm doing
(50:19):
that, I'm also thinking, “ooh, how can I convey this to somebody else? How can I best show this,
demonstrate this to help other people get excited about it?” So it's about structured play. It's
about strategic thinking. It's about having fun and getting back to the root of what brought me
to this crazy discipline to begin with. I want to ask better questions. I don't really want to know
(50:44):
the answers to things, because that always seems like the end. I just want to keep asking questions
and getting to new places along the way. Well, thank you. This has been a fascinating
discussion, and it's one that is really important for people to be
having right now on every campus. Yeah, I appreciate it. It's great to
talk about this stuff. And like I said, you guys kept me on my toes a little
(51:05):
bit here. It's fun to think out loud, and I appreciate your time today, John, Rebecca.
Sounds like a really human activity that just occurred that has maybe a relational aspect,
right, that maybe can't be replicated by AI. I mean, as podcasters, you guys have explored with the Google podcasts, right? Notebook LM? Yeah.
…where you can give it a research article, and you know you're listening to it, and it sounds
(51:26):
like a podcast. It sounds like you're listening to NPR, but the information is wildly wrong,
and it's missing that nuance and these elements of redirecting things, stuff that's coming up
that's not scripted. Some of the things we talked about today were not questions you shared with
me that we might talk about, and that's great. Yeah, that's the human enterprise. It's messy,
it's weird. Can't be predicted. That's what makes it fun and worthwhile.
(51:48):
Although Notebook LM is getting better, as all these other tools are as well.
More deepfakes ahead. Thanks for joining us.
Yes, thank you. Thank you.
If you've enjoyed this podcast, please subscribe and leave a review on iTunes
(52:11):
or your favorite podcast service. To continue the conversation, join us on
our Tea for Teaching Facebook page.
You can find show notes, transcripts and
other materials on teaforteaching.com. Music by Michael Gary Brewer.