Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Faculty are faced with the need to adjust instructional strategies in response to
AI. In this episode, we discuss a professional development initiative
for faculty involving six campuses.
Thanks for joining us for Tea for Teaching,
(00:21):
an informal discussion of innovative and effective practices in teaching and learning.
This podcast series is hosted by John Kane, an economist...
...and Rebecca Mushtare, a graphic designer...
...and features guests doing important research
and advocacy work to make higher education more inclusive and supportive of all learners.
(00:49):
Our guests today are Racheal Fest and Stephanie Pritchard. Racheal is a Pedagogy Specialist at
the Faculty Center for Teaching, Learning, and Scholarship at the State University of New York at
Oneonta. She also teaches writing courses in the English Department. Stephanie is the Coordinator
of the Writing Center, the Coordinator of Writing and Ethical Practice, and an instructor for
(01:09):
classes in poetry and English composition here at SUNY Oswego. Racheal is the Principal Investigator
and Stephanie is one of the campus coordinators on a SUNY multi-campus grant focused on faculty
development related to AI. Welcome Racheal and welcome back Stephanie. It's been a while.
Thanks for having us. We're very happy to be here.
(01:31):
Today's teas are… Racheal, are you drinking any tea today?
Yes, I actually came prepared with two teas. I'm a big tea drinker, and I'm starting with
an organic loose leaf orange pekoe, and then that will be my allotment of caffeine for the
day. And then I'll be drinking an herbal tea, which is a honey turmeric chai.
Both of those sound delightful, but maybe you're slightly an overachiever in the tea
(01:54):
category. How about you, Stephanie?
I am drinking my go-to Earl Grey tea,
and then perhaps this afternoon, I will indulge in some blueberry green tea.
I knew that Earl Grey was gonna be involved.
We do have, by the way, I think, four different
varieties of Earl Grey tea at the teaching center, and I think Stephanie has tried each of those at
(02:16):
one point or another. How about you, John?
I have a spring cherry green tea. Now that it really is spring here,
we haven't had snow for over a week.
At least today, it feels like spring.
We'll see what the dice rolls tomorrow. I have Chai today.
We've invited you here to discuss this multi-campus grant funded AI
professional development program that you've been working on all year. Can you
(02:38):
provide an overview of the program?
(02:43):
So, this program is called Teaching with AI: A Cross-Campus Community of Practice, and
essentially we brought together faculty from six
SUNY campuses, including regional comprehensives, technology schools, and community colleges,
and we asked them to design learning activities that integrate AI into the classroom in ways
that are critical, active, and inclusive.
So you mentioned that there's six campuses
(03:06):
involved. Can you talk about how many faculty are participating?
We have about 66 faculty participants at this time, and the majority of them are funded by
our IITG funding. But we were able to get some additional funding at Oneonta and Oswego as well.
And Oswego, as Stephanie and John, you can attest, had a really big and enthusiastic cohort.
(03:32):
For those that aren't familiar, can you describe what an IITG grant is?
The IITG grants are innovative instruction technology grants. They're awarded across the
SUNY system, and they're peer reviewed through a competitive process that is
used to distribute funds, especially focused on educational technology,
and often OER (open educational resources). We are participating with six other SUNY campuses,
(03:59):
and those campuses are Alfred, Morrisville, Oneonta, Orange, Oswego,
which is my campus, and Schenectady.
Could you talk a little bit about what the
participants in this program have been asked to do as part of their work in the program?
Our project began in January, and we started with a webinar from Anna Mills, who is an advocate for
(04:22):
AI literacy and also for OER or open educational resources. She teaches writing at the College of
Marin, and we made her webinar available to all of the participants in our grant. But we also had
some interest from SUNY, so we advertised the webinar through SUNY’s Center for Professional
(04:42):
Development, and we had a very large group of participants attend Anna Mills's talk, which
really focused on thinking about AI, how it's affecting the different assignments that we're
giving in our courses in all sorts of different ways in higher education. She also spoke about
some strategies that teachers should consider as they are really working to either adapt AI
(05:07):
into their courses or sort of try to manage how AI is affecting the classes that they teach.
And I'll just add that one thing I really love about Anna is she is focused on critical AI
literacy, and that was a big piece of our project, as we'll talk about as we develop the details,
(05:27):
but she has been a voice that is kind of guiding conversations about evaluative engagement with
AI and not only ways to integrate and use it in the classroom, but also ways to think
about how to protect learning objectives now that these tools are widely available.
So we started our project by listening to Anna Mills, like I mentioned. It was very well
(05:50):
attended, and that was in early January. After that, each of our campuses had a day-long kickoff
event where participants in each of our cohorts gathered in person on the various campuses, and we
led them through a series of structured activities and introduced them to the grant objectives
(06:11):
and deliverables. We also shared with them the Brightspace shell that we had created specifically
to help guide our cohorts through this process.
And I'll add that in the initial meetings,
one of the things that was really helpful in getting everyone there, besides requiring people
to be there as part of the process, was that lunch was available, which made it a little bit nicer,
(06:34):
and it created a very nice environment for people to begin their work together.
It was a very nice way to pull everyone together on our different campuses,
introduce them to one another, and also sort of see what level of knowledge or engagement our
different participants had at the very beginning of the project. We had participants on our
(06:54):
campus who were doing all sorts of different things. Some had already been integrating AI
into their courses in various ways. Others were experimenting with ChatGPT for the very first
time during that day-long kickoff event. So it was really interesting and kind of energetic and
wonderful to see all of these different people in a room learn what they were doing and also learn
(07:16):
about what they wanted to do and what goals they wanted to achieve with the project.
And I'll also note that some of the faculty were there mostly to learn how to keep students
from using it as an alternative to learning. Once they started seeing, over the course of that day,
ways in which they could use it and perhaps have their students use it productively,
there was a pretty significant shift in their attitude towards AI use over
(07:40):
just the course of that very first day, which has continued over the project.
That's a great point, John, we saw that here at Oneonta as well, and I think that we have
a number of faculty on our campus who are quite enthusiastic about AI and have been adopting it,
And some of the other diversity that we saw in the room was, I thought, really
interesting: there were folks who were more skeptical about AI or more invested in kind
(08:07):
of protecting learning objectives, and I did want to make sure that the experience helped
them to think about those problems and questions, because those are so pervasive. And then there
were faculty who also have been adopting really enthusiastically in different ways,
so creating a space for both of those groups to kind of learn together and hash out different
(08:27):
positions, even while maintaining those different positions, because I think AI's integration,
it really varies across disciplines, and I think that that richness and diversity was a really
valuable and important part of this project.
So beyond that first kickoff day, what were some
of the other things that faculty have been required to do as part of the project?
(08:48):
So after the kickoff, we moved on to a series of monthly meetings that we had planned,
and those meetings were designed to continue the conversation and also to offer pedagogy support
around designing learning activities. So in addition to thinking about AI integration,
(09:08):
we were also offering that pedagogy support of thinking about backward design, thinking
about how to begin with course objectives, how to design an activity that is active and inclusive
for students, and then how to think about feedback and assessment. So we'll talk a little bit about
the rubric that we use. I mean, we did use it to assess faculty deliverables, but it was more
(09:34):
of a tool to guide the conversations about what a strong learning activity could look like in a
range of contexts. So that rubric kind of guided our monthly meetings. We were focusing on having
an AI policy in your syllabus, developing at least one course objective related to AI. And then we
(09:54):
were focusing on designing the learning activity and finally offering feedback and assessment on
the learning activity. And we're wrapping up those monthly meetings in April. Some campuses
have already hosted them. Mine is actually this Friday coming up, and at that April meeting,
we were giving folks a chance to share and give each other feedback on drafts of
(10:19):
their assignments and learning activities before they submit those final drafts.
One of the things that we tried really hard to do in those monthly meetings was to model effective
ways of teaching to everyone who was participating in our program. So in addition to the things that
Racheal mentioned, like backwards design, we also spend a good chunk of time focusing on the TILT
(10:43):
approach. For example, they had an option: they could do an in-class learning activity with AI or
they could have their students submit something in an out-of-class way. So we spent some time
talking about what those assignments could look like, and why that level of transparency in your
expectations as a teacher matters, and why that's really important. We also spent time,
(11:05):
as Racheal said, talking about feedback and all of the different ways that we give feedback to our
students, whether it just be a grade or written feedback or verbal feedback, using a rubric or
not, using peer feedback as a successful strategy. So we really tried, during these monthly meetings
(11:27):
and through our Brightspace shell, to really model what effective teaching looks like, in hopes that
our participants would take that and then apply it to the projects that they were developing.
And another important piece that pairs with that, I think, Stephanie,
we also tried to be really responsive to the different needs and contexts that our
(11:47):
faculty were bringing. If you were planning an in-class learning activity, for example,
you're not going to grade that or provide written feedback, and we certainly weren't expecting that.
We wanted to allow for a range of approaches, so we talked about strategies for giving feedback
during class or through conversation. We tried to ensure that a range of different approaches
(12:12):
to learning activities and assignments were represented in the conversation.
And just following up a little bit on that, one of the reasons we focused on the TILT framework
is because, in the current environment, students may choose to use AI as an alternative to learning
material, and we'd like them to be aware of why we're asking them to do specific things,
(12:33):
because if they understand that they're developing skills or tools that are going to be useful for
them later, it's more likely that they'll actually engage in it actively themselves,
rather than trying to skip the actual learning by using AI. So we've encouraged faculty not
only to do it in these activities, but also to integrate the TILT approach into all their
(12:54):
teaching and learning activities.
For those that aren't familiar,
TILT stands for Transparency in Learning and Teaching. We do have a previous episode on that,
that we'll put in the show notes.
And we'll share a link both to that
and to Mary-Ann Winkelmes’s website, which describes this as well. And just as one more
aside related to this, if you're not sure of how to create a TILT assignment, you could
always feed an assignment into AI and ask it to put it into a TILT framework. And that works
(13:20):
especially well if you also share your course learning objectives with the tool as well.
One of the things that I've heard all three of you mention is this Brightspace
course that you set up for faculty. Can you talk a little bit about this?
So we spent some time as a leadership team developing the Brightspace shell.
We knew that we wanted a common place for all of these materials to live, especially
(13:43):
the support materials, as our participants were working their way through our different months,
or our different modules. So we worked with members of the leadership team to develop this
Brightspace shell that could kind of serve as a common place for us all to come together. In
that shell, in addition to uploading our monthly materials that would focus on a different piece
(14:05):
as we moved through that rubric that Racheal was talking about earlier, we also established clear
guidelines for all of our participants so that they could understand what their requirements
were as they moved through the program, which we'll talk more about in a little bit. But that
Brightspace shell also offered opportunities for our participants to talk to one another. One of
(14:27):
the things that we required our participants to do was engage with one another in what we
called our community of practice discussions. So our participants were required to engage with a
small group of their colleagues from different campuses. This was a little bit interesting,
because Oswego and Oneonta had larger cohorts than some of our other participating campuses.
(14:53):
At Oswego, we were lucky enough to receive some additional funding and support from our Chief
Technology Officer, Sean Moriarty, which we really appreciated. And I know that Oneonta had some
extra funding as well. Is that right, Racheal?
That’s right. We had initially requested funding
for 10 faculty to receive stipends of $600 each, but our Faculty Center for Teaching, Learning,
(15:16):
and Scholarship was able to provide additional stipends for, I think for us, it was six more
members to participate in the program. And I think for you, you were able to double that number.
So, Oswego ended up with 24 participants total, which was about 14 more than what we had initially
planned for. So in the community of practice discussions, it was a little bit heavier with
(15:41):
membership from Oswego and Oneonta, but we were still able to design these discussions based
off of what our participants requested. So some wanted the opportunity to speak with people from
other campuses who were in similar disciplines as themselves, and some wanted to be in groups with
(16:01):
people who were from totally different disciplines to sort of learn what other people were doing.
I think we're bringing out here the two tracks that ran parallel throughout the course of this
experience. On the one hand, we had our in-person cohort meetings on each of the six campuses, and
that allowed people to connect across departments from a range of disciplines with folks on their
(16:24):
own campus. And then we had this second element that Stephanie has been talking about, that was
the cross-campus community of practice element, and those were virtual meetings with three or
four faculty members per group. And we asked those cross-campus groups to meet for a minimum
of two times and then to post their takeaways to the discussion area in Brightspace. That way we
(16:50):
could track and see what our faculty were talking about. We could take a look and answer questions
that might have come up. And we also, as a leadership team, responded to those discussions on
Brightspace, and we really saw the groups talking about a range of topics. One of the reasons that
we wanted to connect faculty across campuses and not just run these cohorts on individual campuses,
(17:16):
is that we're seeing faculty bring a lot of different approaches and attitudes to AI,
and a lot of that is place specific. A lot of it is related to institutional cultures or resources,
and so we wanted faculty to be able to meet from across the system and to share what their
institutions are doing to support professional development related to AI, as well as to share
(17:43):
what different departments are doing, or what conversations around AI might be happening, in those different
spaces. We wanted to create space for people to come together and have those conversations.
It was also really interesting, because participants in the cross-campus groups,
in addition to having these other conversations about AI and pedagogy,
(18:06):
began to really work well together as a unit as they worked on the deliverables for this project.
So they worked together to share their ideas for their learning activity, their learning objective,
their syllabus statement. And then as we got deeper into the project, they started to share
drafts with one another and to ask each other, is this meeting the criteria for the grant? Are
(18:31):
we making a learning activity that's critical and active and inclusive? Do you have feedback
for me? So as we talked about earlier, we had the meetings on each of our individual campuses with
our campus groups, but it was really nice that the cross-campus groups were engaging with one another
beyond what we had initially imagined, which I thought was really a nice bonus from this.
(18:54):
I completely agree, and to try to facilitate some of that conversation as a leadership team,
we didn't want to impose structure on these meetings, but we also wanted to provide
some resources for getting those conversations started. So we created on Brightspace question
banks that faculty could choose to draw from to guide those conversations. And so with those
(19:19):
suggested structures, I think we were able to leave room for that creative exchange. And each
group was able to kind of determine how they wanted to use that cross-campus group time.
We also had, in addition to the cross-campus group discussion boards, we also made additional
discussion boards that were optional, so participants could choose whether or
(19:44):
not they wanted to participate in a board called community conversation, which was very informal,
just a way for people to share their thoughts or ask questions to one another. And again,
this was not required, but we did have participants posting there, engaging with
one another in that way. Then we also had another discussion board that we called news and tools,
(20:09):
which was basically the opportunity for participants to share new tools that they had
stumbled across, or post things related to things that were happening as new AI tools were developed
and came out. As we know they're changing so rapidly. Things are coming out every single day,
so participants had the opportunity to share that information with each other as well, and I
(20:30):
think it was nice to give them the opportunity to participate in more than just one forum, because,
as Racheal said, we had several members who were very engaged in the project,
and I think valued other spaces to collaborate.
What are some of the examples of learning
activities under development by faculty?
So, materials for this project are not due for
(20:53):
another two weeks or so, but we have had several participants submit their materials already,
getting things all squared away before the madness of final exams week here. So we did have a few
faculty who were thinking about engaging with AI tools as part of the brainstorming process.
So for example, one of our participants here gave an assignment where students were asked to
(21:18):
use Perplexity, which is a free AI tool that also can access the web and provide suggested sources.
He's asking his students to use Perplexity to help them continue brainstorming for an assignment that
they're working on in the class. So students already have some general ideas of what they
want to do, and he's asking them to engage with Perplexity to deepen their brainstorming. So for
(21:45):
example, ask Perplexity to provide some suggested sources on the topic that they're considering,
and then evaluate the sources that Perplexity is suggesting. He also asks them to prompt
Perplexity, to give them a counter argument to what they're exploring. So those are some
examples. That would be an example of a learning activity. It's something that he's going to have
(22:08):
them do in class. And the nice thing about his prompt is that he gave his students four different
sample prompts that they could take and then put into the tool, but they would have to fill out
the details of their particular brainstorming. So that's one example from our campus. Racheal,
would you like to give one example from yours, and then I can do another one?
(22:30):
Absolutely, that sounds great. I'll share an example from Oneonta,
while saying as well that the leadership team will be taking a look at the examples
across campuses once folks have uploaded them, as Stephanie has said, but we haven't done that yet,
so we only have the knowledge of folks we've been working with on our own campuses. So one faculty
(22:52):
member in sociology here has been developing a project that I think is really interesting and
exciting. He's going to be teaching a course in the fall that has a unit focused on conspiracy
theories and thinking specifically about how to debunk or counter conspiracy theories.
And so he has identified an AI chatbot called DebunkBot, which has been developed, I think,
(23:17):
out of Cornell. And he's going to ask students to identify a conspiracy theory of their choice,
engage with DebunkBot to see how the AI counters that conspiracy theory with factual information,
and then they're going to evaluate those AI outputs. And I think within the conversations
(23:38):
he's having with students, too, is the question of, do facts work against conspiracy theories?
There are various opinions on that, so not only evaluating DebunkBot’s outputs, but also thinking
about it as a tool to counter conspiracy theories. What are its strengths and weaknesses? And what I
really like about this project is, I think that it exemplifies that critical element that we've been
(24:04):
emphasizing through this project. By critical, we really mean evaluative. We are asking that all of
these learning activities and assignments take up that AI literacy task of helping students evaluate
AI outputs and evaluate AI as software.
Tagging on to the evaluative piece, one of our
(24:28):
colleagues here is working on an assignment that asks students to sort of reverse write a paper.
So she's giving her students a prompt and asking an AI tool of their choice, probably ChatGPT I
imagine… one of the components of our grant that we really emphasized was trying to make these
(24:50):
assignments or learning activities as inclusive as possible. So we're thinking about access, who
has access to this technology. And since ChatGPT doesn't require users to make an account, there
is a free version, a lot of faculty are leaning toward those sorts of tools for this. So anyway,
one of our faculty here is asking her students to have ChatGPT generate an essay on one of the books
(25:16):
that they're reading for the course, and students are then going to look at the essay and evaluate
the output that ChatGPT has created. So they're going to look at things like whether or not the
material is accurate, if there are hallucinations, if it is in fact doing what it was asked to do. So
(25:36):
I think they're being asked to analyze, “How is the quality of ChatGPT’s analysis?” And this faculty
member also has a series of follow up prompting questions to have the students continue to engage
with this essay that was generated and see if they can make it better, or see if that's it, if it's
(25:58):
as good as it's going to get, and if there are still holes in the analysis or with the argument.
As a writing teacher, I thought that that was a really cool way to look at what this technology
can do, really consider its limitations as well, and think about how are we going to do writing,
teach writing as we keep going, as more students rely on these sorts of tools.
(26:23):
I love that you were emphasizing the inclusivity piece there, Stephanie, which has come up in our
conversation where you were mentioning especially how you emphasized TILT on Oswego’s campus. On
Oneonta’s campus, we talked a little bit about UDL, universal design for learning,
but those were frameworks that we wanted to keep in the mix as part of the conversation, we didn't
(26:47):
necessarily have time in our progression to dive into a deeper focus on them. So we set some basic guidelines
for inclusive activities that we encouraged everyone to meet. And those were, as you said,
making sure that students have equal access to whatever AI tools you're asking them to use. So
(27:09):
that would look different on different campuses as well. Here at Oneonta, we're a Microsoft campus,
and that means that we have Co-Pilot for all students over 18. And if you're thinking about,
when you're introducing students to AI tools, not only making sure that everyone can access it,
but also making sure that you're thinking and talking about who might have access to
(27:31):
paid versions, because students are sometimes subscribing to ChatGPT at that higher level,
and just making those conversations about what tools you're using part of the conversation. The
other inclusive piece that we really emphasized was just that providing instructions to make
(27:52):
sure that you're aware that people… students, specifically… are bringing different levels of
knowledge of technology to the classroom. Some students are using AI all the time, others have
never used it and don't want to. I think we've had those conversations in our group. So just being
aware of the diversity of student capabilities as part of that inclusivity and accessibility
(28:14):
piece was really important to us too.
Just to kind of tag on, I thought it was really
interesting that, as we talk about access and who has access to a paid tool versus a free tool,
I'm sure you have seen, I know John and I have spoken about this a little bit that,
right now ChatGPT has a free trial of their paid version that they're marketing specifically to
(28:36):
students. So it's ongoing from now until the end of May, which I thought was really interesting.
And I know that they're not the only tool that has done that. Is that right, John?
Gemini and Claude have also done it, and one of them, I forget which one, actually is extending
it until next year. So there's a little bit of competition going on to get more students using
their particular models, because student use does make up a remarkably large share of the use of
(29:01):
all the AI tools now, for reasons that perhaps are not always as positive as we'd like.
I think it will be really interesting to see how that continues to develop as we move forward.
They're doing this two-month free trial now. What does that mean for the beginning
of the fall semester and beyond that?
One thing we should also note is that
this podcast is coming out in June. We're recording this in late April,
(29:26):
which is why snow was as recent as last week, and why many of the references are towards the last
stages of the semester. This is a rare occasion for us to be this far ahead in our recording. We
often record things a week or two before they're released, but we've happened to be able to get
a lot of podcasts scheduled within a fairly short period of time. Along the lines of the assignment
(29:46):
that Racheal mentioned in evaluating sources, Mike Caulfield has been developing a prompt for SIFT,
which is his approach for analyzing the veracity of online claims, and as of last night,
he's developed an expanded version of that prompt, and we'll share a link to that in the
show notes as well, because we can use AI tools to try, not only to create false information,
(30:12):
we can also use it to try to verify claims.
One of the things I've heard you all talk about
quite a bit, and I'd like to hear a little bit more about, is the rubric that you were
using to evaluate the assignments and things that faculty were creating.
Yeah, the rubric that we developed, as I mentioned earlier, we really thought of this more as a tool
to guide our conversations, rather than as an assessment piece. But we've run, at SUNY Oneonta,
(30:38):
similar cohorts devoted to different learning activities and aims in the past, and we knew from
those experiences that being really clear about what the deliverables are and what elements we're
privileging and would like to see, that's going to help faculty design learning activities that
then can move as we'd like to move them, to our public facing repository. So we focused on three
(31:06):
categories of evaluation on the rubric. First, we were asking folks to develop an AI policy
for their syllabus. That was a big part of this project, too. As we are thinking about helping
faculty and students navigate the emergence of AI, students are coming into a range of classes that
(31:26):
all have different expectations for them. So some faculty might be saying, “if you use AI at all,
you are cheating.” Others might say, “I want you to use AI. You have to use it. And here are some
of the ways that I'll be evaluating your use of it.” So just thinking about how we as a leadership
team could support students by encouraging AI policies through this experience was important
(31:48):
to us, and those policies would capture a wide variety of approaches to AI in the classroom.
Some policies that we've seen have given guidelines assignment to assignment, where some
assignments require AI use, some ask students not to use it. We've seen policies that outline the
(32:10):
specific ways students might use AI throughout the semester, and we've also seen policies
that said you can't use AI except in these very specific ways that I ask you to on this learning
activity. So we've really seen a range there. And the rubric begins with that AI policy element and
with the course objective. So one of the elements that we're collecting as part of the deliverable
(32:35):
with the learning activity is the syllabus with an AI policy and a learning objective.
And I think it's important to emphasize as well that regardless of how faculty feel about AI,
like whether they're embracing it or they're saying, “I really would prefer you not use AI
in this course,” it's really important for us to be transparent with our students about what
(32:58):
the rules are. As Racheal said, our students are generally taking five different courses
or more with five different faculty who have a whole bunch of different feelings about this.
So some people might think I don't want them to use AI in my course, so I'm just going to not put
anything on my syllabus. It's so important for our students that we are transparent,
(33:20):
regardless of how we feel about it.
Yes, and that was one of our aims with
including that syllabus element. So the second rubric element focused on
the learning activity itself. And again, that had the critical, active, and inclusive piece.
I think we've done a good job talking about how we define critical as evaluative, and how we define
(33:41):
inclusive as tool access and instructions. And I guess I can just briefly define active as students
are using the tools themselves. So we're not just asking you to demo AI in a lecture. We're saying,
“Get students in there evaluating outputs for that AI literacy piece.” And I do think it's important
(34:02):
to mention here that we had many conversations about students who want to opt out of AI tool use,
students who might have environmental reasons for not wanting to use the tools or other ethical
reasons, and we did talk about strategies for that, including alternatives that would allow
students to maybe engage with an output that the instructor created from AI or alternative
(34:30):
assignments that aren't actually asking them to engage with AI, but might be asking them to
evaluate AI in a broader conceptual way. So those alternatives did come up for us. And Stephanie,
maybe you could talk about that last rubric piece, the assessment and feedback element.
Sure. So we thought that it was important for our faculty, as they are designing this assignment, to
(34:55):
really think about how this assignment or learning activity would be assessed. How are they going to
give feedback to their students as they experiment with these tools, as they evaluate these tools,
and what does that assessment piece mean? So we have some participants who are just doing
one assignment or one learning activity, so they have to just think about that one piece. But we've
(35:19):
had several participants in this project who have become inspired by the one part and have decided
to make lots of changes to their assignments or their learning activities, and then think about
what assessment means for them and for their students. So as we talked about earlier, the
feedback piece is very important, as our students are critically and actively engaging with these
(35:42):
tools, because we have to guide them through what they are learning, especially if we are really focusing on
AI literacy and helping our students understand, or begin to understand, how these tools might impact
not only the work they're doing in the classroom, but their future jobs. Every career is going to
look a little bit different. Some are really going to be embracing these tools and expecting
(36:06):
our students to understand how to use them, where others perhaps not so much. An example: I teach
creative writing courses. Right now, in those courses, I am not emphasizing AI use because,
for many of my students, they're looking to submit their work to different journals,
and most creative journals at this time are not accepting AI-generated work. So it's important
(36:29):
to acknowledge that, and it's also important to focus on why we're going to continue to do
things the way that we've done them in the past, right? But in other courses,
if I'm teaching English composition, I'm teaching them different ways that they can outline
or draft or brainstorm, and how they can use AI to get feedback on their own work and their own writing.
(36:49):
Those are important skills that will come in handy for them later on, as we talk about things like
efficiency and making more polished drafts and getting feedback before you turn something in.
Those skills will come in handy more so for those students than for some of the creative writers.
We have a student advisory board for our teaching center, and they met last week, and one of the
(37:11):
things that came up was the topic of AI, and students discussed two concerns pretty actively.
The first was that a lot of faculty have not yet made clear what their expectations are about whether AI use
is allowed, when it is allowed, and so forth, and they noted that it was the exception rather than the norm for
people to actually have made those policies. We've obviously been encouraging faculty to do that for
(37:34):
a couple of years now, but it hasn't made it out to all of our classes yet. But our institution
will be requiring an AI policy statement in the syllabi beginning next fall. So this transparency
issue and being clear about your expectations is really important. The other thing they were
concerned about is being falsely accused of AI use, and this is something that is very concerning
(37:58):
for students, and we have a lot of cases where some faculty are just completely ignoring AI,
and others are assuming that AI is being used by all students all the time, and accusing students
of using it without any real proof in many cases, and it's a somewhat troubling environment.
(38:18):
Yeah, I think that's such a good point, John, about the range of approaches that students are
taking to AI, as well as the different assumptions that faculty members are bringing. And I think
that there's sort of a spectrum of assumptions, where, on the one hand, you have the idea that AI
should be completely left out of the classroom and any use of it is cheating. On the other end of the
(38:43):
spectrum you have the idea that we should embrace AI and use it for everything in the classroom, and if we're not
teaching that, we're being irresponsible. So we are seeing very polarized views on our campuses,
and I think that a professional development opportunity like the one we've been running
needs to respect and honor and speak to both of those positions, while also making
(39:07):
the conversation a little bit more nuanced. And I think one of the ways that we've done that is to
show that bringing AI into the classroom actually addresses the problem of students using it for
cheating and plagiarism, because students are:
A) aware that the faculty member knows about and
(39:20):
is actively engaged with those tools, so they might be less likely to just copy and paste,
which is the use that most people do regard as cheating or plagiarism. And in addition,
it not only shows that faculty members are engaged with and knowledgeable about these tools,
(39:43):
but in my experience, students are really hungry for guidance from faculty on how to engage with
them in an ethical and informed way. So we're helping them build those AI literacy skills by
confronting and talking about these tools in our classes. And to me, that might lead to them using
the tools more in specific ways that we're guiding them to, but it can also lead to them using them
(40:06):
less in situations where we want to protect those learning objectives. Like Stephanie,
I teach in the writing classroom, and I have been devoting this semester to engaging with
and thinking about AI, but that has meant we have plenty of writing and activities where I'm asking
them not to use it, and then we have activities where I'm asking them to use it in specific,
(40:27):
guided, critical ways. And so I think that more nuanced approach, rather than totally keeping
it out or totally embracing it, is the area that a lot of our faculty are working in,
and I think it serves our students really well.
And Stephanie and I have also, in a series of
workshops here, encouraged faculty to co-develop their AI policy statements with students,
(40:48):
so that students are involved in that conversation, and if they help develop
the guidelines for the class, they're much more likely to abide by them and to buy into them.
To piggyback off of that as well, in our cohort, especially, I mentioned earlier
that we had a range of experience levels with AI tools. And what's been really interesting
is that for participants who are trying these tools for the first time, they're having some
(41:14):
important realizations about what these tools are or are not capable of, and I think that's
really important. If we are teaching and we're saying, here is my AI policy, and here's why,
if we are familiar with what the AI tools can do, I personally feel that our students are going to
be more receptive to that. If we are making assumptions about what the AI tools can do,
(41:39):
or how well they can complete a task, but we don't really know, then how are we going to recognize an
AI's work when we come across it? So for example, this is a dated example, but I think it still works
really well: in a writing classroom, for a little while after ChatGPT came out, the solution
(42:00):
to avoiding AI-generated work was to ask students to write reflectively, because they'll be writing
from their own personal experience, and an AI tool can't replicate that. And I remember leading a
workshop, and I showed an example of a reflective essay that was generated by AI, and this was like
two years ago. All I had to do was prompt it a little bit. I asked it, in a revision of the
(42:24):
reflection, to include a sad story about a childhood dog that it really loved,
and how that relationship with the dog was the reason it wanted to pursue veterinary studies. And
it generated this beautiful reflective essay that you would never have known was AI-generated. As
I continued to prompt it, I asked it to include volunteer activity, and it made up the name of an
(42:47):
animal hospital… one that, at the time, didn't exist, but now it would, because AI tools have gotten so
much better. So it's just really interesting as we think of what the limitations are, and
how there are not as many limitations in so many different fields. That is, of course, not the case
for all fields, but the tools continue to evolve so quickly. So I think it's important for us to
(43:07):
recognize what those limitations and capabilities are, and then if we know, I feel that our students
are more likely to trust our judgment, and then believe us when we say, “Yes, perhaps you could
use AI for this, but let's not because I want you to develop this skill that will help you
as you continue taking more classes here, or as you graduate and start looking for a job.”
(43:30):
We've talked a bit throughout the conversation about how faculty have engaged with the program
over the course of the year so far, in creating assignments and in the discussion boards,
et cetera. Are there any additional things that you want to share about how faculty have responded
to participating in this program?
I think that some of the feedback we've
received is that it's been really nice to have institutional support to help faculty do work
(43:56):
that they felt the need to do in their classes. Faculty want to respond to AI's emergence,
and oftentimes we know faculty are overworked. There's so much to do, and so it matters to get
that institutional recognition through a stipend that says, “this is important, and we're offering
(44:19):
support for developing these activities.” We've heard faculty report that they're happy that the
opportunity is available to do things that they've already wanted to be doing, but here
they can see the institution supporting that.
I agree, and it's also been very helpful as we
have these conversations with our participants to think about effective practices on the whole with
(44:41):
AI. So for example, there has been conversation everywhere about using AI tools to grade student
work and to assess student work. So we've had opportunities to have those sorts of
conversations about why, perhaps, it's not in our best interest as teachers to use AI to give
feedback to student writing, for example. And it's been nice to hear what other people are doing,
(45:06):
like Racheal said, with that support aspect, but also the conversations
allow us to sort of come across those topics that might just be neglected otherwise.
And I'll just jump in, Stephanie, because your comments about using AI to assess and grade have
also been an important part of our conversations at Oneonta. And again, we're seeing really
(45:28):
polarized views, where some people are saying, “I would never do that.” Some people are saying,
“I'm already doing that in these specific ways.” And we've talked there, for example, about the
Student AI Bill of Rights, which circulates online. And one of the elements of that AI bill
of rights for students is that students should have the right to know when faculty are using AI
(45:51):
for feedback or other forms of engagement, and so I agree that this has been a really great
place to have those conversations which might be taking place informally or in departments, but in
this interdisciplinary setting, people are able to talk about those issues and share their views.
(46:12):
So we always end by asking, and this is something particularly appropriate with AI, what's next?
As we're wrapping up this project, we are looking to move the learning activities that our faculty
will be submitting to Brightspace into an open access repository that my colleague Ed Beck here
(46:32):
at SUNY Oneonta is helping to develop, not only for our grant, but for another IITG-funded project
out of Albany that is generating some K-12 resources. So we are creating that repository
space via SUNY Create and after we collect our faculty learning activities on Brightspace,
(46:54):
we'll be guiding them to move revised versions of those activities into the repository. We
also have our presentation at CIT, the Conference on Instructional Technology that SUNY
convenes every year across the system. We will be presenting on this project there,
and by that time, we will have more information on the website and how to access it, so that faculty,
(47:20):
not only across SUNY, but across the internet, will be able to view the learning activities
that we've been talking about today. And we should mention that SUNY Create is
basically an instance of a system-wide license for Reclaim Hosting’s Domain of One’s Own.
And after CIT, we're hoping to extend this work, maybe in new ways. We're thinking about ways to
(47:45):
bring campuses together for communities of practice. And one thing that's been really
important to me about this project has been the value of working with colleagues in educational
development, whether we're talking about faculty or staff at all of our six campuses,
all of whom have brought different skills, knowledge, expertise, passion to this project.
(48:09):
We've had leadership team members who work in the social sciences and have brought that qualitative
research and assessment knowledge to the project. We have created pre- and post-assessment surveys
that are helping us collect data on the success of the project, and we couldn't have done that
without Deepa Deshpande, who's at SUNY Alfred. We also have leadership team members who are
(48:35):
bringing expertise in the classroom, that includes Stephanie, John, and Laura Pierie,
who is with us from Morrisville, and having that faculty insight from a range of disciplines has
really grounded the project in the needs of our participants. We've also had Dana Salkowski and
(48:56):
David Wolf join us from community colleges, and having their input as directors and leaders in
teaching and learning has been really crucial to helping us think about how our project could serve
faculty at all of our different campuses that we've been working with. So I think that it's
been a really amazing group effort, and I'm so grateful for the leadership work of that entire
(49:22):
team. I think together, we've been able to really support faculty in some of this work, and I'm
excited to think about the collaborations that we might be able to pursue going into next year.
Well, thank you so much for joining us. It's been a great conversation, and it's a great project
to share with others who might want to implement something similar on their campuses.
Thank you. It's been great working with both of you, and I’m looking
(49:45):
forward to future collaborations.
It's been wonderful chatting with you all.
Thank you for having us, and it's been wonderful talking with you today.
If you've enjoyed this podcast, please subscribe and leave a review on iTunes
or your favorite podcast service. To continue the conversation, join us on
(50:09):
our Tea for Teaching Facebook page.
You can find show notes, transcripts and
other materials on teaforteaching.com. Music by Michael Gary Brewer.