
August 20, 2025 • 16 mins


Artificial Intelligence has dramatically transformed the educational landscape, leaving both students and educators navigating uncharted territory. Dr. Kelly Ahuna, Director of Academic Integrity at the University at Buffalo, discusses this complex issue in our thought-provoking episode about AI tools, academic honesty, and the future of assessment.

Join us for this insightful conversation that balances practical advice with thoughtful reflection on preserving educational integrity in the AI era. Have you encountered AI challenges in your teaching or learning? We'd love to hear your experiences and strategies for navigating this new frontier.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Maggie Grady (00:00):
Welcome to the CATT Teaching Table podcast, where we explore innovative teaching methods and dynamic educational strategies. Hosted by the University at Buffalo's Office of Curriculum, Assessment and Teaching Transformation, otherwise known as CATT, and supported by the Genteels' Excellence in Teaching Fund, this podcast is dedicated to highlighting the journeys toward educational excellence.

(00:21):
I'm your host, Maggie Grady, a learning designer for CATT, and today we're exploring a critical and timely topic: artificial intelligence, academic integrity, and where we go from here. I'm thrilled to be joined by Dr Kelly Ahuna, who is the Director of Academic Integrity at the University at Buffalo. Kelly, thank you for joining us today.

Dr Kelly Ahuna (00:41):
Thank you very much for inviting me to participate. I always appreciate an opportunity to talk about academic integrity, especially in this new world of AI.

Maggie Grady (00:49):
Yeah, and we like hearing it from you because you're the specialist, so let's dive right in. AI tools are changing how students approach their work and how we assess it as faculty, as educators. So, to start, can you explain what cognitive offloading is and why it's particularly relevant in today's academic landscape?

Dr Kelly Ahuna (01:08):
So cognitive offloading is when you take cognitive demands and you reduce them to allow greater efficiency. A good classic example of this would be the calculator: if you have to do long division to ultimately get to the end of a big problem, you might cognitively offload the long division to the calculator. Things like translation tools. I think EndNote is another good example.

(01:30):
You're writing a long paper. You have to put all your references together. You use EndNote to do the references. So these are tools that do what previously would have been cognitive labor, and I think the concept is particularly relevant today because some students are using generative AI tools to cognitively offload work that in some cases they

(01:50):
shouldn't or they're not allowed to.

Maggie Grady (01:53):
Okay, so the issue isn't necessarily the tools themselves, but how they're being used within the context of learning objectives. So how can we dive into that?

Dr Kelly Ahuna (02:07):
Yeah, I mean, I think the learning objectives are really where the conversation should be, because the desired learning outcomes of the course should dictate when cognitive offloading is allowable. So, for example, if you're taking a Spanish 101 class, students are not going to be allowed to use translation tools, because the learning objective is for students to memorize vocabulary. But if it's a high-level Spanish course and students are

(02:28):
reading Don Quixote in Spanish, a translation tool is probably fine, because if they run into unknown words, the learning objective is not about memorizing vocabulary, it's much broader than that. So the translation tool is probably allowable. And this is where the instructor's rules become really important, because, as the expert in the subject and the

(02:50):
designer of the learning outcomes, they're the people who know when an AI tool hinders or advances a learning objective. And students, you know, they're enticed by these tools, but they're novices and they might not be able to identify this issue of how the AI tools hurt or help learning objectives. So the more explicit instructors can be about what

(03:12):
tools are allowed, when, and why, the better for students.

Maggie Grady (03:17):
Understandable. So how does AI impact assessment, especially considering today's tech-savvy students who might view AI tools like ChatGPT, Microsoft Copilot, or something along those lines, as just another resource?

Dr Kelly Ahuna (03:31):
Yeah, so this is really important, because when faculty collect an assessment from a student and they score it, it's really imperative that they are scoring what the student knows, or what the student can do, and if students are using AI tools in unallowable ways, it really harms the authenticity of that assessment. So they're not scoring what the student can do, they're scoring what the AI has done, and so it's just imperative to the, you

(03:55):
know, genuine nature of assessments that students aren't using the tools in ways they're not allowed to.

Maggie Grady (04:00):
That makes sense because, right, just as you said, they're judging, or they're grading, the AI tool versus the student. So isn't the idea that we want the students to learn and build that knowledge?

Dr Kelly Ahuna (04:11):
Yeah, and I would just say that AI is just the newest threat to academic integrity. This has always been the case. You could be grading what the student's roommate did for them, or the student's mother, or what they purchased online; there have been all kinds of other threats. It's just that artificial intelligence now is so accessible in so many areas.

Maggie Grady (04:32):
So communicating this to the students can be tricky. How can we ensure they understand when AI can be used and when it's crossing the line?

Dr Kelly Ahuna (04:37):
Yeah, this is where communication between instructors and students really has to be very clear. So there are a lot of things faculty can do, I think. First of all, putting something in their syllabus. This could be a blanket statement if they're never going to allow AI or they're always going to allow AI, but it's probably more typical that it's going to be assignment dependent, so their syllabus statement can say there will be specific

(05:00):
rules about AI as we get to assignments, and then they need to say it out loud when they get to the assignment. You know, in this case you're allowed to use these things. And I learned from Ethan Blanton, who's a computer science professor here, that I think it is easier to tell students what tools they can use than to try to make a list of what they can't, because the list of possible tools is proliferating

(05:23):
rapidly and you're never going to be able to cover all your bases. So if on this assignment you think Microsoft Copilot would be okay for students to use, tell them overtly: you can use this tool, or that tool, or these three tools, and sort of put the parameters around it for the students. And then, importantly, I always tell students they should never make assumptions. If the professor hasn't said, they

(05:45):
should not assume it's allowed.
They should always go back and ask.

Maggie Grady (05:47):
Yeah, so clear lines of communication: always talk to the professor if there's uncertainty, and communicate out to the students, keeping things clear for everybody. So, I have heard from faculty a few strategies that they use: requiring students to submit drafts as a way to combat cheating, or having students include an oral component, and

(06:10):
this is also another way that they can detect if they're using AI. So why are those two things effective, and can you think of other strategies?

Dr Kelly Ahuna (06:20):
Yeah, I mean, if the goal is to make sure students aren't using AI, there are some methods that might be more reliable than others. So I think a lot of people are going back to the classic in-class proctored assessment, where students aren't taking the exam somewhere else or doing the writing somewhere else. They're writing or taking the exam in class, where they can be

(06:43):
observed by the instructor.
So, along those lines, sometimes people are flipping their classrooms, and if you're not familiar with that, that's the idea that students do the learning outside of class and the assessment in class. Typically we think of, you know, students coming to class to hear the instruction, and then they might go home and do the assessment as an out-of-class something.

(07:04):
Some faculty are flipping that around, especially since COVID. We have so much better, you know, fluency in the technologies for students to watch lectures at home and then do assessment in class. Also, you know, the Gen AI tools are getting better all the time, but still, if you can be very specific to course content in the assessment questions, then it's going to be harder for

(07:27):
students to get the help from the tools. So making the assessments as course-specific as possible. You mentioned drafts. You know, drafts are a good idea. You could follow a student's progression as they go from beginning to end, and along those lines you could ask for the metadata. You know, Google Docs keeps track. You can ask a student to turn that in. Or you could ask a student to take notes on their progress.

(07:50):
You know, sort of a metacognitive approach, like: when I did this first part I was stuck here, and now I'm feeling better. You know, just sort of talking it through. And then, like you mentioned, oral components. I mean, if you want to know if somebody knows something, a really good way is to ask them questions and see if they can respond in real time. That's often not possible in classes of any substantial size,

(08:14):
but it is a good, effective technique to get at knowledge.

Maggie Grady (08:19):
Yeah, so putting a personal component into whatever assessment method, I think, would be one way that they could go about making sure, just like what you said, that students know the material, they're ready to move on, and they understand. So now let's talk about detecting AI use. I know that UB uses Turnitin, which has an integrated AI

(08:43):
detection feature. How reliable is it, and what should educators be mindful of?

Dr Kelly Ahuna (08:51):
Yeah, this is really such an important topic. I mean, I don't want any student out there worrying that they're going to get in trouble for academic dishonesty when they really did their own work. We have a lot of stopgap measures in place to prevent that from happening. And I think there's been a lot of press about these tools. You know, after ChatGPT came out, these detection tools
proliferated.

(09:12):
I spent a lot of time trying to keep up with them all, and some of them are much better than others, and I think the bad ones have really brought bad press to all the tools. We use Turnitin; it's built into our Brightspace platform. It's easy for faculty to turn it on. Turnitin does two things: it will give you a plagiarism score and a score of what is the likelihood that text was

(09:34):
generated by artificial intelligence. So it's a predictive likelihood score. Turnitin is more likely to miss AI than it is to falsely accuse a student of using AI. So their sort of business model is that they'd rather miss some than falsely accuse some people. But I just want to say that in our academic integrity process we

(09:57):
rely on a standard of evidencethat we call preponderance,
which is what is more likely.
We're just looking to see whatmore likely happened and if
Turnitin comes back with a highlikelihood score that this was
generated by AI, our proceduresrequire that the instructor have
a conversation with the studentand when they have that
conversation and the professorsays, tell me about your paper,

(10:20):
why did you write this, where did you get this information, and the student can talk in an educated way about their paper, that really usually just erases the concern the Turnitin report raised. So if a student has really, truly done their own work, I don't want them to have any worries about a false positive.

Maggie Grady (10:41):
Okay, so let's move on to, well, let's continue to talk about what is the best approach for faculty to take.

Dr Kelly Ahuna (10:48):
Yeah, so faculty should just follow the process we have in place. I've worked really hard to develop a faculty web page on the Academic Integrity website where they can just walk through the steps of the academic integrity process if they have a concern. But they can always call our office. They can talk to us. We're happy to walk them through anything. But when they have this conversation with a student, you

(11:11):
know, I was faculty for 20 years before I took this job, and so I know that if you have a student who you think cheated somehow on your work, you might take that personally, and I would just say to faculty: try to leave that at the door. You know, students have lots of reasons and motivations for why they do things, and usually it's really not personal to you, and

(11:32):
the goal of this is just to have a conversation to come to an understanding of what happened. So I'm finding that instructors can pretty easily identify when students are using artificial intelligence in a couple of ways. Sometimes they include information that you did not teach in class, and sometimes they include information that's very much at a higher level than what you taught in class.

(11:53):
Sometimes a lower level, but usually a higher level. Sometimes there are these idiosyncratic misinformation items, sometimes there are false references, and sometimes it's just markedly different from every other assessment the student has turned in. The whole voice of it is very different. So I always just say that, you know, I think instructors should

(12:16):
trust their expertise and their experience. They read a lot of these assignments, and sometimes it's pretty clear.

Maggie Grady (12:22):
Okay, so, as AI continues to evolve, what are some of the larger challenges that you foresee for academic integrity and higher education?

Dr Kelly Ahuna (12:31):
Yes, I mean, I can't stress enough the need to protect the value of a UB degree. I mean, we are not a diploma mill, right? We are not just giving out degrees for money. We are trying to make our community and our world a better place by educating our students to go out and do good work. So with our students, you know, I always try to tell them that our futures are inextricably linked together.

(12:52):
Because if our graduates go out for a job unprepared because they cheated their way through UB, that employer is going to wonder why this person doesn't know what they're supposed to know, and they're likely going to think, you know, what is UB doing? UB is not preparing the student in the way that they should, and so the next time a UB graduate applies for a job with that company, they're so much less likely to be looked upon

(13:14):
favorably, because the employer no longer trusts UB. So really, UB's reputation is critical to everyone involved with the university.

Maggie Grady (13:25):
I like that viewpoint, because some people forget that. So it's nice to put it that way: it's reputation, it's value, it's all of those things, and we're just building upon that. I like that way of thinking.

Dr Kelly Ahuna (13:38):
Yeah, sometimes students will say, well, if I cheat, I'm only cheating myself. That's the kind of line that people say, and that's really not true. When students cheat, they really hurt the whole community.

Maggie Grady (13:48):
Yeah, I agree on that. It sounds like there's a delicate balance of integrating these tools without undermining the core values of education. What advice would you give our listeners?

Dr Kelly Ahuna (13:59):
Well, I guess I'll talk to two audiences. I'm not sure who exactly will be listening, but I would remind our students that they're here and they're paying good money to learn. There's no shortcut to learning. Back to our cognitive offloading discussion: all learning has to take place in a student's head. They have to do the cognitive work to take in new information and make it meaningful and keep it. So they need to trust their professors when it comes to how

(14:22):
they're going to learn in a class and follow those guidelines. But for our faculty, I would quote a woman named Sarah Eaton, who does academic integrity work in Canada, and she likes to say that our students aren't our enemies, they're our future, and I think this is really important to keep in mind. We don't want to make this into an arms race, and

(14:43):
instructors really, I know it might seem like a lot to some people, but I think they need to educate themselves about the available AI tools and how they're relevant to their content area and, whenever possible, allow students to get some responsible practice with the tools.
You know, we know from a survey that we did last year that our

(15:11):
students are very worried about the job market expectations around AI use, that this is going to be an expectation of them. So when faculty can allow AI in their classes and in their assessments, they can, you know, sort of model that for students and get them to be comfortable with the tools.

Maggie Grady (15:22):
So it's been a super enlightening conversation, and before we wrap up, do you have any advice for instructors navigating the use of AI in their teaching and their assessments?

Dr Kelly Ahuna (15:33):
You know, I would just say use the Office of Academic Integrity. If you need support reading a Turnitin report, if you have questions about how to approach a meeting with a student, or worry about whether you have a preponderance of evidence, or any questions like that, reach out. UB is very lucky to have an Office of Academic Integrity. Not a lot of universities are resourced that way, so that's why we're here. We're here to help, and so I would just say please use us.

Maggie Grady (15:56):
So, for those that want to learn more, you can visit the University at Buffalo's Academic Integrity page, just like Kelly mentioned, and that's buffalo.edu/academic-integrity. And Kelly, thank you so much for joining me today.

Dr Kelly Ahuna (16:11):
Yeah, thanks very much for having me. I mean, these tools aren't going anywhere, so we're going to need to keep the conversation going.

Maggie Grady (16:16):
Thank you to our listeners for tuning into this episode of the Teaching Table podcast. If you enjoyed today's discussion, be sure to subscribe and leave us a review. We'll be back soon with more conversations exploring the latest in teaching innovations and strategies, and until then, keep exploring new ways to reach and inspire your students. As always, be sure to connect with us online

(16:38):
at buffalo.edu/catt, that's C-A-T-T, or email us at ubcatt@buffalo.edu.