
March 19, 2025 53 mins

Many students use generative AI tools to complete writing assignments. In this episode, John Warner joins us to discuss what may be lost when they do so. John has twenty years of experience teaching college writing at five different institutions and is the author of eight books spanning a wide variety of genres, including political humor, short stories, and a novel, as well as Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities. He writes a weekly column on books for the Chicago Tribune and an associated newsletter, The Biblioracle Recommends. John is also a contributing writer to Inside Higher Ed. His most recent book is More than Words: How to Think About Writing in the Age of AI.

A transcript of this episode and show notes may be found at http://teaforteaching.com.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Many students use generative AI tools to complete writing assignments. In this episode,
we explore what may be lost when they do so. Thanks for joining us for Tea for Teaching,
an informal discussion of innovative and effective practices in teaching and learning.

(00:23):
This podcast series is hosted by John Kane, an economist...
...and Rebecca Mushtare, a graphic designer...
...and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

(00:45):
Our guest today is John Warner. He has twenty years of experience teaching college writing
at five different institutions and is the author of eight books encompassing a wide variety of topics, including political humor, short stories, and a novel, as well as Why They Can’t Write: Killing
the Five-Paragraph Essay and Other Necessities. John writes a weekly column on books for the

(01:06):
Chicago Tribune and an associated newsletter, The Biblioracle Recommends. John is also a
contributing writer to Inside Higher Ed. His most recent book is More than Words: How to Think About
Writing in the Age of AI. Welcome, John.
Very glad to be here. I'm a longtime listener and fan.
Well, we've been longtime readers of your columns and your other work. So we're very pleased to have you here.

(01:29):
That's great. Today's teas are… are you drinking tea by any chance?
Not at the moment. I was earlier; I have sort of a post-lunch tea to keep me going. And post lunch, it is green tea; at breakfast, I'm a black tea guy. And I'm into tea because, and this is a strange true fact about me, I've never had a cup of coffee in my entire life. So I'm only a tea drinker as far as warm caffeinated beverages go.

(01:54):
There we go. A true fan, a true fan, unlike all those pretend tea drinkers who drink coffee all the time on our podcast.
…and Diet Coke.
Yeah, and Diet Coke, right.
I have English Afternoon today, although I was drinking a Christmas blend earlier to match our winter blizzard that we're recording in.
Yes, it's mid-February here, but we've been getting a foot or more of snow

(02:17):
every day for the last week, it seems. So, that is very fitting. And I have an Earl Grey today.
Classic.
Yes, from Tea Forte, so it's in a nice little pyramid.
Oh, wow.
So John, we've invited you here today to discuss your most recent book, which focuses on writing
in the age of AI. You begin the book by describing initial reactions to the potential impact of the

(02:39):
introduction of ChatGPT on writing. What was your initial response to the release of ChatGPT?
This may sound strange coming from somebody who teaches writing and has taught writing for a long
time. I was concerned, but I was simultaneously excited. I saw it as an opportunity rather than
a pure threat, because for many years, and people can go back to my archives at Inside Higher Ed,

(03:01):
I've been lamenting the way writing is taught, not just in college, but prior to college, as a sort
of formulaic exercise in filling in prescribed templates. And my experience as a teacher mostly
of first-year college writing was that my students came in not with deficient skills around writing,
but cramped attitudes about writing, where it's not that writing was hard, it's more that it was

(03:25):
boring. They'd done the same thing over and over again, and couldn't find any sort of intellectual
interest or stimulation in writing. And I saw writing as the opposite of this. And so the
fact that ChatGPT can churn out a five-paragraph essay in seconds, and it's highly proficient at
these things, to me, was an opportunity for us to examine our priorities around the kind of writing

(03:46):
we ask students to do in school, and maybe even more importantly, than the kind of writing they
do, the way we assess that writing, the values we bring to how we assess that writing.
Has that been borne out in the two-plus years since?
…not as much as one would like, but that's one of the
reasons why I wrote More than Words. And there actually are some encouraging signs around these

(04:07):
conversations, that we recognize both the limits of asking students to write like algorithms,
which is what a five-paragraph essay format does, and the limits of the syntax generated
by a large language model like ChatGPT, that there is space for humans to still write. I'll believe
this until they drag me off the scene, but I think more people are seeing this. They maybe

(04:30):
don't have the kind of background or experience that allows them to really think deeply about it,
but they sense that there's a problem associated with outsourcing all this stuff to ChatGPT.
One of the things you write about in the beginning of your book is how Daniel Herman,
in an Atlantic article, argued that ChatGPT would be the end of high school English. And as you've

(04:52):
already indicated, that's something that you welcome. Could you talk a little bit more about
what's wrong with writing in the K-12 system, this sort of mechanical system that's become really
standard for the last few decades, I think?
Yeah, it's not that I want to do away with high school English class. I think we should have more of those. But it would be great if high

(05:12):
school English got back to helping students learn to think and relate to written texts,
including literature, including whole texts, like books, as opposed to extracts that can
be tested against quite easily. And really this is the sort of core of
the problem. The standardization of instruction around something like a five-paragraph essay is

(05:35):
driven by the standardization of curriculum around demands related to accountability measures that
have been a long-term aspect of the so-called school reform movement. In my previous book,
Why They Can't Write, I sort of date it to the A Nation at Risk Report, which came out during the
Reagan Administration when I was in middle school, where they were worried about a rising tide of

(05:59):
mediocrity in the nation, of which apparently I was a member, because I was in eighth grade,
and they were certain that Japan was going to eat our lunch because of the Sony Walkman, basically.
We're not making the Walkman, they are, and so we need to get on with this. This is the Sputnik
moment of my generation, and it sort of began what is not ill-meaning, right, like the idea

(06:20):
that we should be able to measure what students are learning and be able to know where additional
interventions should go in terms of schools or districts or students, is all very sound thinking,
but it became, over time, an exercise in a concept known as Campbell's Law, which
is essentially a social science precept where once the measurement becomes the goal, it's no longer a

(06:43):
good measurement. And that has happened writ large in education in all kinds of different domains,
and writing is one of the sort of most obvious manifestations of this, where we allow the
creation of a five-paragraph essay to stand in for writing ability and writing experiences,
and it just isn't. Writing a five-paragraph essay is not the same process as writing something for

(07:04):
an audience inside a full rhetorical situation. And so we now have a couple of generations of
students who have really, in some cases, only experienced this, by their own testimony. They've
written these forms over and over. And then they would arrive in my first-year writing class in
college, and I would pull the rug out from under them and say, “none of this is useful.” They had
been told explicitly, you have to do this so you can do it in college. And then I come in and say,

(07:27):
“That's all wrong.” These disconnects were what motivated Why They Can't Write,
and More than Words is motivated by my concern that now that we have this tool to which
these forms can be outsourced, we're going to forget to actually teach students to write.
So as you're talking about the evaluation of writing, and earlier, you mentioned the values

(07:47):
in writing and how they have changed, maybe need to change again. Can you talk a little bit about
what we should be valuing in student writing? Yeah, I have long believed, long before ChatGPT
showed up, that students should write in genuine rhetorical situations of message,
audience and purpose. That audience should not be the teacher unless the teacher is an authentic

(08:10):
audience for the thing they're writing, which could be the case. We could be communicating
something to a teacher as an audience, rather than teacher as assessor or grader. I think
this is a good idea for a lot of reasons, one of which is students are much more engaged by these
challenges than they are when writing for a grade; they're much more authentic to the work they will

(08:30):
do in other classes or outside of school once they graduate. The goal, as I put it in Why They
Can't Write and another book called The Writer’s Practice, is to develop their writing practices,
the skills, knowledge, attitudes and habits of mind of writers. These writing practices,
they're then transportable to new and novel writing problems that they're going to need

(08:52):
to solve. It's just much more flexible. It's much more useful. And really, what I realized
over the course of my teaching career that I wish somebody had told me before I started teaching,
was that I had been lucky enough to develop this writing practice, sort of on an ad hoc basis,
through the course of my education as an undergraduate, as a rhetoric major, my
graduate studies in creative writing and English literature, and then a job at a market research

(09:16):
firm where I had to learn how to write things like focus group reports and questionnaires,
forms that I had no prior knowledge of, but which I realized were amenable to the same process I
used to also write a form that I was not familiar with when I was in graduate school, which was a
20-page explication of a single poem. I didn't know how to do this, but I could fall back on

(09:36):
my writing practice that allowed me to do it, and I realized I had been given this great gift of a
writing practice without anybody knowing it. And so my mission became to try to help students see
that they could develop a writing practice. So those values are really just around the

(09:50):
things writing is: the act of thinking, the act of feeling, the act of communicating. This is going
to sound like I'm simultaneously tooting my own horn and being a little coy about myself, but
I did an interview recently where the interviewer called my work groundbreaking, and I laughed. And
he asked why I laughed, and I said, there's no groundbreaking. I'm doing what I was taught in

(10:11):
grade school. It's just that we have gotten away from these values around writing and so that it
feels groundbreaking is really ironic to me, given that I think they're fundamental, but we have to
agree at the values level that these things are fundamental, otherwise we will be stuck in the
same rut we've had, only now we're letting ChatGPT solve these problems for us, rather than making

(10:32):
students write these sorts of formulaic essays.
I also went through a school system where there
had been no discussion of a five-paragraph essay. I didn't run into that until some colleagues
started talking about how students were not very good at writing these five-paragraph essays. And
I wondered where that whole concept came from, because I remember when I was moving through,

(10:53):
a little bit earlier than either of you were, but we had a wide variety of writing assignments, and
we also did an awful lot of reading. And one thing that I think has been lost, along with a decline in writing or a shift to shorter, more formulaic writing, is that students are not asked to read as much, and I think that's also been affecting their ability to write, because engaging with

(11:16):
multiple styles of text in reading can be really helpful in developing as a writer yourself.
It's absolutely necessary. Reading and writing are inextricably related to each other,
like the skill of interpreting reading across all kinds of different genres is the internalizing of
the message. Like this is how this is delivered, and then we're going to do this in turn. We are of

(11:40):
a certain age. Those of us who are of a certain age did not encounter the five-paragraph essay.
My first encounter was in high school when I was taking AP classes, and I was coached to
use this form to pass the AP assessment. And my first thought was, “Well, this is dumb,
like this is the easiest thing I'm going to do all year is write these AP essays.” And indeed, they

(12:00):
were. I dedicate Why They Can't Write: Killing the Five-Paragraph Essay and Other Necessities
to my actual grade school teachers, because in hindsight, I realized how fortunate I was by the
accident of my birth and my parents to grow up in a Chicago suburb and go to my neighborhood
grade school at a time where the teachers were given the freedom to engage their students,

(12:22):
and I still have my fifth grade writing portfolio that my mom saved. If this was a visual podcast I
would show it, but it's called 12 Stories, and it has a laminated construction-paper cover, hole-punched and tied together with yarn binding. And if you look at those artifacts,
none of them are a five-paragraph essay. They are all in some way asking me to solve a writing

(12:48):
problem in a rhetorical situation. There's the classic Thanksgiving assignment where
you draw a picture of a turkey by tracing your hand and decorate it, and then for each letter
of Thanksgiving, like T, H, A, et cetera, you have to give something you're thankful for. There's
a limerick. There's a haiku. There are a number of different genres of fiction: science fiction,

(13:10):
historical fiction, which required us to read those forms, to understand them, and then produce
them in the image of the original form. This one totally cracks me up. There is a classified
ad that you would read in a newspaper for my skateboard. And so it's like “Used skateboard,
blue works good. $2 or best offer.” Actually, it doesn't say “or best offer.” It says “OBO,” which

(13:35):
is the shorthand you would use in a classified ad at the time, because you paid by the word. That is
rhetorical problem solving, that is internalizing how to think about audience as you write. This was
fifth grade, and sometimes I can't believe what we've done to students in the name of trying to prepare them for quote, unquote, college and career. We've really, in a lot of cases,

(13:56):
done the opposite. And I know it doesn't need to be this way, because I lived through a different
time. I know something else used to be done. When I wrote Why They Can't Write and went back and did all kinds of research, I found articles dating back to when I was in grade school and middle school about some of the things I would later do that don't help you learn to write,
things like diagramming sentences, which is a valuable exercise in critical thinking, but

(14:19):
doesn't help you learn how to write, or research papers. There's a report from 1984 by this guy,
Richard Larson, who wrote about how asking students to do research papers in high school
is a fake genre, because you can't actually ask high schoolers to do a research paper. Now, if the
goal is to help us learn to use the library, or the card catalog at the time, that kind of stuff,

(14:41):
that's one thing. But if you think they're doing research, you're kidding yourself. And these are
the things we need to know so we know what students are actually experiencing when we ask them
to write in our classes. We have to think through that. It doesn't mean that we should abandon whole
genres or not have them do things ChatGPT can do, but if we're going to ask them to do it,
it has to be rooted in something meaningful. And to your point about reading, my superpower, which

(15:04):
I don't think of as a superpower, because it's just been something I've been able to do since
I was quite young, is reading texts for extended periods of time. It helps me with concentration,
helps with my mood, it helps me learn things. And the fact that we have allowed school to get
away from that clearly important underlying skill, which is also a practice, which also has habits,

(15:29):
which also has attitude, also has knowledge involved, is sort of tragic, and it's a real
heavy lift to get back to it. But it is, I think, honestly, the thing to tackle if
we're going to help students become the kind of people they wish to be. Like, honestly, I think

(15:50):
if we still read whole books, the distractions of cell phones and these things we argue about would
be significantly less. Like, don't get me wrong, I can be distracted by my cell phone like anybody,
but I also know how to put my cell phone away and get lost in a writing or reading task for hours
at a time, because I have that embedded in my practice. If students have never had that in their

(16:11):
practice, and we're asking them to suddenly engage in this behavior, we're looking at trouble.
Reflecting a little bit on your portfolio, John, I'm thinking about the one that I also have
from fifth grade, except at that time, I think I was also really exploring my design skills,
because it's hard bound…Wow.
…and I made a full cover and bound the edges with thread, and all the things.

(16:32):
That’s awesome. I mean, my fifth grade portfolio is shameful because I had horrible handwriting. I always got “needs improvement” on my handwriting in my report cards, and my artistic abilities are nonexistent, so that part is awful. But to your point,
the opportunity to engage in even that relatively small instance of self expression in the context
of something you did in school just strikes me as vanishingly rare in today's culture of schooling,

(16:57):
and not for the better in any way. I was also reflecting on your mention of reading
to change your mood, and I'm realizing how much I've really read since the beginning of this year,
probably for the same reason. You argue in your book that ChatGPT just does text generation
and not writing, because writing involves thinking and feelings. How do we convey this

(17:20):
to our students and maybe to our colleagues?
Yeah, the only way to do this is to experience it for yourself. And my hope is that anybody teaching in higher education, regardless of discipline,
has had the experience of writing as thinking, which I like to talk about as the simultaneous
expression of an idea. You're trying to capture something that's in your head and put it on the page,

(17:43):
but it is also the exploration of an idea. So as we are writing, we have this notion that
we're trying to capture, and often that notion shifts as we are writing. That's the exploration
of the idea, and that is, by itself, the best and clearest manifestation of critical thinking that
I can imagine. In fact, as the way I assessed writing evolved, I would have assignments where

(18:06):
that was the only criterion: have a thought while you're writing. So the thing students were
writing, it wasn't that it was unimportant, but what they were actually assessed on was
the later reflection about the thought they had while they were writing. Could they articulate to
me the experience of thinking they had while they were writing this particular artifact,
because I knew once that had happened and they had had that experience, it was something they could

(18:31):
return to again and again and again, recognizing that there's a difference between I'm just sort
of going through the motions to get words on a page to satisfy a grade, and writing as an act of
thinking and development and not only capturing our thoughts, but developing those thoughts as
we go. So that's the only way it can be done. But if we're going to do it in school, we have

(18:54):
to prioritize it, and in terms of feeling, it's kind of the same thing. And I write about some of
the stuff in my book, where I write about my own instances of being like, as weird as this sounds,
emotionally moved by my own writing, because I'm conjuring something for myself and my memories
or my life that is meaningful to me, but it can even be something small that upon reflection,

(19:16):
we realized this writing was meaningful. I think I have this anecdote in the book; if not,
I should have put it in there. But it was during the pandemic, when we were supposed to keep our distance and only go one way in the aisles, it was that period, and I was grocery shopping, and on the floor I saw a yellow Post-it note that clearly had somebody's shopping list on

(19:36):
it. And I picked it up. It had a bunch of items, but one of them was “cherry Pop Tarts” in all caps
with three exclamation points, underlined, and I just had this sort of moment to myself. I said,
“those cherry Pop Tarts are very, very, very important.” And reflecting on it later, I thought
of the person who'd sort of put that notion in the list writer's mind. But I also thought of the list

(20:01):
writer, sort of, particularly at that time… we can remember how fraught everything felt… going to the
grocery store knowing that the central importance of that mission was to come home with these
Cherry Pop Tarts to help soothe the spirit of the person in their household who needed them. Writing
a grocery list can be an emotional experience if we attend to it, if we're aware of it. Now,

(20:23):
it's not like we can go around 24/7 having emotions over everything we write. That would be
disturbing and inefficient, but everything we write has that potential, and I think we should
be cognizant of it, and every so often we should pause and reflect on it. And to the extent that
school does not allow room for that, again, is to wall students off from some important

(20:44):
aspect of their intellectual, emotional, social, let's say even economic development,
all these things matter, I think, long term, and so we should give students a chance to experience
them in their education. Because why not? What else are we doing with our time, if not that?
Along those lines, you'd suggest that pretty much everything that ChatGPT does is essentially like

(21:07):
hallucinations. There's a lot of discussion of hallucinations and of getting things wrong, but you
suggest that essentially everything it does is just predictive text which is devoid of meaning
to the system itself, that it's missing the human context. One of the things I was thinking about
is you also had that example where you mentioned cinnamon rolls, and that ChatGPT could write about

(21:28):
cinnamon rolls, but it wouldn't bring up the same sort of connotations that people would have with
that. What are students missing when they rely on generative AI to generate writing or to summarize
readings that they've been asked to do, which is an increasingly common use of AI tools?
Yeah, the reading, quote, unquote… I'm going to put air quotes around reading when we talk about

(21:50):
ChatGPT or large language models “reading,” but that is a burgeoning and worrisome application
of this technology. Now it's not like we should never ask a large language model to summarize a
source for us. That may be useful to us, but we have to be very, very cognizant of the difference
between what a large language model does and what humans do. When humans read and humans write,

(22:15):
we're having an experience, and that experience is encoded in memory, it involves our senses,
it involves sort of a metacognitive process. Large language models do none of that. They are syntax
generators. They can't think, they can't feel, they can't communicate with intention. So if we're

(22:35):
going to outsource that to a syntax generator operating on probabilities, we have to be aware
of what is being compromised in the exchange. Now, in some cases, that's fine. If I'm going
to go to Wikipedia for something, because I need some kind of background about a source or an idea
or a concept, maybe I'm going to go to ChatGPT instead, although the risk of hallucination can

(22:57):
make that fraught, because Wikipedia, while it is the product of flawed human beings,
has been checked and edited in ways that make it significantly more reliable than a large language
model. You guys know this, reading is fodder for writing. So if we're going to need to write,
ultimately write about something, failing to read it leaves me ill-equipped to write about it,
because the reading itself is where my ideas, my thoughts are generated. It's sort of ironic

(23:23):
that we call this stuff generative AI, because it really is like reading is truly generative,
like it creates the conditions under which our thoughts can occur. And so I'm disturbed by some
of it. Like, the deep research release that can supposedly put together a PhD-level dissertation
is BS. I mean, it really is, the only way you believe that is if you ignore what it means to

(23:49):
write a dissertation. No offense, I've never done a dissertation. I don't have a PhD. No offense to
the dissertation producers out there, we don't write dissertations solely to be read. In fact,
if somebody wants to publish their dissertation outside of an academic artifact, they have to turn
it into a book. Dissertations are fundamentally used to demonstrate an extended engagement with

(24:11):
a type of thinking and knowledge creation. And so when I see somebody on social media saying, “use this to generate your dissertation’s literature review,” the top of my head lifts off and my brain
orbits around the sun and comes back, because I cannot believe that this is something anybody
would tolerate for a knowledge-producing exercise and industry, and yet I have seen real academics

(24:37):
recommend these behaviors and the imperatives of speed and efficiency and productivity override
our good sense around what is the meaning behind what we're trying to actually do. And this is my
concern, that speed, efficiency, productivity, these are not educational values, yet, they are
the things that are often valued inside of higher education institutional frameworks. So all this

(25:03):
stuff needs to be examined if we're going to make truly productive use of this, if it isn't just going to be like, “Where I used to churn out six articles as necessary, I'm going to churn out 600 now, because I've got my large language model cranking this up.” I just ask folks, “What is the point of
all of that extra stuff?” It's just slop. We don't need more slop. We need good, reliable,

(25:27):
thoughtful human products. And I said this at the top, this is the gift of this technology
to help us understand what humans should be doing, but only if we take advantage of the
gift. We could squander it. And in a lot of ways, I think we are squandering it, unfortunately.
So along the lines of productivity and efficiency, you do note in your book that automating parts of

(25:48):
the writing process, like moving from cursive writing to typing and word processing, can be beneficial, as can other automations in other processes, like calculators, GPS systems, et cetera. There are ways that technology has certainly made our lives more efficient. Can you talk a little bit about how ChatGPT is essentially, fundamentally different from those other kinds of automations?
I have to admit from the outset that even those

(26:12):
quote, unquote, good automations have trade-offs. I've hugely benefited from these automations in my
life and career. Talking earlier about my lousy handwriting, I really could not begin to write
until I learned to type on a manual typewriter in quote, unquote, summer school in fifth grade, when my parents needed me out of the house doing something, and I took a typing class at the junior

(26:32):
high, typing and photography in fifth grade. It was great. I loved it, but I really loved typing
because that I could do, and I could type so much faster and with more clarity than I could hand
write that I now had a way to capture my thoughts roughly at the speed with which they occurred. So
it was a great boon, and the noise the manual typewriter made was just a bonus, because it
was super fun to hammer away on that thing and then throw the carriage return across. If you've never

(26:57):
done it, you should. Go find a manual typewriter somewhere and spend some time doing that. It is a
good time. That said, like, I can't handwrite. I can't write cursive anymore. I recently had
to write a paper check for the first time in months, and had to rewrite it three times in
order to get it legible so somebody could actually cash my check. So automation is not an unalloyed good without trade-offs. That said, something like automating the creation of human writing on a

(27:23):
page from cursive to typewriting, doesn't really alter the underlying labor of what's happening.
Now we do know that the typewriter and the word processor have changed the nature of writing. The
existence of cut and paste as a technology that didn't involve actually cutting paper and pasting
paper changed the type of writing that we produce. So all this stuff is pretty fascinating, if you

(27:45):
start to look at it. The analogy that people have drawn between the calculator and ChatGPT, it's not
that it's entirely unlike that, it's different from it in some very important ways. The most
important difference is that when a calculator does its mechanical calculations versus a human
doing the calculations, that underlying act is identical. Now a calculator does it differently.

(28:06):
It uses sort of these binary calculation tools to do it while humans are using our brains, but the
end result is identical through those different mechanisms, and the calculator has the benefit, unless we wrongly input a number, of being more accurate. Now, it doesn't mean we shouldn't still
teach addition, subtraction and long division and this kind of stuff. We still have to be able to

(28:28):
think about numbers in a way that makes them sensible to us. The difference though,
is that a calculator is a labor-saving device for something that we would do not as well through our
own calculation. ChatGPT, as a syntax generator, is not doing the same thing as human writing.
Human writing is thinking, feeling, communicating. So when we outsource the production of text to a

(28:51):
syntax generator, we are doing something fundamentally different in the production
of that text, and that difference sands away the most interesting features of writing, which is
the demonstration of a sort of spiky human unique intelligence, which is the thing that ultimately
gets us interested in reading particular writers, who write with their unique intelligence. The misfortune of

(29:15):
school is that we forgot this in how we assign and assess student writing; we've asked them to take their unique intelligence away from what they present to us, so the large language model output looks
a lot like what gets good grades in school or has gotten good grades in school. But this is just the
opportunity to reset those values, and keeping in mind these differences in the underlying labor

(29:39):
is important, because again, if we want students to learn certain things, like writing, like
critical thinking, like active reading, we have to have them do it. If they don't do it, they can't
learn it. We learn through experiences, and if we outsource some critical part of that experience to
a large language model, we are denying students the experiences to learn it. One of the early

(30:00):
applications of large language models that people enthused about was, “I'm going to use it to do
a first draft, and then I will edit the draft.” This is ass backwards. The first draft is where
all the thinking happens. That's the stuff. And so you can't use your thoughts to edit something that
was not produced through thinking. We need to keep all this stuff in mind as we do it. Otherwise,

(30:22):
we're really going to spin our wheels around how this technology can be genuinely useful, rather
than simply as a substitute for the stuff that we should be doing for ourselves, particularly
in learning contexts. I can't say this enough, like people write me like, “I'll use this for
the marketing copy I write for my business, and it's fine.” I'm like, “Great, great. That's fine,
and that's a great labor-saving device, but you have to remember all of the stuff you,

(30:45):
the adult employed human, had to experience and learn, the skills, knowledge, attitudes and habits
of mind in order to look at that output and say, ‘This is good,’ that can only be done through
writing that cannot be done through becoming a prompt engineer with large language models.”
I get heated up about this stuff, I’m sorry. You're talking about the sound of the typewriter.

(31:06):
Just recently, last week, my mom sent me a little letter that I had written to Santa when I was
little, and I had asked for a typewriter. We're gonna sound like a bunch of fogies here,
but my work setup at my home office is a laptop and a large monitor, because I
like two screens, and I actually ordered a special Bluetooth keyboard for my laptop that was designed

(31:28):
to purposely make noise, more like an electric than a manual (because a manual, that really would
be something). It was just a noisy mechanical keyboard, and it was very satisfying to use. I
actually broke it because I worked it too hard, and the company went out of business. Maybe it
wasn't a great product, but I loved it, and I had it while I was writing the book,
and my wife would know when I'd had a good day by the sounds coming from my office,

(31:50):
because it was like, clack, clack, clack, clack, clack, and she'd know I'd had a bad day if
it was mostly silent. I mean, this is real, like the experiences of our lives are our lives, and to
the extent that we've sanded those experiences out of learning in education contexts, it's just kind of
tragic and unnecessary, and has no relationship to how much students learn or the value of

(32:14):
their learning. In fact, it's the opposite. And so I'm a true believer in this stuff.
Going back along the same line, I also, throughout my elementary school, got comments about my
handwriting, and in fact, that continued into the next generation, because when my son's teacher,
and I think it was fourth or fifth grade, was complaining about his handwriting being illegible,

(32:35):
apparently, he brought in something that he found that I had written at some point,
and he said, “See, it's genetic…” But I benefited a lot, too, from those early automations. Getting
a typewriter was a tremendous improvement in the ability to express myself more clearly. But
then when word processing came out, it changed the way in which I wrote. Before that, I used

(32:56):
to have very elaborate outlines and organized things very carefully, because there's only so
much you can do with Wite-Out and with correction tape. And my writing was much more efficient when
I had the ability to move things around and to rephrase things and edit it before it finally
was produced as a more final version. And I was at Stony Brook at the time, where dissertations

(33:17):
had to be done on a typewriter, and I wasn't about to do that; it was like 180 pages or so with all these
math symbols. So I had a typewriter with a daisy wheel and an RS-232 interface,
and I connected that to my computer to print the dissertation on the typewriter. It took 12 hours because

(33:37):
I had to keep swapping the daisy wheel every time there were Greek symbols, but I certainly
wasn't going to go through having to type it page by page, and the ability to edit using word
processing made it so much more efficient. But as you note, students are not getting the same
sort of efficiencies when they rely on ChatGPT and effective writing takes a lot of time and effort

(33:59):
and thought and critical thinking, and we're living in a time where there's so much fake news
out there, or distortions of reality, and so many conspiracy theories, that critical thinking skills,
I think, are perhaps more important than they have been in the more recent past. How can we
help students develop those critical thinking skills when they do rely to such a large extent

(34:21):
on generative AI to get summaries of content? It's a challenge. It's an interesting challenge,
and I'm gonna turn listeners on to some fascinating experiments being done by a guy named
Mike Caulfield, who is training large language models to become essentially fact checkers,
particularly around images, because they're very good at this: they can compare image

(34:45):
features in ways that humans can't. But the reason Mike can do this is because he has a career of
thinking about the problem of fact checking, of how we think about information and how we check
these things. I really recommend people checking out his newsletter, because his experiments are
fascinating. Still, the only way to become critical thinkers is to require students to

(35:06):
do a lot of thinking. The absence of thinking from large language model outputs should be sufficient
warning for us to be wary of letting students use them to bypass parts of the process that may
require thinking. Ferreting out misinformation is a practice. In fact, Mike Caulfield has a previous
book called Verified, where he talks about a SIFT method for helping assess the veracity of

(35:31):
online information. Once you get this method, you can employ it to help you understand the world,
and this is sort of my pitch to students around writing, is that if they can develop the writing
practices, if they can develop their powers of observation, drawing inferences from observations,
and then drawing analytical conclusions from those inferences, they have a significantly

(35:54):
increased ability to just go through the world, seeing, reading, encountering things,
and having that detector that allows them to say this thing doesn't make sense. But it is
something that has to be practiced and experienced to get good at. And writing is a kind of unique
way of being able to do this, because it should require students to turn all of those senses on,

(36:18):
but again, we have to make them do it when the assignment comes. And then we have to assess the
assignment according to criteria that value those experiences. More and more as I wrote
the book and as I think about the problems, it's not so much the assignments themselves, although
there are some assignments that are just sort of like we're gonna have to put those away. Those are
too vulnerable to the technology, and they're too uninteresting to students to ask them to do these

(36:41):
things, and so we have to do something else. But if the assignment is intrinsically interesting
and engaging and gets students thinking deeply about these things, we then have to use assessment
criteria that value that, as opposed to the surface-level features that so much assessment of
student writing attends to. And some of this also involves convincing students that this is what we're
going to do. The first part of my first-year writing courses was always sort of fraught,

(37:06):
because students would want to give me what they'd done before, and they were very conditioned to
producing what I called pseudo-academic BS, which was a sort of highfalutin, fancy-languaged version
of what they thought a smart student who had sort of read the assignment would sound like on paper,
and they had been conditioned to this performance of good, smart studentness. And one of my rubric

(37:31):
criteria was absence of pseudo-academic BS. And as soon as I saw it, I would just turn it back
to the student and say, “This is pseudo-academic BS, you need to revise,” and they hated me for it,
absolutely hated me for it, at least at the start. Some at the end also, but at the end, some would
come around. But this is a reorienting around the values that we attach to learning, and if students

(37:52):
have not been exposed to those values, or if those things had not been valued in their school context
before, we have to help them reorient to them and not get overly frustrated, as I sometimes would
when they're not there already. When they enter my class and they're not doing the stuff I need them
to do, I have to remember that in a lot of cases, this was like new, it was unfamiliar. I think I

(38:15):
used this analogy in my book, I don't know if I did or not, but it was like the beagles that had
only been raised in a lab and had only experienced like a cage, and then they took them to a park,
and they opened the cage doors and say, “Run, be free, beagles,” and the beagles are afraid of
the grass because it's foreign. Maybe I'm going to regret reducing students to beagles. It's an
analogy, people. But the idea that we know the grass is the great thing, and that the beagles

(38:41):
who are in the cages are going to love this grass as soon as they just get to experience it,
but they might be afraid of it because it's unfamiliar, and because the system within which
they've been working has been rewarding these non-grass-like behaviors, they've been rewarding
staying in the cage. They've been rewarding hitting the prescriptions. And so part of my work

(39:03):
was always that reorientation, like “It's okay, come out of the cage. There's good stuff out here.
You might get muddy, you might step in dog crap. There might be something that doesn't go well,
but we're going to be okay, because ultimately on the other side of this is a genuine experience of
learning in life, and you will appreciate this.” Again, no teacher bats 1000 with their students

(39:25):
in the course of the semester. And I think, like a lot of instructors, I can remember every
horrible end of semester comment a student ever wrote about me. But overall, the batting average
was pretty good because students are thirsty for this. They've been really wishing for it,
and as soon as they're invited to do it, most students will embrace it pretty thoroughly.
We spent most of our time today talking a lot about students and learning, related to ChatGPT,

(39:51):
the ethics around that, but you've also discussed several other ethical concerns related to the
training and operation of generative AI, including copyright, exploitation of precarious workers,
environmental issues, etc. Can you talk a little bit more about these concerns?
Yeah, it's important to recall that these models are trained significantly on stolen merchandise,

(40:15):
including, I'm willing to bet, the work of all of us in this virtual podcast room, which has been
sucked into these models and is being regurgitated back to us without notice, without permission,
without compensation. It is not impossible to create a system where the origins of these

(40:37):
models are compensated. It's not impossible. It's incredibly unlikely to happen, though. So it's not
that I think we should sit around being sort of “Ah, whatever that ship has sailed.” I think we
actually should keep it in mind every time we use it, that this stuff basically exists because of
the use of material without permission and without compensation. Now our judicial system is going to

(40:59):
work through this in terms of our actual copyright law, and I think it's probably going to turn out
in favor of tech companies. But I think this is primarily because our copyright law was written
at a time where the thought of this sort of technology was not even in science fiction. So we
can't confuse the strict outlines of copyright law with justice and what is right. The environmental

(41:23):
impacts of this technology are unknown. They could be incredibly severe. We already know that it's using
a lot of extra energy and a lot of extra water in the data centers that are online in order to
train and deploy this technology. There's a lot of arguments subsequent to me writing the book around
like, well, it's not as much as like watching a single series on Netflix or something like that.

(41:48):
These comparisons I find interesting. But again, anytime we're going to do this sort of stuff, we
have to think, what is the cost to this benefit? If ChatGPT really is just kind of a novelty, then
should we be using energy and draining aquifers in order to power these servers? And the exploited
labor, the labor in third-world countries that is used to train this, is, again, the story of

(42:11):
Westerners’ use of tech over and over again. It's nothing new, but it's an intensified version of
it, and it's something we should be mindful of. We should be thinking about the ethical concerns of
all technology we use. Like I have an iPhone. It has minerals that were mined under unconscionable
conditions. I have Netflix that is using power that is cooling a server that allows that sort

(42:34):
of stuff to happen. I still do these things. I also know that one of the biggest contributions
to global warming is the beef industry. I still eat beef. I have to be able to recognize these harms
even while I do these things. I don't require anyone to like put on the hairshirt and repent for
existing in modern society. We are where we are, but I think we do have an obligation to understand

(42:59):
the truth behind these potential harms around our actions, just to be reasonable human beings in the
world, like I'm about to go get on a plane to do some various events, to talk at institutions and
promote the book. None of that is necessary. I could do everything over Zoom. I'm going to
do it anyway, because I think it actually has a little better impact than Zoom does. And

(43:21):
it's an enjoyable part of my life. Being able to visit an institution and talk about these things,
or work with faculty around how we adjust to this technology is fulfilling. And so I have to
be aware of the balance between the life I want to live and the trade-offs and the costs to society.
So it's really just a matter of being thoughtful about it: in the same way we have to be thoughtful
around student learning when we're deploying this technology, we have to be thoughtful about these

(43:45):
impacts. Otherwise, I think, as the technology develops, greater demands are going to be made around
this technology: both the funding these tech companies want in order to develop something like
artificial general intelligence, and the push to deploy this technology in our institutions,
often without our permission, without asking. We should be aware of these things so that we

(44:07):
can use them as frames for thinking about how we want to use this technology in our lives, as
individuals and as educators, and as educational institutions. We should never forget them.
So we've talked a lot about the challenges and the potential concerns associated with the use
of generative AI, are there some uses that may actually benefit students’ learning and maybe

(44:30):
even benefit students’ writing, perhaps doing some proofreading using AI tools, or perhaps evaluating
a text that the students created to find some weaknesses in their arguments. Might there be
some benefit for those types of activities? Anything is possible, and there's a lot of people
doing some very interesting experimenting around the intersection of learning using

(44:52):
these tools and writing, and I encourage all of that experimenting, even as I am not the
person who is going to do it, because I don't like to think of myself as a fundamentalist,
but I do believe that this technology is primarily useful to people with well-developed professional
practices, and I'm dubious about introducing it before those things are there. That said,

(45:12):
I'm encouraging of class contexts that are transparent and open… allow students to experiment
with the technology as part of developing their practices. I think that's all good, as long as
it's done mindfully and with important end goals around learning in mind. This is not my practice.
I found this stuff increasingly less useful as I worked on the book. I set out to use it as I

(45:33):
was working on it as a sort of demonstration to myself and to see what would happen. Over time,
it was less and less interesting, because I realized that the predictive outputs of a large
language model were not generative to me. I needed that unique intelligence in order to help me do
my work. I do have what I think are some hard and fast lines around the use of this technology. If a

(45:55):
student wants to sort of curiously and voluntarily engage with this technology for their writing,
I think that's great. I would never personally have a student write a piece for my course that
was only going to be assessed and evaluated by a large language model, and that's a betrayal of the
relationship we should be establishing with our students. I don't know why I would do it. If I'm

(46:15):
teaching writing, the most important input I have for my teaching is my students’ writing. So the
idea that I'm going to let a large language model grade that instead of me is sort of nonsensical.
But even to ask students to use it to get feedback on their writing by my own urging, I don't know.
I mean, this is a probability machine giving feedback on writing. If I'm going to do that,
I need to prove to myself that that output is more useful to learning, not to the outcome,

(46:40):
not to the grade, but to learning, than, say, peer review or asking a student to read the work
out loud to themselves, or do something that I call a reverse outline, which is just a technique
for looking at something you've written: you write a question-based outline for each paragraph. What
audience question is being answered by this paragraph, and does that reverse outline make
sense? I've got a million things that students could and should do with their writing that are

(47:06):
not interacting with a large language model in order to improve their writing. So personally,
I would have to exhaust those before I would turn to them. But I'm not against experimentation. I
know folks who are doing that. It's not for me, but as long as the experiment is predicated on learning,
not just improving a student's output for the purpose of a surface-level criteria grade,

(47:28):
I'm for it, and I'm eager to see them, and people share them with me, and sometimes I'm taken aback,
and I'll have to eat some of my words and think, “Okay, maybe I still wouldn't do this,
but I get why this person is doing it.” But again, that's just a mindful teaching practice; that is
using technology that's available to enhance the experiences of their students around learning.
That's the work. So I can't criticize or gainsay that. When it is a substitute for what teachers

(47:53):
should be doing, when it allows for an end run around our own labor, I think it's dangerous. And
I would just say, since instructional faculty are overwhelmingly your audience, I assume, we should
be very, very, very, very, very, 12 more verys in there, careful about outsourcing our instructional
labor to automation, because it is an encouragement for those who would like to automate faculty labor

(48:17):
to do so, and for those of us who have worked at less than fully resourced institutions, we know
what this sort of pressure is like. It's constant, and the decisions that are made often have nothing
to do with the quality of learning or instruction. And so if we're going to allow these things to,
like write our syllabi or grade our assignments or give students feedback or something like that,

(48:39):
I would just say, be careful, because what you may be demonstrating is, for the purposes of the
institution, to collect tuition and convey credits and credentials, you may be proving
you're not necessary. Now there's no way that that process is the same as a good teacher, a
human, interacting with their students, but that's not a requirement for an institution
to give grades and confer credentials, and so we have to be aware of these things. I'm not saying

(49:04):
we have to be Luddites ready to destroy the looms owned by the factory, but we should understand the
lessons of the Luddites and respect the actions of the Luddites, which were not just to protect
their labor, but protect the quality of the goods that were produced for the public. The
mass-produced cloth was not as good as what they made. The public ultimately decided mass

(49:25):
production was more desirable than employing hand weavers, but we're talking about education here,
and when has mass-produced education proved superior to individual, bespoke relationships?
Seems like a good note to move to our last question,
which is we always end by asking, what's next? That's a good question. So whenever I write a

(49:49):
book, I promise my wife not to write another book for a year, because, as much as I love
writing books, because of what we've been talking about,
the depth of immersion that I have to do to write a book is just deeply, deeply pleasurable,
but it also makes me checked out of my life in other ways. Now, fortunately, this book was

(50:11):
written quite quickly relative to my other books, so that was good. But to answer your question,
I've become increasingly interested in the idea of apprenticeship and how we learn to do what we do.
I'm gonna guess both you guys either had formal apprenticeship programs to learn to teach, or you

(50:32):
had, like, literal mentors. I, unfortunately, didn't have that. But what I did have was a lot
of other people who are teaching around me who I would go bug and say, “This isn't working. What do
you do? What can I do differently? Because here's what's happening, and it is very,
very unsatisfactory for everybody involved in the equation, what else can we do?” And so I cobbled

(50:53):
together a sort of apprenticeship model to help me learn how to teach. And reading: I read a bunch of
books. I'll still never forget reading Ken Bain's book What the Best College Teachers Do, where it
was literally like the scales falling from my eyes. I'm like, “Oh my God, there's people who
care about this stuff out there, and I can read about it, and I'm going to start thinking about
it too.” It was literally life-changing. That book was a form of apprenticeship to better learn how

(51:16):
to do the work of teaching. And the reality is, huge swaths of our society and work are based on
an apprenticeship model without us recognizing it. And my concern over this technology is we're going
to lose it, either because we decide to outsource the work of the apprentice to the technology in
the interests of increased efficiency and lowered cost, or when we ask apprentice workers to do the

(51:41):
work, we will not be requiring them to have the kinds of experiences that allow them to
become the mentor. If an associate lawyer is writing a brief using large language models,
they will be missing out on the kinds of thinking that will make them the mentor lawyer one day. My
older brother's a lawyer, and he's concerned about this stuff, like “How do I make me?” And what

(52:04):
makes him is all the thinking he's done over the course of his career to do this. So I'm fascinated
by that. So I'm thinking of some kind of project around that, maybe not a book, maybe a podcast,
so I can tell my wife I'm not writing a book, I'm doing a podcast. But that's what's interesting
me right now. That's sort of, how do we learn to be what we are, and what does that entail,
and how does this technology threaten those things, and what will happen if we allow it to

(52:29):
dissolve those relationships in our workplace? Well, this conversation and your book have provoked
an awful lot of thought, and I've really enjoyed both listening to you and reading your book,
and I'd strongly encourage anyone who is thinking about using generative
AI tools to read through your book. Oh, it's my pleasure. I have been a long time

(52:50):
listener, and so getting on Tea for Teaching is a career honor that I'll always be glad for.
Thank you so much. Thank you.
If you've enjoyed this podcast, please subscribe and leave a review on iTunes
or your favorite podcast service. To continue the conversation, join

(53:13):
us on our Tea for Teaching Facebook page. You can find show notes, transcripts and other
materials on teaforteaching.com. Music by Michael Gary Brewer.