
February 6, 2024 44 mins


Dr. Bradley Robinson talks to us about artificial intelligence technologies, including how we can critically approach possibilities for teaching and learning with AI, and the deeply human nature of the ways AI tools were built. Brad is known for his work focusing on the creative and critical capacities of digital technologies in literacy education. Specifically, he has examined topics like novice video game design, digital platforms in and out of education, and artificial intelligence, all with a commitment to mindful, authentic, and just implementations of digital technologies. Dr. Bradley Robinson is an Assistant Professor of Educational Technology and Secondary Education in the Department of Curriculum and Instruction at Texas State University. You can connect with Brad via email (bradrobinson@txstate.edu) or on Twitter (@Prof_Brad_TxSt).

Resources mentioned in this episode: https://tech.ed.gov/ai-future-of-teaching-and-learning/

To cite this episode:
Persohn, L. (Host). (2024, Feb 13). A conversation with Brad Robinson (Season 4, No. 8) [Audio podcast episode]. In Classroom Caffeine Podcast series. https://www.classroomcaffeine.com/guests. DOI: 10.5240/1974-7A05-2E9B-7B45-C029-7

Connect with Classroom Caffeine at www.classroomcaffeine.com or on Instagram, Facebook, Twitter, and LinkedIn.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:10):
Education research has a problem: the work of brilliant education researchers often doesn't reach the practice of brilliant teachers.
Classroom Caffeine is here to help.
In each episode, I talk with a top education researcher or expert educator about what they have learned from their research

(00:31):
and experiences.
In this episode, Dr. Bradley Robinson talks to us about artificial intelligence technologies, including how we can critically approach possibilities for teaching and learning with AI, and the deeply human nature of the ways AI tools were built.

(00:51):
Brad is known for his work focusing on the creative and critical capacities of digital technologies in literacy education.
Specifically, he has examined topics like novice video game design, digital platforms in and out of education, and artificial intelligence, all with a commitment to mindful, authentic, and just implementations of digital technologies.

(01:12):
Dr. Bradley Robinson is an Assistant Professor of Educational Technology and Secondary Education in the Department of Curriculum and Instruction at Texas State University.
For more information about our guest, stay tuned to the end of this episode.
So pour a cup of your favorite drink and join me, your host,

(01:34):
Lindsay Persohn, for Classroom Caffeine: Research to Energize Your Teaching Practice.
Brad, thank you for joining me.
Welcome to the show.

Speaker 2 (01:43):
It's a pleasure to be here.
Thanks so much for inviting me.
I'm excited to think with you for a few minutes.

Speaker 1 (01:48):
Thank you.
From your own experiences in education, will you share with us one or two moments that inform your thinking now?

Speaker 2 (01:56):
Yeah, sure.
So over the past couple of years, a lot of my thinking has been around the influence of emerging artificial intelligence technologies on education in general, and literacy education in particular in my case.
And there are kind of two stories that come to mind when I think about that.
One of them was when I was a PhD student at the University of

(02:18):
Georgia.
I taught a class to pre-service English teachers, secondary English teachers, and it was called Digital Tools in English Education, and the basic purpose of the course was to explore, in kind of an open-ended, creative way, lots of different technologies for supporting literacy learning in English classrooms.

(02:39):
So we would look at podcasting, for example, or digital storytelling.
The very last kind of unit of the class was called N plus one, and the point was to kind of say, well, what's next?
What emerging technology should we be thinking about?
So in the fall of 2019, I don't even remember how I came across it, but I had read something about GPT-2.

(03:02):
And someone had linked to a website called talktotransformer.com, and I started looking into it, and I was like, OK, so this is a website that allows you to interact with this thing called a language model, which I had never heard of before, and it uses some sort of algorithmic

(03:25):
processes to generate text, and it seems pretty natural. And so the article that I read was very hype-driven, and I was interested in it.
So I went to the website talktotransformer and I started playing around with it, and immediately I was kind of struck by it.
I was like, wow. It was nowhere near as sophisticated as the

(03:45):
GPT-3 or GPT-4 now, or the other language models that people are using now, but it was still pretty impressive at the time, and I was like, this is a great thing.
I immediately started thinking, this is probably going to be a big deal here in a few years, and so in the N plus one unit I took it into the classroom and I just kind of started talking

(04:09):
about it and sharing it with the students.
And we did this activity where we all picked the first sentence of a beloved novel.
So I picked the first sentence of Ralph Ellison's Invisible Man.
People picked the first sentences of other novels, and the objective was to put the sentence into talktotransformer

(04:31):
and click Enter and see what was produced, and then do some line breaks to kind of make a poem out of it.
So the idea was to create what we were calling automated poetry, and it was just such bizarre stuff. And, as a quick footnote here, by the way, talktotransformer used GPT-2, I

(04:52):
believe, but it had been kind of dumbed down a bit.
Even at that time, OpenAI, this was when they were still kind of a nonprofit research outfit, were concerned, like they were genuinely concerned about the influence that this technology could have, and so they deliberately constrained it.
And it was good and interesting at the time, but in retrospect

(05:15):
it was nowhere near as sophisticated as it could have been then, had they unleashed the whole technology, or as it is now, with GPT-3.5 and 4, et cetera.
And so, anyways, the students started doing the assignment and they immediately just completely started freaking out, and this was in the fall of 2019.
And they started saying all the things that everyone has been

(05:38):
saying since ChatGPT was deployed.
I think, what was that, in November of 2021? 2022?

Speaker 1 (05:48):
I think it became public in 2022, I do believe.

Speaker 2 (05:52):
I should say widely used right.

Speaker 1 (05:54):
That's when they hit their million users in a little bit of time, yeah, yeah.

Speaker 2 (05:59):
And so this was well before that.
But they were like, kids are going to cheat on their essays, English teachers aren't going to be necessary anymore, what's the point of writing anymore?
And so we just talked about it and we explored it.
But again, this was an N plus one unit, so there was a lot of speculation, because I had just

(06:21):
discovered this technology just through meandering on the internet in the fall of 2019.
And so we kind of left it there, and it was a very interesting thing.
And then, slowly but surely, I started hearing more about it, and so I started to kind of take it more seriously and started to research a lot about it and learn a lot about language

(06:42):
models and OpenAI.
And then I published an article in early 2023 called Speculative Propositions for the New Autonomous Model of Literacy, where I kind of tried to take the prior thinking about the autonomous model of literacy that Brian Street

(07:02):
referred to it as in the 20th century, and how that was kind of challenged by ideological models and sociocultural perspectives on literacy.
And I kind of try to think about how we can think about this as a new autonomous model that's kind of reoriented around machine cognition as a kind of focal way of thinking about literacy, and so, yeah, I've just been doing a lot of

(07:25):
thinking about it since then.
So that's story one.
Story two happened at Texas State: I'm now a professor of educational technology and secondary education at Texas State University.
I teach an undergrad course every semester called Introduction to Educational Technology.
It's similar to the class that I taught at UGA, but rather than

(07:50):
being for pre-service secondary English teachers, it's for any students in the Curriculum and Instruction department.
I knew that I needed at some point to integrate a unit on generative AI.
I hadn't found the time or the energy to just produce the

(08:10):
content in the spring of this year.
This course is structured around a project-based learning unit idea that the students come up with on their own that's relevant to their discipline and age level.
Then they keep that unit idea with them throughout the course.
When we explore different technologies, they say, okay, how

(08:31):
could this technology help us do something cool with this project-based learning idea?
One of the units is just about different apps, like different apps that people use.
The assignment was very simple.
It was: pick two apps that are relevant to your project-based learning unit and explain how they might support your students'

(08:53):
learning and creativity with the unit.
That semester I was teaching several sections of the class and I had a total of 70 students, and it was at least 10, it might have been a bit more than that, who in their apps included ChatGPT.
I should add here that most of my students right now are

(09:16):
elementary teachers.
These were teachers who were going to be teaching elementary school, primarily.
They said they had heard about ChatGPT, as we all had, so this was the app they wanted to include.
They said they wanted to use it to teach their elementary school students how to do research in relation to their project-based learning units.

(09:37):
I completely freaked out.
I freaked out in the way that my English students freaked out at UGA, because I was like, oh my gosh.
Their write-ups, their assignments, made it clear that they understood ChatGPT to be this very reliable research tool that made the research process way easier and more natural than Google, and that they could

(10:00):
take their elementary school kids and bring that app to them and say, when you're researching photosynthesis, or whatever it is you're researching, hop on to ChatGPT and it'll help you find your information.
I realized at that moment it was a potential problem, not in the sense that teachers should never use ChatGPT or anything like

(10:21):
that, but it was clear to me that my students didn't understand the implications of how they were using ChatGPT.
That was when I and my colleague in the program decided we really needed to redouble our efforts and create a generative AI unit, which we now have, where we walk them through learning about it.

(10:41):
But those are two stories that tie AI to teacher education.
They have really informed a lot of how I think about it, how I think about the ways it might influence education, and the ways that teachers might respond to it in their practice.

Speaker 1 (11:00):
Those are two really great stories that I think help us to not only follow your path, but also trace a bit of the history of how OpenAI's tools have been introduced in education.
As we were talking about before we started recording, I've begun to play with some of these tools myself, not only as an instructor, but also thinking about how I can help my

(11:21):
pre-service teachers to potentially use a tool like ChatGPT to make their work a little bit easier on them.
I don't mean easy in the freak-out kind of way. We're not looking to cheat here.
We're looking to make our work clearer, stronger, more robust.
I've been thinking a lot about how that can happen, because, as

(11:45):
soon as you said using ChatGPT to help elementary-age students in their research, there are so many potential pitfalls there, because you're talking to an audience of elementary-age students who likely have cursory knowledge of the research process at best, or are still working to build their knowledge
(12:05):
of the concepts that they're learning.
So it can be difficult to know when your chatbot has it right or when it's leading you astray, and really sourcing.
There are just so many potential implications there that I think are really important to unpack.
But I think the other thing there is your pre-service

(12:25):
teachers recognizing the power of this tool, but I think there always has to be this caveat that it's not a human, and so that element of teaching is different with humans than it is when we're being taught by robots.

Speaker 2 (12:41):
Absolutely, and a couple of things to say about that.
On the point that you made about the stories in some way loosely narrating the trajectory of OpenAI releasing ChatGPT: not a lot of people are aware that a lot of the developers at OpenAI were really uncertain about deploying ChatGPT.

(13:02):
They were concerned about all the things that everybody else was.
It's not as if they just weren't aware of it.
They knew there was potential for the proliferation of mis- and disinformation.
The language models can tend to reproduce algorithmic bias, re-represent the biases, all the human ugliness that is on the

(13:24):
internet getting reproduced statistically and probabilistically through algorithms and the language models.
They knew all that, and so there was a lot of reluctance to publish it.
But they heard that other large organizations, like Google, had language models that were about to be deployed, and when they

(13:47):
heard that, they freaked out because they didn't want someone to beat them to market.
And so, from what I read, it only took about two weeks or so to build the front end that we now call ChatGPT, which is just the user interface that allows you to interact with their language model, and so they really rushed it out.
And in some ways, when I saw my students talking about

(14:09):
ChatGPT in the spring of 2023, what I saw, in some ways, was like a triumph of marketing, that it's almost like the Xerox effect: we don't talk about photocopying, we talk about Xeroxing.
We don't talk about search, we talk about Googling.
We don't talk about language models, we talk about ChatGPT.
And in some ways, I think it was a very shrewd, maybe cynical

(14:31):
kind of business move to throw that user interface together and put it out there to beat the market, and they ended up being wrong, right?
They ended up learning that there wasn't another organization that was about to publish their language model, but because they published theirs, that then prompted Google and Meta to try and get their

(14:51):
language models out there too.
And so the point is that corporate motivations are all entangled with the ways these technologies are created and deployed, and so I saw that surfacing in some ways when my students were talking about it.
The other thing that I would say is, to your point about

(15:13):
interacting with robots, that's true, sometimes.
I'm concerned, though, that when we talk so much about the machine dimension of these technologies, we forget their deeply human quality.
Again, all the language used to train language models was

(15:33):
derived from people's interactions on the internet, and, as I said before, all the beauty and ugliness that humanity is capable of that is expressed on the internet is then hoovered up into the training data and then used to complete or answer whatever prompts you put in there.
At the same time, you have people in countries like Kenya

(15:58):
who are playing a part in what they call human-in-the-loop training, where they have certain outputs, and then those workers in Kenya will say, well, this one's better than that one, this one's better than that one, and that's one of the layers of the training that the model goes through.
And so there are humans in place there.
So when you're interacting with ChatGPT and it spits out its

(16:23):
output, what you see there seems very robotic and machine-like.
By the way, too, some people aren't aware of this, but the little dot-dot-dot that happens, that makes it seem like it's thinking, is just there as an effect to make it seem like you're interacting with someone, like in a text-messaging chain where the three dots appear, like on iMessage or whatever.

(16:44):
It's there to create the effect of interacting with the machine.
But it's super important for anyone interacting with these technologies to always keep in mind that they are not artificial.
They are deeply and irrevocably human, and it's something that we should always keep in mind, even if we're looking at a screen and it's not always easy to see the human there.

Speaker 1 (17:07):
In my mind, that is less a part of the conversation at the forefront right now; there are doom-and-gloom kinds of perspectives, artificial intelligence is going to take over the world, and it's tied to movies.
It just makes me think of the tangled webs that we weave, and in particular, not only the humanistic ways in which we can

(17:30):
interact with artificial intelligence, but also the capitalistic ways in which this is weaving its way right into our lives.

Speaker 2 (17:39):
This may not really fit, but you were talking about the doom and gloom.
One of the things about ChatGPT reminds me of Langdon Winner, the science and technology scholar from the 20th century.
He often talked about technology as being tools without handles, which is a confusing formulation, but I

(18:03):
always thought about it as when technologies are deployed without a clear use case.
When ChatGPT was deployed and all these language models came online, there was all this speculation and people imagining ways that you could use them, but these technologies were not developed by, for, or with educators at all.

(18:28):
Yet what was the first domain of human life that people understood very clearly would be most impacted immediately by these technologies?
It was education.
And so I think that just because the technologies exist, I'm not sure that's necessarily reason for
(18:50):
us to use them; they're like tools without handles.
I think that as we, as educators, think about how we respond to these technologies, how we can use them, it's important that we have a clear sense of the use case and that we're not using them just to fulfill some sort of hype-filled expectation or these inevitability arguments about AI

(19:15):
taking over the world. Not in that AI-apocalypse way, but just the fact that they're going to be deeply integrated into everything we do, and if we don't teach our students how to use them, they're going to be left behind.
That could be true, but again, it could also be marketing, and so it's worth asking, like, whose interests are served when we accept those arguments uncritically and then begin to

(19:38):
train our students to use these technologies?
There's a certain kind of self-fulfilling nature to those kinds of prophecies if we respond that way.
One other story that came to mind, I don't know if you asked for two, but I have been working, collaborating, with a researcher in Germany, doing some game studies on the gaming strand of my research, and English is his

(20:02):
second language, and this is relevant to the whole GPT-3 use case. He and I and another author have written together. So the German author's name is Andrei Salderna, I hope I'm pronouncing that correctly, and then the other author is someone you might be familiar with, Sam von Gillern, who's a literacy

(20:24):
researcher at the University of Missouri.
But we've written several manuscripts together, and the first one we wrote together, when Andrei, the German guy, sent me his writing, I was super impressed with his written English, it being his second language, and I'm always just blown away by people who do academic writing in a language that's not their native language.
It's just because academic writing can be so specific; it's really impressive.
Nevertheless, I still had to spend time, as the lead author, kind of overwriting and, you know, making some tweaks, some places where not quite the right preposition would be there.
If you're a native speaker, you might kind of get the nuance, but it's really difficult to explain why you would use 'of'

(21:07):
rather than 'in' in a specific place, and so I'd go through and fix all those little things, but otherwise it was great.
We were working on our second manuscript a bit later, and he sent me his writing and I started reading through it, thinking I would have to do the same thing, and I was like, wow, his writing has gotten so much better.
This is amazing.
I mean, this is really amazing.

(21:28):
I didn't have to change anything.
And so we had a research meeting and I complimented him.
I was like, Andrei, look, your writing, your academic English, is super impressive.
I just wanted to let you know that I'm really impressed by your ability to write academic English, it being a second language.
And he kind of, you know, smiled a little bit, and
(21:49):
he was like, well, I should tell you, the first time I didn't do this.
But the second time, I took my writing kind of chunk by chunk and put it into ChatGPT and said, can you clean this up a little bit?
And so, unbeknownst to me, he had been using it to kind of clarify his writing a little bit before submitting it.
And to me that was a moment because, if people have

(22:13):
read any of my work on this, they know I'm pretty skeptical of it, and I think it's important to be critical.
That's just my disposition.
But at that moment I was like, OK, that's actually doing really meaningful work for him as a writer, and it really made sense as a use case.
And so I think that, you know, for educators, when they're

(22:35):
thinking about using these technologies, I think keeping in mind, like, what's the use case, what are you using it for?
How is it really helping you support your students, you know, their literacy development, in meaningful, authentic ways?

Speaker 1 (22:50):
Right, because certainly there are really powerful use cases, like you just described, as well as some pretty nefarious uses for those kinds of tools too.
So, yeah, I think that, in my mind, that's one of my next big steps: determining, you know, how we can put these tools to good use.
They are here and they're here to stay, so how do we in fact

(23:10):
leverage them for good instead of evil?

Speaker 2 (23:13):
Yeah.

Speaker 1 (23:13):
So, Brad, what else do you want listeners to know about your work?

Speaker 2 (23:17):
So, about my work in general: something that I've been thinking about a lot lately with these technologies, and I've had these conversations in professional development sessions I've done at the university, and also with other faculty and with my student teacher candidates, is that I kind of just think of it in this very simple framework, and it's just

(23:39):
about before with. And so something that I think a lot about when I think about educational technology in general, but focusing specifically on generative AI, is that before teachers at any level decide to use this in their teaching, whether it be

(24:01):
developing lesson plans, whatever it might be, it's important that they learn about the technology before they start teaching with it.
There are just so many resources out there now for doing that.
OpenAI provides pretty good documentation, and pretty readable documentation too, about how ChatGPT works.

(24:23):
They also recently released some information about using it in education in particular.
Of course, that stuff should be read critically.
They're not educators.
It's important, I think, for us to own our expertise as educators and not just be like, let's just listen to what the techno gods have to say to us about this stuff.

(24:47):
In May of 2023, the Office of Educational Technology at the Department of Education released a report called Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.
It's a fairly long, 70-page document that does a really great job introducing the basic concepts around AI,

(25:09):
giving very workable, easy-to-understand definitions of it.
It has a whole chapter on ethical considerations related to issues around algorithmic bias, data extraction, misinformation, disinformation, intellectual property, all those kinds of things that are relevant to debates around language models, and the journey through

(25:31):
the AI, from an educational perspective.
It has a "What is AI?" section.
It has ways that it might influence learning, ways that it might influence teaching.
It talks about assessment.
It talks about research and development, and then it has a whole section on recommendations.
I think it's a pretty good document for teacher educators and

(25:51):
pre-service teachers who are interested in this area.
I think it's an excellent resource to consult as you're thinking about how to respond to these technologies.
Oh yeah, about before with: this resource, the Office of Educational Technology's report on AI and education, is a great way to learn about these technologies

(26:12):
before you start teaching with them, because, see, that's what I realized my students were doing: they were starting to teach with them, and therefore having their students start learning with them, before either of them knew really anything about them.
And so, again, the basic heuristic is: about before with.
Before you teach with these technologies, learn about them.

(26:33):
And then the corollary is also true: before your students start learning with them, make sure they have learned about them, and this needs to be developmentally appropriate for the age level.
If the students are not able to understand the basic ideas about these technologies, then maybe it's better to wait until

(26:55):
they're of an age when they can.
When teachers start to think about integrating or having their students use these technologies, just like they need to learn about them before they teach with them, the students need to learn about them before they start learning with them.
And if you're right that thesetechnologies are here to stay

(27:15):
and they're going to be with us and they're going to start getting layered into all kinds of different programs.
I mean, even now, when I open Google Docs, I have a little Bard thing that I can click on on the left side of the page, and it's basically similar to a ChatGPT prompt window, and you can ask it a question or insert a prompt or whatever, and

(27:36):
it'll just spit the text right into your Google Doc, and I'm sure that many school systems who subscribe to Google and haven't integrated it into their school system might disable that function.
But it just goes to show how these technologies are definitely going to, in ways that we may not even know, start to kind

(27:57):
of come into the schools.
So again, learning about them before learning with them, I think, is really important.

Speaker 1 (28:03):
That framework that you've given us, that about, before, and with, I find to be incredibly helpful, because I think this is a field that many people are interested in and that we are obviously still learning so much about.
But it is challenging in so many ways if someone jumps straight to integrating them or learning with them without

(28:25):
really understanding how it's built, what it's for, what it's not so good for.
Right, I think that without that contextualizing knowledge, it does put us a little further into the danger zone, by not understanding anything about them.

Speaker 2 (28:39):
Yeah, I mean, if you have a teacher who uses it to chart out a pacing guide, or notes on some sort of historical event, or something to kind of help them speed up their content production for some lesson they're doing, maybe that's okay, but they need to know when they do that that it's prone to

(29:01):
providing incorrect information, and that the information can only go up to a certain date unless they deliberately go in and insert it.
And so if you know that as a teacher, then you know that if you consult it for something, you should read it critically. And so hopefully, if you're a history teacher

(29:21):
you'll have your historical expertise that you can then kind of use as a filter for what you engage with on those platforms.
That will then prevent you from falling into that trap.

Speaker 1 (29:36):
I'm absolutely with you on that.
I think we've got to have that human vetting of the information, that knowledge and expertise that we bring contextually and in an expert way that the technology just really can't achieve.
That really makes me think about a use case that I was considering this morning.
I did some work in the AI course that I'm enrolled in

(29:58):
right now that I mentioned earlier, and someone had mentioned in a chat using AI in order to generate potential accommodations for a lesson plan.
I thought, hmm, I know that's something that historically my students have had a little bit of trouble with, particularly coming up with accommodations in their planning for diverse

(30:19):
learners in their classrooms.
If you ask ChatGPT to give you accommodations for a lesson plan on a particular book, because my students are working on read-aloud plans right now, it'll give you 10 or 12 different recommendations.
As you said, that doesn't mean that those are 10 or 12 great

(30:39):
recommendations.
They're ideas, and they're something that we can then work with.
I think that when it comes to that sort of thinking, I think of it as a partner in thinking.
It doesn't mean that you just take everything as it is, that you don't view it critically, that you don't vet that information.
It can be a good way, I think, to generate ideas, so that we don't feel like we're working in isolation, or if there's some

(31:01):
area that we don't feel particularly strong in and we're still working to build our expertise or our knowledge of the different options.
I think it can be really useful in those cases.

Speaker 2 (31:12):
Absolutely.
One of my colleagues at Texas State is in the computer science department, and he does work in the area of natural language processing.
He says that he tells his students to think of it as a smart fellow student whom they don't entirely trust.
You can ask your fellow student, can you help me with this thing? You know they're pretty smart, but you also know not to

(31:35):
completely trust everything they say.
That might be a helpful metaphor to think with.

Speaker 1 (31:42):
I think that's super helpful.
We can think of it as peer suggestions, but it's the peer that maybe you haven't worked with over and over again, that you really trust.
It's the peer that you were partnered with, perhaps incidentally, and so, approaching it with that in mind, I think, is a really smart way to frame it.
I think it translates that scenario in a way that I think

(32:03):
many, particularly college students, or even high school or middle school students, would pretty readily be able to understand.

Speaker 2 (32:11):
Yeah, I think so too.

Speaker 1 (32:12):
That's great.
What else do you want us to know about your work, Brad?

Speaker 2 (32:17):
One thing, if people are interested: I and another literacy scholar named Ty Hollett at Penn State University are currently guest co-editing a special issue of Reading Research Quarterly called Literacy and the Age of AI.
We're bringing together an interdisciplinary cohort of

(32:38):
scholars both in and out of literacy studies to really reckon with a lot of the questions about the implications of these technologies for reading, writing, speaking, listening, and creating in the 21st century and moving forward.
I would encourage listeners, ifthey're interested, to look out

(33:02):
for that.
It should be out next year.
There's a few other really coolspecial issues as well.
First, teaching practice andcritique currently has a special
issue and process on this, andlearning media and technology
also does.
There's a bit of a lag withacademic publishing responding
to technological innovation, butI think a lot of that's going

(33:25):
to be coming out at some pointin 2024, where a lot of leading
voices across diverse fieldsthat are interested in some of
the questions around AI andeducation.
They are becoming public, andso that should be a great
resource for people to keeptheir eyes out.

Speaker 1 (33:44):
That's super helpful, to have a bit of a compass in
the world of academic publications, because, as we
know, it can be a little bit tricky to navigate, even for
those who work inside that field,
but particularly for those who may be working just outside
of academia. It's super helpful to know that those
things are in the works and coming our way in the next year

(34:04):
or so.
Is there anything else you want to share, Brad?

Speaker 2 (34:08):
Yeah, one other thing that I'll say is that, as
a culture, as a society, and as educators,
we are understanding artificial intelligence largely as a novel
phenomenon.
We have known through science fiction and speculative fiction
that artificial intelligence is an idea in the world, but

(34:30):
it's only recently that, you
know, we've really started to understand the ways that it
can influence our lives and teaching and learning.
But I think it's really important to understand that
artificial intelligence is effectively a set of
computational processes that have material dimensions to them:

(34:53):
server farms, energy extraction, all these kinds of things,
people in different countries.
It has a material impact on the world.
That's one thing.
But also, these same computational practices, like
machine learning, are not new in education.
Take a reading platform like Epic Reading.

(35:17):
So Epic Reading, if you're not familiar with it, is a digital
reading platform for education. Basically, it provides
a collection of e-books online, and some of the books
have quizzes, so it conforms to your basic structure of reading
and answering questions.
But one of the interesting things about Epic is, when you

(35:38):
open it up as a user and you look at it,
it actually looks a lot like Netflix.
When you open up your Netflix account, it has tiles with
the different shows and movies you can watch: here's the
For You, here's the Trending Right Now.
The Epic landing page, when you log in, evokes
that.
It's very similar, and it has a Recommended for You pane.

(35:59):
And the way that Recommended for You pane exists is through
machine learning.
It's through algorithmic prediction, which is,
on a fundamental level, the same thing that's happening
in language model technologies like ChatGPT:
absorbing lots of data, processing it, and then making
predictions off of it.

(36:19):
And so it's just important, I think, to
understand that these technologies have been playing
on the lower frequencies of literacy, learning, and living
for years now, and we've only really become aware of them
when the kind of wow factor of engaging with an AI

(36:40):
chatbot kind of blows you away because it sounds so natural.
But if you have questions or concerns about algorithmic
processes and their influence on literacy education, or
education in general, then some of those concerns also hold for
some of these other platforms, like Epic, that are

(37:05):
driven by algorithmic prediction technology.
So that's just another thing I think it's important to remember
about these technologies, and that, by the way, is
at the center of my research right now:
understanding how we got here with AI, and the
ways that AI-powered platforms have been shaping literacy

(37:27):
technologies in ways that we may not have been paying attention
to, even before the deployment of language model technologies
like ChatGPT.

Speaker 1 (37:36):
What an interesting thing to think about, right?
Those things that have been happening all along, that aren't
so in your face or don't have such a wow factor, and how
they are already impacting literacies in many hidden ways,
I would say.
And so, yeah, I look forward to reading more about that work as
you move along with it.
So, Brad, given the challenges of today's educational climate,

(37:58):
what message do you want teachers to hear?

Speaker 2 (38:00):
Well, I will say first that the challenges of
today's educational climate are legion and intense.
At the same time, though, the first thing that I'll say is
something that I always say at the end of my courses, which is
that, in my opinion, teaching is literally the most important
job. I know that's kind of a cliché, but I think it's

(38:24):
important to remember, when we are, as a field, facing so
many headwinds, whether they be economic, political,
technological, whatever they might be, that the work you are
doing is, in my opinion, perhaps some of the most important
work that people can do in their lives, and so that's one thing

(38:46):
to hold on to: just the importance of it.
Another thing that I would say, when it comes to AI
technologies in particular, is that for me, as a literacy
researcher, as a writer and a reader, writing has been,
and continues to be, a way that I come to

(39:08):
understand myself and the world and other people, and that's a
message that I deliver to students.
I mean, that's something that literacy scholarship has taught
us: that literacy is deeply relational, in the sense of
getting to know ourselves and understanding each other and the
world. However teachers decide to respond to AI-powered

(39:34):
platform technology,
I think it's just really, really important to remember and
to keep in mind, given the nature of writing and the way
that it connects us with ourselves, each other, and the
world, to continue asking yourself: what does it mean when
massive, globally scaled algorithmic processes start to

(39:55):
interface with that process of coming to understand ourselves,
each other, and the world?
Some people may say, yes, that's scary, I'm just not going to do
it, and other people may arrive at a different conclusion,
which is fair, but I do think it's important to remember,
what was it,

(40:16):
Kranzberg's first law, that technology is neither
good nor bad, but neither is it neutral. And so
a lot of the talk around ChatGPT frames it as a tool.
You know, it's just a tool, and I use that language too, but
tools don't just pop into existence.
They come shaped by certain values and with certain ideas

(40:41):
about use in mind, and that's the same with all of these
technologies.
And so I think just being super mindful about the emergence of
these algorithmic processes as they interface with literacy,
learning, and living is an important consideration.

Speaker 1 (40:58):
I couldn't agree more, and, you know, I
know that a lot of your work has to do with this mindful
integration of technology and AI tools, and so I really
appreciate all of those reminders, because it is easy to
get caught up in the excitement of either "it's good" or
"it's bad." But I love that quote, also, to remind us that it's
neither, but it's also not neutral, and I think that
perhaps that is where human end users can really find themselves

(41:22):
in this work. Because it is all of those things and it's none
of those things, I think it takes us bringing our own
critical lens and our own identity as literacy learners, as,
hopefully, lifelong literacy learners, to really understand
what it means, how we best use it, and, yeah, how we can
leverage it for a better world.

Speaker 2 (41:43):
Absolutely.

Speaker 1 (41:44):
Well, Brad, I thank you so much for your time today,
and I thank you for your contributions to the world of
education.

Speaker 2 (41:50):
My pleasure.
Thank you so much for the invitation.
It's been super fun to chat with you for a little while.
Thank you, you too.

Speaker 1 (41:57):
Dr. Bradley Robinson is known for his work on the
creative and critical capacities of digital technologies in
literacy education, specifically examining such topics as
novice video game design, digital platforms in and out of
education, and artificial intelligence.
His commitment to mindful, authentic, and just
implementations of digital technologies runs deep, and it

(42:20):
informs his work in support of ethical and equitable literacy
education across ages and contexts.
His work has appeared in Written Communication; Learning,
Media and Technology; International Journal of Qualitative
Studies in Education; Qualitative Inquiry; Literacy Research:
Theory, Method, and Practice; Postdigital Science and
Education; and English Journal.

(42:44):
Formerly a secondary English teacher in North Carolina, Brad
holds a PhD in Language and Literacy Education from the
University of Georgia.
He also holds a Master of Arts in English from Middlebury
College's Bread Loaf School of English in Middlebury, Vermont.
Dr. Robinson is an Assistant Professor of Educational
Technology and Secondary Education in the Department of

(43:06):
Curriculum and Instruction at Texas State University.
You can connect with Brad via email at
bradrobinson@txstate.edu, that's
b-r-a-d-r-o-b-i-n-s-o-n at t-x-s-t-a-t-e dot e-d-u, or on

(43:31):
Twitter at prof underscore Brad underscore TxSt.
For the good of all students, Classroom Caffeine aims to
energize education research and practice.
If this show gives you things to think about, help us spread the
word.
Talk to your colleagues and educator friends about what you

(43:53):
hear.
You can support the show by subscribing, liking, and
reviewing this podcast through your podcast provider.
Visit classroomcaffeine.com, where you can subscribe to receive
our short monthly newsletter, the Espresso Shot.
On our website, you can also learn more about each guest,
find transcripts for our episodes, explore more topics using our

(44:15):
drop-down menu of tags, request an episode topic or potential
guest,
support our research through our listener survey, or learn
more about the research we're doing on our publications page.
Connect with us on social media through Instagram, Facebook, and
Twitter.
We would love to hear from you.
Special thanks to the Classroom Caffeine team: Leah Berger,

(44:37):
Abaya the LuRu, Stephanie Branson, and Shaba Hojfath.
As always, I raise my mug to you, teachers.
Thanks for joining me.