July 10, 2025 61 mins

AI in Schools: Innovation or Illusion? | Ep. 328 with Job Christiansen

Is AI making classrooms smarter—or just noisier?

In this episode of My EdTech Life, I sit down with Job Christiansen, a fellow cautious advocate, educator, and instructional technology specialist, to ask the hard questions about AI’s role in education. From privacy concerns to the illusion of “safe use,” we unpack what most educators, tech leaders, and decision-makers aren’t being told about the tools flooding our schools.

Job doesn’t just read privacy policies; he tests them, creating teacher and student accounts to see what’s really happening behind the interface. This episode dives into why most AI tools may still be stuck in the substitution phase, and what it will take to truly shift toward responsible, innovative, human-centered use.

🎧 Whether you're in K–12 or Higher Ed, this one will challenge your thinking.

 TIMESTAMPS:
 00:00 Welcome and Guest Intro
 02:00 Job’s Journey Into EdTech
 06:00 First Reactions to ChatGPT in 2022
 10:00 Early Adoption vs. Caution in Schools
 13:30 AI's Substitution Trap & SAMR Concerns
 19:00 Redefining “Safe” in AI Tools
 24:30 What Job Found Testing Student-Facing AI Apps
 30:00 Historical Accuracy and the AI “George Washington” Problem
 36:00 The Concept of AI Pollution and Knowledge Dilution
 43:00 Transparency, Trust, and Teacher Responsibility
 47:00 Final Takeaways and Reflective Advice
 50:00 Where to Connect with Job
 54:00 EdTech Kryptonite, Billboards, and Historical Curiosity

🙏 Big thanks to our amazing sponsors:
 🔹 Book Creator
🔹 Eduaide.AI
🔹 Yellowdig

💬 Visit www.myedtech.life to explore more episodes and support the show.

👋 Stay curious. Stay critical. And as always—Stay Techie.

Authentic engagement, inclusion, and learning across the curriculum for ALL your students. Teachers love Book Creator.

Yellowdig is transforming higher education by building online communities that drive engagement and collaboration. My EdTech Life is proud to partner with Yellowdig to amplify its mission.

See how Yellowdig can revolutionize your campus—visit Yellowdig.co today!

Support the show


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Fonz Mendoza (00:30):
Hello everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining me on this wonderful day and, as always, thank you for your support. We appreciate all the likes, the shares, the follows. Thank you so much for your feedback too, as that is always welcome. As you know, we do what we do for you, to bring you some

(00:51):
amazing conversations that will help us continue to grow within our education space. Today I have a wonderful guest, somebody that I follow on LinkedIn and somebody that shares a lot of great posts and a lot of great insight about AI in education. So I would love to welcome to

(01:11):
the show Job Christiansen. Job, how are you doing today?

Job Christiansen (01:16):
I'm doing great. Thanks for having me on the show, Fonz.

Fonz Mendoza (01:18):
Excellent. Well, I'm excited to talk to you, Job. After a couple of posts on LinkedIn, I kind of started seeing, you know, that we do have a couple of mutual friends and kind of posting within the same posts, and I was like, I really like your insights, I really like what you have to share. And again, the reason that I reached out to you was just because I would really love to just amplify your voice and hear

(01:42):
a little bit more about your perspective and experience within the education space, and mainly with AI in education. So, before we dive into the conversation, can you give us a little brief bio and what your

(02:02):
context is within the education space?

Job Christiansen (02:06):
Absolutely. So I'm a relative newcomer to the education space. My background actually is that I went to school and studied history. I got a bachelor's and master's in history, and then I never really could get a strong career started with those degrees.

(02:26):
So I did a couple different things. I worked for some nonprofits, and then three years ago I actually saw a job opening at a school, basically for basic tech support. And especially at one of the nonprofits I'd worked at, I had been hired basically because I had, like, website

(02:47):
skills on my resume. Right, it's always those hard skills that they were looking for at that time. This was 10 years ago. So I worked for this nonprofit for three years, and over the course of those three years they just continually figured out that, oh, this is broken, maybe Job can fix it, instead of going to the contracted IT guys that

(03:08):
they had. Right, it was a really small nonprofit, like 12 of us, so anytime you can cut costs with just Job fixing it. So I would just tackle things, approach things, start playing with all this different stuff, like the Salesforce database and then the VoIP phone system, just all these different tools. I just kind of cut my teeth on them, and I began.

(03:28):
I'm not formally trained in technology. I just jumped in and learned it by using it and playing with it. So anyway, I applied to this school, and I think they really liked that attitude of, you can just learn by doing and you have that go-get-them heart. And so I was basic tech support for this school.

(03:49):
So that was three years ago, and it's a K-12 private Christian school. So just to give you some of that background too, because that plays into just how I think of things and approach things, and I think that having that humanities mindset of those history degrees has given me

(04:10):
like a unique perspective. So now, where I am at the school: after the first year the tech director who hired me stepped aside and a new tech director came in, and then this last year, instead of just being tech support, this new tech director saw that I worked really well with teachers, and so he kind of moved me over to being what

(04:34):
basically is some sort of a tech coach. I'm an instructional technology specialist now, so I work with the teachers to help them use the tools more effectively in their classrooms. But through this whole process, I really found that even though I was hired for technology, and I'm the guy everyone called up, like, hey, my computer's not turning on, Job, and I was there in five

(04:56):
minutes, I was, you know, the tech support you really wanted, but that wasn't enough to keep me passionate and going. So I've now been pivoting, and some of the stuff that I write about is, it's not so much the technology that excites me; I really just view technology as a vehicle.

(05:18):
But what I really have gotten excited about over the last few years is the learning process, and especially, how do we just get better at learning? And so I see that at times technology can help that and sometimes it can't. So that's kind of my approach and my brief synopsis of all that.

Fonz Mendoza (05:38):
Excellent. Well, that's good to know, and it's good to hear your background too, as I think that will definitely lend itself to this conversation very well. Especially talking a little bit about that coaching experience, which you saw in education and especially with AI in education, and being that you are in a private school as well, it's very interesting

(06:01):
just to get those perspectives as well, because one of the things is, you know, in the public school sector, it depends on the size of your school. Usually the bigger you are, the more platforms you get to have, as opposed to a smaller school where, due to funding, you have to be a little bit more tight with your money and budgets and so on.

(06:21):
So, you know, getting that experience and, of course, that perspective from a private school too, as well as how teachers and students are interacting with a lot of platforms, I would definitely love to hear that. So I want to ask you, Job, just on your own, before getting into education, I would love to hear what your thoughts were.

(06:44):
November 2022, taking it way back. As soon as ChatGPT was out, what were your initial thoughts? Were you an early adopter? Were you kind of a wait-and-see kind of guy, or did you just really kind of wait it out until you said, okay, let me see what this is all about? So I would just love to hear your experience through that.

Job Christiansen (07:05):
Yeah, so I had started at the school that previous August, so November was like three months in, right. So I'm three months in, I was aware it came out, I went and made an account at OpenAI's ChatGPT. I typed one prompt in, not really knowing how to prompt, right,

(07:27):
and it was something about, like, create guidelines or a policy for our school. I saw it, and then I was like, kind of cool, and then I didn't touch it for a long time. My thought was just, I'm kind of a slow adopter in general with technology in my life. I just kind of grew up that way.

(07:49):
I was kind of behind the curve, so I want to be aware of things, but I don't always use the things. And so at the time I was just apprehensive, because I didn't really know what AI was and I was worried. It was kind of like the way that it's portrayed in media, that it has like a life of its own, and so I was kind of

(08:12):
skeptical but maybe optimistic. Skeptical, but I'm not someone who's just going to jump in and use it right out of the gate.

Fonz Mendoza (08:23):
Excellent. Yeah, that for me was something very similar. I'm kind of an early adopter, same thing; I just kind of went in and then I backed off a little bit, just seeing how things were moving, especially within education, and seeing and learning more. Because before, and this kind of goes to a post that you put up recently, I honestly

(08:46):
thought, oh my gosh, this is really magical, you know, and I was like, oh my goodness, this is great and this is going to do a lot of transformation. And so then, kind of seeing the way things were going and understanding a little bit more about how LLMs really work and so on, and following other people from both sides. Obviously I want to hear,

(09:07):
you know, from the, I guess, pro-AI crowd or, you know, the early adopters. And then, of course, we've got the cautious advocates that are kind of in the middle, just seeing things through, and then you've got, you know, some of us that may just hang back a little bit more. But it was very interesting where I just kind of said, you know what? Let me slow things down and understand more.

(09:29):
You know, not everything has to be AI, but the perception was, oh my gosh, this is going to save me time, this is going to save me from burnout, this is going to save me from this situation and so on. Just, I guess, creating work, or having something ready, lesson plans, of course, or getting rid of the Sunday scaries, as a lot of

(09:52):
platforms, you know, they prey on those things and say, oh, this is the way we're going to sell. And I'm going to go back to Micah Shippey, who was on the show a couple of months back, saying, you know, fear, uncertainty and doubt are what sells, and that's really what they kind of do. And coming back from ISTE, there are many platforms that

(10:12):
are out there, and you're kind of starting to see the top five that are really getting out in the forefront and being, I guess you would say, the educator favorites. But I want to get your thoughts on that as far as when you started seeing it with your teachers. Were your teachers early adopters too, and as you were

(10:33):
kind of going through and helping them out, were you seeing some of the things that they were working on, and what were your thoughts on that?

Job Christiansen (10:39):
I'd say we had a handful of early adopters, but in general, it's just a lot of caution. And so I think

(11:00):
actually where I've really started to see it creep in, or just appear, is not with the tools that are built as AI-specific, but in the tools we're already using. I started to notice there'd just be AI features starting to appear, and that's when I started to become a lot more conscious that this is going to be in here whether or not we are actively choosing to sign on.

(11:24):
We're using AI. It's just appearing in the tools we're already using, and so, unless we're just going to get rid of all the tools, we need to figure out how to use it.

Fonz Mendoza (11:33):
Okay, excellent.

Fonz Mendoza (11:35):
So now, your teachers, as far as being able to use it, were they using it? How were they using it? Was it mainly just for, you know, worksheet creation? What were the initial ways that they started adopting the technology?

Job Christiansen (11:56):
Yeah, I think especially that first year we kind of put out, like,

(11:59):
it was kind of like a ban. It was basically, I think, especially, no students, but if teachers want to, they can.
And then the second year it was more, okay, here's kind of like some rough guidelines. I don't know, it's kind of foggy in my mind. I would say, of the teachers that use it, I know the one that was a really

(12:21):
early adopter and now is kind of one of the main people in the building I go to if I want to have a discussion about AI. She generally, I think, uses it to help create lesson plans and things. She's not someone who uses worksheets, so that doesn't appeal to her. It's all about lesson plans, rubrics, helping to revise or come up with new projects.

(12:42):
Think of it more like using AI as a thought partner; that's how that teacher especially is using it. I'm trying to think about the other teachers, but generally those teachers are a lot more tech savvy, and so I don't actually get a lot of interaction with them, because they don't need me.

(13:02):
They don't need to ask me questions or even necessarily to fix their equipment. They can just pick up something and run with it, and so they were just given the freedom to do that on their own.

Fonz Mendoza (13:18):
Excellent. So, with the progression of AI from 2022 and its initial stages, like you said, thinking of it as a thought partner, and of course we have so many names for it too, kind of letting people know this is not going to, or should not, be your tool to just offload

(13:40):
everything, but it's just supposed to be that tool to help supplement or improve, you know, the learning process, or maybe the lesson plans and so on. So I want to ask you, now that you've seen this kind of shift and now that you're more familiar with a lot of the platforms out there, what are your initial thoughts? Now it's basically maybe about five platforms that are really,

(14:04):
you know, garnering everybody's attention and everything, as I saw at ISTE. Usually it's about those five that are there, that are kind of weeding themselves out and coming up to the forefront. What has changed as far as your perception of the use of AI in education?

Job Christiansen (14:22):
I think, like, widespread, I guess it's like you said, that educators are

(14:48):
recognizing that it can help save time. It's hard for me to kind of gauge. This is partly why I'm on LinkedIn a lot and I'm interacting with people, because I want to get that outside perspective, because otherwise I'm going to end up in my own little private school bubble, right, and I don't want to just stay there. I want to know what's happening at other places and where this field of education is shifting. It's hard to get that sense, though, on LinkedIn and even in

(15:11):
other places, because it feels like it's all over the place. I feel like there are people that are creating all their lesson plans and using it as thought partners and using it in innovative ways with students, and then there are people way over here that are just like, we're not even touching it yet, right, it's not even allowed in our classrooms or

(15:32):
schools, and so it's all over the place. So the people that are picking it up and running with it, my impression, I guess I would say, is that they are a little, I don't want to say overly optimistic, but in general, I

(15:54):
think they see the benefits of it. But where I get a little cautious is on the safety and the data privacy side. So I think that's where that shift happened, and especially, I don't even know if most people recognize this, but where I get a

(16:14):
little less enthusiastic is about the big tools that are well-known now. You keep mentioning the big five, which I probably could name at least three of, and they're just wrappers for the LLMs, and so it's just prepackaged prompts for

(16:35):
educators to use. And once I put two and two together and figured that out, I'm like, oh, this is less exciting. I thought they actually built something brand new. They call them tools, right. You go into one of the big five, right, and there's like 150 tools specifically for educators, and they call them tools. And I'm just like, the tool is just a pre-programmed prompt

(16:57):
that's talking through an API back to the LLM. So if you know what you're doing, which some of the teachers I interact with do, they don't even go to the big tools. They just go to the LLM, because they can actually get what they want faster than trying to work through a prepackaged prompt. If you don't know what you're doing, and I think that's where the

(17:18):
shift has kind of come, educators, a lot of them, don't have the time. I refuse to believe they don't have the skill. I think we are capable of learning and picking up new things, but I think a lot of them just don't have the time to go to ChatGPT and learn how to prompt the way they can get what they want. So they pick up one of the big five tools, and someone like

(17:45):
Charlotte shares with them, hey, I can get you a rubric and a lesson plan in 10 minutes. And they think to themselves, oh wow, I only had 10 minutes today and I couldn't get done what I wanted to do, I'm going to try this. And so that's where that shift is happening. But I still think, if you're using one of those big five tools instead of an LLM, or actually learning all the

(18:05):
background, I feel like you're stopping yourself short of actually using it in a creative and innovative way, and it just becomes another tool. Like, it just becomes another tool, and the big thing is that it's a substitution. I think I hinted at this in one of my recent posts. I was talking

(18:27):
about how, with AI, we're still at the level of substitution. I have not seen much that's at the level of modification and redefinition, which, what I'm talking about is the SAMR model for using ed tech in schools. And this year I have a newsletter at my school that I started, and I've

(18:47):
started actually educating the people in my district about the SAMR model, because I don't know how many teachers actually know about it, right. And so everything that I'm seeing from most of these big tools is, like you said, you can just make a worksheet. That's the same as what they had before.

(19:09):
We're just making worksheets faster. So we're just doing the same thing we did before, but we're doing it faster. But are we actually doing it better? And what I care about, my guiding light now, is how do we learn better? So if worksheets were working before, why do we actually need

(19:30):
AI? We can just keep doing what we were doing before. You know what I mean.

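A note for readers on the "wrapper" point above: in rough terms, a prepackaged educator "tool" is a fixed prompt template plus one call to a general-purpose chat API. The sketch below illustrates that idea in Python; the endpoint, model name, response shape, and template are hypothetical placeholders in an assumed OpenAI-style format, not details of any specific product mentioned in the episode.

import json
import urllib.request

API_URL = "https://api.example-llm-provider.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

# The whole "tool" is essentially this fixed template.
RUBRIC_TOOL_TEMPLATE = (
    "You are an assistant for K-12 teachers. Create a four-level rubric "
    "for the assignment below.\nGrade level: {grade}\nAssignment: {assignment}"
)

def rubric_tool(grade: str, assignment: str) -> str:
    """A 'rubric generator tool' reduces to the template plus one API call."""
    prompt = RUBRIC_TOOL_TEMPLATE.format(grade=grade, assignment=assignment)
    payload = {
        "model": "example-chat-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    # Sends the prepackaged prompt to the underlying LLM and returns its reply.
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read())
    return body["choices"][0]["message"]["content"]

Seen this way, what a wrapper adds is mostly convenience and prompt quality, which is why teachers comfortable with prompting often go straight to the underlying LLM.
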
Fonz Mendoza (19:36):
Yeah, no, and that makes perfect sense.
You know, there's a lot of things that I want to unpack there. So one of the things that we'll kind of go back to is the safety issue, because I know you mentioned it right now during this answer, and I know that you've pointed out

(19:58):
that safe isn't really a neutral term in that sense. So I want to ask you, in your

(20:00):
opinion, and I know you did post about this on LinkedIn, how should school leaders, with your experience and what you have seen, define and communicate what safe use actually means, in practical and developmental terms?

Job Christiansen (20:20):
This is a really big question.
So, partly this year, just to give you some historical context, because, like I said, I come from a history background, so I approach everything from context. Figure out the context first. We put together an AI task force, and I was on that, to

(20:44):
come up with a formal policy. So we're discussing safety, and some of the things we did were look at other policies from other schools, and what we kind of ended up on is that safe use really relates to ethical use. So the way that we're thinking about AI tools is kind of

(21:05):
the same way we've been approaching other edtech tools, and so safe use has to do with not using it to inflict harm on others. I don't want to just use "not" phrases, but especially where it

(21:29):
gets tricky with AI, it is a lot more convoluted than other or previous ed tech tools. So I guess I want to approach it from, okay, let's talk about just the teacher side. What would safe use for teachers look like?

(21:52):
Teachers are still responsible for any output, anything that they create with it. But, as anyone who's used AI knows, AI is not necessarily always going to give you verifiable or, authentic isn't the word, the word is escaping me, but

(22:15):
sometimes the output's just going to be wrong. And so safe use is actually putting on your thinking cap and actually vetting what you're getting. Where I see some really big flags, and some shocking stories from educators, is when they put information into an AI tool that

(22:38):
contains private information about students. With the tools, on the back end, we don't know where that data is going. So that is an unsafe use, right? That's like buying a billboard and just posting that student's information on there.

(22:58):
You don't know who's driving by that billboard now and taking down that student's information. So on the teacher side, that's definitely unsafe use. On the student side, it's all kind of similar things, like don't put your private information in there, but here is where

(23:19):
I would actually almost think of safe use as a misnomer. I don't know that there is a genuine, real safe use, in the sense that if you think something's safe, you can use it freely without any harm coming to you or to others. And I don't know if AI has actually, I'm hesitant to say AI

(23:40):
has actually reached that level, or especially even the ways that these companies are trying to put these protections and guardrails around the package and the wrapper that they're pulling from the big LLMs. I haven't really seen good guardrails, and so in that sense I'm hesitant to actually say it's safe.

(24:02):
If we want to actually jump into what I think some of the issues are: when I test out an AI tool, I don't just read their privacy policy. I create a teacher account, and then, especially if it has a student-facing side, I have a dummy student account, and so I'll make assignments from the teacher and then I'll send it to

(24:24):
my student account and interact with it as a student would. And so where I really don't see these tools as being safe are the things that I test, and sometimes it puts me in a dark mood. From my background working in tech support, I would actually see sometimes what students type into Google, right?

(24:45):
So if you just take how a student uses Google as how they might approach using an AI tool, they're probably going to start using it the same way as they used Google, and so students might start typing in things that are giving social-emotional cues, things like, I don't feel safe, I might be

(25:08):
depressed. I've tested this. I've literally typed into an AI tool, I didn't explicitly say I was running away, but I was typing in questions like, how do I buy a plane ticket to meet up with my friend, right? And I wanted to see if the AI tool would pick up on it. Does it actually know that the student wants to run away? Because an adult would pick up on that and they would intervene,

(25:31):
and so some of the tools did okay. But where I get hesitant, though, is it just says something like, I sense you're in distress, please talk to an adult, but then it might just move on and the student can just keep typing, and it would just move on as if nothing happened, whereas an

(25:53):
actual adult would recognize that and intervene and need to have a conversation, like, something's going on with the student. And so, in that sense, none of the AI tools that I've used and played with, and now I have a list of over 20, 25 of them, I think, have

(26:17):
sufficient guardrails to actually say these are safe for use, that you can trust your kid using it without any intervention, right? Because to me that's what safe means: they can just go on there, no one ever needs to read the logs, it's going to alert us if something happens. None of the tools properly alert. I don't know what other schools are doing, I just know what our school does. But if there are signs that a student might be in emotional

(26:39):
distress or is thinking of harm to themselves or others, for other things, like Google search engines, we have tools to help us be alerted to that, and none of the AI tools that I've played with are actually alerting us in that way. So that's why I'm kind of flabbergasted that people are

(27:01):
excited to put AI in the hands of students, when a student can just type in there that I'm depressed and no adult at the school is going to be alerted, and now we don't know what the AI tool is even going to give them advice-wise. Now, I've played out that scenario. In none of the scenarios does the AI tool suggest the student

(27:21):
continue with that train of thought. They do recommend talking to someone. Sometimes they even might provide some names or phone numbers. I'm not sure if they actually gave contact info, because that's regionally specific, right, but I just felt like they're not doing their due diligence, and especially when

(27:43):
you get into the situation that teachers and adults at schools are mandated reporters, none of the AI tools are really taking the place of that mandated reporting. They don't have the proper, I mean, there's a legal issue there. And so that's where I

(28:04):
land on safe and unsafe with students.

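A note on the alerting gap described above: the guardrail Job finds missing is, at minimum, something that flags possible distress in a student's message and notifies a named adult instead of only replying in-chat. The sketch below illustrates that minimum idea; the keyword list and notification hook are hypothetical placeholders, it is not a feature of any tool discussed here, and simple keyword matching would be a crude floor rather than a sufficient safeguard.

# Illustrative sketch: flag possible distress in a student chat message and
# loop in a responsible adult. Patterns and the notify hook are placeholders.
DISTRESS_PATTERNS = [
    "i don't feel safe",
    "i might be depressed",
    "hurt myself",
    "run away",
]

def flag_distress(message: str) -> bool:
    """Crude check: does the message contain a known distress phrase?"""
    text = message.lower()
    return any(pattern in text for pattern in DISTRESS_PATTERNS)

def notify_counselor(student_id: str, message: str) -> None:
    """Placeholder hook: a real deployment might email or page a counselor."""
    print(f"[ALERT] Review chat from student {student_id}: {message!r}")

def handle_student_message(student_id: str, message: str) -> None:
    if flag_distress(message):
        notify_counselor(student_id, message)  # an adult is alerted, not just the chatbot
    # ...the normal tutoring/chat response would proceed after this check

handle_student_message("demo-student", "I might be depressed and I want to run away.")

As Job's plane-ticket example shows, implicit cues would slip past keyword matching entirely, which is part of why he argues human adults still have to stay in the loop.
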
Fonz Mendoza (28:08):
That was a great answer. I loved it. I mean, everything that you described and covered there. You know, oftentimes, like I said, as educators sometimes we get overtaken by the excitement of getting shiny stickers or fluorescent shirts or getting invited to a party, you know, for a particular app, and we're just there and we think like, oh

(28:31):
okay, they're pretty cool people. And sometimes we tend to overlook the fact of, you know, once I use this app, is it doing what it should do? Does it have the proper guardrails there for student safety, and is there anything that is going to warn or alert a teacher if there is an issue, like you talked about? And you're

(28:53):
absolutely right in thinking about that. You know, usually I'll go look in the terms of service too, and I'm just there looking at details and, you know, looking very closely, and for the most part it always says, like, you know, no, we don't keep your information. But as you keep going and keep going, then they'll say, oh yeah, by the way, we would use your information for third parties

(29:14):
and so on. So it's almost like, we're going to give you what you like here on this first page, but we know you're not going to continue reading, and on this next page, that's where we're going to go ahead and put in that, should anything happen, you can't come up against us. And I think it said something like, if you are pursuing something, there's only a short, you know, little fee that

(29:37):
they would have to pay on their part. Or actually, no, they said they're not responsible for anything at all, and you would have to go against their third parties, you know, should there be any litigation. And I'm always thinking to myself, something happens in a school district, you can't go after the application, but now you have to go to that third party, which, like you said, for the most part, their APIs, they're connecting to either OpenAI or

(30:01):
Claude or anything else that is out there. So now, you know, a school district can't go up against somebody that big or a big entity. It's almost like, oh wow. So it kind of goes with accountability too, and I know that's something that you stress also in some of the posts that I saw, as far as that AI has no accountability

(30:22):
and no consequences for errors or hallucinations. So I want to get your thought process on that in this question, being that you have, you know, a history background. One of my biggest concerns is always applications where they say, hey, you can go ahead and talk to George Washington, you can go ahead and talk to Martin Luther King, or, you know, Amelia

(30:45):
Earhart or anybody else, and to me, like Rob Nelson said a couple of episodes back, it's like that digital necromancy. And my concern is what the history that they're getting is, because, since these applications are scraping everything, you know,

(31:16):
whose history are they getting, and is it in line with what that particular state is seeking, and so what answers are they getting? So those are my concerns there. Now, for yourself, having that history background, what are your thoughts on applications that are student-facing, where they can go ahead and talk to, you know, a historical figure?

Job Christiansen (31:49):
Yeah, in that regard, initially, even though I'm a little bullish on AI for students, I did think that if you're going to use AI with students, doing some sort of interactive chat like this would actually be really beneficial. So, with my experience with some of the ones I tested out, the best ones are the ones

(32:10):
that aren't just relying on how the model was trained, basically. So if you're going to do that as a teacher, you should be having some sort of, not a script, but I think you can attach files or be very clear in how you want that historical figure to respond, so you're not just trusting AI to come up

(32:31):
with, oh, the historical figure acted in this way and had this background and this history. If I was going to do that, I would say, okay, I want to create a chat activity about George Washington, like your example. As the teacher, then, I should go in and provide the AI with: these are the characteristics of George Washington.

(32:52):
These are some of the famous historical events I want you to touch on and reinforce for the student's learning. And then you can tie that learning back to the standards. You tell it, okay, we want to make sure the student understands this; these are the main points I want to get across. That's how the educator can stay in control of that learning process. It sounds fun and flashy to just

(33:14):
whip it up; you can literally create an account, and from not having an account to making this activity for students, you can do that in less than a minute. You can make an account, create that activity of just a plain George Washington, and share that with your students without any extra information. But is that actually what's going to help the learning

(33:34):
process and outcomes best for students? Again, that's what I keep coming back to. No, you need to provide more as the teacher, and sometimes that might be, hey, I pulled these historical web links and I put them in as links, so now that AI chat tool is going to be pulling from there. So it's almost like the student is interacting with,

(33:55):
like, what if you could type and ask this historical article about George Washington? That's a little bit more appropriate, I would think. That's where I am on it. For the exercises, the tests that I did, my go-to, because my background is in ancient

(34:18):
history and especially archaeology, I did some archaeological work for a couple of years, and so what I've done is I've made some activities where a student is interacting with a generic archaeologist and I want to reinforce these points. So I think you can make a much

(34:38):
stronger case for just a generic person. There's a lot of weird ethical scenarios surrounding, hey, you're going to be interacting with this hypothetical historical figure, George Washington. But you're right, how does AI actually really know what George Washington was like, when historians have been studying

(34:59):
him for 200 years and we're still discussing stuff like that? It's a big gray area. And then that ties to the whole safety thing of, if a student's interaction with this George Washington chat activity, if

(35:23):
that's their only experience and that's how they're learning about George Washington, now how is that going to inform and shape their view of history, when it wasn't shaped by the human, the educator, in the process, or by actual historians?

(35:47):
Because I don't know how much AI tools are scraping. My understanding is there's a new model that's kind of put together every six months, and so they feed a bunch of training data in there, but what they're feeding it isn't everything. And I know some of these LLMs are going out to the internet

(36:09):
and pulling live information, but they never pull everything. They're pulling a few things. And so that's where it's really important, as the educator, when you're interacting with something with the students, to make sure that you're keeping your critical thinking in the loop, and especially historical context.

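To make the grounding idea above concrete: one way a teacher can stay in control is to build the activity's system prompt from teacher-chosen characteristics, vetted excerpts, and learning targets, rather than leaving the portrayal to the model's training data alone. The sketch below illustrates that approach; the class name, fields, and sample content are hypothetical, not a description of any specific platform discussed in the episode.

# Illustrative sketch: assemble a teacher-grounded system prompt for a
# "talk to a historical figure" chat activity. All content is placeholder.
from dataclasses import dataclass, field

@dataclass
class GroundedFigureChat:
    figure: str
    teacher_notes: str                                   # characteristics to emphasize
    source_excerpts: list = field(default_factory=list)  # vetted primary/secondary sources
    learning_targets: list = field(default_factory=list) # standards or main points

    def system_prompt(self) -> str:
        sources = "\n".join(f"- {s}" for s in self.source_excerpts)
        targets = "\n".join(f"- {t}" for t in self.learning_targets)
        return (
            f"You are role-playing {self.figure} for a classroom activity.\n"
            f"Stay consistent with the teacher's notes: {self.teacher_notes}\n"
            f"Base factual claims ONLY on these excerpts:\n{sources}\n"
            f"Reinforce these learning targets:\n{targets}\n"
            "If asked something the excerpts do not cover, say the provided "
            "sources do not answer that, rather than inventing details."
        )

# Example setup a teacher might create (placeholder content):
activity = GroundedFigureChat(
    figure="George Washington",
    teacher_notes="Reserved tone; emphasize leadership during the Revolution.",
    source_excerpts=["Excerpt from a vetted biography...", "Passage from the Farewell Address..."],
    learning_targets=["Explain why Washington declined to seek a third term."],
)
print(activity.system_prompt())  # this string would be sent as the system message
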
Fonz Mendoza (36:32):
Yes, I love it. And this is actually, Job, kind of a nice segue to this next question, talking about a post that you put up like two weeks ago about AI pollution. You know, of course, we're talking about the information and the way that LLMs work, and so that was something that was very interesting that kind of came up, and how you mentioned

(36:55):
that there's a higher value on knowledge and information prior to the AI detonation of 2022, as you put it. But I mean, it makes sense. So tell us a little bit more about your thoughts on that, as far as AI pollution possibly diluting untainted

(37:15):
human knowledge.

Job Christiansen (37:20):
Yeah, so I don't remember the original source, but that terminology kind of came in, I saw it in an article. And so, just to break it down, I mean, you did a pretty good summary, but basically they did research and they found that almost everything that's been

(37:41):
created since that release of the LLMs back in 2022 now has bits and pieces of AI-generated or AI-influenced information. So now they're calling that AI pollution. And so now, if everything that's been made since 2022 has that polluted or tainted material in there, when they're

(38:02):
training new models, they're not just taking everything prior to 2022 and feeding it in, because that's what they used to train the model in 2022. You want to keep updating it, so you're going to feed it new information, but the new information is tainted, so the new models, progressively, theoretically, are getting more and more tainted as this goes on. Unless, as I was

(38:33):
pointing out, now, theoretically, there would be a higher value on what hasn't used AI to create it. I'm trying to think how much more to unpack it, but my thoughts on that are that this actually puts a premium on original thought and on those who can think and communicate

(39:01):
and create without using AI tools. So I wonder if this is going to create, like, a new, almost a hierarchy or a ruling class of, here are the people. Like, if this was back in the day, I don't remember if I posted this, but I was like, if this came out hundreds of years ago,

(39:23):
how would, the names are escaping me, anyway, how would Newton and Da Vinci, would they have used AI

(39:57):
tools or not? And is there a premium on their information? Or are there certain people and creators operating at a certain level that they can put out super high quality stuff that doesn't need that, that's not tainted at all? Or maybe they are using AI, but it's in such a way that, with their human critical thinking, what they're putting out doesn't have that taint, right? There are not, like, hallucinations or false data in there. So these people, now everything they do is at a premium, and not their information, but what they produce, their product, their thinking, now becomes the

(40:20):
product that they can now sell to the big AI models. OpenAI now will probably want to buy these philosophers', theoretically, information, their writing and their thinking, because they can use it to train their models and have a more pure AI thinking algorithm, versus the rest of us who can't

(40:42):
think at that level, and there's going to be tainted stuff in there. Now our stuff has less value. Are we going to ascribe different value and worth to thoughts? And this is getting really murky and philosophical, but I think we need to be talking about it, because it's

(41:03):
nice and fun to jump into an AI tool and create something, I do it all the time with AI images, but if we're not really thinking about the long-term consequences and what is the actual future we're creating? Like, we think we're creating a future where teachers can create

(41:24):
all their lesson plans in one hour, right, for the next several weeks, because they have this AI assistant and now they have so much more time. But you can't actually create all that with tainted information. Now, how is that subverting the education process and the learning process? And what's going to happen 20 years from now, when

(41:49):
students grew up only learning from AI-tainted and polluted information? Now they can't ever produce pure information either, because everything they have in their head is tainted already, right? So this is getting really murky, and that's where I'm kind of like, let's put the brake pedal on a little bit, not the gas pedal,

(42:10):
and think about this.

Fonz Mendoza (42:14):
Yes, I love that, and I'll let you know kind of my reflection, being at ISTE and moderating an AI panel. What it seemed like is that there was this kind of switch now, where before, I think, we kind of went at it, in other words, we were using it and implementing it in everything. But now, you know, we're in July 2025,

(42:41):
and now the conversation is like, okay, now let's kind of reel it back, make a pause, and start really focusing on that teacher aspect, where before it was just go, go, go, whatever you can figure out on your own. There were no rules; maybe many districts didn't have anything

(43:01):
in place. But now those conversations are slowly coming back, and it's, okay, now we need to be very cautious, have that responsibility as far as how we're going to talk to people in our district, and not just district members, but we're talking about parents too, because some of those questions came up, like,

(43:22):
well, what if there is a parent that chooses to opt out of using one of those platforms? Or have you, as a district, informed parents of the platforms that are being used in the classroom? Because sometimes we know that individual teachers go to a conference, they may come back and they may start using a tool that may not be something that is allowed in the district.

(43:44):
And now, have you told your parents that this is being used, and how that information is being used, and how you're using it to input some of the student information? So those are some of the conversations now that are kind of coming to fruition, and things are slowing down a little bit. But, as far as seeing those big five,

(44:07):
for the most part, you're also starting to see a lot of smaller apps that are coming out and trying to really get into this space. And I know that there's a lot of money backing this space, a lot of investment going into a lot of these applications, and so they're moving forward, they're doing their thing.

(44:27):
And you know, my thing was, how long will this last? And I think it's something that is just going to continue to grow year after year, because everybody's really pushing it, and it's already in most of our classrooms and it's being used. But I want to ask you, just to kind of wrap up,

(44:49):
for our listeners that are out there, that this is the first time they get to hear your thoughts and your experience and so on: from this conversation, what is one thing that you would hope they would carry forward, one of those practices that maybe they either use themselves or something that they may share with their district?

(45:09):
So what is one key takeaway that you would just love to share with our listeners?

Job Christiansen (45:20):
Hmm, I'd say that the biggest thing is: ask questions. I actually didn't anticipate, like, a year ago that I'd be talking about AI like this. I pretty much jumped in less than a year ago and just tried to learn as much as I could about AI, because I was asking

(45:41):
those questions. And so that's one of the biggest things: whether you're a parent or an educator, if you haven't heard, like, a policy from your school, I would start asking those questions. And I think, at some level, the types of questions you should

(46:05):
be asking are, like you said, what tools are being used? What is the policy, or what's the perspective towards AI use? What's the vision and long-term plan? Because sometimes it might be, well, we're just trying to fill this gap for three months, but without the foresight of, are we

(46:27):
still going to be doing the same thing three years from now? And we don't know what's going to happen three years from now. But if you're just putting a band-aid on, like, I'll be honest, mid-year I had some teachers asking me, they weren't asking about AI specifically, but I had some teachers asking, how do I help change reading levels?

(46:47):
They wanted to differentiate reading texts for their students, and so I saw this as an opportunity. It took some time, and I went in and I showed them a couple of AI tools where they could actually put in their text and then change the reading levels so that they could help students. And I didn't have this long conversation about what AI is. I just knew, right now, we're mid-year, they just need this

(47:11):
little thing. And so I don't even think they necessarily use it that much. All they knew how to do was, I can change reading levels. Right, that's a really quick, small thing, and that isn't necessarily something that I think needs to be communicated to the parents, because that's actually something where there's not really that much of a risk of

(47:33):
hallucination. Right, the information's already there. It's just changing to the appropriate reading level. Where parents especially need to be asking questions, though, is when it gets into the gray areas of, are teachers using AI to help give feedback and grades? And that's where I've seen some lawsuits around the country,

(47:55):
where teachers have been doing that and providing comments without being transparent. And so that goes in tandem with asking questions. On the other side, if you're actually a user of AI, the big thing, and this is what we're reinforcing in our policy that we're going to be rolling out, is transparency.

(48:16):
So when I use AI, I mean, you never really see this on my posts on LinkedIn because I don't use AI to write my posts, but if I used AI to help write text, I'd put a little disclaimer: I used AI, I used this model, you say which one it was. And then that's part of what I would consider good

(48:39):
standard operating practice now with AI: be transparent that you used it and what you used it for. Otherwise, you're going to start to mislead people about who you are and what your thoughts actually are.

Fonz Mendoza (48:53):
I love it. Well, thank you so much, Job. I really appreciate it. This is such a great conversation, and I really want to say thank you so much for just meeting with me here, having this talk, and sharing your perspective. I definitely took a lot of value, a lot of valuable gems, I should say, that I definitely want to dive into, and you definitely had a lot of great soundbites that I can't wait to

(49:14):
share. But thank you so much. Now, before we wrap up with our last three questions that I always ask all my guests, I would love to give you a little bit of time. Can you just tell us, for our audience members that are listening, and especially if they are on LinkedIn as well, or if they're on different social media platforms, can you please let our audience members know how it is that

(49:37):
they might be able to connect with you?

Job Christiansen (49:41):
Yeah, so I'm most active on LinkedIn. You can look me up with just my name, Job Christiansen. I don't think there's anyone else out there with that name. I use other social media, but that's all personal. Also, if you like longer-form stuff, I do have a personal

(50:04):
blog website that's called Seek Grow Align, and some of it is personal blog stuff, but that's also where I put book reviews. So if I read a book, and mostly I'll do this with education-focused books, but sometimes non-education, I'll read a book and then I'll write my reflection,

(50:27):
a book review on it, and it'll just be pages and pages, stuff that the character count doesn't fit on LinkedIn, so I post it there, and that actually helps with my learning process. So if you want to know deeper thoughts, it's all on that separate site.

Fonz Mendoza (50:42):
Excellent.
And what was the site?
One more time.

Job Christiansen (50:45):
It's SeekGrowAlign.

Fonz Mendoza (50:47):
Okay.

Job Christiansen (50:48):
I can send it to you.

Fonz Mendoza (50:50):
Yeah, is it SeekGrowAlign.com?

Job Christiansen (50:53):
Yes, SeekGrowAlign.com.

Fonz Mendoza (50:55):
Perfect, excellent. We'll definitely make sure that we link that in the show notes as well. All right, but before we wrap up, again, last three questions. So, Job, I hope you're ready to answer, and here we go. Question number one: as we know, every superhero has a weakness or a pain point. For Superman, we know that kryptonite kind of weakened him.

(51:17):
So I want to ask you, Job, in the current state of AI in education, I would love to know what your edu-kryptonite is.

Job Christiansen (51:31):
My edu-kryptonite, like, what I spend a lot of time talking about: it has to be the whole safety issue, particularly the fact that we can't hold AI accountable for its output. So I would say that lack of accountability is just that thorn that's constantly jabbing in my side.

(51:51):
I'm like, yeah, this is cool, I use a new tool, and then I'm like, wait, oh, we can't have accountability yet. And so it's just constantly there and I don't know how to resolve that. There's just tension when I'm playing with tools.

Fonz Mendoza (52:08):
Excellent, all right. Great share, great answer, I love it. All right, question number two, Job, is: if you could have a billboard with anything on it, what would it be, and why?

Job Christiansen (52:25):
I would say, for a billboard, probably something along the lines of "Let's get better," or "Keep learning." That "let's get better" phrase came to my mind a few weeks ago, and I don't remember what sparked it. But did you ever see the old 90s show Frasier? Yeah, yeah.

(52:50):
So in one episode, Frasier's brother, Niles, comes on and he actually ends up running Frasier's show. Frasier gets sick, so Niles does the radio psychiatry thing, and Niles comes up with this catchphrase, and it's "let's get better."

(53:11):
And so in my own world, "let's get better" relates to, let's just keep learning and growing. I have a growth mindset. I'm a lifelong learner. My mantra now is just, let's get better. You're never really going to be done learning and you're never going to reach this perfect stage. And so I just have that in my head now and I keep thinking about it.

Fonz Mendoza (53:31):
Well, hey, that works. That's definitely a great message to share, because it can fit into so many categories in life. Like you mentioned, you never stop learning. And for anybody that's going to get better at anything, it's just continuing to pursue that knowledge and practice, and it's just repetition and things of that

(53:53):
sort to get to that point. So I really like that. "Let's get better." It's so simple yet so powerful. You got me really thinking on that, and I'm like, yes, let's get better. All right, and my last question for you, Job, would be: if there is one person that you could trade places with for a single day, who would that be

(54:16):
and why?

Job Christiansen (54:22):
Man, I know you gave me time to think about this, but do they have to be living people?

Fonz Mendoza (54:30):
No, no, it could be anybody, anybody.

Job Christiansen (54:35):
Um, I'm sorry.

(54:58):
Sometimes it takes me a while to think of things.

Fonz Mendoza (55:02):
Oh, don't worry about it, Job, it's all good. Don't worry, we can edit that part. But just anybody.

Job Christiansen (55:17):
It can be anybody. I'm trying to think about the things that really drive me and that I want to know about, so, like, the mysteries that keep me up

(55:41):
at night. Okay, so this is a guy that probably very few people have heard of. His name was Howard Butler, and this goes back to my archaeology days. Howard Butler worked at Princeton, and so Princeton sent this expedition in, I want to say, 1905. It started in Jerusalem, and they went up through what is now

(56:03):
Israel, Palestine, Jordan and Syria. They were cataloging and mapping a lot of ancient historical sites as they went, and so these are some of the earliest records we have of Western documentation of these ancient sites. So for the site that I worked at in Jordan, this is like the first stuff we have, and so we reference all

(56:23):
this stuff. But the thing is, Howard Butler did some really good documentation, but I know there's stuff that's missing, and I wish I could have been him for the day he first came to this archaeological site that I worked at. It's called Umm el-Jimal, in Jordan, and I wish I could have been him for that day, to see it in the state it was in 120

(56:47):
years ago. I know what it's looked like the last 10 years, but a lot of it has collapsed, fallen down. There's a lot of stuff that's happened in between, and even though he has really good documentation, the photographs are not good and there's stuff missing from the documentation, and I just wish I could see and experience that wonder of what

(57:08):
it was like in that state. Right, I just love stuff preserved in time, and so it just would have given me a lot of perspective on the people and what happened there that is basically lost, right, where I'm never going to, we're never

(57:30):
going to know certain things that Howard Butler saw.

Fonz Mendoza (57:35):
Excellent, that is a great answer. I love that. I think you're the very first guest that goes back and chooses a historical figure in that sense, and that's very interesting. Like I said, I never thought about that, you know, a moment caught in time, and being able to go back to the way it was when it was first discovered,

(57:57):
because now we only get to see what we know now, and depending on when you get to make that trip, like you said, you went over there and you saw it in a very different state. So, yeah, that's very interesting. I had never thought about that. But it's another thing that causes me to kind of pause and think about those things, really capture

(58:17):
those moments. And so, yeah, love it. Well, Job, thank you so much. I really appreciate it. Again, thank you from the bottom of my heart for being a guest and, you know, sharing your experience and, of course, sharing your thoughts on AI and education. Like I said, I know a lot of audience members are definitely going to take some gems that they can sprinkle onto what they're already doing.

(58:37):
Great, so thank you for that. And for our audience members, please make sure you visit our website at myedtech.life, where you can check out this amazing episode and the other 327 episodes, where, I promise you, you will find a little bit of something to help you continue to grow and continue to learn. As always, thank you so much to all our sponsors, thank you so much to

(59:00):
Book Creator, thank you so much Eduaide.AI, thank you so much Yellowdig. And if you are interested in being a sponsor of our show, please don't hesitate to reach out to me. We would love to collaborate and work together with you. But, as always, guys, from the bottom of my heart, until next time, don't forget: stay techie. Thank you.