Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:00):
Let's get weird.
Speaker 1 (00:01):
That's my number one; that's going to be my opener.
Speaker 2 (00:03):
When we do the opener, let's get weird.
I love it.
Speaker 1 (00:09):
Welcome back to ChangED, the national... What am I going to...?
Speaker 3 (00:14):
Say it's a podcast? This is a podcast.
Speaker 1 (00:16):
Yeah, no, no, no, I want to say the national lighthouse for educational podcasts. Wow, wow, even our guest... I'm no longer your host, Andrew Kuhn.
Speaker 3 (00:36):
Thank goodness it only took Brian Housand coming on for us to lose Andrew Kuhn, education consultant for the Montgomery County Intermediate Unit. And here with me is Patrice Semicek, still an educational consultant from the Montgomery County Intermediate Unit. And our guest is someone who is going to talk to us a little bit.
Speaker 1 (00:57):
Dive into not only AI, but also...
Speaker 3 (01:01):
We went to the best
session.
It was so good.
Speaker 1 (01:04):
It was so good.
It was really good.
We were hearing about your perspective on AI, and you had a lot of great information, but also examples. I really appreciated that you could model its use. So you weren't necessarily saying, here's the tool to use; it was more like, here's how you can use a tool. I'm wondering if you could share with us a little bit about your thought process, how you got to that spot, and why you chose modeling in your session when you were talking about AI.
Speaker 2 (01:27):
Sure. Well, thank you for having me, and I'm so glad that you had a good, comfortable experience. You know, it's interesting kind of having these conversations about AI, or really technology in general. When I first started out doing this type of work as an educational consultant, I started talking a lot about technology in a variety of different ways. It was, you know, the early 2000s, and everybody was trying to figure out what they could do with Google, and so there were so many people within that space that were really kind of talking about the tools without actually showing how the tools work.
There's also, I think, a lot of people that fall in love with one particular tool rather than really focusing on, like, the thinking that goes into that tool. The tools are going to change, like, you know, almost every day at this point, but I think that as long as we have an understanding of how the thinking works with the tool, then we're going to be able to carry that a lot further along the way. So exactly the same type of thing happened when people first started picking up ChatGPT back, you know, in the distant year of 2023, or '22, like early '23.
Speaker 3 (02:36):
It feels forever ago. Like 2022.
Speaker 1 (02:38):
Way back. It feels like it's been a long time.
Speaker 2 (02:41):
It's not been very long at all, and people wanted to talk about, like, the theory behind it as opposed to what can we do with it. And I think that the quicker that we can just sort of jump into that deep end, try some things out, give educators some ideas of here's how it can be useful for me, then there's going to be much more buy-in, at least from my perspective.
Speaker 3 (03:03):
I really appreciated how, in your session, you started with the why. Like, why is this even important? Why is this even something we should be spending our time on? And then you listed, like, three or four different ways we should be thinking, because you used AI to help answer this one question, which was fabulous. And then you got into, okay, well, now let me show you, which was the other thing that I think Andrew mentioned too: showing them that it's not this big, scary tool that could possibly take over their jobs or remove something from them. Instead, it's an addition to what they're doing. And then you, like, demonstrated, okay, well, how can I use this in the classroom? I think that's a really powerful way of getting people to understand, to get over the barrier of AI being this new tool that could potentially kind of really shake things up.
The other thing, and Andrew and I have a lot of really interesting conversations about this, is that kids are using it anyway, so we need to help them use their power for good instead of using it for whatever the nefarious reasons they're using it for outside of school, or even within school. Right, like, so many nefarious reasons. Especially with gifted kids, if we're going to be honest: if you only used your power for good, it'd be amazing. So I think the way that you presented it was fabulous. So, because you've been presenting for a while now, right, like, I've seen you at a few conferences, have you, like, read anything or figured out... how did you figure out how to present things in that way?
Speaker 2 (04:18):
You know, I always look for what is the good story. You know, I think that everything that we do in life really relates to, you know, what is the narrative on that? And, you know, when structuring a lesson, when constructing a presentation, a workshop, whatever it is, I always look for what's the story thread, and you really kind of think of it from that story structure. My first degree is in English, so I have an understanding of, you know, how a play works.
Speaker 1 (04:48):
So here's how you
develop a novel.
Speaker 2 (04:50):
So my first job... you know, because I was an English major, and I graduated, and that qualified me to do so many things. So my first job post-college was as an assistant manager of an independent video store in the Atlanta metro area. That was, like, the mid-'90s. So we were, like, a Blockbuster competitor.
Speaker 3 (05:12):
Yes, yeah.
Speaker 2 (05:13):
All of the stories
that are coming to your mind,
all true.
Speaker 1 (05:17):
Be kind, rewind. Yeah.
Speaker 2 (05:19):
Every single one of them. And so, yeah, I mean, I just watched a whole lot of film. You know, that was pre-streaming, and you just really had the opportunity to make that movie story, like, your library, and to really kind of think about: what are people going to remember? What are going to be those kind of sticking points, and how do you create enough of that compelling story that they want to be a part of it? So, yeah, I think that's just what goes into any good lesson. Anything that happens within the classroom needs to have a good story to go along with it.
Speaker 3 (05:51):
So how did you use AI? Or did you use AI to help you craft the story?
Speaker 2 (05:54):
Yeah. So in thinking through that, like, within this presentation, which was totally the first draft of that particular framework, I was thinking of it through using Kaplan's thinking like a disciplinarian framework, and I wanted to provide a few different perspectives. And the three that I came up with were that we were thinking like a philosopher, we were questioning like a scientist, and we were creating like an artist. And so by really latching or attaching to those three points, it provided a lot of leeway to really build things out for each of those points. Then I asked AI, for example: how does one begin to start thinking like a philosopher? What are, you know, five things that I could do in order to think like a philosopher? And, you know, give me some really good ideas from there. That was kind of the general outline. And then, using those points, I thought, oh well, how can we build that out? How can I give more specific examples of what that looks like? Using AI really as that thought partner to help take what it was that I was already thinking and annotate it in a way maybe that I wasn't thinking about.
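For listeners who want to try Brian's approach themselves, the pattern he describes (one opening question per disciplinary lens, then a follow-up that treats AI as a thought partner on your own draft idea) can be sketched in a few lines. To be clear, this is a hypothetical illustration, not code from the episode: the three lenses come from the session, but the helper functions and the prompt wording are our own paraphrase.

```python
# Sketch of the "thinking like a disciplinarian" prompting pattern
# described in the episode. The three lenses are from Brian's session;
# the prompt wording and helper functions are hypothetical illustrations.

LENSES = {
    "philosopher": "thinking like a philosopher",
    "scientist": "questioning like a scientist",
    "artist": "creating like an artist",
}

def lens_prompt(lens: str, n_ideas: int = 5) -> str:
    """Build the opening question for one disciplinary lens."""
    activity = LENSES[lens]
    return (
        f"How does one begin {activity}? "
        f"Give me {n_ideas} concrete things a student could do, "
        "with a short classroom example for each."
    )

def follow_up(draft_idea: str) -> str:
    """Ask the model to act as a thought partner on an idea you already have."""
    return (
        "Here is an idea I am already considering:\n"
        f"{draft_idea}\n"
        "Build on it: what am I not thinking about yet?"
    )

# Each returned string would be pasted into (or sent to) whatever chat model you use.
print(lens_prompt("philosopher"))
print(follow_up("Students debate whether an AI can be creative."))
```

The point of the second helper matches Brian's framing: the model annotates thinking you already have, rather than replacing it.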
Speaker 3 (07:03):
I love that. I love that. Andrew and I talk a lot about AI being a co-pilot, not a pilot or a backseat driver, but, like, being a co-pilot. And so being able to use any tool to enhance is a fabulous idea. Thank you for explaining it like that.
Speaker 2 (07:16):
Yeah, you're welcome. I mean, I think, you know, the real worry of most teachers is that their students are going to use it, use AI, as sort of that oh, I'm going to Google the answer. Like, oh, I'm going to take my existing prompt or assignment, they're going to put it in there, and then they're going to copy, paste, and submit, you know, the answer. That's, for me, a real problem. The problem for me around that comes with this overemphasis on the final product versus the process that we go through. If we only check on our students when they submit that final product, and we aren't with them on that journey of going through the messiness and awkwardness of the creative process, then, yeah, there's the real potential that they could just, you know, ask AI and submit that.
As I've been talking more and more about AI, the thing that I've really tried to emphasize to educators at all levels is the importance of taking pretty much every one of your existing project assignments and just submitting it to all the AI tools that you can get access to, just to see what it is that they come up with, so that you know, like, hey, here's what is going to happen; here's what a product might look like if it were just submitted as is. I'm really kind of fond of the idea of using that for your students as a starting point. Go ahead and say, like, hey, I went ahead and submitted the assignment to AI for you. This is what it said. Tell me why this project pretty much sucks at this point, right? How can we use that as the starting point, and how can you improve on it to make it better? How can you personalize it so that it's representative of the things that you're interested in?
Speaker 3 (08:57):
I think too.
Speaker 2 (08:58):
Or where is it that you want to go next?
Speaker 3 (08:59):
Yeah. To your point, I think if we don't talk about AI in our current classrooms, or we sweep it under the rug, that's when they're more likely to submit something and say, here, it's done. And then you see all those words that you showed us that are showing up. So we, and this is TMI, I guess, we wrote a proposal, and we stuck it into Gemini to say, make it better, like, make it fluffier, and there were at least three of the words that you shared in there, and we were like, nope, can't have that word in there. That's very obvious AI. Like, it was actually really... it was really good to test it out, and it, like, kind of validated exactly what you said in that study. But I think if teachers ignore it, they're going to get those cookie-cutter, I-put-it-into-AI-and-figured-it-out kind of situations. But if they do exactly what you said, or if they say, like, okay, we know this is what's going to happen, I need you to write something, put it in, make it better, and then you revise it, use it as a revision tool, I think we're going to see a lot of really cool things. And I like how using what you suggested allows kids to figure out how to prompt AI better, too, to give them the answers that they're really looking for.
Speaker 2 (10:00):
Yeah. I mean, you know, for me, AI has that potential to be that thought partner that you can bounce those ideas back and forth with. And many of our gifted kids do not necessarily like to collaborate with others, because they've had some really bad experiences with that, but I think with AI they have this potential to, you know, have some really interesting conversations. Granted, they should also be having human-to-human conversations and not just human-to-AI conversations, but I think it affords them some new opportunities to think about things in ways maybe that they haven't been willing to do before.
Speaker 1 (10:38):
Brian, one thing that you were talking about that I really appreciated, and wanted to just take a minute to talk about, was that, you know, we all rely on previous experiences to help us understand something that's new. What can we attach it to? What can we look at? But I think one of the shortcomings of us doing that is that we're Google-izing AI, and we're saying, oh, let me go there and get the answer and come back. You know, what does it look like to authenticate that answer, right? How do we consider all of these things that are part of it? And what I love about what you're talking about is how this is now a dialogue. And one of the things that I saw that was so powerful, just to see it come to life in your session, was the speed with which you can get that information back. And now it's all encoded, so the information comes to you so fast you can make these decisions. But the part that humanizes it is based off our previous experiences. We could have got there, right, but it would have taken so much longer. And you're like, oh yes, and that makes me think of the next thing, so it actually pushes us to the next level. We can stay at a different level of thinking and processing versus, you know, kind of ebbing and flowing. We can get to that spot, like you said, with this thought partner.
When I interviewed for the job that I'm at, I actually said I don't do anything in isolation; I'm always talking to other people. And now you can develop this thought partner that actually, eventually, understands kind of your perspective, or what you are asking by what you're not asking. And, right, we're only at the cusp of where this can go and the potential of it. So, again, the thing that I took away was just how quickly it could happen. You gave us five minutes in our session to talk about different ideas we could do with something, and then you're like, well, let's just punch it in here. Boom: we had the ideas we came up with in the room, plus, you know, easily 15 more, just, you know, instantaneously. But then we could take that even further, where, instead of our entire session with you being about this one part, you beautifully and masterfully demonstrated for us that power and how we can work with AI, not just looking for an answer. We knew where we wanted to go. We didn't necessarily have it all mapped out, how we were going to get there, but we knew where we wanted to go. And I loved how you were saying, you know, the journey can be messy, and so we also kind of demonstrated that. Like, oh, okay, let's lean over and do this example, or let's look more into this. Not just fast to be fast, but instantaneous in a way that helps push us along and take us further along the journey.
Speaker 2 (12:52):
Thank you for just giving me all the feels and, you know, thank you for the positive feedback. I'm just going to put you, like, on repeat. Anytime, anytime. So, yeah, I think that until people see those possibilities, they just don't even know; like, they don't know what they don't know. And then, once you open up that door, they can start saying, oh well, if it could do that, I wonder if it could do this.
Also, you know, in kind of thinking through this portion of the conversation, I'm sort of reminded of that quote from Arthur C. Clarke, which says something to the effect of: any sufficiently advanced technology is indistinguishable from magic. And indeed, the first few hundred times that I saw ChatGPT or Gemini or any of the AI large language models produce some, you know, relevant content, it felt like magic, just because it happens so quickly. No matter what it is that you ask AI to come up with, it's going to, you know, produce something that's going to fit exactly what it is that you're asking for. That feels just really new and fresh, and, I mean, you know, we're almost two years into this and it feels incredibly exciting still.
Very good point.
Speaker 1 (14:16):
While we've touched on this idea of how we authenticate the information that's coming through, and we even talked about, you know, Google-izing AI and so forth, venturing into it in a different way, one of the things that struck me was that you talked about becoming a super-critical consumer of information. And, if I'm not mistaken (which I never am, Brian), you used the analogy of CAPES to kind of help individuals work through that. Would you mind talking about that a little bit and sharing it with us? I'd love for our listeners to have something they could at least look to, or start to grab on to, as to what does it look like to figure out if this is fake news or if this is something genuine.
Speaker 2 (14:55):
In the session, we were really kind of talking about the importance of being a critical consumer. This isn't necessarily a new conversation; AI just presents perhaps some new challenges for us in authenticating the information that we're accessing. Back in probably 2018 or so, I wrote a book called Fighting Fake News: Teaching Kids the Importance of Critical Thinking in a Digital Age. It was really built on a lot of work that I did when I was in grad school at the University of Connecticut back in the mid-2000s, during the aughts.
So in addition to working within gifted education with Joe Renzulli, Sally Reis, Del Siegle, and all the others that are part of the Neag Center, now the Renzulli Center, I also went and played with the ed tech playground folks over there.
So right around that time, Don Leu (that's L-E-U) was a professor at UConn, really focusing on new literacies, because what they were really finding out in those early 2000s is that as students were navigating search engines, they were also having to learn a set of new literacies in order to be better internet searchers. One of the studies that his team, the New Literacies Research Team, was probably most famous for was the whole Pacific Northwest tree octopus website: students went and found the tree octopus website, but then they thought that the information that they found there was a hundred percent accurate. That was back in 2005, 2006. It's now 2024, almost 20 years later, and we're having exactly the same conversations. Just because AI can very quickly produce this information doesn't necessarily mean that it is correct.
So to really kind of battle that, in that Fighting Fake News book I created a framework that I called CAPES, for what we want to do when we're looking at that information (and this works mostly with more kind of news-media-outlet type of information versus what AI is producing). CAPES is an acronym, so each letter stands for something, a different kind of way of thinking.
So the C is for credentials. Are we going to be able to find out who is saying this information? What makes them an expert on that particular topic? The A stands for accuracy. Can we look and determine how true or not true that is? Can we compare that with another source? Can we triangulate that data? P stands for the purpose. Is the purpose really meant to persuade? Is it meant to inform? Is it meant to entertain, or is it trying to sell us something? So kind of using that PIES framework: persuade, inform, entertain, or sell. That purpose is really important when, again, we're looking for that information, particularly coming from online news organizations. The E is emotion.
As I looked at other critical thinking frameworks and really thought about information that we find online, the emotion portion was really left out of them. How does that information, factoid, website, story, YouTube clip, whatever it is, how does it make us feel? Does it make us feel angry? Why are we angry about that? Does it make us feel sad? How are they playing with our emotions, and are we checking those emotions before we're, you know, clicking, like, retweet, or re-X, or whatever it is that we say now? The last piece is support, which really kind of goes back to that triangulation of data. Who else is saying this, and are we verifying that this information is really correct as a part of that?
(18:38):
honestly I think that weprobably need a whole is that
there is this real concern thatAI is going to hallucinate or
just make up information.
Probably one of the first timesthat I talked about AI, I was
doing a workshop in Texas andsomebody had posed that question
of like can we trust it?
(18:58):
And, almost without thinkinglike, my gut reaction was like,
do you trust me?
Like, do you trust me?
And she's like well, yeah, I'mlike why.
I could totally be making allof this up right now.
I could be speculating left andright about what is the right
or wrong answer.
She's like but I trust you.
You're like a speaker, you havea microphone, you're standing
(19:20):
in front of a group of people,it's like, but you should still
question what it is that I'msaying yeah, yeah, yeah.
You don't just need to accept it.
Yeah, like you got to factcheck it.
And that fact checking, I think, can be a lot of work.
People hallucinate a lot,especially at the rate in which
it's part of the human nature.
We make things up when we don'tknow.
Speaker 3 (19:41):
Yeah, especially at the rate at which we're inundated with information. Like, there's no escaping information of all kinds around us. Yeah, yeah, we just have to be careful with that.
Speaker 2 (19:50):
Yeah. Not that we need to, you know, question absolutely everything, but we just need to critically consume and think about how far we are willing to trust it. And it's going to vary; you know, in every situation it's going to be a little bit different.
Speaker 1 (20:06):
We're so much more quick to trust something that we can see versus something that we can't see, but you can't always judge a book by its cover, as you were saying. I don't know that Brian Housand is necessarily saying all things that are truthful right now.
Speaker 2 (20:28):
I'm going to have to backtrack this podcast. That's right. I could be completely lying to you right now. But it's fascinating, because I think we just have to be careful, and you have to believe in someone, something; otherwise you would never get anything accomplished. And I think that it's really our prior experiences, our knowledge base and, really, I think, carefully examining what our biases are and how our biases are influencing the way that we're thinking and who it is that we believe or don't believe. It's a complicated minefield that we have to navigate, and, you know, to teach our learners, our kids, how to navigate that minefield is a Herculean task, to say the very least.
Speaker 3 (21:13):
Yes, yes, I would
definitely agree.
Speaker 1 (21:15):
Brian, one of our longstanding traditions on this world-renowned podcast. Very popular in North Carolina, by the way.
Speaker 3 (21:22):
It will be now.
Speaker 1 (21:23):
Yes. It's that we like to give our guests the second-to-last final thought. Do you have things that you'd like to share that maybe you didn't get a chance to say, or that you'd like to circle back to and really emphasize?
Speaker 3 (21:33):
Andrew likes to give
the second to last because he
needs to have the last word.
Speaker 1 (21:37):
This is going to be challenging this time, I'm not going to lie.
Speaker 2 (21:42):
I would say, for my second-to-last thought as it relates to AI: be curious. Try things out with AI. Don't be so quick to judge when you haven't asked it questions yourself. You have to be willing to jump in that deep end to see what it's capable of. If we automatically discount it, or try to shut it down or ban it, before we even have an understanding of what it can and can't do, then we've already lost. This technology is not going to go away. It's going to continue to get stronger and more advanced, and it has a tremendous amount of potential. I think that it can be a really beneficial tool for us. It's not going to solve all our problems. It's also not going to end the world as we know it. Yeah, at least not today.
Speaker 1 (22:29):
Tune back into the podcast to learn more about that. Well, obviously you've seen my notes, because you said everything I was going to say, Brian, so now I've got to come up with new material on the fly. We really appreciate you coming on the show and sharing your time, your wealth of knowledge, and even your experience. There were two things that really... well, one thing that really... there were a lot of things that really stuck out to me. Two, then one, then wow, now a lot. Wow. One of the things that really stuck out to me, when you were talking about your own presentation and how you prepare it, was you said: what are people going to remember? I truly believe this is one of those moments in history where people will look back and they will remember how we handled it, what we were doing, our apprehension about it, but also the great opportunities that were in front of us. And they'll even look back and say, well, we had no idea this was coming; this opened up a whole new can of worms, or made a whole new world of possibilities. And what I appreciate about that is, you know, people will also remember our own individual approaches to it, and as educators we have this additional responsibility: you know, we can elicit fear, or we can elicit excitement or caution. You know, there's a lot of things that we do, not just by what we say but by what we actually do. So I actually felt very encouraged by that, that people are going to remember our actions. They're going to remember how we talk about things and what we do with it. And for all of our listeners, the one thing that I really liked in your conversation about CAPES is that CAPES is at least a spot where we can start and enter into this conversation and start to empower ourselves and our students. The thing that I want to remind all of our listeners is to follow the airlines and first put on your own CAPES before you help students put on their CAPES. Wow.
Speaker 2 (24:13):
Yeah, I've been
working on that for 20 minutes.
Speaker 3 (24:17):
That was really well
played.
Wow, that was a lot.