
July 16, 2025 • 32 mins


Stephanie and Zac talk with Dr. Maggie Beiting-Parrish about the possible intended and unintended consequences of John Hattie's work and what we should be asking when we see research in education and beyond.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Stephanie (00:00):
Hey friends, welcome back to Academic Distinctions.
This is part two of our episode on John Hattie's work.
In the first part, we talked about why our brains love lists and order and what makes things sticky.
And we introduced Hattie's meta-meta-analysis and its impact.
This time, we're digging into why some say the math might not really math.
And we'll talk to Dr. Maggie Beiting-Parrish, whose research specializes in

(00:23):
quantitative methods, to hopefully learn a little bit more about whether or not any of this means, well...

Zac (00:48):
And we're back.
So eventually, all of this love of Hattie kind of broke down.
And stats and researchers...
Nope.
So eventually...
This love affair with Hattie.
Woke up the next morning, looked across the pillow and said, something's different.

(01:09):
More specifically and less rom-commy, statisticians and researchers looked at the math and noticed there were some problems.

Stephanie (01:19):
Yeah.
So here's the thing.
Most teachers, most people have little to no experience or exposure to two things.
Number one, research methodology.
Research follows the scientific method that we all learn in school, right, Zac?

Zac (01:36):
You have a hypothesis, you design an experiment, you conduct the experiment, and you compare what happened to what you thought would happen.
And then you run it again with a different group.
Lather, rinse, repeat, and repeat, and repeat, and repeat, and repeat, and repeat.

Stephanie (01:53):
And the other is statistics and probability.

Zac (01:55):
Studying the likelihood that stuff happens, or happens by chance, and then using those numbers to make predictions about what might happen if you do this, that, or the other.

Stephanie (02:07):
So to get us through the rest of this conversation, we're going to provide listeners with some terms someone needs to know in an age of research and effect size.
So the first term is sample size.
How many things were selected to look at?
In Hattie's meta-analyses, this could be either the number of studies he looked at or the number of students the studies looked at.

Zac (02:29):
The next one is the term statistically significant.
And this is the likelihood that something is happening because of something you did or an intervention, or if it's just happening by total chance.
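
For anyone who wants to see what "happening by total chance" looks like with actual numbers, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration, and the t-test is just one common stand-in for whatever test a given study actually used:

```python
# A hypothetical sketch of "statistically significant": is the gap
# between two groups bigger than chance alone tends to produce?
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Two groups drawn from the SAME distribution -- there is no real
# intervention effect, so any gap between them is pure chance.
group_a = rng.normal(loc=70, scale=10, size=100)  # invented test scores
group_b = rng.normal(loc=70, scale=10, size=100)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"observed gap: {group_a.mean() - group_b.mean():.2f} points")
print(f"p-value: {p_value:.3f}")  # a large p-value says: plausibly chance
```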

Stephanie (02:44):
And the last one is effect size.
In this case, Hattie used Cohen's d.
It compares two groups, one group that got a treatment or intervention and one that did not.
And when you run that calculation, you get a number.
That number is essentially ascribed to one of four categories, which are closely aligned to probably not likely,

(03:05):
maybe a little likely, sort of likely, and probably likely.
But the actual number doesn't mean anything, unlike in math and stats, where numbers usually have a context associated with them.
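
Because the whole episode turns on this number, here is a minimal sketch of the arithmetic behind Cohen's d: the gap between the two group means, divided by the pooled standard deviation. Both groups of scores below are hypothetical:

```python
# A minimal sketch of Cohen's d: (mean difference) / (pooled standard
# deviation). Both groups below are invented scores.
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference between two independent groups."""
    n1, n2 = len(treatment), len(control)
    var1 = np.var(treatment, ddof=1)  # sample variance of each group
    var2 = np.var(control, ddof=1)
    # The pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = np.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (np.mean(treatment) - np.mean(control)) / pooled_sd

got_intervention = [78, 82, 75, 90, 85, 88, 79, 84]
no_intervention = [72, 75, 70, 80, 77, 74, 73, 76]
print(f"d = {cohens_d(got_intervention, no_intervention):.2f}")
```

Notice the units: d is measured in standard deviations, not in points, which is exactly why the bare number carries no context of its own.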

Zac (03:19):
And that's really important.
We're not saying that effect size is bad.
It means what it means, but it doesn't mean something exact.

Stephanie (03:28):
Yeah.
So what was not so great was how Hattie determined effect size.
Hattie took a ton of studies and put them all together into one big group and basically said, aha, these things work better than these other things.
And I know this because of my calculation of Cohen's d.

Zac (03:43):
Which sounds, especially to me, as somebody who has minimal stats training and a degree in English and other things with words that are not numbers, like it isn't a big problem.
So Stephanie, what is the big problem?

Stephanie (03:56):
So there was a mixture of different types of data, different experimental methods, different sample sizes.

Zac (04:02):
That's a lot of difference.
And different feels like it's a problem when we're trying to think about things being the same.

Stephanie (04:10):
Right.
So you've heard of the saying, an apple a day keeps the doctor away, right?

Zac (04:15):
Yes, I did.
And did you know that it has its origins in a Welsh saying from the 1860s, which goes something like, eat an apple on going to bed, and you'll keep the doctor from earning his bread.
Did you know that?

Stephanie (04:28):
I didn't.
I did not.
Thank you for bringing that to my attention.

Zac (04:32):
My guess is that you were not looking for a history of adages, but maybe you mean there are a few things left out of consideration in An Apple a Day Keeps the Doctor Away.
Like, I might eat an apple every day, but it's not going to keep me from going to the doctor if I have some other kind of health issue that apples don't fix or probably don't fix.

(04:52):
Like, say, if I fall and break my leg.

Stephanie (04:55):
Right.
Rubbing an apple on it is not going to help.
They aren't super fortified with calcium, so it's not going to aid in keeping your bones strong.
Apples don't help there.

Zac (05:05):
So what does this mean?
Why are we talking about apples and doctors?

Stephanie (05:10):
Yeah.
So let's pretend you want to test the apple a day theory.
You first need to design an experiment.
Let's say your first experiment gathers 200 people and you force everyone to eat an apple every single day for a year.
And then you total up the number of times folks went to the doctor.
That might not hit a home run.

Zac (05:29):
Because you have nothing to compare it to.
You only know about people who ate apples every day.

Stephanie (05:33):
Right.
So let's go back to the drawing board.
And now let's say you have 200 people, but you randomly sort them into one of two groups.
100 folks who have to eat an apple every single day for a year.
And then 100 folks who cannot eat an apple at all for a year.
Is that a better design?

Zac (05:51):
It sounds better because then you are making a better comparison.
I know what happens when I eat apples versus when I don't eat apples.
And because everybody's randomly assigned, I can say these groups are probably similar.
Math, math, math, math, math.
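
To make that "math, math, math" concrete, here is a hypothetical simulation of this randomized design. Every number, including the made-up apple effect, is invented; the point is only the shape of the comparison:

```python
# A hypothetical simulation of the randomized apple-a-day experiment:
# 200 people randomly split into two groups of 100, then compare
# yearly doctor visits. The "apple effect" below is invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

# Random assignment is what lets us say the groups are probably
# similar on everything we didn't measure.
people = rng.permutation(200)
apple_ids, control_ids = people[:100], people[100:]

# Invented outcome: doctor visits per year as Poisson counts, with a
# small made-up benefit for the apple eaters.
visits_apple = rng.poisson(lam=3.5, size=len(apple_ids))
visits_control = rng.poisson(lam=4.0, size=len(control_ids))

t_stat, p_value = stats.ttest_ind(visits_apple, visits_control)
print(f"apple group mean visits:   {visits_apple.mean():.2f}")
print(f"control group mean visits: {visits_control.mean():.2f}")
print(f"p-value: {p_value:.3f}")
```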

Stephanie (06:06):
Yeah.
Yeah, sure.
That's, that's great.
So let's return to the drawing board again and say...

Zac (06:12):
Hold on.
What?
There's a lot on this drawing board.
Let me erase it.

Stephanie (06:15):
Oh, okay.
All right.
Let me know when you're ready.

Zac (06:17):
Yeah, go ahead.

Stephanie (06:18):
Okay.
So let's say you have 200 people.
And you let them pick the groups they join.
Five of them choose to eat bananas every day.
20 choose to eat apples.
100 choose to eat oranges.
And the remaining 75 choose to eat fruit salad.
All five of the people in the banana group never went to the doctor.
What can you tell me?

Zac (06:40):
I feel like you're trying to get me to be like, bananas are the best.
Because on paper, it would look like a banana every day kept me from going bananas... to the doctor, but I feel like it's also a trick question.
Isn't it possible that the people in the banana group also had other things in common?

(07:03):
Maybe they didn't have access to a doctor, no health insurance.
Maybe they were on a multivitamin.
Maybe they were just healthier in general.
There are a lot of things at play here.
If I have asthma and I'm allergic to bananas... but if I hadn't been allergic to bananas, then that was the group I chose.
I would have impacted the number.
I don't know.

(07:23):
It doesn't make sense.

Stephanie (07:24):
Yep, yep.
So keep that feeling in mind.
And what I want you to do is take all the experiments we just designed and combine the findings and say broadly, bananas have the best impact on your health, not apples.
In fact, we're going to use Cohen's d and say the effect size is 0.92, which is considered, just for whoever's wondering, a pretty likely value.

(07:46):
How do you feel about that?

Zac (07:49):
I don't feel great about it because I don't know who was in the groups.
Like if we're still on the fruit issue, like, are we saying everybody had a banana?
Everybody had an orange, no bananas, no oranges, all apples.
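
Here is a hypothetical sketch of the trap in that banana design: the fruit does nothing at all, but because the five banana-choosers happened to be healthier to begin with, Cohen's d still comes out looking impressive. All numbers are invented:

```python
# A hypothetical sketch of the self-selection trap: zero causal fruit
# effect, but the people who chose bananas were already healthier,
# so the effect size is large anyway.
import numpy as np

rng = np.random.default_rng(seed=3)

banana_group = rng.normal(loc=85, scale=5, size=5)      # healthier by choice
everyone_else = rng.normal(loc=70, scale=10, size=195)  # average health

n1, n2 = len(banana_group), len(everyone_else)
pooled_sd = np.sqrt(((n1 - 1) * np.var(banana_group, ddof=1) +
                     (n2 - 1) * np.var(everyone_else, ddof=1)) / (n1 + n2 - 2))
d = (banana_group.mean() - everyone_else.mean()) / pooled_sd

# A big d here measures who picked bananas, not what bananas do.
print(f"d = {d:.2f}")
```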

Stephanie (08:10):
Yeah.
So, so that's the issue in a nutshell.
The issue we have is not with effect size, but with what Hattie used to get it and the reporting of it in his chart.
We don't know enough about the designs of the experiments, whether they were good or bad, how the data were collected, the sample sizes, or even what the possible responses could have been.
He just combined them all into one big, beautiful meta-meta-analysis.

(08:32):
And when statisticians called him out on it, he basically said, well, I didn't think I was going to include that list at first.
And then I did almost as an afterthought.
And sorry, I can't help it if you didn't understand the things I said the way I meant them to be said.
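
To see why the lumping itself is the problem, here is a tiny hypothetical sketch: three invented studies of very different size and quality, combined two different ways. Neither average is how Hattie actually computed his figures; the point is only how much the single pooled number depends on choices the ranked chart never shows:

```python
# A hypothetical sketch of the lumping problem: invented studies with
# wildly different sizes and designs, squeezed into "one" effect size.
studies = [
    {"name": "tiny pilot, self-selected",  "n": 12,   "d": 1.20},
    {"name": "mid-size, no control group", "n": 80,   "d": 0.45},
    {"name": "large randomized trial",     "n": 2000, "d": 0.10},
]

# Naive average: every study counts equally, however small or shaky.
naive_mean = sum(s["d"] for s in studies) / len(studies)

# Sample-size-weighted average: the big trial dominates instead.
total_n = sum(s["n"] for s in studies)
weighted_mean = sum(s["d"] * s["n"] for s in studies) / total_n

print(f"naive average d:      {naive_mean:.2f}")     # about 0.58
print(f"n-weighted average d: {weighted_mean:.2f}")  # about 0.12
```

Real meta-analyses typically weight studies by their precision rather than raw headcount, but the gap between those two printed numbers is the critique in miniature.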

Zac (08:47):
Also, I think it's important to point out, as we talked about with G, that that list is the thing that gets shared the most from this work.
People are familiar with, if not all 138 or 200, however many there are in the current incarnation of Hattie, people are familiar with that list.

(09:07):
And they love the order and hierarchy that it provides.
So even if you didn't intend it, this seems like what ended up happening.

Stephanie (09:15):
Right.
And honestly, you know, some folks have said it's good enough, that even if you take out the bad studies, the effect sizes are relatively unchanged, so it doesn't matter.
There's a list, there are numbers, there are rankings, that should be enough.

Zac (09:30):
But at the end of the day, where we had our apples, teachers do not have time for this.
We might look at Hattie's book or we might just see the slides where we capture the top of that list in our professional learning.
And our school administrator says, do these 10 things.
This is all we're going to do.
But they don't have time for it.
So when we come back, we're going to talk with Maggie.

(09:53):
Oh, Beiting.
She's muted.
So when we come back, we're going to talk with Maggie Beiting-Parrish to help us make sense of education research, whether Hattie's work was bad or good, and what the unintended consequences may have been.

(10:15):
And we're back, and we brought some reinforcements.
It is my pleasure to welcome to the podcast Maggie Beiting-Parrish, PhD in Educational Psychology and with a focus on quality.
Nope, not qualitative.
People would have been angry.

(10:36):
All the scientists in the world would be like, who's she?
Quantitative, meaning math, methods.
I say it means math.
I know it doesn't mean that.
I know I'm simplifying things.
I'm just a tiny brain.
Maggie, you listened to that last segment and our discussion of Hattie and his research and where he may have gone off the

(10:56):
rails.
What do you think of Hattie?

Maggie (10:59):
So I think in general, it is...
It's important to gather together all of the research that exists around different topics, especially since they're so far-flung across different countries and contexts.
This was a really great attempt to combine it and kind of gather all this different research and all these different domains and subjects into one sort of easy-to-use place.

(11:20):
I thought that was actually a really noble thing to do here, especially since it combines the work of like 50,000-some studies or something.
So I think that's good.
I think some of the long downstream consequences of this work are... or where the problems start to come in.

Stephanie (11:34):
What consequences?
What do you, what do you, what, what?

Maggie (11:39):
Well, I think especially in some like PDs and things, people just sort of took his ranked list of the effect sizes and have just sort of treated that as sort of like a 138-line-long Ten Commandments and just kind of worked on the list.
Like this is the best thing we can do.
So let's just work down the list.
Anything towards the bottom that has like a negative effect size or very small, people just choose to kind of remove from

(12:00):
their practice or kind of downgrade or don't focus on.
And I think sort of that ranking might have created a sort of false belief for a lot of people about like what's the best and what isn't, or this belief that something in education could be the best practice.

Zac (12:14):
And I'm going to talk about that a little bit when I get to my thing on this question.
Actually, a good thing about what you just said was a bad thing, but we'll hold on.
Stephanie, you've now, like, showered in the thoughtfulness of Hattie and what works and why we like it and where the math

(12:36):
went off the rails.
And as a person who works in data literacy, where are you coming out on this Hattie discussion?

Stephanie (12:44):
I just have a hard time with it.
You know, I was a teacher, a classroom teacher, when Hattie's work came out.
And it was one of those things where I got to sit in PD and all over the place being told that I need to focus on these moves because these moves were the ones that were going to work.
And as a person who likes data and data science and statistics,

(13:09):
it's like, no, that's not what this says.
That's not what this says.
And...
Like, there's good there.
Like, there's something to point at.
But with...
when you look at it and you, you point at it as though it's going to work because it worked in this particular circumstance.

(13:32):
And you, you, you walk around as though, like, like what you were saying, Maggie, that's the Ten Commandments, it's the gospel truth.
It's like, that's, that's not how data works.

Zac (13:41):
And I think anybody who's been in a classroom as a student, or ever moved from one school to the other as a student, um, or gone from high school to college, knows that location matters.
And when, when, when we're thinking about how teachers teach and how students learn.
So I think that for me, the big aha is, oh, we weren't

(14:02):
comparing things that were necessarily the same.
And so that makes me worry a little bit about exactly what, Maggie, you just said about, well, let's rank it and do these top 10, right?
So that was, again, I was a teacher when Hattie's work came out as well.
And that was how it was shown to us.
And we had the 20, right?
These are the 20 practices we're really going to focus on

(14:23):
this year.
And it became really difficult to do anything that wasn't those 20.
So I think that was an issue.

Maggie (14:34):
Yeah.
So I think a big thing is Hattie is located in New Zealand.
And so a lot of the studies he pulls down are also, you know, some of course represent American classrooms, but they also represent New Zealand classrooms, Australian classrooms, schools in Japan and South America, like all different places.
And it's all across all different contexts and all different groups of students.
So yeah, you have to take the sort of those overall average

(14:55):
effect sizes by a study with a pretty big grain of salt because you're lumping in students who might be from very different contexts, age groups, grade levels, subjects, classrooms, all different teaching styles, all different things, and kind of distilling down that one finding, that one number based on just like one set of findings without taking the whole larger context into consideration.

Zac (15:17):
I think he tries to answer that a little bit in the introduction where he talks about, you know, I started writing this book here.
Then I was in Australia.
Then I was in North Carolina.
And then I was in New Zealand.
So I think he tries to answer that by saying, so I've seen education in a lot of places.
And I believe that that is true.
And people have kind of belabored that point.

(15:38):
But I think that what you're saying is well taken.
So those are kind of the flaws, right?
Those are the kind of worries and where we feel like we should itch and chafe and push back against some of this.
I wonder if we can think about the positives in this space.
So for me, I think about a saying I heard from Richard

(16:01):
Elmore, which was language is culture and culture is language.
And so as much as I just said, we were given the top 20 and said to do these things.
If I were a school that had, or an institution that had, a broken culture or no culture or a lot of really new people to do a job, being able to look at those 20 things and have a shared

(16:23):
definition of what we are working toward, I think would be a really, really helpful thing to say, all right, what is it I'm supposed to be doing if I'm new at this?
Or what are the expectations of us in this space?
And what's the common language so that when we come together in a faculty meeting, what can we all talk about doing?

(16:44):
And if those 20 are the thing that we're focusing on, that does give me a chance to build a common language and thereby a common culture.

Stephanie (16:52):
Yeah.
So I think what's nice about this work is that it really kind of brought to light the magnitude of research that had been done in education.
You know, one of the things that when I worked in the Department of Education, it was really surprising how much

(17:15):
research had been done and funded.
And like, it never went anywhere into the classrooms.
Like, it wasn't until I worked for the US Department of Education that I even knew the What Works Clearinghouse existed within the Institute of Education Sciences.
And so knowing that there was this magnitude of ed research that had been compiled into a singular, like, report,

(17:37):
essentially, as something that could be used as, you know, like a rudder, if you will, for schools that needed to know where to go, I think was a pretty positive aspect of it.

Maggie (17:51):
I think additionally, one of the really strong positives is that... I think there's a lot of, you know, popular discourse right now that's like everything's the teacher's fault and everything they do is their own fault.
And I think what's nice about this is it kind of breaks down the different aspects of learning, whether it's like the teacher, the curriculum, the student aspects, the school aspects, like it kind of brings in all these different elements of the whole picture of what it means to learn in a classroom.

(18:13):
And while it does treat those kind of separately as their own individual pieces, it does show that there's all these different aspects that go into teaching and learning that aren't just, it's a teacher and she's not doing well.
And so I thought that was great.
And also it gives a lot of different examples and different kinds of approaches you can use.

Zac (18:31):
I know that we are focusing on the positives right now, but you just brought up something we haven't talked about at all in this episode that is a thing that drives me completely banana pants.
And that is that, like, the teacher as the most important aspect in a classroom, and the way that, even though you said it's not just, like there are a lot of different factors,

(18:52):
that finding has been used as a weapon.
Right.
Like you are the most important or effective or impactful factor in a classroom as a teacher.
So it isn't like as though teachers did not know that.
And I think that when we look at effect size and we don't know

(19:13):
what those numbers mean and we see that it's bigger than the other numbers, we think that we have to, like, that that is the thing that if we could just change how teachers do this or think about these things, then everything else will come out in the wash.
If you're listening to this podcast and you are not in education, this is a... every teacher you know or have ever met has struggled with the fact that there is, yes, they are the

(19:36):
most important factor in a student's education, but they are not the only factor and they cannot on their own overcome things.
If you scroll down to the bottom and see the negative effect sizes, one of those pieces is depression, right?
Depression has a negative effect size on student learning.

Stephanie (19:54):
Shocking.

Zac (19:55):
Well, yeah, this is not a surprising finding.
I don't think anybody looked at the findings and was like, oh, really?
But it does say to me, like, my question then becomes, I think it's like a negative 0.6.
I don't remember.
I'm not looking at it right now.
Can that positive teacher expectation of a student

(20:16):
overcome depression?
Can a positive teacher expectation overcome poverty, hunger, anxiety?
Like, I think that is an issue that gets lost in this ranking.
I have a question for you, Maggie.
Talk to me about negative effect sizes.

Maggie (20:38):
Sure.
So, in the site, they did find a few.
A lot of them weren't very strong negative effect sizes, but there were some.
Essentially, like you mentioned with your depression example, they are factors that they have found have a net...
If a group has depression versus a group of students who doesn't, the kids with depression would have lower

(20:59):
achievement levels than the group of kids with no depression.
And so another example they gave was... the effect of long summer vacation on student learning.
Again, this is a very, very small negative effect size.
It's like negative 0.09, so basically zero.
But again, that would say that the more summer vacation you

(21:20):
have, there is a negative impact on your achievement, or your academic achievement, at least at the beginning of the school year when they measured it.
But even that was sort of problematic because that value is only based on one meta-analysis.
And that only had a few studies within it to begin with.
And so again, if you just took that one value, you would say, oh, I guess we should shorten summer break for everybody.

(21:41):
But if you really thought about it, it's like, that's not very, very negative or very strong.
And so it's very close to zero.
And so we shouldn't cancel all of summer break just because this value was right on the bubble of zero.

Stephanie (21:54):
It kind of is interesting to me to hear that, you know, an extended summer break has only a minimal negative effect size, for how often we hear about the summer slide, right?
And how to mitigate summer slide.
It's like, well, is it really that drastic of a slide at that point?
Like, if it's only a tiny effect, are we freaking out about nothing?

(22:15):
Or is it just as an impact of, or is it just as a result of, you know, not having enough data to back it up?
Would there have been a greater negative effect size if there was more data to go behind it?

Maggie (22:28):
Right, exactly.
And I think that meta-analysis was in the mid-'90s.
And so I would be curious to see if you updated that now, how that value would change, if it would change at all.
I suspect it would.
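
For a feel for these magnitudes, here is a minimal sketch that converts a d value into the so-called common-language effect size: the chance that a randomly chosen student from one group outscores a randomly chosen student from the other. The normality assumption behind the formula is ours, for illustration, not something taken from Hattie's tables:

```python
# A minimal sketch converting Cohen's d into a common-language effect
# size: P(random member of group A beats random member of group B),
# assuming roughly normal outcomes (an illustrative assumption).
import math

def common_language_es(d):
    # Phi(d / sqrt(2)), where Phi is the standard normal CDF;
    # since Phi(x) = 0.5 * (1 + erf(x / sqrt(2))), this is erf(d / 2).
    return 0.5 * (1 + math.erf(d / 2))

# -0.09: the summer-vacation value; -0.6: Zac's half-remembered
# depression figure; 0.92: the hypothetical banana effect from earlier.
for d in (-0.09, -0.6, 0.92):
    print(f"d = {d:+.2f} -> {common_language_es(d):.1%}")
```

At d = -0.09 the two groups are barely distinguishable from a coin flip, which is Maggie's "right on the bubble of zero" in one line.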

Zac (22:40):
Okay, so those are positives.
Mostly, we tried to stay positive.
It was a little difficult.
Cynics that we are.
I think it's interesting as we go through the book, and in fact, in the first edition, Hattie says, this is what I'm not trying to do, right?
So there's at least a nod to unintended consequences.
And I keep thinking about the law of unintended consequences

(23:03):
and our conversation with G at the top of the episode and what people latch onto, you know, what is the worst consequence of our best idea?
So Maggie, again, not ragging on him, but, what is the possible harm of all of this?

Maggie (23:19):
I think one of the possible harms is you're combining a lot of studies together to get these sort of average effect sizes, this Cohen's d here, that may not actually be super related to each other in any other real form.
There's a lot of different contexts and a lot of different aspects and different, as I mentioned before, different countries, different grade levels, different subjects, all

(23:39):
different things, all being combined into one place.
And then furthermore, I think this maybe was one of the sort of purposes of creating this, but in education research in general, replication rarely ever happens.
In fact, one study found that only about 0.13% of studies are replicated in educational research.
And so I think part of the logic of why this was created

(24:00):
was to help bring research together, even if it wasn't exactly the same study over and over, at least to try and find these sort of general patterns or things across sort of the same types of approaches and learning curricula and things like that.

Zac (24:14):
Stephanie, as somebody who thinks about data science a lot, Maggie just said these things aren't replicated.
Why do I care that education research is not replicated?

Stephanie (24:24):
I mean, part of the design of, like, the data science process is that it's iterative, right?
You, you ask the question, you collect the data, you analyze the data, you come up with your, like your, your findings, and then you go back to the drawing board.
You do it again.
It's not just a one-time thing, right?

(24:45):
So if I'm in a classroom and I'm teaching a lesson to students, one of the things that I need to do is I need to consistently check for understanding.
If I wait for the very last day of the unit to give the assessment and, you know, two-thirds of my kids don't pass it, that says something.
But if I'm doing repeated checks for understanding, that's

(25:07):
going to give me a better insight as to what my kids need assistance with, right?
And that's why it's so problematic that we're not repeating things, but then they get touted as what works after that singular study.
If you can't replicate it, then how do you know that it works?
And I feel like that's a big thing in ed research in general,

(25:31):
right?
We do these studies, we do this research in a particular set of classrooms, in a particular location.
And then we don't go and try to replicate it in a different state or a different district or a different grade level.
Right.
But suddenly this is labeled as something that is beneficial to

(25:55):
all students.
Good teaching is good teaching.
Like, well, but OK, but every classroom is different.

Zac (26:01):
In a larger context, this makes me think of the initial conversations we were having about a decade and a half ago around climate change.
Right.
Those who criticized climate change activists and those claiming we needed to take action said, well, yeah, but you're just one study, right?
And so what the scientific community did was say, we're

(26:23):
going to do these studies over and over and over again, and we're going to make them comparable so that what we think here in Greenland or Denmark or the US or a country named here, we're able to replicate those findings so that we have greater certainty of what's going to happen.
And that as much as there still are people who don't want to

(26:47):
pay attention to that evidence, as there are in any field or on any topic, more people have signed over to the, oh, we should probably do something about that.
And so replicating that when we're talking about the education of human beings also seems important.

Maggie (27:02):
But it's also really difficult to do.
I'm sure anyone who's taught has the experience of teaching a lesson in the morning to one group of students and it went beautifully and went excellently.
And then it's the exact same lesson, just a different group of students in the afternoon, and it bombs out terribly.
So even with the same teacher, same lesson, same school, same classroom, controlling as many factors as you can, doing that

(27:23):
exact same lesson might be very different in the afternoon from the morning, or even Tuesday versus Thursday or something like that.
So this is very difficult to do because there are so many factors when you're teaching children.

Zac (27:33):
Okay.
So...
Data scientists, researchers, friends of mine, what would we recommend, or what would you recommend, to folks who are thinking about or interpreting educational research or research in general?
But let's use education as the lens.
But I think these are probably going to be applicable in a

(27:56):
larger sense as well.
But what are three things we should keep in mind when we hear "research says"?

Maggie (28:01):
I think the biggest one is who is the sample and how big is it?
Like, who are these students?
How big was your sample?
Was there a diversity of backgrounds and ages and different things?
Or is it all a very homogenous sample?
I think that's a big one.
Because I think if it's a very homogenous, very small sample, then the generalizability of those findings is pretty limited.
And, you know, you can say research says, but really it's

(28:24):
just, you know, Miss Green's third-grade classroom research says, and actually it's not applicable to everybody.
I think the other thing to think about is how did they actually define what they were looking at in this study?
So what I mean by that is, if we're saying academic achievement, what do you mean by that?
Is it just kids' end-of-the-year state test scores?
Is it some kind of end-of-unit assessment?

(28:46):
Is it some kind of exit ticket?
How are you defining the things you're actually studying?
And does it make sense with what the actual findings and results are saying?
And then the third thing would be to say, I think the overall study design and research design, a lot of these effects take a long time to...

(29:07):
A lot of things in learning take a long time to actually really sink into students.
And was the experiment even long enough to really teach what you needed to teach and actually find what you wanted to find?

Stephanie (29:20):
I love what you just said.
How often have I been in a classroom or in a district setting where there's been some kind of initiative that has been put into place as being the thing that we have to do?
And then it changes the next year because we didn't get the results that we wanted right away, right?
Sometimes the change takes three to five years and we have

(29:43):
to give it an opportunity to make that shift, you know?
So I know that's not fully in terms of what we're talking here, but like this sweeping generalization, and how long of a period of time has passed in order for things to be internalized, I feel like is... a connection to a lot of that.

(30:04):
But also, I have a question, if you don't mind, like expanding a little bit in terms of this sweeping generalization to the larger field of learning.
To me, that sounds like extrapolation.
Is that what you mean?

Maggie (30:19):
Yes, I think a lot of times with these very small, you know, 30-student classrooms, you look at two or three classrooms, and you're applying that to, like, oh, all students learn best by using note cards.
And that's sort of one of the findings you find.
It's like, well, for those, you know, 60 or 90 students, that was a helpful vocabulary intervention.
But can you really extrapolate that to all kids across all grades for all subjects?
Like, probably not.

Zac (30:40):
And I know what it means, but just, you know, for people who don't know what it means... extrapolate, not a word that a lot of people use every day.
What does that mean?

Maggie (30:49):
So it's basically like taking from the smaller sample of research that you're doing and then sort of making a larger claim about how learning is done, for example, or what's the best practice for this curriculum or whatever it is.

Zac (31:01):
So if something is true in Montana, it may not be true in
the Bronx.

Stephanie (31:06):
Yep.

Zac (31:06):
Gotcha.
Maggie, thank you so much.
This has been great and helpful.

Stephanie (31:12):
I'm so glad that you joined us, Maggie.
Thank you for having me.

Zac (31:16):
And thank you everybody for listening to this episode of
Academic Distinctions.
Hopefully we've added some context and created some understanding around the work of Hattie.

Stephanie (31:38):
Thank you so much for joining us today on this episode of Academic Distinctions.
We, your pod hosts, believe in you and know from Hattie's list that your learning will be improved if you talk about what you learned today.
So share with your friends, your family, your doctors, your pets, anyone who will listen.
Follow us on Instagram at academicdistinctionspod.
Find us on Blue Sky at FixingSchools or find us on Facebook.

(32:01):
As always, this is your call to action to share the podcast, like us, and subscribe.
You can find us online at academicdistinctions.com or buzzsprout.com.
Have a question for the pod or a topic you'd like us to dig into?
Email us at mail@academicdistinctions.com.
Until next week, friends.
This podcast is underwritten by the Federation of American Scientists.

(32:22):
Find out more at fas.org.