
February 8, 2024 46 mins

Do teachers and managers give special treatment to those who they're told have great academic or professional promise? Does this create a self-fulfilling prophecy, regardless of the truth? That's just part of the fascinating Pygmalion Effect.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to Stuff You Should Know, a production of iHeartRadio.

Speaker 2 (00:11):
Hey, and welcome to the podcast. I'm Josh, and there's
Chuck, and Jerry's lingering too. She's a lurker, and this
is Stuff You Should Know, the Education Edition.

Speaker 3 (00:21):
Yeah, I'm pretty excited about this after learning more about it.

Speaker 2 (00:24):
Yeah, this is your pick. Where'd you come up with this?

Speaker 1 (00:27):
I don't know.

Speaker 2 (00:28):
Well, that's good. I was hoping that it wasn't like, well,
I had a really bad experience with a teacher when
I was a kid.

Speaker 3 (00:35):
No, I always had good experiences, generally. But now I'm
worried that it was a listener because I've gotten a
few of those lately. Like, hey, when you said you
didn't know it was me, I usually make a note, but.

Speaker 2 (00:49):
Sure, sure, I don't know. Well, if you suggested
the Pygmalion effect, you're probably the only one, and you
can feel free to email and be like, hey, sorry,
and you said you didn't know it was me. Yeah.
Well, we're talking about the Pygmalion effect, and it
does have to do with education, but it has to
do with, you know, more than that too. And for
those of you who don't know, the Pygmalion effect is

(01:11):
a kind of self fulfilling prophecy. It's called an expectancy bias,
I believe, and it basically says, in effect that if
you have high expectations for say a student or an
employee or something, they're likely to perform better than other people.
And it has something to do in all sorts of

(01:33):
different ways. It turns out from that relationship, that high expectation,
and it's pretty neat if you think about it. And
the Pygmalion it's named after, I guess an ovid metamorphosis story.

Speaker 1 (01:47):
Right, yeah, I think it was. I believe it was
a statue, isn't that right?

Speaker 2 (01:53):
I think Pygmalion was the sculptor, and the statue is Galatea.

Speaker 1 (01:58):
Okay, I knew it from you know, because I'm not
the art major.

Speaker 2 (02:04):
Well i'm not either.

Speaker 3 (02:05):
I was the English major, so I read George Bernard
Shaw's play Pygmalion in college, in a class.
And then, of course, My Fair Lady was based on
Pygmalion, in which I think her name was Eliza Doolittle.
Sort of, hey, let's take this rough-around-the-edges
young woman and make her into a fair lady.

Speaker 2 (02:26):
I knew it as Trading Places.

Speaker 1 (02:29):
Exactly, but you know, sort of a classic story.

Speaker 3 (02:32):
The original play is great, and it all has to do,
like you said, with this sort of self this idea
of the self fulfilling prophecy, which had been around for
a long long time, but in the nineteen sixties, of course,
when psychology and doing studies on all kinds of things
was really blossoming and just sort of exploding.

Speaker 1 (02:52):
In all directions.

Speaker 3 (02:53):
Super hip, though? Well, I don't know about that,
but maybe in those communities. But there was a psychologist
named Robert Rosenthal who got pretty interested in this idea
of how bias can affect something like performance or assumptions
or, you know, thinking. Like, you know, it moved out

(03:14):
of the classroom, but initially it was like, hey, this kid has
promise, or this kid doesn't, and then they end
up being like that.

Speaker 2 (03:19):
Yeah, yeah, for sure. And there's a lot of implications
obviously of you know, okay, well, then does that mean
that there's kids who are not performing as well as
they could because they're not being treated well by their teachers?
Like sure, there's a lot And I think one of
the things that I like about this is that it
just how much debate and research and argument has gone

(03:41):
into just this one segment of approaching education really just
goes to show how seriously we take education, or have,
at least in the past.

Speaker 3 (03:51):
Yeah, I think so. I mean that certainly doesn't mean
we figured it out, but no, I think people have
long studied and tried and argued and debated on the
best way to help kids reach their potential.

Speaker 1 (04:04):
Then that's a good thing. Yeah.

Speaker 2 (04:05):
So there was a sociologist named Robert Merton, and he
turns out to have been the person who coined the
term self-fulfilling prophecy. I hope he copyrighted it, because
I owe him some money, just me. Anyway, that was
back in nineteen forty eight, and even by then that
was a good almost fifty years after experimental data started

(04:26):
coming in that showed self-fulfilling prophecies existed. So I guess
our kind of hero, or at least protagonist? Antagonist? I
guess it depends on how you look at him. Robert
Rosenthal, in the sixties, hit upon a pretty great
study idea along with a colleague of his named Kermit Fode,
which is a great name in writing, out loud, blinked

(04:51):
out in Morse code, it's a great name all across
the board. But working together back in nineteen sixty three,
they took on running rats through mazes, which was already
like just so cliched back then that it was like
a perfect thing to experiment on, because it was like
the people that they were actually experimenting on, the students
who were running the research, were the ones who were

(05:13):
being experimented on. But rats in mazes were just so
ubiquitous they didn't question that at all. It didn't even
occur to them that they would be being experimented on.

Speaker 3 (05:23):
Yeah, and they ended up coming up with what I think
should be just these words on a T-shirt, and
just don't even explain it, because what they told the experimenters
that were working with these rats, they said, all right,
you got some really great rats in this group, they
were bred to be maze-bright, but those other ones,
they're maze-dull. And I just think that would be

(05:45):
fun on a T-shirt. But actually these rats were
assigned randomly. What they found out was that the
dull rats, the maze-dull ones, hit their peak performance
three days in and then started to go downhill, where
these bred-to-be-maze-bright rats just kept
on improving. And so the conclusion was, I think these

(06:07):
students are getting these rats that are maze-bright and
are just you know, hey, little buddy, you can do it.

Speaker 1 (06:13):
I know you got it.

Speaker 3 (06:13):
And hey, sure, you're a smart rat. Like, they're handling them better,
they're talking them up, they're encouraging them, and it's working.

Speaker 2 (06:20):
Yeah, because, I mean, again, you said that they were
assigned randomly, and there was no such thing as maze-bright
or maze-dull rats. They were all just the same.
So it had to have something to do with the
researchers, because there was no difference between any of the
rats that were assigned. I think the worst
interpretation is you could also suggest that the maze

(06:42):
bright rat student experimenters could even have been fudging
the numbers a little bit to meet their expectations. Hard
to confirm, but it's a possibility. And actually that kind of
led to one branch of study that came
out of that maze-bright, maze-dull rat experiment.

(07:05):
How much expectancy bias affects researchers in scientific studies. That
was the first leap that it went to, but shortly
after that it ended up in the classroom, because the
principal of Spruce Elementary School in San Francisco read
about this rat experiment, I think in American
Scientist in nineteen sixty three. Yes, and the principal, Lenore Jacobson,

(07:30):
wrote to Robert Rosenthal and said, Hey, if you ever
want to replace like rats and experimenters with students and teachers,
I'm your person. And very quickly Rosenthal took Lenore Jacobson
up on that.

Speaker 3 (07:44):
Yeah, by very quickly, I guess, in science terms. A
couple of years later, right. And they said, all right,
you know, we don't know it now, but this is
going to end up being a very, very famous experiment
called the Pygmalion experiment. And again, you know, named
for the art and the play and.

Speaker 1 (08:03):
What else was it?

Speaker 2 (08:05):
Trading Places?

Speaker 1 (08:06):
Trading Places? That's right.

Speaker 2 (08:08):
I keep wanting to say forty eight hours, but that's
not it at all.

Speaker 1 (08:10):
Yeah, but actually Trading Places, that came afterwards, so, you know
what I mean? Sure. Great movie, though. Which one? Trading Places.
I love it.

Speaker 2 (08:19):
Okay, I've never seen forty eight hours?

Speaker 1 (08:22):
Oh really, yeah, it's a good one.

Speaker 2 (08:24):
Okay, So which one's better? Trading Places or forty eight Hours.

Speaker 3 (08:28):
Well, I mean, they're both kind of great. One is
just more of a straight-up comedy, which is Trading Places. Sure,
forty eight hours was sort of in that buddy-cop
action-movie thing. Also has laughs, yeah, for sure, but
you know, it's prime Eddie Murphy.

Speaker 2 (08:42):
Okay, you sound like more of a Trading Spaces person
to me.

Speaker 1 (08:46):
Trading Places. Trading Spaces, that's an HGTV show totally.

Speaker 3 (08:53):
So boy, that was a good sidetrack. Eddie Murphy's got
a new Beverly Hills cop coming out, by the way.

Speaker 2 (08:58):
Oh yeah, that's right. I wonder how that's going to be.

Speaker 1 (09:01):
I wonder too.

Speaker 2 (09:02):
I don't feel like he's aging poorly. He doesn't seem
to be getting less funny over time. Although I haven't
seen any of his stuff very recently.

Speaker 1 (09:10):
We'll see. I haven't either. I'm reserving my opinion until
then. All right, that's fair. All right.

Speaker 3 (09:16):
So Spruce School, San Francisco, it was performed on these kids,
a white majority with a Mexican American minority, but mostly working-class kids.
This wasn't some, like, when I first heard Spruce School,
I thought it was some, like, super hoity-toity private school.

Speaker 2 (09:29):
I did too, sounds like it.

Speaker 1 (09:31):
It sure does, doesn't it, especially in San Francisco.

Speaker 2 (09:33):
But one more thing, very crucially about this school. The
kids were grouped by reading ability. So if you weren't
a very good reader, you were in a group or
a class with other kids who weren't very good readers,
and so on and so forth.

Speaker 3 (09:45):
Yeah, and we're going to talk a lot about grouping,
because, as it turns out, it's not a great
thing to do, and it's got
a lot to do with a lot of this, for sure.
So students at the school, they were given a test,
and the researchers told these teachers. And as we'll crucially
find out too, the test was not given by the researchers.

Speaker 1 (10:02):
It was given by the teachers. Correct.

Speaker 3 (10:04):
Yes, so that's going to come into play as well.
But they told the teachers, said, all right, we've got
these results. You've got some bloomers, or quote unquote growth
spurters, in your class, and they're probably just like these
maze-bright rats. They were like, they're going to
really improve over the school year, just you watch. We

(10:26):
gave them this test. It was the Harvard Test of
Inflected Acquisition, and it's supposed to assess their potential, which
was not true at all. What they actually took was
an IQ test called the TOGA, Flanagan's Test of General Ability,
and there were some problems with that right off the bat,
right, with this TOGA test.

Speaker 2 (10:47):
So one thing, Chuck, about that Test of Inflected Acquisition:
it didn't exist. They just made it up so
that teachers, if they were possibly familiar with the Test
of General Ability, wouldn't be like, wait, this
isn't what you would use to find growth
spurters or bloomers. They just made up a test. Because
this was made up, the results were supposed to

(11:07):
be made up too. Again, the teachers thought that they
were administering a real test and that the results were real,
but they were being lied to. They were being manipulated in
the exact same way those students were told that some
of their rats were maze-bright or maze-dull. Exact same experiment,
just with humans.

Speaker 4 (11:25):
Now.

Speaker 3 (11:26):
Yeah, because the idea is to see if teachers think
that a kid is supposed to have a growth spurt intellectually,
then that will end up being the self fulfilling prophecy.
So these students were chosen at random. The teachers were
given that information, and after months and months, they took
this test again, the TOGA test, at the eight month mark,

(11:49):
the one year mark, and the two year mark.

Speaker 2 (11:52):
Yeah, and so, just as Rosenthal predicted in his hypothesis,
I should say Rosenthal and Jacobson, the principal, the
kids who had been identified as growth spurters or
bloomers actually did bloom academically.
They gained all sorts of IQ points over the course
of the eight months and then year and then two

(12:13):
years when they took and retook the tests. And
even though the effects were most pronounced among first and
second graders, that was enough. That was enough to just
kind of show like this is a real deal. These
kids were no different than the other kids. The only
difference was that these bloomers were the ones whose teachers
were told keep an eye on them because they're going

(12:35):
to be amazing kids.

Speaker 1 (12:37):
Yeah.

Speaker 3 (12:37):
And what's interesting is, for, I think, the third, fifth,
and sixth graders, they showed that the bloomers, the
ones who were assigned that tag, actually improved at
the same rate, or even a slightly slower rate,
than the control group. And the researchers,

Speaker 1 (12:56):
Rosenthal basically said, well, that's because.

Speaker 3 (12:58):
When you're younger, you're your mind is more malleable, and
so that that's probably it. And also because the school
and these teachers probably you know, think that their reputation
wasn't like they may have felt bad for these kids
who weren't who didn't get the bloomer tag, so they
may have like paid more attention to them or something.

Speaker 2 (13:21):
Right, or the younger kids hadn't been at school long
enough to establish like, hey, I'm actually not that smart
or hey, I'm actually really bright, so their reputation
wasn't established. You know, no big man on campus label
had been applied yet.

Speaker 3 (13:37):
Yeah, so if you're starting to sense like, oh wait
a minute, then he just immediately sort of explained away
something that didn't agree with his finding, you will see
that that kind of becomes part of the story.

Speaker 1 (13:47):
Right.

Speaker 2 (13:48):
So they published a study in nineteen sixty eight called
Pygmalion in the Classroom. And again, they named it
after Pygmalion because, in that story from Ovid, the sculptor
Pygmalion sculpts a beautiful woman, falls in love with her,
and loves the statue so much that the goddess Venus says,
I'm going to make you a real live person. So

(14:09):
the attention that Pygmalion paid to Galatea his statue created
a magical transformation in his statue from statue to human.
There was some sort of magical intervention. So it actually
is a really great, great name. And I can't think
that that didn't have something to do with how much
it exploded onto the scene, because it's really difficult to

(14:33):
overstate what a bomb this dropped, not just in academia,
but in popular culture. It got picked up and talked
about for years afterward.

Speaker 1 (14:43):
It sounds like a great place to break.

Speaker 3 (14:45):
Ah, I thought you might say that. All right, well,
let's take a break, and we'll be right back and
talk about this explosion of understanding right after this.


Speaker 3 (15:19):
All right, so, where we left off. The study was
called Pygmalion in the Classroom, published in sixty eight as
a paper and then, also notably, as a full book.
If it was just the paper, it may have just
sort of been, you know, passed around through academia. But
because it was a book, it became very popular, and
all of a sudden, Barbara Walters is interviewing Rosenthal, and

(15:42):
the New York Times has got it on the front page,
and you know, like the mainstream media is all over
this, you know, basically saying, and, you know, kind
of like the media does with something like this, they're
not digging into the data like academia will, as we'll see,
but they'll run a big headline and they'll say this is
really significant because we all knew that the way we

(16:04):
teach our kids is wrong, and this kind of proves it.

Speaker 2 (16:07):
Yeah, So I think that was another reason why it
had such a huge effect on like the larger culture,
because people have been suspecting for a while that putting
kids into groups by, you know, reading ability was a
bad idea, that it was doing a disservice to them. Now
there's this paper that showed demonstrably that that was absolutely true,

(16:28):
that was a terrible idea. And yeah, like you said,
there were headlines all over the place. People were discussing it,
and Rosenthal, he was basically the ringleader, the ringmaster
of all this stuff. He was very much on board
with not pointing out, oh, actually, you guys are missing
a lot of nuance. It's not quite that cut and dry.

(16:49):
He was like, yep, absolutely, like exactly what you're saying,
this black and white thing, where like, yes, this is
absolute proof. I'm totally going to go along with that.
And he got criticized just for that alone, just not
intervening in how his science and findings was being communicated

(17:09):
to the larger public and in fact kind of playing
a role in making that happen, just kind of capitalizing
on the general population's incomprehension of statistical analysis. We
don't know what that is or how to do it,
so we rely on scientists, or the press, to explain it to us
in terms we can understand, and if

(17:31):
the scientist, as we've covered many times, Chuck, isn't forthright
or honest, that stuff can get turned into all sorts
of misunderstandings or overblown findings.

Speaker 1 (17:44):
Yeah.

Speaker 3 (17:44):
And one of the big things, too, that we should
point out was the fact, as we told you
before the break, that these tests really only kind
of showed this effect for these younger kids in first
and second grade, not for, you know, the
kids in the older grades, and in fact, they sometimes showed
a negative correlation in some of the older grades.

(18:05):
They didn't even put that in the book at all,
so they're already sort of cherry picking stuff. And the
book was the thing that really blew up more so
than the paper. Yeah, and so of course the press
isn't covering that aspect of it, probably because they didn't
even know about it.

Speaker 2 (18:19):
Yeah. So there's two tracks. The popular press like the
New York Times or Today Show or whatever, they're covering
it in glowing terms like it's absolute proof of what
everybody always suspected. The other track was a pretty wide
river of criticism coming out of the halls of academia
from other psychologists of different stripes who just were teeing

(18:40):
off on this paper. And even though it didn't necessarily
capture the attention of the larger public, in academia there
was a thorough debate that started right after the paper
came out and went on for a good decade, and
actually turned out to be really healthy, not just for
Rosenthal and Jacobson's paper, but for, I guess,

(19:03):
statistical analysis as a whole, I think, because Rosenthal
ended up inadvertently creating the meta-analysis study. But before
we get to that, Chuck, let's talk about some of
the stuff that was wrong with the paper, statistically speaking.

Speaker 3 (19:19):
So yeah, I mean, the TOGA test we should talk
about right out of the gate, because this test was
not supposed to be used on first graders or with
kids with an IQ below sixty, and that alone probably
accounts for, or at least accounts for some of, the
fact that these low results were coming in on these

(19:41):
kids in the younger grades, and then they would obviously
gain much more ground, because they've then aged into the
test by the time they're taking it when they're really
supposed to.

Speaker 2 (19:51):
Exactly. And there was something that Rosenthal
responded to there. He even said, hey, even if
that test doesn't apply to these younger kids, the
fact that the same kid is taking the same test
over time, it really renders that moot. It's still
going to show accurate results.

Speaker 3 (20:11):
I see what he's saying there, but it's just moot
to me because it wasn't even supposed to be given
to a kid that.

Speaker 2 (20:17):
Young, right. But also, he's totally full of beans right there.
The initial findings, from that first test, produced
such totally skewed results that, as those kids aged into
the test and started getting normal results, and you compared
those later results to the first results, you would

(20:38):
see all sorts of crazy gains that were completely incorrect,
like they just weren't true. That was a big part
of it. That TOGA test was not set up for
kids with IQs under sixty, which is a big problem
because first graders in the United States on average had
IQs of fifty eight, and so you can see it
reflected in some of those results, like some kid had

(20:59):
an IQ of eighteen. That's almost impossible and certainly they
wouldn't be like reading at that point. Same with the kid,
I think with thirty. And then one of those kids
later went from like thirty to like one hundred, which
is coming close to maybe even gifted level. Like the
results were just terrible and even worse than that. In

(21:21):
the book, they didn't include any of the raw data
either, which, as academics, they should have.

Speaker 3 (21:27):
Yeah, so that means, you know, you can't go out
as another researcher and sort of try and replicate that.
It just kind of occurred to me what he was
sort of saying with that initial defense was like, you know,
you have a broken scale that doesn't say what your
true weight is, but you can still you know, it's
still accurate because you can see how much weight you

(21:47):
gain or lose by using that same scale, right, And
you're like, yeah, but you still don't know how much
somebody weighs.

Speaker 2 (21:53):
Yes, that's true. But then also, the thing that makes
him dishonest in that response is, imagine it's
broken the first time you weigh yourself, and then you
fix it and you weigh yourself after that. Those
later readings are the right results, but you're comparing them to
that first broken result. It's completely useless.
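[Editor's aside: Josh's broken-scale point can be sketched in a few lines of code. This is a toy illustration with made-up numbers, not data from the study: if a pretest floors out below some cutoff, comparing valid later scores against that floored baseline manufactures big "gains" for kids whose ability never changed.]

```python
import random

FLOOR = 60  # hypothetical cutoff below which the test can't measure reliably

def pretest(true_iq):
    # Broken first measurement: kids below the test's floor get junk
    # readings far under their real ability (like that IQ-18 result).
    if true_iq < FLOOR:
        return random.randint(15, 40)
    return true_iq

def posttest(true_iq):
    # The retest: the child has "aged into" the test, so it reads correctly.
    return true_iq

# Four children whose real ability never changes at all
true_iqs = [55, 58, 62, 70]
gains = [posttest(iq) - pretest(iq) for iq in true_iqs]
print(gains)  # the two under-floor kids show large spurious gains
```

[The two kids above the floor show a gain of zero, which is correct; the two below it appear to gain fifteen or more points without changing at all.]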

Speaker 1 (22:12):
Why does anyone even have scales anyway?

Speaker 2 (22:14):
I don't know, doesn't make any sense. Why, for God's sakes,
do they keep them at hotels, like at the beach?
I don't know.

Speaker 1 (22:23):
Man.

Speaker 2 (22:23):
There was one in one of my rooms when we
were on tour, and this one was in San Francisco.
I'm like, I just glared at it a couple of times,
and it eased itself back under the vanity.

Speaker 3 (22:35):
I mean, hey, I weigh myself to keep track of things,
but for God's sakes, don't weigh yourself on vacation.

Speaker 4 (22:40):
No.

Speaker 2 (22:40):
I was gonna say I do at home, but not
on vacation, not even on tour.

Speaker 3 (22:45):
So anyway, scale diversion aside. Like you said, he didn't
include raw data. That means you can't come along afterward
and try and replicate it, so that's a big problem.

Speaker 1 (22:52):
Yeah.

Speaker 3 (22:54):
Other people chimed in and said things like, I think
it was Richard Snow, a psychologist, who said, also, you know, apparently
teachers couldn't remember, and a lot of them reported that
they didn't really even glance at this list of
who was a bloomer or not a bloomer, which
is very strange. It sounds like some of these teachers

(23:15):
didn't even fully realize or care much that they had
an experiment going on.

Speaker 2 (23:20):
Yeah. I thought that was kind of weird too, because
I didn't get the impression that these were anything but
you know, normal dedicated teachers. But I don't know, maybe
they suspected that this was made up, or that this
was Maybe they were like, there's no test that can
really pick that up, so I'm not even going to
pay attention to that kind of thing.

Speaker 1 (23:37):
Or they were busy teaching that.

Speaker 2 (23:39):
That could be it too, for sure. Another one was
that the teachers themselves administered the tests, the initial tests,
so they weren't administered by professional child psychologists. They were
administered by teachers who already had an impression of the kids
they were administering the tests to, because it was the
previous year's teacher. So if you were, say, in second grade,

(24:02):
your first grade teacher was the one who administered your test.
I didn't get that, but that was another criticism from academia.

Speaker 1 (24:08):
Yeah.

Speaker 3 (24:09):
Absolutely. So people are debating this, they're starting to sort
of argue positions, you know, for and against, over time. There
was a twenty eighteen overview of a lot of these
debates from Thomas L. Good, Natasha Sterzinger,
and Alyson Lavigne, and hats off to Livia for, like, getting

(24:29):
all these names. There's a lot of people that
did a lot of follow-up stuff, so nice job. Yeah,
but they noted that the individual students'
results varied a lot on the different post-tests, saying,
you know, basically, we just don't have a lot of
evidence that these IQs really improved at all.

Speaker 2 (24:51):
Yes, but here's the thing. This is what's astounding about it.
I think it was Richard Snow, in his book review
of it, who wrote that it's possible that the Pygmalion
in the Classroom study actually did turn up
evidence of, you know, this idea that we've all considered,

(25:12):
you know, for a long time as possible, that teachers'
expectations affect student performance. Right. But if it did, it
did it by accident, because, he was just saying, like,
the study was so poorly executed. And it seems like
Snow was correct in that guess, that somehow, some
way, this study did show this is a real thing.

(25:35):
And over time, from this ten-year-long debate over
the results and the methodology and all that stuff, and
hats off to Rosenthal: he didn't just, like, throw
the study out and run off with
a big bag of money with a dollar sign on it.
He stood there and he answered his critics.
He engaged in the debate for a good decade, and
over the course of that decade, more studies with better

(25:57):
methodology and better execution were created that studied the same effect,
this Pygmalion effect, and they found, nope, he was right.
Whether it was a bad study or not,
it produced some sort of correct results, and we do
realize now the Pygmalion effect is real

(26:18):
to some degree.

Speaker 3 (26:20):
Yeah, and depending on who you were, you could come
at it from a different angle, and each of you
have a point because, as Livia points out, like just
sort of politically, as far as being hard on teachers
or not, it would play out in different ways. A
writer for the San Francisco Chronicle said like, see, here
you go, these low expectations on these children of lesser income.

(26:45):
That's what's causing them to fall behind and maybe even
drop out of school later on. Whereas Albert Shanker,
who was with the United Federation of Teachers, said, no,
it's not the teacher's fault. It's, you know, it's poverty itself,
and we have too many kids in these classes and
we don't have the right materials.

Speaker 2 (27:04):
Yeah, and regardless of where you fall on it, there
was still a big push to do away with like
advanced placement classes or gifted tracks or even remedial stuff.
They were like, just because you think a kid is remedial,
do not put them in a remedial class. Put them
in like a mixed aptitude class, and they'll do way
better than if you put them in a remedial class.

(27:26):
That was a big deal. I guess it wasn't
successful, because there were still plenty of AP classes and
snotty little AP students in the nineties when I was
in high school. They just loved to shove it in
your face. Oh, I'm in AP history.

Speaker 3 (27:42):
Did you not take any AP classes? No, I took
a couple. I took AP history and English. But looking back, like,
I don't know, I can definitely see like they were
great classes, and I felt like the teachers were better,
But it also may have been my own bias because
it was AP and also like a student that you know,

(28:06):
they didn't think should have tested in there, if they
had been thrown in there, maybe they would have risen
to that level. So it's, you know, with adult eyes,
I now look back at kind of how messed up
all that stuff was.

Speaker 2 (28:17):
Well, if it was better teaching from better teachers with
better material, then the argument for people who are like
against that would say, then all classes should be like that,
Every history class should be taught like that. Don't just
make it for the ones who you think are gifted
or whatever. So that was, I think that still

(28:37):
probably is a big deal. I'm not particularly up on
the state of education today or early childhood education. Yeah,
so I don't know if they're still putting kids in
different, separate classes or not. But if not, I'm sure
there's still people arguing against it.

Speaker 3 (28:54):
Yeah, I mean, I'm not sure yet as far as
upper grades. You know, all my experience right now is
with Ruby and the third grade at her little hippie
dippy private school, where of course everybody is you know,
treated equally and given the same opportunities.

Speaker 1 (29:08):
Everyone wins or loses.

Speaker 3 (29:12):
So one kind of cool thing that came out of
this was, because it was so famous,
and because there were so many people sort of criticizing it,
so many people defending it, and so many people doing
other studies, because, as we'll see, this soon leapt into
the private sector with like business, into the military, like
people started sort of applying this kind of thing to

(29:33):
all kinds of, you know, stuff outside the classroom. It
led to Rosenthal saying, well, hey, now I can look
at, like, all these studies together. And, like, was that
the literal birth of meta-analysis?

Speaker 1 (29:46):
Was he one of the first.

Speaker 2 (29:47):
That's how I took it, yeah. In nineteen seventy eight,
he got together with a colleague, Donald Rubin, who was
the head of Harvard's statistics, or statistical analysis, department. Uh huh.
So this guy's like as good as it gets with statistics.
And they got together three hundred and forty five studies
that looked at expectancy effects and found that there was

(30:12):
like there was a pronounced effect detected
if you just looked at the high-quality studies on

Speaker 1 (30:19):
it. Right. Okay, so they're looking at this stuff.
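[Editor's aside: the pooling move Rosenthal and Rubin made can be sketched as a generic fixed-effect, inverse-variance meta-analysis. The effect sizes and standard errors below are made up for illustration; this is not their actual nineteen seventy eight computation.]

```python
# Each (effect_size, standard_error) pair stands in for one study.
# Hypothetical numbers for illustration only.
studies = [
    (0.30, 0.10),  # a large, precise study
    (0.15, 0.20),
    (0.45, 0.25),  # a small, noisy study
]

# Fixed-effect meta-analysis: weight each study by 1 / SE^2,
# so precise studies count for more in the pooled estimate.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(round(pooled, 3), round(pooled_se, 3))
```

[The pooled estimate lands closest to the most precise study, and its standard error is smaller than any single study's, which is the basic reason a meta-analysis can detect an effect that individual noisy studies miss.]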

Speaker 3 (30:24):
Like you pointed out earlier, people couldn't replicate it
because there wasn't raw data, and there were psychologists and neuroscience
researchers that were pointing this stuff out like, hey, we
can't even replicate this thing. There were also people pointing
out that the people that are criticizing it and the
people that are defending it, like sometimes they're not even
looking at the same data, right.

Speaker 2 (30:46):
Yeah. So Rosenthal was like, hey, if we're looking
at actual student progress based on teacher expectations and you're
just looking at gains in IQ testing, yeah, that's not
looking at the whole picture. Like if you also take
into account scores from year-end achievement tests or
teacher assessments on improvement or how many books a kid

(31:11):
can walk around with on their head without spilling them
over because they have really good posture. If you take
all this stuff into account, you get a much clearer
picture of whether the student actually did improve or not
thanks to teacher expectation.

Speaker 3 (31:24):
Yeah, exactly so. And you know, I mentioned they did
it outside the classroom. I think they found the biggest
gains in military settings. Yeah, which, yeah, the idea that
you know, you probably just have more sway as a
drill sergeant than you do as a teacher.

Speaker 1 (31:39):
Maybe.

Speaker 2 (31:39):
Yeah, for sure. The
more influence and control you have over somebody,
the more effect your expectations can have on them.

Speaker 1 (31:49):
Yeah.

Speaker 3 (31:50):
But they definitely saw this play out, so
it's not like this is an episode on how
this isn't a thing, because it is a thing, whether
they found it by accident or not. Like there
was one example that Livia found where, uh, there were
employees putting together medical kits and they brought in this
group of new hires and told the managers like, hey,

(32:11):
these people, they're... they're maze happy.

Speaker 1 (32:14):
What was it, maze bright?

Speaker 3 (32:17):
Yeah, they're maze bright, like you ought to see them
put together these kits, like, you're going to do great.
They got a lot of potential here, and that that
group ended up breaking records for production levels.

Speaker 2 (32:27):
Right. And so if you're in management science, you're teaching
everybody this, like just just go out and lie to
your managers, and your employees will start like actually producing
way better than you would think for for no reason
other than their manager has higher expectations thinks they're better
at their job than other people. And that is a huge,

(32:50):
huge part of all of this. Yeah, there doesn't seem
to be the same effect if you are forthright and
honest with the teacher, because the whole thing seems to
be rooted in the idea that the teacher or the
manager has to genuinely believe that this kid or this
student or this employee is above average and expect above

(33:14):
average results from them.

Speaker 3 (33:16):
Yeah, I say we take another break and we talk.
We dive a little bit more into that after this.

Speaker 2 (33:20):
Eh, sounds good, man. So, Chuck. One thing

(33:51):
that I think probably everybody listening to this episode so
far has come across is a question: okay, if
teachers' expectations actually influence student performance, how? What are teachers
doing that can have that effect? And that's been
a big thread of this study as well.

Speaker 3 (34:09):
Yeah, because, you know, to implement
this is kind of the important thing. Sure,
it's not just to sit back and say, well, we
know all this stuff now, because hopefully the goal is
to help kids, you know, learn better. So they did
put together some broad categories over the years of how
if you're a teacher, you might be transmitting positive expectations

(34:30):
you might not be and not even know it by
saying certain things. And they put together a four point
thing which after I read it, I was like, oh
my god, why weren't they already doing all this?

Speaker 2 (34:40):
I know it's kind of sad, but.

Speaker 3 (34:42):
Here it is: climate, that is, giving a warm emotional
environment; input, giving these students more and tougher assignments;
output, allowing the students more opportunity to engage with that material;
and then the fourth one is giving more detailed feedback.

Speaker 2 (34:58):
Right, so teaching, essentially, ideally. Well yeah, exactly. So what
they found was that, given the idea that some
of their students were growth spurters or were going to
really, you know, make some crazy good moves this school year,
teachers did different stuff with that information. Like, they didn't

(35:21):
all just follow what Rosenthal would have expected, which is, they,
you know, create these high expectations and a warm learning environment
for those growth spurters or bloomers. Instead, some of
them were like, Okay, well then that kid's good. Let
me go focus my attention on.

Speaker 1 (35:38):
The lower-achieving students.

Speaker 2 (35:40):
Yeah, the maze dull students.

Speaker 1 (35:42):
Yeah, you know.

Speaker 2 (35:43):
And so what was interesting about that too, because that
actually is kind of sensible. It's a sensible strategy if
you have a finite amount of time and attention to
give to all your students. They found that in some cases.
There was a psychologist, Rona Weinstein, who found that when
that was done, in some cases, the low performing students

(36:05):
who got more attention actually still did worse than the
higher performing students. Yeah, and she hypothesized that that was
because those kids were basically being patronized, and even though
they're six, they still understand that on some innate level,
and so they were still getting signals that the
expectations for them were low.

Speaker 3 (36:26):
Yeah, or maybe they were already separated out, which kind
of goes to that whole idea of, like, putting kids
in a group that's just labeled. And, you know, a
lot of times they would. I remember at my school,
even where my father was principal, the you know, the
the troubled Kids program, and this wasn't necessarily academically, but

(36:47):
the behaviorally troubled kids were all put in a special
group that had a label. I can't remember. It was
an acronym that basically indicated kind of how great they were,
which is, you know, a good thing. Like, you
definitely shouldn't, like, call them, like, these are
the bad kids or whatever. So I think they

(37:07):
would put labels on them that would hopefully give them
an aspirational expectation or something. Or they did.
In the seventies, my dad's whole thing was outdoor programs.
He was the first person in the i think in
the state, definitely in the county that started like all
these camping programs and he really believed that getting kids
out in nature if they had behavioral problems could really

(37:28):
help. You could see gains there and stuff like that. So yeah, yeah,
it was pretty cool. Great, great principal. Yeah, and full stop.

Speaker 2 (37:39):
So your point is that if
you separate kids, or you even talk
about certain kids in certain ways, if you even have
them separated mentally, it's going to be transmitted or telegraphed
to both groups of students as a whole.

Speaker 3 (37:56):
Sure, And they found that even if they weren't separated,
just sort of the language that teachers would use in
the class would divide them, the way they talk to
certain kids and other kids.

Speaker 2 (38:07):
Right. Yeah, that's what I was saying, okay, which is,
you know, pretty interesting. But again, all of it
comes down to this, and I shouldn't say again because
I haven't made this point yet: this is all
predicated on the fact that teachers are human beings with biases,
with prejudices, with just thoughts that they can't, you know,

(38:29):
avoid; unconscious ways that you treat or act toward certain
kids where you favor some over others. And then there
was something that stuck out to me because there was
a researcher from New Zealand named Christine Rubie-Davies. I
bet when she talks it sounds awesome. But she has
set up a project called the Teacher Expectation Project where

(38:50):
she's like Hey, remember how you guys said a minute
ago that for this to be effective, you have to
lie to the teachers. You have to mislead them so
that they genuinely believe that the students are gifted. I
say nuts to that. I'm going to figure out a
way to teach teachers to be high expectance or high

(39:10):
expectancy teachers for everybody, right, so that
they have those effects on everybody without them, you know,
being duped. But one of the things she came up
with that to me was like, yes, I think that's
seventy percent of it right there. Teachers don't know all
of their students equally well in the classroom. And if
you've ever been one of the students whose

(39:33):
teacher didn't really know you very well and clearly knew
other students better.

Speaker 1 (39:38):
Yep.

Speaker 2 (39:38):
That is an isolating feeling, and it's
not as easy to learn as it is when you're
one of the students that the teacher knows.
And so that's one of the things that Christine
Rubie-Davies teaches: like, know all of your kids equally well.
It's very important.

Speaker 1 (39:57):
Yeah, for sure.

Speaker 3 (39:58):
I mean, I was well known by all my teachers,
not that I consciously made a point to, but I
was the class clown, and I was always involved in
trying to crack jokes and being funny, and I may
have been disruptive, but the teachers also loved me because
it wasn't usually like a like super negative disruption. I
would just see a good opportunity for a joke and

(40:20):
run with it.

Speaker 2 (40:21):
But as you know, well, plus your dad would have
fired them if they gave you any back

Speaker 1 (40:25):
talk. Right, yeah, right, well into high school too, you know.

Speaker 3 (40:29):
But long story short, I was well liked
by teachers, and so they paid me more attention. Livia
also points out something really important about grouping kids, which is,
if you just throw kids in a group of, like,
you know, a maze-dull group.

Speaker 1 (40:47):
Some of these kids.

Speaker 3 (40:48):
May have dyslexia, some may have ADHD, some may have
insecure housing and family issues and be stressed. Some may
have limited English fluency. So you're throwing all these different
issues in as one group, and of course that's going
to be an issue.

Speaker 2 (41:06):
Yeah, yeah, so yeah, that's why they say use mixed-group,
mixed-ability groups. That's what they're usually teaching in the Teacher
Expectation Project. Another thing that was touched on is creating
a caring, non-threatening environment, where it's just
a warm environment for all students, and you use respectful language.
You can't be like, gosh, you're so dumb, you dumb dumb.

(41:29):
You shouldn't say that to students, right. Yeah, And this
is another one too, working with students to set their
own goals, which a lot of teachers would be like,
you can't actually do that. But apparently Rubie-Davies' research
has shown, or some research out there that Rubie-Davies
cites has shown, if you allow students to set their
own learning goals, they will actually shoot for something that's

(41:52):
challenging but doable. They probably aren't going to be like, well,
I'm just gonna learn to draw Huckleberry Hound this year.
That's my learning goal, you know, like they're going to
do something a little more challenging than that, and they'll
learn along the way, and they will have a sense
of like agency and a stake in their learning, Like

(42:13):
they'll take it that much more seriously. And they'll know,
if you plot and chart their learning through learning goals
and allow them to track it themselves, they
will know when they've learned, rather than always having
to look to the teacher to be like, yes, you
just learned something. Way to go.

Speaker 3 (42:31):
Yeah, what about the horse? If I remember correctly, you
can draw a heck of a horse?

Speaker 2 (42:36):
I used to. I lost it, as I proved on Instagram.

Speaker 1 (42:41):
No, everyone should go check that out. I thought it
was a great drawing of a horse.

Speaker 2 (42:45):
Thanks.

Speaker 3 (42:46):
Another thing that they said, as far as the high
expectations teaching from Christine Rubie-Davies goes, is praising effort
rather than accuracy. Very big deal and working equally with
all students. And I'm not going to name my daughter's
school for obvious reasons, but like they're doing it right
and it's just great to see that happening. So just

(43:09):
big props to her teachers and everyone at her school.
And it is not just her school. It's happening more
than when we were kids at more and more schools,
but it's still not as much as it should in
the same breath, you know.

Speaker 2 (43:23):
Yeah. Two more just related effects that have to
do with the Pygmalion effect: the Golem effect, which is
the opposite, if you have low expectations it leads to
lower performance, which makes sense too; and the Galatea effect,
named after Pygmalion's statue, that what we expect for ourselves

(43:43):
impacts our performance, mostly because it mediates how the people
in authority, a teacher or a manager or something, sees us.
So the way that they see us impacts how we
see ourselves, which impacts how we perform, which impacts how
the manager sees us. And it's just like ouroboros.
That's right, pretty interesting stuff. Man, good pick Chuck. I

(44:06):
think it's so great. You just came up with this
all by yourself.

Speaker 1 (44:09):
Oh man, I hope I did.

Speaker 2 (44:11):
I hope you did too. Well. Since we both hope
that Chuck came up with this by himself, it's time,
of course for a listener mail.

Speaker 3 (44:19):
Hey guys, I live in Rhode Island, where I run
Charter Books, an independent bookstore opened in the spring
of twenty twenty one.

Speaker 2 (44:25):
Nice.

Speaker 3 (44:26):
We report to the New York Times Bestseller List. Nice,
and I can confirm that you guys really nailed just
about everything about it.

Speaker 1 (44:32):
And I thought you might like a few more tidbits.

Speaker 2 (44:34):
Yes, please.

Speaker 3 (44:35):
Every week we export a CSV document from our bookstore
point-of-sale software, upload it to the bestseller list portal,
and as mighty as they are, it's still amusing to
see that it basically just comes down to us emailing
them a spreadsheet, along with all the other booksellers. Of course,
if we haven't done it by eleven am on Monday,

(44:56):
they send a gentle reminder if we inadvertently miss a week,
because they ask that you report all fifty-two weeks, they
send a message about how much they value our input
and how disappointed they are that we forgot.

Speaker 1 (45:07):
Oh wow, a little passive aggressive.

Speaker 3 (45:11):
And then every week they also send an email asking
about any bulk orders, which you explained very well in
the episode. You are correct in implying how powerful it
can be.

Speaker 1 (45:20):
The list.

Speaker 3 (45:20):
That is, authors, publishers, publicists, and other entities in the
industry frequently ask if we report to the Times. And
years ago, when I was with another bookstore, we received
a weird order for twenty copies of a random ya
fantasy book. Turned out to be a bungled effort by
an obscure publisher to do some book laundering, as Chuck
would say, so hours after we took the order, we

(45:43):
received a sternly worded message from The New York Times
that they wanted documentation of all orders, basically asking.

Speaker 1 (45:50):
For our receipts.

Speaker 3 (45:52):
None of this is earth-shattering to you guys, probably,
but it was fun to hear you talk about my
day to day work.

Speaker 1 (45:56):
That is Steve from Charter Books.

Speaker 3 (45:58):
So hey, if you're near Charter Books in Rhode Island,
support your indie bookstores.

Speaker 2 (46:03):
Yeah, no, matter where you live, support your indie bookstore
friends for sure.

Speaker 3 (46:08):
That was Steve, right? Yeah, and he sent a picture,
they had our book on display.

Speaker 2 (46:12):
Awesome, Thanks Steve. We love it when people round out
information that we've talked about. And if you want to
be like Steve and do something like that, you can
do it via email. Send it off to stuff podcast
at iHeartRadio dot com. Stuff you Should Know is a
production of iHeartRadio.

Speaker 1 (46:31):
For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows,
