
August 29, 2025 · 53 mins

Episode 234 

Michelle Hosp joins us to break down the different types of literacy assessments within an MTSS framework in the most approachable way.

We talk:

  • universal screeners
  • diagnostics
  • progress monitoring
  • formative assessments

Most importantly, we talk about when and why to use each one. Michelle helps us shift the question from “Which test should I give?” to “What do I need to know to help my students grow?” We also dig into the power of curriculum-based measures (CBM), what makes assessment data meaningful, and how schools can align their resources to actually make a difference.

If you're feeling overwhelmed by data or unsure how to use it effectively, this episode will help you think more clearly about assessments and walk away empowered to use your data to help all your students become readers. 

Resources


We answer your questions about teaching reading in The Literacy 50-A Q&A Handbook for Teachers: Real-World Answers to Questions About Reading That Keep You Up at Night.

Grab free resources and episode alerts! Sign up for our email list at literacypodcast.com.

Join our community on Facebook, and follow us on Instagram, Facebook, & Twitter.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Lori (00:00):
We know assessments can feel overwhelming. There are screeners, diagnostics, progress monitoring. What is the difference, and when should we use each one?

Melissa (00:11):
In this episode, we're joined by Michelle Hosp, author of The ABCs of CBM, who helps us clear up the confusion and make sense of how the right assessments can drive strong literacy instruction.

Lori (00:26):
Hi teacher friends.
I'm Lori and I'm Melissa.
We are two educators who want the best for all kids, and we know you do too.

Melissa (00:36):
We worked together in Baltimore when the district adopted a new literacy curriculum.

Lori (00:41):
We realized there was so much more to learn about how to
teach reading and writing.

Melissa (00:46):
Lori and I can't wait to keep learning with you today.

Lori (00:52):
Hi, Michelle, welcome to the podcast.
We're so glad you're here.

Michelle Hosp (00:55):
Oh, it's so great to be here.
Lori and Melissa, thanks for having me.

Melissa (00:59):
Yeah, and we cannot wait for this episode to honestly help us make some sense of some assessments that are out there, because there are just so many different types of assessments just for literacy. And if you're thinking about, you know, an MTSS framework, or just how to support all your students who are learning to read, assessments are at the core.

(01:19):
You have to know what they know, right, to know what to do. So, you know, to sort through all those different types of literacy assessments, we can't wait for you to talk about the differences, the purposes of each, what they should be used for, what they shouldn't be used for. We are so grateful that you're here to talk about all of this today.

Michelle Hosp (01:40):
Oh, let's do it.
I'm excited too.
Excellent.

Melissa (01:43):
But before we dig into all those things I just said, we wanted to, like, zoom out for a second, because when we talked to you on the pre-call, you shared with us that you like to actually start with helpful questions. So I would say, listeners, get your pen and paper out for

(02:06):
these questions. But can you tell us what these big questions are, like, within an MTSS system? What are these big questions that would drive assessments?

Michelle Hosp (02:16):
Yeah, great question, Melissa. So I do like to pull it out, and I'm going to repeat these questions as we continue to talk, right? So if you miss it the first time, you'll catch it the second time. So when I think of MTSS and I think of the questions, the first place I go is, it's all about resources, right? Like, it

(02:38):
literally is about what assessments schools have, what curriculum and intervention materials, how they're supporting teacher knowledge, right? Like, all of those things are the resources that they have, and so schools should really know what those are, right? They should be able to list off: what are our assessments? What are we using for core curriculum?

(02:59):
What are our interventions? How are we supporting teacher knowledge? Right, and then we can really start thinking about aligning those resources for specific questions. So the biggest question is the one I'm going to start with, because it takes up the majority of our resources, right? It's like people, things, stuff. And that is: how effective is tier

(03:22):
one?
It sounds simple, right, but it depends who's asking the question. So I want to pull this apart a little bit: administrator hat, right, versus teacher hat. So from an administrator's hat, if I think about how effective is tier one, right? What I need to know is, I need data about all of my kids

(03:45):
at every single grade, across specific times of the year. So to do that, I think I could use a universal screener, right? I could also use an outcome assessment, right? So, like, a statewide assessment that does tell me, like, how effective is tier one. So there is one piece. But then, if you drill down as an

(04:06):
administrator, what I hope people are asking is, for those students who actually started out the year on track, how many of those students ended up on track? So here we are at the beginning of the year. Another way to ask how effective tier one is, is: did our students grow? Did our students who were okay continue to show growth?

(04:27):
Did our students also who weren't on track, did they bump up? And so in order to do that, though, that's different data, because now we need multiple data points across the year. So out goes the statewide assessment, because we're not going to use it for that, but in comes universal screening,

(04:48):
right? Because most people give that fall, winter, spring. So I can look at that and say how effective is tier one, right? Really, from a district level. And so it's also a proxy, right, of how good is our core, how good is teacher
(05:09):
knowledge, right, implementing this? So so many things just come from that one question, from an administrator's view, of how effective is tier one.
But teachers also want to know that, but it's different. Because I'm responsible for these kiddos in front of me, I want to know which of my students are on track and which need additional support.

(05:30):
I also might want to know which of my students are advanced, right. We often forget about, well, what about those kids that are, like, already blowing us away with their skills? So that's one thing I want to know. I want to know who's on track, who needs support. I don't like to talk about it as who's at risk, because it

(05:52):
sounds so horrible. You're at risk, right? I really want to be at risk. What does that mean? I want to know: I need support to do these things so I can get better, right? Like, that's such a different view of how we think about our kids.
So I want to know who's on track, who needs support, who's
(06:17):
killing it, right, who's, like, really hitting the ball out of the ballpark. And then I want to know, again, similar question: for those students who are on track, are they staying on track, right? So I want to look at multiple points. Are they the students who started off strong? Are they continuing to grow and stay strong?

(06:39):
Again, for those students who need support, if what I'm doing in tier one is helping, are they growing? So again, that's, you know, the data I need is a universal screener, because I need multiple data points, and I need it to be really robust and reliable and valid and all of those things. I know we're going to dig into that in a little bit.

(07:00):
But the other thing is that that also is a proxy, you know, for my core curriculum, right? So if things are not going well, and if that's also true at the district level, then it's not me, it's not my teaching, you know, it's probably not even my knowledge; it's that we may not have the best materials.

(07:20):
So that's a different problem to solve, right? And again, that's all at tier one. From that I drill down into, you know, tier two and tier three, right? So it's like, okay. So first, when you think about resources, the heaviest amount is at tier one, right? So we really got to have good data.

(07:42):
But then, once we get that figured out, then I'm looking at tiers two and three, and there are administrator questions, but this is really when these are the kids in my classroom and they need help. So I really want to know, you know, is this student improving?

(08:02):
Is this student that I'm providing more support to, are they improving? Another way to think about that is: is this intervention effective, right? So those are the questions that I think about for teachers at tier two and tier three, right? Is the student improving? Are they getting better? Is the intervention effective?

(08:22):
So the data that helps me answer that is progress monitoring data, hands down, right? So I should be looking at things like a general outcome measure. What I mean by that is, I'm just going to refer to, like, a curriculum-based measurement, a CBM measure, like passage oral reading, right? That's the one.

(08:43):
Actually, that was the original CBM measure and the one most widely used. So it's cool because it measures so many skills; that's why it's a general outcome measure. But I also might want to know, if I'm teaching a very specific skill, like if I'm focusing on letter sounds, right, or phonics skills, then I also want to look at: are they making progress in

(09:06):
that specific skill?
So I really think, to answer that question, is the intervention working? Is it effective? Is the student improving? We probably want multiple pieces of data, progress monitoring data. More in general, is the student getting better in reading overall? Right, that's our general outcome measure. And is the student improving on the specific skill I'm teaching

(09:28):
, whether it's phonics, whether it's letter sounds, so those things.
So the thing about that is, people are like, oh well, I can just guess. I just know, right? I'm doing this intervention. I just know that my kids are getting better. Here's the thing.

(09:48):
It's a resource issue, right? These teachers are spending so much time helping these kiddos that need it the most, and what I want to know is, is that time well spent? So we really want to make sure that we're collecting that data frequently and that it's really sensitive to growth. So those are the two big pain points that we really want to

(10:12):
attach to that data, because if we can't collect it frequently and it's not sensitive to student growth, then we really can't answer those questions. And those questions are pretty darn important because, again, they go back to the resources. Am I spending my time wisely helping this student? Notice, we're not talking about the student, we're talking about teacher behavior. Right, it's all about what the teacher can do, not whether the

(10:36):
student, because we know we can help students. So it's not that the student can't; it's, what do I need to do? As their instructor, you know, the person who's caring for their learning, what do I need to do to impact them?

(10:57):
On the opposite side of that, the administrator, right, also needs to know about all of these materials, right? It's a resource issue. Out of all these materials that we have purchased, that we're using, that our teachers are, you know, masters of, which are leading to the greatest student improvement? That's a really important question, because those

(11:18):
materials are expensive and the training is expensive and the professional development's expensive. So the best way for an administrator to answer that question, you know, for tier two and tier three, like which of these interventions should we hold on to and which should we let go of, is to be able to look at all of that progress monitoring data and literally look at the rates of improvement, the ROI, and say, which of these show the greatest rates of

(11:42):
improvement?
I actually sat in a meeting once when I was in Hershey, Pennsylvania, and I was blown away at how they did this. It was at the end of the year, and around it sat all the interventionists, the school psychologists, the principal, right, all of the leaders, and they literally put up all of their progress monitoring data

(12:05):
and went through intervention by intervention, and they called it their spaghetti graphs, right, because all the kids were, like, a line on a graph, and they were looking at the rates of improvement.
And by the end of that meeting they said, these are the interventions that clearly work for our kids, these are the interventions that don't, and those we're going to get

(12:26):
rid of.
And so they literally, at the end of every year, kind of went through and cleaned out closets and said, we really want to double down on this, because we have the data that shows that that resource is worth our time and effort. So I thought that was a beautiful example of how to collectively look at that information.

Lori (12:41):
To support tier two and three. All right, Michelle, I've been taking notes while you're talking. I'm furiously taking notes, and I'm not quite sure that I have it right, so I would love for you to do a quick recap. Can you do, like, Michael Scott from The Office, like a "tell me like I'm five"?

Michelle Hosp (12:59):
Love The Office. Absolutely, Lori. So, because it is complex, right, and we're talking about questions, and it's like, wait, I can use this for that, and what? So here's the deal. Tier one: I need a universal screener. I need data that tells me how effective my core is, and that's at the administrator level. The teacher wants to know who's on track and who needs support.

(13:20):
That's like boiling it down to its essence. Underneath that, and we're going to come back to the support issue, but let's say we're in tier two and we're giving interventions. What I want to know, the question I'm looking to have answered, is: is it effective? Is the student improving?

(13:42):
That's the teacher lens. The administrator lens is, out of all of these assessments, or sorry, out of all of these interventions, when I look at the progress monitoring data, which ones are showing me greater rates of improvement? So we now have answered the question how effective is tier one, and who needs support, right? Universal screener.

(14:03):
Our progress monitoring for tiers two and three, our administrator lens: which of these gives us the best rate of improvement? And our teacher is going to be able to say, is this helping the kid? Is the kid getting better? So that's where we are. Oh, but there's more, right. So, from there, what we haven't touched on is,

(14:24):
well, what does the kid need support with? So lots of times we overuse, I would say, universal screeners and progress monitoring tools, because sometimes they're one and the same. We overuse that data to say, I know exactly what the kid needs. Well, maybe, but more likely than not, you might be.

(14:45):
I like to think about it as, you might be in the right zip code, but you're not yet on the right street, or really at the house that you're trying to get to. And so you could spend your time, you know, driving around the city, and the city's beautiful and that's nice, but you have got to get to the house, because the person in the house needs your help. So you want to find that street and you want to find the house.

(15:07):
So that is where we think about diagnostic information, right? Because the question in that case is, what skill does the student need support with? The universal screener has told us who needs support. The progress monitoring is going to tell us, are they getting better once we give them the support? But wait a minute, what do we support them with? So we need a diagnostic lens for that, and so we can look at

(15:36):
multiple pieces of data, and we call them diagnostic data. So it could be something like a diagnostic test, like a phonics assessment. The Core Phonics Screener, Star Phonics, gives you a lot of diagnostic information, right? Those are just two examples; there are others out there. Or you could look at subskill mastery, right.

(15:59):
So if you had a CBM measure that was assessing letter sounds, and most publishers have that, right, whether you're talking about DIBELS or aimsweb or easyCBM or FastBridge or something like that, right, I could go on. Most of them have letter sounds, so I could look at which sounds

(16:19):
the kid doesn't know, like, okay, then I can teach that. So that gives me more diagnostic-like information. One thing I would caution, though, is: are they assessing all of the sounds, right? Because sometimes some of those assessments get cut off because they're fluency-based, right? So they stop after 60 seconds.

(16:41):
So maybe the student didn't get to try all the sounds. Some of them have an extension where you can actually give the entire CBM sheet so that you can assess that. So there are ways to collect that information. But those are really important. Another thing to think about with diagnostics is that you want to

(17:01):
make sure that the student has multiple opportunities to respond, that it's not based on one attempt. So sometimes we think of those assessments, and it's like, oh well, you know, the student actually only saw the letter S once. I really want to test that multiple times. We also want the student to produce; we don't want them to identify, right? So that's the difference between a multiple-choice type

(17:23):
of test.
That doesn't make a good diagnostic. We actually want the student to perform. We want to hear them, we want to see the skill in action, because that is going to give us the best indication of whether they have it or they don't have it, right? So do we need to teach it, or do we not need to teach it? So that's the diagnostic. So that's really important. And sometimes, honestly, though,

(17:45):
based on the screening data and what we know about the kid and other formative assessments that we have, just by working with that kid, we know what intervention to use. So what I would say is, don't over-test kids. Not every kid needs a diagnostic, right? Like, people are like, oh, I'm going to give a diagnostic. It's like, well, what's the question?

(18:05):
If the question is, I don't know what to teach, yeah, go right to the diagnostic. But if you kind of have an idea, dig in, start teaching, and use that progress monitoring data to give you an idea whether you are ringing the right bell for that kid.

Lori (18:20):
I think where it gets tricky in my brain is that I want there to be, like, a clear "you do this first." So obviously you do the universal screener, right? That's number one. And then it's kind of a back and forth with what you know, and maybe, maybe getting to the town is enough.

Michelle Hosp (18:50):
Right, if you start working with a kid and using other information you have, you can, like, maybe get to the street. But diagnostics, our assessments, are really meant for those kids who are not responding, who you are still trying to figure out: oh, what does this kiddo need? Lots of times when we work with our kids, we know. Is it a

(19:11):
phonological awareness issue? Is it that they can't blend or segment sounds? Is it a phonics issue? Well, do they know all of their letter sounds? How are they with long vowels? How are they with short vowels? How are they with blends? Right, if you don't know, teach, teach, and then formative

(19:32):
assessment, right. So I don't think we talk enough about formative assessment, because this is what teachers do all day long. They're working with their kids, they are listening to them, they're hearing them, they are probing them in the most positive way possible. So we want to do that same thing with formative assessment.

(19:54):
We really want to, well, it could be an informal assessment, it could be an observation, it could be quizzing the kid, but we want to be able to capture the information, because we want to know, how do I change my teaching, right, to support this student? So we want to know, how much support do they need? So I'll give you an example of that.

(20:14):
So if the student is having difficulty with a particular letter, right, so let's say the letter is A. If I ask the student to give me the sound that A makes, I'm hoping I'll hear "ah," right? If I don't, then I'll do a full model, right, and I'll say, that letter A says "ah." Say it with me, and see if they

(20:40):
can model while I do it.
Right, if they can, then I'm going to pull back the support even more. And this is the other thing I would do with the A: I would give them a hand signal. I do an apple, like I'm holding an apple, and I say, A, apple, ah, right? So I give them a hand signal.

(21:01):
So if they can do that, then when they get to the A, if I look at them and I just give them the hand signal, can they get it from that? Have they learned that signal, right? So I'm not doing the talking, I'm just giving them a clue. And if they can go "ah," cool, then I don't have to do the full model, right? Like, I'm pulling back the support they need.

(21:22):
Or the next thing I would do is, when they get to that A, if they said "eh," you know, instead of "ah," if I just tap on it, is that enough to get them to give me the correct response? So the formative assessment is, like, how do I need to teach this? How much support does this kiddo need? And so that information really helps me a lot.

(21:47):
So now we've answered pretty much all of our questions, right? How effective is our core? Who needs support? Right. Which skills to teach, if I need that, right, the diagnostic. The progress monitoring: is the kid getting better? And, you know, if they're not, how do I change my instruction?

(22:07):
What support? What do I need to be doing differently in my teaching? That's the formative. That's really digging down and seeing, how does the student respond? What do they need in order to get this skill? So that covers all of those assessments and all of the questions. I'm sure there's more questions.

Lori (22:25):
No, that was really helpful, really helpful.
I think it's just messy, soit's helpful to hear it again.

Michelle Hosp (22:31):
It's so messy. And here's the thing: thinking is required. So people are like, well, but I have to give a universal screener, and then, you know, maybe they also have to give, like, a phonics screener too. And then they're like, is that enough? And I'm like, I don't know, what's your question? Right? So you have

(22:52):
to go back to, what is the question you're trying to answer. And then, so this is what I would say: do you already have the data? Do you have the data already to help you answer that question, or do you need to get additional data?

Melissa (23:10):
Michelle, I'm wondering if we can dig in a little more to each of the types of assessments, because, I mean, let's start with the universal screener that you just talked about, because I know a lot of people are talking about those. States are now requiring universal screeners. I will say my experience, you mentioned it, was with the iReady. That was our universal screener, and what I think we did wrong,

(23:32):
though, was, like, we tried to do too much with it, right? Like, we were trying to dig down to what the students needed, and, I mean, I was a middle school teacher, and they didn't even have a fluency assessment on there, or phonics or phonemic awareness. They didn't have any of that. So I'm like, I don't know that this is really showing me what

(23:53):
their true needs are. So can you just talk a little more about the screeners, and, like, what can we expect from them, and what should we not be looking to do with them?

Michelle Hosp (24:03):
Universal screeners, honestly, their main purpose is to tell us how effective tier one is. I mean, honestly, it's, you know, how good is our core instruction? And then, for those kids who are on track, are they staying on track, right? Then we can look at how much our kids are growing. Are the kids who are at risk

(24:25):
Are they getting better? Right, that screening data is, like you guys said, a snapshot; it's a one-point-in-time indication. The reason why it's really important, though, is because of the benchmarks that go with that data. I think this is where people get confused. They're like, well, what does that benchmark mean?

(24:45):
So what I'm sharing is that it only tells you whether the kid is on track or off track, right? And then, if you put all those kids together, it's like, well, how effective is our core? If we have a lot of kids on track, then our core is good. If we have a lot of kids off track, then our core is probably not good. But that benchmark score is an indication. It is a low threshold, and I think that's something people

(25:08):
don't understand. They think that that benchmark means yay, the kid is rocking it, and it's like, no, the kid is, like, squeaking by. So I think we need to look at that benchmark. That benchmark is set at a very low threshold. It doesn't mean that the kid is rocking it; it means they're highly likely to

(25:31):
show up bare-minimum proficient on some other larger assessment, right? So think about it: often those universal screeners predict to, like, a statewide assessment or a large-scale assessment. So what score does my kid need to get on my benchmark to get over the threshold on that large-scale assessment?

(25:53):
But the threshold is set low. It's set at, like, the 40th percentile.

Lori (25:58):
Oh, wow, I didn't know that.

Michelle Hosp (26:00):
Yeah, so the 40th percentile doesn't buy you much. That's not even college- and career-ready. So if we really wanted to say we want all of our kids to be college- and career-ready and successful, then that benchmark would be much higher. So I think we need to keep that in mind.

(26:21):
That benchmark, yes, it's a good indication of whether that student will be successful on another large-scale assessment at a later point in time. But that success is a low bar. So kids who fall below it are really in trouble, right? They need a lot of support.

(26:41):
But even those kids just slightly above it probably need support, right? But it goes back to the resources: I only have so many resources. So where am I going to put my energy and effort and time? So universal screening is interesting. It's hugely

(27:03):
helpful, it tells me lots of things. But I also think we need to understand that it's not a level of great success. Those benchmarks are set at a bare minimum.

Lori (27:18):
I didn't know that either. That's amazing. And I feel like, Michelle, the thing that you keep talking about is, what does this tell us overall, big picture? And I'm just thinking about some factors. You've mentioned a couple of them, such as, like, the quality of instruction and, like, curriculum materials. We haven't really gotten into, like, teacher knowledge, or just, like, basic support, right? Like,

(27:42):
we know that students might need support, people, you know, sometimes call out sick, there's not funding for, there's teachers who get sick. I mean, there's a million things that go into this that I think, when we think about quality of tier one, I just want to make sure that we're saying it's, like, the big picture of tier one, like the every, every, everything.

(28:04):
So is there anything you want to add to that?

Michelle Hosp (28:06):
It is, and I'm really glad you brought that up, Lori. It is every, every, everything, right? So it's not just the assessments we're using, it's not just the curriculum we have, but it is the teacher knowledge, right, and it is the support that teachers get. It also comes down to, you know, literally, like, what are our schedules for instruction?

(28:27):
Are they uninterrupted? You know, do we really maximize every instructional minute that we have for our kids? Is there enough practice built in, right? When you look at a lot of programs and curriculums and interventions, what they often don't have is sufficient

(28:47):
practice, and Anita Archer is so beautiful in reminding us that you have to have practice. You can't just learn a skill and plow through the curriculum and not allow that student sufficient, really intentional practice opportunities.
So there are lots of things, right? Like, we could have a

(29:07):
great curriculum, and maybe the curriculum is actually well built and it has a really great scope and sequence, and the teachers have the knowledge and they're teaching it with fidelity, right, as the curriculum should be taught, but it's still falling flat. So then I would say, okay, then are we giving kids enough opportunities to practice?

(29:28):
Are we doing differentiated groups appropriately, right? Differentiation is for tier one, right? Like, differentiation is not just for those kids in tiers two and three. Differentiation is, I'm teaching these core skills; which of these kids in my classroom need that additional support? Right, we talked about that, that's formative assessment. What else do they need to really, really grasp this

(29:52):
opportunity, this skill, right? What opportunities do I need to provide? And then, what additional practice can I provide for them? So, yeah, it doesn't tell you what the problem is, right; it tells you that there's a problem. And then you have to start thinking, well, what do we think it is? I mean, maybe one of the things I do here is that, you know,

(30:15):
with the whole science of reading, right, which is beautiful. And then I talk to districts and they're like, well, we're doing the science of reading. I'm like, well, tell me about that. And they'll say, well, we bought this new curriculum and it's, you know, research-based, and blah, blah, blah. And I'm like, well, great, how's it going? And they're like, oh well, yeah, it's great. And I'm like, oh, how do you know?

(30:37):
And then, when you walk into classrooms, things are still shrink-wrapped. You know, everything is still in its shrink-wrap form, right, because that's what they know. And it's like, okay, wait a minute. You can't just buy something and give it to your teachers, when they've never before had that opportunity or experience to

(31:00):
teach in that way, and expect them to just flip overnight. That's unrealistic, and it's not very respectful to the teacher, right? Because then the teachers are like, well, wait a minute, I've been doing this for 20 years and I thought that this was great, and now you hand me this package and say this is better than anything you've ever done; you need to do this now. So I think that's also part of the problem.

Lori (31:25):
That's why I wanted to bring it up. I just think it's, like, a lot of things, you know. I think often it's, like, a blame game, right? Oh well, it's this thing, or it's that thing. I think about, I live outside of Baltimore, and recently, like, the Orioles weren't doing so hot, and so, you know, they were losing pretty badly, and then what did

(31:45):
they do?
Oh, they brought in a new coachand oh, this guy's really,
really going to get it.
The next couple of games alsodidn't go so well.
Big surprise, right.
I mean, sports teams do thatall the time.
I'm like, it's one person, typically one person. And I will say typically, 'cause I know sometimes it is actually true, but typically that one person isn't the problem.
There's systems and structuresthat are built up.

(32:08):
There might be things that areinvisible, right, like the
dynamic of the team.
Who knows?
So I appreciate you justtalking about all of these
things.
There's just so much.

Michelle Hosp (32:19):
And I think teachers, you know, I think they're getting a bad rap, that they're not doing things right and haven't been doing things right, and that's just a shame game. That's not going to help anybody.

Lori (32:30):
It doesn't help anyone.

Michelle Hosp (32:34):
And so what I want people to do is to really highlight what are the
things that they traditionallyhave done that are amazing, like
those read alouds oh my gosh,right.
Like, don't get rid of yourread alouds, please.
You know, it gives kids accessto material they can't yet
access.
It provides opportunities forvocabulary, for comprehension,
for oral language, right.

(32:55):
Like those things are amazing.
And the love of books, like, come on.
The more we show kids we love books and can share that with them, the more they're going to be engaged and motivated to work on the skills.
Because we do have to drilldown to those skills, right.
So we do have to say, yeah, andyou know what, someday you're

(33:16):
going to be a reader like thisand we're going to work on these
skills today because this isgoing to help you get there,
right.
But like, let's just be honestabout what we're doing and
intentional.

Lori (33:26):
Okay, so you wrote a book to help teachers unpack so many things about curriculum-based measures. I want to get into this book.
It's called the CBM, I'm sorry, the ABCs of CBM.
There's a lot of letters here, Michelle.
The ABCs of CBM.

Michelle Hosp (33:40):
XYZ.

Lori (33:42):
So the ABCs of CBM, which are curriculum-based measures.
I'd love for you to just tellus a little bit about those and
where they fit into theseassessments that we've been
talking about.

Michelle Hosp (33:52):
Yeah, so clearly I love curriculum-based measures
.

Lori (33:56):
I thought you were going to say letters, I was like ah,
and-.

Michelle Hosp (34:03):
Um, they fit in because, first off, they have over 30-plus years of research behind them.
You know, Stan Deno and Phyllis Mirkin and all of his wonderful students that were at the University of Minnesota at the time, I mean Doug Fuchs, Lynn Fuchs, Mark Shinn, like all

(34:24):
of these people have reallyadvanced the field.
And here's the cool thing: they're quick, they're efficient, they're reliable, they're valid.
Most of them are done one-on-one, so the student has to produce, right, so the student has to say, has

(34:44):
to read, has to do the thing.
And they can be used formultiple purposes.
So a lot of CBMs, the same assessments that we talked about for universal screeners, are the same assessments that we want to use for progress monitoring.
So if I'm doing like lettersounds, right, I want to screen

(35:06):
for letter sounds.
And then if I'm actually teaching kids letter sounds, then I can use those CBMs to monitor kids' progress.
So they're hugely helpful inthose ways.
But there are other assessmentsthat do those things.
So there are computer adaptive tests, we call them CATs, that do

(35:31):
the same thing, right, but they don't do it exactly the same.
So CATs, most of them, although the technology is getting better, where we're now using voice recognition and things where the kids can produce, most of the responses are identification, right, so think of multiple choice.

(35:51):
So they might be hearing on aheadset, click on the letter
that says the ah sound, right,so that's identifying the sound
for A, versus on a CBM, theywould be looking at the letter A
and they would have to say ah,right, so those are the
differences.
One of the bonuses for computeradaptive assessments is that you

(36:14):
can give it to all the kids atthe same time.
Right, so it can be reallyefficient, right, so we're
thinking about resources.
If I can bring all of my kidsdown to the computer lab and I
can gather that data in 20, 30minutes from beginning to end,
that's really helpful.
The CBMs are short andefficient.
Most of them are around one minute each.

(36:36):
But when you add all of that upand then you add every single
kid, the actual time it takes ismore.
But sometimes, I would say particularly for our youngest readers and our readers who are showing not-as-robust skills, that is really where a teacher sitting with a kid giving an assessment is really helpful, because you learn a lot just by

(36:58):
sitting with a kid.
But you know, why can't there be a combination?
So I also think, with a lot ofthese states and all of the
requirements they have, it'slike OK, I have to screen, then
I have to give this.
It's like okay, well, if I canuniversally screen with a
computer adaptive test and findout who actually is above that
threshold for being on track,and then just take those kids

(37:20):
that are below the threshold and give them additional one-on-one assessments to drill down a little bit more.
I think there's lots of waysthat it can be done.
So I'm clearly partial to CBMs.
We do have a new book that'scoming out.
This is what blows my mind.
CBMs have been around for like over 30 years and the last time

(37:45):
we did the book, I think, is a 2017 publication, right, so it's getting quite old.
So this is the third edition, which is kind of crazy. The original edition was done in the early 2000s, or late 2000s, and there wasn't much change between that first edition and the second edition.

(38:05):
There are huge, huge changesfrom that second edition to this
third edition.
So things like well, first off,we're doing a preschool chapter,
which we didn't do before, andthese preschool measures have
been around for a while.
But let's talk about, you know,the reason why we want to
screen is we want to intervene.
That's, you know, screen tointervene.

(38:26):
So the earlier we can findthese kids, the better off we
are.
So there's a whole new chapteron preschool and then even the
reading chapters.
Oh my gosh.
So the early literacy chapterused to have just 11 skills and
now has like 14 skills.

(38:47):
And it's included things like vocabulary, oral language, rapid automatic naming, right. Vocabulary and oral language are huge and they're now starting to get attention.
We should be teaching it more,we should be assessing it better
.
The same thing is true for theother chapters.
We have the chapter that wejust called our reading chapter.
It used to have two skills, oral passage reading and mazes, and

(39:09):
now it has seven skills, seven.
It now has vocabulary, silent reading, comprehension. Spelling

(39:36):
is different, writing isdifferent.
It blows my mind how much the field has advanced. And a funny story.
So there's AIR, the American Institutes for Research, who also houses the National Center on Intensive Intervention, right. I highly recommend people go there and look at their tool chart.
If you're looking for CBMmeasures, universal screeners,
progress monitors, it's a greatresource.
That center has been aroundforever and it started as the
National Center on Student Progress Monitoring, then it was the

(39:58):
National Center on Response toIntervention and now it's the
National Center on Intensive Intervention.
I have been collaborating andworking with that center since
the very beginning and Iremember sitting there with Lynn
and Doug Fuchs as we weretraining and doing this huge
presentation in Washington DC,like 300, 400 educators in a

(40:20):
room and it was all on CBM, andI remember looking at her and
saying so, do you think this ishere to stay?
And she's like no, it's a fad.
Right, because it was like, no, CBM, this is a fad.
Because also, you have to think

(40:40):
about.
You know, Reading First was starting to come in, and the National Reading Panel.
But it's here to stay becauseit is amazing and it's efficient
and it's quick and it gives us great data. And particularly for progress monitoring, nothing is as sensitive as using CBM.

(41:02):
You can use a CAT to progress monitor, but it's not very good.

Melissa (41:06):
So, Michelle, I'm thinking about this, like things change, right, and I have to bring this up.
I know we could talk about thisfor a whole nother hour, I know
.
So sorry to bring it up now,but people talk a lot about,
like, running records and I knowwe used to give the QRI and
there's the IRI and I know somepeople are moving away from

(41:27):
those and saying we should notbe giving those and you know
others are still holding on tothem.
Can you talk a little bit aboutthis, like, should we be moving
away from those assessments?

Michelle Hosp (41:38):
What we want to be asking our teachers, respectfully, is, you know, what is the question that they're trying to answer?
What are they doing with thatdata?
Because they're hugely timeconsuming, right.
They take a lot of teacher time.
They don't always have goodpsychometric properties, and the

(42:03):
things that we have been toldthat they're good for don't
necessarily pan out in research. A quick example of that is doing miscues, right?
So if a student is reading apassage and makes an error on a
particular word, and we do allthese miscues and we try to
figure out, we try to use thatformatively, right.
Like oh, what do I need toteach the kid?

(42:24):
What we're not giving enoughattention to, though, is the
context.
So if a kid reads a wordcorrectly within a passage, it's
not just about their phonicsskills.
It's about so much more.
It includes their backgroundknowledge.

(42:46):
It includes their vocabularyright, their comprehension and
their motivation, right.
So here's the problem: the error that a kid makes on a word in one passage, they actually might not make that same error on another passage that they're more familiar with, that they have background knowledge on.
So now we've said, oh well, thekid makes this error and I'm
going to change my instructionand that actually might not be a

(43:09):
good use of my time.
So we're trying to use them in ways that inform our instruction, and actually that hasn't played out in the research.
Here's the thing, though: if you're doing it because you think it's important to listen to your kids read, then do that, right, but don't attach it to a test.

(43:30):
Attach it to that opportunityfor kids to read, try to shorten
it.
Try to do it when you're doingconferencing with kids and
follow up right.
Ask them questions, ask them topredict, ask them to summarize,
right, like, ask them realquestions about what they're
reading, not at the word levelof what they're reading, because
those assessments are not goingto do that.

(43:51):
If you want a permanent productof that, though, you could use
a spelling test.

Lori (43:55):
It's like a window, right? It is a window, right.

Michelle Hosp (43:59):
So it's like what is this kid thinking about?
What that letter, what thatsound makes and what letter are
they attaching to it?
So that is helpful.

Lori (44:08):
So helpful.
That was actually the first thing you said, Michelle, that it takes such a long time, and I remember that from
when I did these when I taughtfifth grade.
I remember assessing some kids,you know, three to four weeks
prior by the time I got donewith my whole class, and then
that data was already three tofour weeks old and okay, well,

(44:28):
what do I kind of do right now, 'cause it's already like a month
old.
And then inevitably there'slike a spring break or a winter
break and you know, by the timeyou actually come back you're
like, well, that was six weeksago, should I do it again?
But then you didn't actuallyreally get to do much with it
because you were so busy givingthe assessment.
So I'm so glad to hear that. The part that I felt was so

(44:49):
valuable from that was I feltlike it was such an intimate
experience listening to thatchild read and I gained so much
from that and you know, notnecessarily the results and I
feel like I could get the sameresults or the same kind of
effect with a one minute fluencypassage or something of the
sort.
That was like a little fasterevery day, right, yes, I love

(45:10):
that.

Michelle Hosp (45:11):
Yeah.
So why do you?
Why do you think teachers holdonto it?

Lori (45:15):
Oh my gosh.
Lots of reasons.
I'm not sure that it's clearabout what else to do.
I also think resources can beslim, so if that's what you have
, then you do it.

Melissa (45:27):
I'll say that, like, there's still the, it's what I was taught when I went to grad school, you know.
So it's like hard to let go ofthat when you're like well, I
learned that from a greatuniversity that I went to, from
professors that I trusted.
You know, some people still put a lot of trust in what they learned, and fair enough. You know, that is fair.

Michelle Hosp (45:48):
So I do think bringing it back to why, why am
I doing this?
What information is helping mebecome a better teacher for all
kids, become a better teacherfor this student in front of me
and, like you guys said, you can get that same information.
I do believe having kids readaloud is really important and

(46:09):
having that time with kids, butthat could look so many
different ways.
And I do want to emphasize that if we think gathering that data to look at errors is helping inform our instruction, the research really does not support it
because of all the other stuffthat we bring when we read a

(46:32):
passage. But it's hard to break away from those.

Lori (46:36):
As we gather all of this data, Michelle, teachers are gathering data every day, right?
We're having all thesedifferent kinds of inputs.
How do we help teachers avoidlike data overload and really
kind of sift through to get towhat matters the most?
And also I'll ask, like what dowe do with all of this stuff,
like how can we use it mostefficiently?

(46:57):
I'm just going to bring that onat the end here.

Michelle Hosp (47:00):
Well, so you know, I mean the funny thing is, as
a school psychologist you wouldthink I'd be like, well, we need
all this data, and what I wouldsay is we actually probably
don't even need half of the datawe're collecting.
So I think really, again, it'sabout the questions what
information, what questions do Ihave?

(47:22):
What information do I need anddo I already have it in another
place?
Do I need to collect it?
And if I do need to collect it, what is the fastest, most efficient way to do it?
And again, it's really like ifI give a test, what I say to
teachers is that if you don'tknow why you're giving the test,

(47:42):
don't. Just teach, because your kids will be better served.
Now, that doesn't mean you don't give your statewide, right? Like, you can't just be like, Michelle Hosp says I don't have to test my kids because I don't know why we're giving it.
I mean, I also think it's a good opportunity to
talk to your administrators andyour leaders and say what do

(48:04):
you do with this data, right? So that you can have
a little bit more buy-in andunderstanding and then saying,
well, how can I use this data?
And if it really is just, you know, a reporting requirement, then at least you know, as a teacher, how much stake to put in it, right? Like, okay, I have to give it.
Give it, move on and get to thebusiness of teaching.

(48:24):
For administrators I would say,seriously, look at all of the
data you're collecting and againgo back to your questions.
The question should drive whatdata you collect.
If you have those questions andyou notice that you are
collecting multiple pieces ofdata to answer the same question
, then get rid of some of that.

(48:45):
The other thing I would say is the data displays, right. Often, you know, I'll say to teachers, are you doing, you know, universal screening?
Yes. Okay, great, what do you use?
They'll say, I use DIBELS. Great, can you show me that?
And they go to a drawer andthey open their drawer and they
start pulling out all of thisstuff and I'm like, oh my gosh.

(49:06):
So again, it's not enough to just give the test. You have to be able to see that data in real time and make sense of it.
So the data display and thereporting is really important,
regardless of what assessmentyou're using.
So it's important for teachers,because if I have to give a

(49:27):
test and I have to put it in adrawer and then someone else has
to enter the data, it starts getting stale and, you know, it just gets farther and farther away from me.
So things that are immediate, where I give the test and I can see the results, are really helpful.
And for the administrator, Iwant a data dashboard.
I want to see all my data atone place and I want to see my
attendance data and I want tosee my behavior referrals right,

(49:49):
I want to see all of thistogether.
So here's an interesting thought. If I'm looking at my interventions, right, and we said, well, look at your progress monitoring data and look at your rates of improvement and figure out which ones are better. Then if you drill down and you say, well, this intervention is not working, the kids are doing horribly, but you actually include your

(50:09):
attendance data in that, and guess what? 80% of the kids only
got 10% of the intervention.
Oh, wait a minute, thatintervention I can't throw it
out yet because the data I haveis not enough to make a decision
.
So for administrators, having a really good data system to pull

(50:31):
all of that together to askthose questions is really
important.

Melissa (50:36):
Well, we could probably keep asking you questions all
day.
Maybe we need a part two aboutassessment at some point.
This was really, really helpful, though, and I know it answered a lot of my questions, but, like I said, I have more questions for you, so we might need to do a follow-up.

Michelle Hosp (50:54):
It's complicated, you know, and so I appreciate
you guys kind of like resetting,like we could probably go back
and reset all over again, right.
But I would say for teachers, don't get discouraged, right, and trust yourself.
I think teachers don't feel like they trust themselves
anymore because they're gettingconflicting information, right

(51:16):
Like oh, you're doing this wrong.
You need to do this.
So I think, really, going backto well, what is it I need as a
teacher?
What information do I need tohelp my kids in my classroom
every day?
And I would also say toteachers that if you can clearly
show to any administrator thatthis is the information you need

(51:37):
and this is how my kids areimproving and use data, that is
going to be enlightening for theadministrator and freeing for
you, because it's proof thatwhat you're doing is helping
kids, right?
So, use the data to serve you.
Have clear questions.
Collect it the most efficientway to really show that your

(52:00):
kids are growing.
And people are going to be blownaway and say, hey, how do I do
that?
How did you do that?
How come your data looks thatway?
Those are really cool thingsand teachers don't get enough
love, right?
So I also think supportingteachers and giving teachers the

(52:22):
opportunity to use their dataas a hey, look at what I'm doing
, look at my data, look at whatI'm doing with my kids here's my
evidence, here's my proof,here's where all of my love and
work shines and just giving themthe space and platform for that
, that's what assessment datashould be used for.
Also, right, the celebrations.
I can't thank you guys enough.

(52:44):
You guys are amazing and yourpodcast is amazing and teachers
are lucky to have you guys.

Lori (52:49):
Well, thank you.
Well, we're grateful that youcame on today so that we know
all about assessments now.
Obviously, we needed you.

Melissa (52:58):
We need Michelle every day in our life.
To stay connected with us, sign up for our email list at literacypodcast.com, join our Facebook group, and follow us on Instagram and Twitter.

Lori (53:14):
If this episode resonated with you, take a moment to share
with a teacher friend or leaveus a five-star rating and review
on Apple Podcasts.

Melissa (53:24):
Just a quick reminder that the views and opinions
expressed by the hosts andguests of the Melissa and Lori
Love Literacy podcast are notnecessarily the opinions of
Great Minds PBC or its employees.

Lori (53:36):
We appreciate you so much and we're so glad you're here to
learn with us.
Thank you.