Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jocelyn (00:00):
Hello, hello, welcome to this episode of the Structured Literacy Podcast, recorded right here in Tasmania, the lands of the Palawa people.
I'm Jocelyn, and today you are going to find answers to the question (00:12): how do we get our assessment schedule in order?
When I'm working with schools, one of the to-do list items is often to define the assessment schedule for literacy. It's common for a school's assessment schedule to be really, really full.
Leaders and teachers know instinctively that they probably
(00:33):
shouldn't be doing all the things, but knowing how to cut down is another thing entirely.
Now, I've written several times about assessment over the years, and I have a whole chapter of my book devoted to it. So if you have a copy, you're looking for chapter 15 in Reading Success in the Early Primary Years.
I've also talked a ton about data in general here in the
(00:56):
podcast and its role in helping us evaluate the impact of our practice.
In this episode, we're going to dive into an aspect of assessment that will help give you clarity and guide your efforts to strip back the assessment schedule to only what is really needed. The key to determining what assessments to include in that
(01:20):
yearly schedule is to know what questions you're trying to answer.
This is also part of the picture in helping to grow our capability in using data effectively. And this, incidentally, is another of the common goals that schools have.
So, what are the questions we're trying to answer with
(01:41):
assessment?
We're all familiar with the idea of short-term and long-term data and diagnostic, formative, and summative assessment. We've heard it talked about since university.
And the funny thing is that while we've been hearing about this for years, most schools and most teachers still grapple with it. I think part of the reason that this is so difficult is that we
(02:04):
have been so used to doing assessments because someone told us to, and so used to using assessments in particular ways that don't really connect to evidence, that we're now having to rethink a lot of what we thought we knew about assessment tools and their value.
(02:24):
Let's begin with the first and most important question we're trying to answer. Who amongst our students is at risk? This is where normed universal screening comes in. It's critical that we identify which students are most in need of high-intensity instruction.
In many schools, this is left up to teachers to decide based
(02:46):
on their impression of students. The impact of this is that the students referred for support are often the wrong students, which leads to an inefficient allocation of precious resources and students who actually need support not getting it.
And this comes about not because teachers aren't
(03:08):
intelligent, but because there are lots of false positives that can come up when we think about how confident a student looks or how well that student is able to articulate verbally.
Now, a normed screener isn't necessarily the only tool we can use to identify students who need additional support, but it
(03:30):
is an important part of the assessment schedule.
The other question that our normed screeners help us answer is: how effective have we been in our literacy instruction?
Tightly targeted instruction that's doing what it needs to will result in a reduction of the number of at-risk students
(03:52):
and an increase in the students who are at or above benchmark. So it's not just about saying yes, we do intervention; it's also about saying, how impactful is our instruction across the school? If we're not asking that question and acting on the answers, then we're never going to get where we want to go.
(04:14):
There are a few tools around these days that help us measure things like oral reading fluency, but you actually don't have to start with a heavy-duty assessment straight away.
So if your school is currently using a benchmark assessment tool that uses levels, such as PM Benchmarking or Fountas and
(04:37):
Pinnell, and trust me, there are lots of schools out there still doing it, they're very new into this space, and if that's you, I don't want you to feel like you've done something wrong.
Change is hard, and so to make this change, one of the things you can do is to just take any grade-appropriate text and the
(04:58):
Hasbrouck-Tindal fluency norms and use those things to make that small change in the right direction.
So these norms will tell you how many words students need to be reading correctly per minute at the beginning, the middle, and the end of the year. And what is a grade-appropriate text? Well, that's a bit of a tricky question to answer.
(05:21):
But at the end of the ACARA literacy general capabilities document, there's an appendix which gives you a description of what those texts need to include at different grades for different students.
It's worth mentioning that the normed measures of oral fluency
(05:42):
don't begin until Year 1. And in fact, the Hasbrouck-Tindal norms from Jan Hasbrouck and Gerald Tindal don't kick in until the middle of Year 1.
And this makes a lot of sense to me because students won't have learned enough code before that point to be able to tackle the texts, which contain a whole range of alphabetic code;
(06:03):
they're not decodable.
I want to take a bit of a sidestep here and talk about decodable text assessments. There is an increasing range of decodable text series that come with text-level reading assessments. I'm yet to be convinced of the usefulness of these assessments.
One of the common uses is to determine which text students
(06:28):
can read.
So teachers feel hamstrung because they think, I can't choose books for my students to read because I haven't tested them. And I worry about this idea because there's a real danger of teachers staying stuck in the idea of levelling when it comes to choosing instructional materials.
Another implication of these assessments is the time that
(06:51):
they take to complete.
When they're not normed, they haven't been tested to show that they measure what they claim to measure in all aspects.
If our students are reading decodable text daily, teachers will have ample opportunity to observe their reading and take some brief notes about how students are progressing.
There's no reason that you can't focus on two to three
(07:14):
students each day to check in with. If this practice is established, there's no need to take an extra 15 or 20 minutes per student to listen to them read one by one.
So these tools feel good because they look like what we're used to. But in their use and in relying on them, we could actually be
(07:37):
holding our teachers back in building their capability.
The other problem is that when we rely on a test to tell us what books students can read, it makes it virtually impossible to use a large range of texts, which is a really good thing to do, because we don't have a test for all of those.
So learning to choose which books the students can read
(07:58):
based on their current skill and knowledge is a much more productive use of time.
Question two. We have to look at that next layer of assessment, which is about measuring the skills and knowledge that lead to success in the universal screening. Yes, we can get some of that
(08:20):
information from the universal screener, but there are other sources of that information. So let's begin with the early years. And predominantly, this is about phonemic awareness, phonics, and early morphology.
In the early years, we look at whether students have learned phoneme-grapheme correspondence and are blending effectively. This is usually conducted once per term in an interview-style
(08:43):
assessment.
It's really important that this assessment is aligned with your phonics scope and sequence, because this is also the assessment that will help you know that you're teaching the right content to the right students. It also acts as your diagnostic assessment to determine which gaps need to be filled for each student.
(09:06):
If your school is using something like DIBELS, yes, there is a correct letter sound score.
But what you're testing in the correct letter sound score is mostly basic code. You're not testing the full breadth of the complex code and looking at whether the students have achieved understanding of
(09:27):
the full alphabetic principle. So the test that comes with your phonics program is really important.
The questions we're answering for all students with this assessment are (09:36): has the content we have taught stuck in the medium to long term?
Did the thing we taught in week one stick until week ten? What gaps do our students have in their phonics knowledge that we need to actively fill? That is so important if we're going to have universally strong
(09:58):
reading and spelling outcomes for our kids. Are our students making appropriate growth in phonics learning to move their reading forward?
This question is critical because if students aren't learning somewhere between eight and ten graphemes every term of the early years, they simply aren't going to be where they
(10:18):
need to be by Year 3.
And remember, we're not talking about how many graphemes you've introduced; we're talking about how many graphemes the students have consolidated.
Finally, how are our students progressing with blending? There are several milestones in blending that students achieve
(10:39):
on their way to word recognition, which is just being able to say the word when you see it. You can read more about this on page 186 of my book, Reading Success in the Early Primary Years, if you have a copy.
Students progress from sound-by-sound blending to automatic word reading, and tracking this development is essential so that you can see where more intensive support is
(11:03):
needed.
In between the once-per-term phonics assessment, the priority becomes answering the question (11:09): has what we taught stuck in the short term? Can the students remember what we covered last week?
Now, this assessment I call a check-in. It's not a full assessment. We're checking in; it's a medium-term check for understanding, if you like.
And it can be done once per week and does not involve
(11:31):
sitting each child down one at a time. Instead, physically spread your students far apart in the room and have them write down the graphemes you taught last week and any that you know they've been having trouble with.
Having students write the grapheme when you say the phoneme is an excellent way to determine whether students are
(11:52):
retaining what you've taught. It also allows you to shorten the feedback loop so that you can act immediately if new content isn't sticking. After all, there is little point in continuing to teach new content if what has been taught hasn't stuck.
Because if you just forge ahead and you just follow the pacing
(12:14):
guide in your phonics program (and incidentally, we don't have one in ours, because we want you to make decisions based on where the students are up to), there's a really good probability that you will have students who, at the end of the term and the end of the year, have significant gaps in their learning.
(12:35):
We have to be watching what is happening as we're teaching. And then of course you'll be checking for understanding within lessons to see if what you have taught has stuck in the moment.
Before we move on to upper primary, I will also mention that proper assessment in phonological and phonemic awareness should be conducted with all students in Foundation,
(12:57):
at least, to identify those students who aren't progressing as they should be. So, yes, there's blending and segmenting in DIBELS, but it doesn't cover the full range of skills that need to be developed.
Once students are blending and segmenting with confidence and you've seen that they can perform the other skills, you can just pull right back on that.
(13:18):
If they've got it, don't keep testing them. But it's an important part of early years assessment and shouldn't be overlooked, particularly for students who you know have some struggles.
Let's change course and talk about upper primary now. So in upper primary, Year 3 to 6, we will complete the same
(13:39):
universal screener as in the early years. Obviously, there are different tests for different grades. We also need to look further into the knowledge and skills of students to make sure that they have what they need to succeed in oral reading fluency.
Now, I won't go into a lot of detail in this episode about looking into upper primary skills and knowledge, because
(14:01):
I've recorded a few podcasts, including season six, episode 12, When Repeated Reading Doesn't Work, that help us understand four reading profiles and the needs of each one, including how to determine the knowledge that students have about how a language works. And we're basically talking about phonics to start with.
(14:24):
When it comes to assessment, we can learn so much from a whole-class spelling test. And we have these tests freely available on our website to help you work out which of your students have the phonics and early morphology they need for strong reading.
When it comes to spelling, things get a little trickier. As much as we all wish that we had one, there just isn't a
(14:49):
normed, reliable assessment for spelling that gives us information corresponding to what we have for reading. There isn't a test that says this student is where they need to be for their age in spelling, in phonics, orthography, and morphology.
In the absence of a normed screener, the best we have is a
(15:11):
diagnostic assessment to help us figure out what students do and don't know. Now, there are some reliable assessments that tell us what students know when it comes to phonics, and they're diagnostic in nature, and I've just talked about them.
But when it comes to morphology that sits beyond inflectional morphology, that goes beyond the past tense 'ed' and the plural
(15:36):
's' and the 'ing', we don't have anything nearing complete effectiveness. Part of the reason for this is that derivational morphology is everything beyond those basic eight suffixes that we usually teach in the early years.
When we go beyond that point, we're getting into the realm of
(15:59):
sitting in the same space as vocabulary. So we don't expect to have a vocabulary test that we give to all of our students, because vocabulary is unconstrained. And a lot of the knowledge that comes with morphology is also unconstrained.
So I'm sorry to tell you, there isn't a test that will tell you where your students are up to in morphology broadly before you
(16:21):
begin teaching. The best we have available at the moment looks at the very basic levels of suffixes and suffixing conventions. Now, there was supposed to be a normed prefixes and suffixes assessment released a year ago, but at the time of recording in October 2025, it's still not out yet.
In the meantime, the assessments we have determine
(16:46):
whether what we have taught has stuck. So know that if you are using our Spelling Success in Action program, which leans very heavily into morphology, there is so much learning that comes out of that.
It's not just about spelling; it's about vocabulary building, it's about comprehension because of the vocabulary building, and it's about reading accuracy when it comes to multimorphemic
(17:09):
words.
Know that the assessment that you do each week with that gives you that short-term data. So it's the same as phonics: it's important that we're testing and checking at the time of teaching.
We also need to be checking in after a period of time so that
(17:30):
we can see if review has been effective. And for us, that sits in that fifth week of no new content, that consolidation week, where you can check in on what you taught four weeks ago and three weeks ago and two weeks ago, so that you can determine whether what you have taught has stuck.
Getting the balance between rigour and time management right
(17:53):
in assessment isn't always easy, but if we know what questions we are trying to answer through our efforts, we can make sure that we aren't wasting time in assessment that doesn't help instruction. Yes, there are system requirements like the phonics screening check that are important, and we know about NAPLAN, but at the school level, we have quite a lot of choice.
(18:18):
When optimising your assessment schedule, start with clarity about the questions you're trying to answer. Firstly, identify who is at risk through normed universal screening completed at the start, middle, and end of the year.
Second, measure the skills and knowledge that lead to success through aligned phonics, phonological awareness, and
(18:40):
morphology check-ins conducted when they're done. Remember that not every assessment needs to be done one-to-one. Whole-class spelling tests and quick grapheme checks can tell you what you need to know efficiently.
And finally, let the questions you're trying to answer drive your assessment choices, not the other way around.
(19:03):
Resource Room members have access to assessments and professional learning that help them understand how to use the instructional materials we have to respond to student need and achieve the outcomes that we're looking for.
And remember, the only acceptable outcome of all of the
(19:24):
work we're doing is that every student is learning to read and spell with confidence. That's it. If we do not have every student succeeding to achieve their best, then we are not done in the work we do.
The last discussion point I want to leave you with is this.
(19:45):
When it comes to assessment that sits outside of what I've discussed in this episode, let's think about a few questions that can guide us in making decisions.
The first one is (19:55): does the test measure what we think it's measuring?
Has this been confirmed? Has it been reviewed to determine that it is actually measuring the thing that it says it's measuring? Is a multiple-choice comprehension assessment really measuring comprehension?
Considering what we know about comprehension and knowledge, I'm
(20:19):
going to reiterate what other people in the field have said: any test involving asking students a range of questions about an unseen text is at best a knowledge test. And those multiple-choice questions? I've never been particularly confident that they're reliable.
Some students can use their background knowledge and their
(20:41):
capability in language to make really good guesses and give us lots of false positives. So I would be seriously questioning the validity of some tests that are widely used. Which activity will give us the most value? Engaging in assessment involves prep time, practice time, and
(21:02):
the actual time to do the test, including the reallocation of people to help, particularly in the early years. Is the time spent on this assessment going to give you more benefit than if you just used that time for actual teaching?
If what you get from doing an assessment is some slightly interesting data that you're never going to think about
(21:24):
again, then maybe that time would be better spent on student learning rather than on assessment that gives us data we don't use.
Does the assessment inform our teaching? If doing an assessment just gives us some kind of score
(21:45):
without connection to the curriculum, without giving us clarity on what to do next to further the student's journey, where is its value? What are we getting out of this?
Is what we're getting just the ability to say, look, we have an assessment schedule that has things on it that other people recognise?
Or are we getting actual information out of it that helps
(22:09):
us to do our core work? If the assessment is a nice-to-have exercise, then it's probably time to have a rethink. Let's hold space for the assessment we can say this about: we cannot teach without it. That makes it a must-have.
And we're going back to question two. Is there more value in doing an assessment and looking at the
(22:34):
data, or in taking that time and using it to deliver instruction? These are hard questions. And I understand completely that I'm challenging some established thinking in many, many schools. But time is precious. Teachers' time, leaders' time, and most of all, students' time. So let's make sure we're using it well and maximising every minute.
(22:59):
And that all sounds a bit heavy, but I want to leave you with this encouragement. Assessment doesn't have to be overwhelming. When you're clear about what questions you're trying to answer, you can build an assessment schedule that gives you the information you need without drowning in data that you're not going to use or stealing precious instructional
(23:20):
time.
I know that Mildred comes in and says, "Well, if you don't do the assessment, you're not doing this reading business right." Well, Mildred, go away. We're not listening to you. And if you haven't heard about Mildred before, she's not an actual person. She's the voice in your head that tells you that you're not getting it done, also called imposter syndrome.
Give yourself permission to allow for common sense and work
(23:43):
together as a team to define what assessment can and can't do for student outcomes. Focus on what matters: identifying students who need support and ensuring that what we're teaching is actually sticking. Everything else is just noise.
Thanks so much for sticking with me to the end of this episode
(24:05):
of the Structured Literacy Podcast. I hope it gives you some clarity to focus on what truly serves your students and your school community. Keep listening until after I say goodbye and have a little dance party with a very fun song that we include at the end of every episode. So don't press stop yet. Keep listening.
(24:26):
Until next time, happy teaching, everyone.
Bye.