A Quick Guide to Foundational Literacy Assessments
In our work with schools and systems across the country, we’ve found that leaders and teachers often don’t get the support they need to deeply understand 1) the different kinds of student learning data they collect and 2) how to use those data to drive mastery of foundational reading skills.
That’s unfortunate because we know that student data can provide invaluable insight into the skills that students have and have not yet mastered—and, therefore, what areas of focus will help teachers make the most of their instructional time. Ensuring that teams have a strategic set of strong assessments in place and that educators know how, when, and why to administer them is the first step to empowering educators to use data to accelerate student outcomes.
In this quick guide to foundational literacy assessments, we’ll go over the purposes and uses of four main types of assessment, using the story of a 1st-grade student named Verna to explore how each fits together to make all the difference for her learning experiences and outcomes.
Here’s a little background on Verna and her 1st-grade teacher, Ms. Wilson:
Verna attended pre-K in 2019–20, but her mom pulled her out once the pandemic began in early spring. Wary of virtual learning, her family decided not to enroll Verna in kindergarten in 2020–21. So when Verna started 1st grade in the fall of 2021 without that year of kindergarten under her belt, Ms. Wilson knew that she’d have to keep an eye out for unfinished learning in her foundational literacy skills.
1. Universal screeners (aka benchmarks)
A universal screener is given to all students at intervals throughout the year (usually beginning, middle, and end). These assessments determine where students are on the right track and where they may be at risk for reading difficulties by measuring their performance against nationally normed proficiency benchmarks. They are often used to set learning goals for students (i.e., results on the beginning-of-year assessment inform goals/targets for performance on middle-/end-of-year assessments). Screeners work by isolating a few skills that predict student risk. There are two things that are especially important to know about universal screeners:
- They provide different kinds of information: A universal screener isn’t a universal screener isn’t a universal screener. Some tell you about students’ overall reading level. Some provide insight into mastery of a particular domain. It may sound obvious, but understanding what the test is measuring (i.e., the skills it is using to determine whether a student is below, on, or above grade level) is a critical first step toward ensuring your teams are gathering actionable data.
- No matter what they’re measuring, the information that universal screeners provide is limited: Because universal screeners provide a high-level snapshot of student performance, they don’t take much time, which is great in terms of being able to gather information quickly. However, it also means that screeners don’t provide much detail. The long and short of it is, universal screeners can tell us where a student may be having difficulty, but not why (e.g., a screener may identify that a student is not meeting their grade-level expectation for letter name/letter sound identification, but it’s unlikely to tell you which letters/sounds they have not yet mastered).
Here’s how Verna’s school used a universal screener to collect information about what she already knew at the beginning of 1st grade:
In the first few weeks of her 1st-grade year, the reading team at Verna’s elementary school assessed individual students using DIBELS, a reliable and valid universal screener. Verna’s scores revealed that she performed on grade level in some skill areas but significantly below grade level in letter-naming fluency, phoneme segmentation, and letter sounds. These results gave Ms. Wilson a good picture of where Verna’s gaps lay, though not what those gaps were precisely. To find out exactly what skills Verna had not yet mastered, Ms. Wilson knew she’d have to dig a bit deeper.
2. Diagnostics
Diagnostics help add detail to the big picture that universal screeners offer. Diagnostics are given to students when we need more information about why they may be struggling in a particular area. Where a universal screener will tell you that a student is not on grade level for letter sounds, a diagnostic will tell you that the student has not mastered the short /o/ sound. That level of detail is invaluable for educators in determining how to use their small-group time as well as in setting learning targets for individual students. For example, if a student has mastered all vowel sounds except the short /o/, a diagnostic would enable their teacher to jump straight to working with them on that sound rather than reviewing all vowel sounds.
In our work across the country, we’ve found that diagnostics tend to be underused, even though many curricula include them in their packages and open-source diagnostics (e.g., the PAST Test) are available to schools whose curricula do not.
One reason that schools often choose not to use diagnostics is that they are time- and effort-intensive: typically, they are administered one-on-one, educator and student. So, in the face of capacity limitations, it’s easy to find reasons not to administer them.
We always remind our partners that diagnostics provide a clear picture of what individual students need to move forward—and investing the time to narrow in on students’ particular roadblocks can help us advance their skills much more effectively and efficiently during instructional time. In our experience, the time it takes to administer these assessments is well worth the investment.
In Verna’s case, Ms. Wilson used diagnostics to get a better understanding of the picture painted by her universal screener results:
Ms. Wilson used a phonics screener from the curriculum to diagnose the exact letter names and sounds that Verna had and had not yet mastered. Ms. Wilson knew that phonemic awareness, as tested on DIBELS PSF, is a subset of phonological awareness. She administered a few phonological awareness tasks from the PAST Test to Verna in order to pinpoint her precise gaps. After these two diagnostics, it was clear that Verna needed support blending and segmenting phonemes in words and identifying the sounds for the letters “d,” “g,” “i,” “j,” “n,” “o,” “v,” and “w.”
3. and 4. Progress monitoring assessments and formative assessments
Ok! So, you used a universal screener to determine what area(s) a student has not yet mastered and a diagnostic to pinpoint the specific skills they need to work on, and you started targeting your instruction to help students master those skills. Now, how do you know if it’s working? There are two primary types of assessment that can give you insights:
- Progress monitoring assessments: This type of assessment tells you how students are progressing relative to the individual learning goals that were set based on their universal screener results. The results can help teachers understand whether the current instructional approach is effectively moving students toward their goal(s) or whether an adjustment is needed. Without progress monitoring, a teacher might have to wait months between administrations of the universal screener to understand students’ progress, which could cause them to lose valuable instructional time to approaches that aren’t working.
- Formative assessments: This type of assessment gives you insight into whether students have mastered what was being taught in a particular unit or lesson. Formative assessments can be:
- Curriculum-embedded (i.e., formative assessments are often included in curriculum packages).
- Teacher-created (i.e., it’s common for teachers to develop quick assessments—such as teacher-completed checklists, exit tickets, or notes jotted down on a sticky note—as checks for understanding; often these assessments are slightly shorter than their curriculum-embedded counterparts).
Though progress monitoring and formative assessments are slightly different, using them in combination can help you keep an accurate pulse on how students, both as a group and individually, are progressing and make informed instructional decisions for students.
Ms. Wilson was able to use these types of assessments to determine how to focus instruction for Verna:
Ms. Wilson reviewed Verna’s DIBELS progress monitoring data and was pleased with the progress she was making toward her goals. However, she did notice that the data from Verna’s weekly curriculum dictation test indicated that Verna was still struggling to represent the /w/ and /g/ sounds with the appropriate letter. Ms. Wilson made a note to prioritize these sounds in her small-group work with Verna the next week and to do a quick whiteboard task to assess whether she was or was not moving toward mastery of those targeted skills.
Strategically using the four types of assessments discussed above can help educators anchor conversations in specific evidence of improvement or continued struggle, isolate instructional strategies that are and are not working, monitor students’ progress over time, and make informed decisions about what to try next.
Still, we know that developing an assessment strategy isn’t easy. That’s why we outlined a step-by-step Early Literacy Playbook, including a handy dandy assessment inventory, that details specific ways leaders can implement best practices for student data that support their early literacy system. Get started today!