Understanding and Meeting Needs
Even a great plan will not work for all students; understanding and meeting needs will.
By Emily Freitag
This post is adapted from an email originally shared on March 5, 2021. If you would like to receive future emails from Emily Freitag, you can sign up here.
Access the Takeaway 5 PLC reflection and discussion guide.
When you invite researchers and practitioners to talk about how to “rethink” intervention and ask what does and doesn’t work, you hear a lot about data.
“We should use data to drive instruction” is at once a universally respected truism in education and an indecipherably vague directive. Every word of the phrase is open to multiple interpretations:
- Use: We can take many different actions in response to data.
- Data: Data comes in many forms.
- To drive: The data we collect can shape our next steps in many ways.
- Instruction: Data can inform many aspects of instruction.
I knew the field was vague on assessment purposes and data practices before these conversations, but I hadn’t fully realized the range of possible misunderstandings or misuse. Over the course of these conversations, my perspective on how we can best use data to identify and support students fundamentally changed.
I came to worry more about the harm we can cause through the way we use academic data. On the road paved with good data-driven intentions, I heard many stories about how being assigned to the “Red Group” versus the “Blue Group” (everyone knew full well what those groups meant) stuck with people into adulthood. I heard people say that, even today as adults, they question how much they are capable of learning because of those groups. I heard stories about teachers who suspected a student had an off day during MAP testing and didn’t think the student needed six weeks of remediation, but were told to stick with the plan, and then watched that young person grow so bored that they disengaged from school. Mostly, I heard stories about the opportunity cost of spending enormous amounts of time administering assessments and parsing their results without evidence that these processes led to any change in action or practice.
Yet, as I worried more about common learning data pitfalls, I also became intrigued by some very simple data practices that are both far more effective and far too uncommon. I am now obsessed with routines based on assignment completion data, because the correlation between assignment completion and student learning knocked my socks off. The practicality of team-based routines for flagging, understanding, and responding to student needs based on assignment completion data stood out as an actionable and highly relevant practice for this time. I saw firsthand how quickly a high school adopted the practice of a 15-minute weekly team meeting to brainstorm what might be going on for students flagged for three incomplete assignments, and how immediate the impact on assignment completion and learning data was once teachers reached out to those students.
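The flagging step in that routine is simple enough to sketch in a few lines of code. This is a hypothetical illustration only, not a tool the school used; the data shape, names, and the three-assignment threshold are assumptions drawn from the example above.

```python
# Hypothetical sketch of a weekly flagging routine: surface students with
# three or more incomplete assignments so a team can discuss what might be
# going on for them. The record format below is assumed for illustration.
from collections import Counter

# Each record: (student_name, assignment_id, completed?)
submissions = [
    ("Ana",   "hw1", True),  ("Ana",   "hw2", False),
    ("Ana",   "hw3", False), ("Ana",   "hw4", False),
    ("Ben",   "hw1", True),  ("Ben",   "hw2", True),
    ("Chloe", "hw1", False), ("Chloe", "hw2", False),
    ("Chloe", "hw3", False), ("Chloe", "hw4", True),
]

THRESHOLD = 3  # flag at three incomplete assignments, per the example

def flag_students(records, threshold=THRESHOLD):
    """Return students whose count of incomplete assignments meets the threshold."""
    incomplete = Counter(name for name, _, done in records if not done)
    return sorted(name for name, count in incomplete.items() if count >= threshold)

print(flag_students(submissions))  # ['Ana', 'Chloe']
```

The point of the routine is not the flagging itself but what follows it: the flagged list is only an invitation for the team to reach out and understand why.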
I was reminded that some of our most valuable data is regularly overlooked. Student and family experience data — not just surveys but also day-to-day conversations — have so much to offer in helping us understand the “why” behind outcomes. Yet common data practices so often jump from assessment to solution without exploring what’s below the surface. Students know so much about the truth of their experience and their learning; in addition to its problem-solving value, student experience data has the potential to hold us accountable, in the truest sense of the word, to our mission as educators.
Finally, I came to see some fundamental flaws in how we respond to data. In particular, I was struck by how often we assume that a learning need identified in an individual student requires an individualized solution. Across my conversations, I kept hearing from people who practice and lead blended learning that learning is social, and that these individualized solutions can leave students feeling isolated. Yet as I listened to a conversation about promising edtech startups the other day, every single pitch was based on the premise that it could “identify the need” and then “provide the student with the individualized solution.” The voices of my Rethinking Intervention colleagues echoed in my head, and I wondered: haven’t we learned this lesson already?
In sum, these conversations confirmed that we need data. We need it because no plan, however great, will work for every student. We need data to know who the plan is not working for so we can get more data to understand why it isn’t working and provide a solution that does work. But we also need to examine closely held ideas about how we collect data and what we do with it, because some of our most ubiquitous practices may not be serving this purpose — they may be taking us further away from that very goal.
Here are a few resources relevant to this takeaway:
- The Curriculum Support Guide’s Key Action II.2 contains steps, guiding questions, and resources for creating an assessment and grading strategy.
- Here is a summary of the University of Chicago’s 2005 study on ways to use assignment completion data to support student-centered problem solving.
- ANet’s 3 Principles for Assessment provides actionable advice to schools and districts on how to approach the use of assessments and data in the aftermath of the pandemic.
- This Learning Accelerator blog outlines what we really mean when we talk about “assessment.”
- Center for Assessment’s Scott Marion wrote this blog on what we can learn about student engagement and grading systems from this year’s failure rates.
- This STEM Teaching Tools brief provides guidance on developing formative assessments that fit a three-dimensional view of science learning.
What else would you add? I’d love to hear your thoughts by email or through our social channels using #RethinkingIntervention.