About this series
Although the cries for "evidence" are frequent in the education space, evidence can prove elusive to practitioners: Where is it? How sound is it? What does it tell us about real-life situations? This essay is the first in a series that aims to put the pieces of research together so they can be used by those charged with choosing which policies and practices to implement. The conveners of this project—Susanna Loeb, the director of Brown University’s Annenberg Institute for School Reform, and Harvard education professor Heather Hill—have received grant support from the Annenberg Institute for this series.
A new series explores what educators need to know
By Heather C. Hill & Susanna Loeb
Welcome to "What Works, What Doesn't."
Educators and policymakers want to make good choices for schools and districts. And research can help. For people in charge of schools and classrooms, starting with "what the research says" can be critical in navigating the challenges of boosting student learning and creating environments where children thrive.
Research brings to bear facts that have been collected and analyzed in purposeful, systematic, and often public ways. Its power to rise above the anecdotal is why people in medicine, business, and every type of public policy increasingly refer to it.
Yet rarely does a single research study provide irrefutable evidence that one choice is better than another, because each study takes place with a particular set of schools, teachers, and students, and because each study tests a specific policy or program. And findings from single research studies often conflict; for instance, school turnaround policies may improve student achievement in one context but not in another.
In order to be useful, then, research evidence needs synthesis. Each study is a puzzle piece, telling us where different approaches to improving teaching or schools have reached their goals and where they haven't. Some also tell us why the approach worked or didn't work in a particular context. Only when brought together can the studies help us predict what factors are likely to be in play when we make one choice of practice or program or policy over another. While the flaws in research can be frustrating, once synthesized the evidence in many cases points us quite clearly to one choice over another. Unfortunately, those eager to learn what research has to teach often lack this sort of overview.
In this Education Week twice-a-month series, we will piece together the evidence on issues facing state and district policymakers, principals, and teachers. Look for topics such as having teachers examine student data, homework management, principal leadership-development programs, parent engagement, social-emotional learning, school finance, and STEM instructional improvement in these pages and online. (And we will be looking for you to suggest other topics by using #EdResearchToPractice on Twitter or submitting a comment to the online version of this essay and the others in this series.)
Policymakers and school and district leaders should be forewarned that some of the evidence we present, even for very popular programs and policies, will seem grim. But there is an upside: pruning back unhelpful practices creates room for new and better ones, and breathing space for teachers and administrators who right now feel they operate under the gun of too many (and sometimes competing) policies and programs.
In synthesizing research to see what practices are effective, we will care about both outcomes for students (such as achievement) and outcomes for classrooms (such as the quality of instruction); we view classroom outcomes as important in their own right. Children deserve safe, caring, and intellectually stimulating classrooms, both because that's their home for up to six hours a day, and because classroom instruction is the primary in-school mechanism through which children learn. Thus, where possible, we will include studies of practices that improve classroom outcomes, even when their effects on student outcomes are unknown.
When we do write about child outcomes, we will consider more than state standardized test scores. While such scores are important, often critically so to the policymakers and leaders charged with raising them, many other observable child outcomes also shape later-life experiences and matter in their own right. Engagement in school, for example, reflects the richness of students' day-to-day experiences and allows them to develop the capacities they need for later success. We can measure this engagement through both concrete actions, such as students' choices to stay in school or take more challenging classes, and more abstract measures like students' sense of connectedness to school, efficacy with difficult content, and hope for the future. Increasingly, studies address these types of important outcomes, and, where possible, we will give preference to work that sheds light on how to provide students with opportunities to develop in these ways.
A few notes about what you can expect from this project:
• We are aiming to provide accessible information. Principals, district leaders, state policymakers, teachers—anyone who makes decisions about the practices and policies we discuss here—should all find something relevant.
• We will try (and fail) to be comprehensive. While we will make an attempt to find every study on every topic, we will surely miss some, particularly unpublished reports and lesser-known published articles.
• We will give preference to studies with rigorous designs. By rigorous designs, we generally mean studies that examine the effects of a policy or program by comparing participants with a plausibly similar group of non-participants. (Plausible similarity may be achieved through random assignment of participants to the program or to a no-treatment comparison group, but it can be achieved in other ways as well.)
• Not all topics will be ripe for review. Emerging policy questions and programs, such as teacher anti-bias initiatives and personalized learning, may not have sufficient evidence to warrant a review. While we hope that we can find an evidence base on these and other similarly hot topics, we may have to leave some aside or address them later. When we do address questions without clear evidence, we will be sure to emphasize the resulting limits.
• We will not be experts in all the topics you want to know about. After 40-plus combined years in academia, however, we have a lot of practice reading empirical studies. If the question is timely and the evidence supports a synthesis, we will summarize it.
• We will be wrong some of the time. The only way we could be consistently correct would be to avoid addressing the complex issues that are most important for education improvement. When we are wrong (and even when we are correct), we invite discussion from other scholars, policymakers, and practitioners in the online spaces we have suggested.
• Expect some guest authors. We will occasionally hand over a review to a scholar with a particularly innovative and impactful study, new and thoughtful ideas, or long experience in a field.
If research is to fulfill its promise to education, researchers have to scoop up the puzzle pieces and put together a meaningful picture for policymakers and practitioners. We look forward to writing this series, and to engaging in dialogue with you as we move forward.
Heather C. Hill is a professor of education at the Harvard Graduate School of Education and studies teacher quality, teacher professional learning, and instructional improvement. Her broader interests include educational policy and social inequality. Susanna Loeb is a professor of education and of public affairs at Brown University and the director of the university's Annenberg Institute for School Reform. She studies education policy, and her interests include social inequality.