Last summer, I was on a call with a district superintendent who had just one request: “Give me something—anything—that shows that the product works for my students.” In my role as Newsela’s resident senior researcher, I am responsible for providing meaningful data to educators so that they can make informed decisions for their learners. Too often, though, we’ve been narrowly focused on finding the answer to one simplistic question: “Does it work?”
His request highlighted fundamental issues with efficacy as we know it today. I began to think that the key lay not in searching for answers but in asking just a few more questions, ones that go beyond “does it work?”
Newsela recently released its own efficacy results, supported by third-party researchers, and we uncovered a few key takeaways. In the interest of moving the industry forward, we’re sharing the hurdles we encountered, along with a few simple questions that can guide educators when it comes to efficacy.
First, let’s consider the obstacles:
1. Our industry doesn’t have a consistent definition of “efficacy.”
Efficacy has taken many forms over the years, which has resulted in confusion across the industry. It has often meant a multi-year commitment to a study, only to yield dated results. So when a vendor claims proof of efficacy, what does that really mean?
In the current climate, proof of efficacy can range from teacher testimonials and word-of-mouth recommendations to third-party efficacy studies. This fragmentation lowers the bar for what we consider an efficacious product. Inherent in older approaches is the idea that efficacy is one-size-fits-all, but that thinking clearly isn’t working for teachers or learners.
2. Incentives aren’t aligned across stakeholders, which has led to a lack of trust.
The efficacy process is fragmented across stakeholders, the cycles are notoriously slow, and the associated costs are sky-high. As a result, vendors have occasionally undertaken their own research in the interest of a quicker, more cost-effective turnaround.
But how can educators trust research conducted by vendors who have skin in the game? The choice between dated research and vendor-sponsored research hardly seems ideal. Furthermore, with each player putting forth a different definition of efficacy, the very notion of efficacy continually gets undermined. As a result, we’re stuck in a cycle of distrust, even though everyone involved wants what’s best for learners.
3. Educators need to see themselves reflected in efficacy research.
Across all our conversations with teachers and administrators, we kept hearing one request: “Help me see how this product works for us, given our technology access and demographics.” This meant that simply asking “does it work?” wasn’t enough.
In a study on education technology adoption and implementation conducted by Dr. Michael Kennedy, an associate professor at the University of Virginia, one administrator said: “If the product was developed using federal grant dollars, great, but the more important factor is the extent to which it suits our needs.” Case in point: an efficacy study conducted in a large 1:1 school district is unlikely to yield transferable takeaways for a small rural district with limited technology access. So we have to wonder: what might research look like when it effectively tells the story of one’s own school or district?
These obstacles left us wondering how to move forward. Reading between the lines of the superintendent’s request, we worked alongside research organizations to design studies that would produce results that were not only credible but also constructive. We put ourselves in the superintendent’s shoes and asked some simple questions that add context to “Does it work?”
Here are the three we used in our own methodology:
Question 1: What problem does the product solve for educators and districts?
Rationale: Without a clear definition of the problem you’re trying to solve, you won’t know whether you’re making progress in solving it. Agree on a definition of success before you begin.
Question 2: Whom does our product best serve?
Rationale: Not every product is equally effective for every student in your school. Aligning your expectations with the product’s claims will help set the right goals.
Question 3: What does fidelity of implementation look like?
Rationale: At the end of the day, you’ll want to know what you’re committing to, including device access and usage frequency, so that you know where, and how often, to look when measuring impact.
Educators, researchers, and vendors all stand to benefit from this type of questioning. After all, we all want to unearth valuable data that brings us a step closer to helping students. Across two studies conducted by WestEd and Empirical Education, we found that students who read Newsela regularly saw gains in their reading scores, and that this positive impact held true across demographic subgroups. While we’re thrilled with these results, we also know that this is just the beginning. There is much more to be done, and many more questions to be answered. To that end, we’ll continue to partner with researchers and educators and forge ahead together.
In our industry, stakeholders share a common goal: driving positive outcomes for all learners. At the same time, we recognize that the classrooms of the future are diverse, and there’s no such thing as one-size-fits-all. We owe it to students and teachers alike to keep asking tough questions, because efficacy results are only as good as the questions we ask.