Each year the edtech sphere grows larger, with more products, dollars, and users. But research on the effectiveness of edtech tools seems to have lagged behind the rapid pace of the industry.
That’s not to say that there is no high-quality research out there. But large, independent projects are sporadic and far too rare. In fact, in a 2012 review of research on educational games (PDF), researchers from the University of Connecticut had to expand their initial study inclusion criteria because they could not find enough quantitative analyses that featured the necessary elements of a pre-test, post-test, and control group.
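To make that design concrete, here is a minimal sketch, using entirely hypothetical scores, of what such a quantitative analysis boils down to: compare the treatment group’s gain from pre-test to post-test against the control group’s gain over the same period.

```python
# A minimal sketch (hypothetical data) of the study design the reviewers
# required: pre-test, post-test, and a control group. The estimated effect
# is the treatment group's score gain minus the control group's gain.
from statistics import mean

# Hypothetical test scores for two small groups of students (0-100 scale).
treatment = {"pre": [62, 55, 70, 48, 66], "post": [71, 63, 74, 59, 72]}
control   = {"pre": [60, 58, 67, 50, 64], "post": [63, 60, 69, 52, 66]}

def gain(group):
    """Average post-test score minus average pre-test score."""
    return mean(group["post"]) - mean(group["pre"])

# Difference in gains: the improvement attributable to the tool, assuming
# the control group captures ordinary growth over the same period.
effect = gain(treatment) - gain(control)
print(f"Treatment gain: {gain(treatment):.1f} points")
print(f"Control gain:   {gain(control):.1f} points")
print(f"Estimated effect of the tool: {effect:.1f} points")
```

Without the control group’s gain to subtract off, a tool would get credit for all the growth students would have made anyway, which is exactly why the reviewers treated that element as non-negotiable.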
When it comes to standards and curricula, there is a fever pitch about field-testing and an insatiable craving for evidence. So why have educators, policymakers, and entrepreneurs largely abstained from efforts to reach a consensus on whether and how edtech tools should demonstrate their effectiveness?
One answer is that the goals, risks, and usage of educational technology put it in a gray area in terms of how it is viewed. Imagine that products for beneficial purposes, like health, fall along a spectrum based on the degree to which we demand evidence of their effectiveness. On one end is medication. It would be unfathomable for a medication to be made available without high-quality clinical trials demonstrating at least a modicum of effectiveness. Medicine is dangerous, and therefore we require enough testing to protect people. On the other end of the spectrum would be a physical fitness tool like an ab roller. People might feel they wasted their money if it sits in the closet, but nobody would expect to see a published paper on its effectiveness.
Edtech tools fall somewhere in the middle. Students are not like sick people, who may be wholly reliant on a drug company to save their lives, but the negative consequences of spending time with ineffective learning tools can be severe. A dearth of academic skills snowballs over time and can drastically lower the quality of a person’s life. So we do have a social responsibility to ensure a high standard of evidence for anything children use for learning.
The way these tools are used also puts them in a gray area with regard to ease of evaluation. It’s easy to do a controlled study of a medication: everybody takes the same pill in the same way. Conversely, it would be nearly impossible to measure the effects of an ab roller, because there is no clear desired outcome and people may use it in different ways. It’s hard for anybody to know whether it “worked.”
Once again, edtech products fall somewhere in the middle of these two extremes. While a tool may not help every student, if it’s effective we should be able to see its impact across a large enough sample. At the same time, people may use these tools in different ways, and the tools may be helpful in ways we’re not expecting or measuring. So it’s not entirely unreasonable for certain companies to say it’s not feasible to conduct a controlled experiment.
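That intuition about sample size can be made concrete. The simulation below is a rough sketch built on made-up numbers (a true effect of 0.2 standard deviations, a conventional significance threshold): it shows how a modest but real effect goes from nearly invisible to reliably detectable as the number of students grows.

```python
# A rough sketch of the "large enough sample" point: a small true effect
# becomes reliably detectable as the number of students per group grows.
# All numbers are hypothetical (effect of 0.2 SD, ~1.96 z threshold).
import random
from statistics import mean, stdev

def detects_effect(n, true_effect=0.2, trials=2000):
    """Fraction of simulated studies (n students per arm) in which the
    treatment mean exceeds the control mean by about 2 standard errors."""
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(true_effect, 1) for _ in range(n)]
        se = (stdev(control) ** 2 / n + stdev(treated) ** 2 / n) ** 0.5
        if (mean(treated) - mean(control)) / se > 1.96:
            hits += 1
    return hits / trials

for n in (25, 100, 400):
    print(f"n = {n:4d} per group -> power ~ {detects_effect(n):.2f}")
```

In this toy setup, a study with roughly 400 students per group detects the effect about four times out of five, while one with 25 per group catches it only about once in ten tries, which is why small pilot studies so rarely settle anything.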
One final issue is that edtech research exists at the nexus of marketing and the advancement of general knowledge. The result is that it’s nobody’s priority. Independent researchers who pursue general knowledge are not enthused about doing work that could be used to sell something specific, and companies see no value in the knowledge generated by a negative finding.
In an ideal world, there would be an expectation that, after hitting a certain threshold of usage, an educational tool would be subject to rigorous evaluation. Can anything be done to jumpstart rigorous, high-quality research on education technology?
One model may be the non-profit sector. Non-profits also fear unflattering findings, but grant-making organizations increasingly require evidence of program effectiveness. As a result, many non-profits hire evaluation firms (e.g., Mathematica or MDRC) to evaluate programs in cooperation with the organization.
Theoretically, these kinds of evaluations could even be performed by underemployed social science PhDs who are waiting in vain for professor jobs to open up. This research would not be up to the standard of what appears in a peer-reviewed journal, but it would be relatively rigorous, and it would be a huge step forward for most edtech products, especially those developed by small teams.
Another strategy is to focus on a product’s users rather than its creators. One idea is to use government funds to incentivize schools to cooperate with researchers. Race to the Top funds are already tied to the adoption of a variety of policies and practices, and a small fraction of the money could go toward increasing partnerships between schools and researchers. This could drastically widen the pool of classrooms in which new innovations can be tested.
Defining what “works” in education is more than a two-player game. And getting more stakeholders involved in the questioning and research process will help the industry arrive at more meaningful, and specific, measures of effectiveness.