Does it work? That’s the short but very complicated question asked of all education technology tools. Companies often couch their claims to efficacy behind terms like “personalized learning” and “Big Data.”
But how do we know if their products actually help kids learn? Do companies, educators and researchers measure and evaluate efficacy in the same way? Are we exposing students to experiments that will do more harm than good?
As the edtech industry matures, those questions become more frequent and the pressure to show proven results increases. Barbara Means and Jeremy Roschelle, co-directors of the Center for Technology in Learning at SRI International, have been leading studies on the use of technology to enhance learning.
Means and Roschelle explain how SRI approaches efficacy in education technology, starting with the concept. “There is a technical definition we get from the Institute of Education Sciences. They would say that it [an effective tool] has a demonstrated impact under well-supported conditions,” says Means.
Demonstrating impact, continues Roschelle, requires a clear sense of what the expected learning outcome is and whether the measures used really show improvement. “When you find that something is either effective or ineffective, you often have to ask on what measure and if that measure is the one we really care about,” he says.
“That's a tension for new technologies,” adds Means, acknowledging that current state standardized tests are not enough to tell whether an edtech tool is or is not working.
In a recent chat with EdSurge, Means and Roschelle detailed some of the most important features of efficacy in education, particularly in edtech tools. Most of them should be intuitive, both say, but very often they are not.
1) Efficacy starts with a clear purpose...
The first step in evaluating whether a tool is effective is to have clarity on the outcomes it is trying to achieve. And, as Means describes, there are a number of reasons why a teacher may want to use an edtech product: to help students practice basic skills such as arithmetic or reading, to offer them the opportunity to use tools that real historians or scientists use, or to expose them to the fun behind specific subject areas.
“It's important for teachers to be clear on the main purpose they have for using the tool and then try to make a judgment on whether there is a good match between the experience the technology provides and what their goal is,” says Means.
2) … And with the right "active ingredients"
After making sure that the tool and purposes are aligned, the second step is to determine whether the product's claimed “active ingredient” is based on solid pedagogical principles—the essence of why a tool may have a positive impact on students' learning. “For example, giving kids feedback is well known as a really good technique. Learning from a lecture is not known to have the strongest effect in the world,” Roschelle says.
3) Efficacy depends on what is being measured...
Having laid the groundwork with a clear purpose and solid pedagogical “active ingredients,” however, is no guarantee that a tool will effectively improve kids' learning. Roschelle notes that very often educators try to get a definitive answer about a tool from researchers. “They want a very strong promise that if they use a product, they will see this much percentage of improvement in their students.”
Instead of determining whether a tool works for every student in all circumstances, he adds, a better approach is to understand if there is a particular setting in which the technology is successful. “Has this tool ever been able to prove once, anywhere, on any measure, that it is making a difference in students' outcomes?”
It's like a car, Means points out. “Can anyone say that this is the best car in the world?” she asks. “Probably not. Different people have different purposes and different outcomes and different needs. Learning technology is like this.”
4) … And depends heavily on time
Another key factor for efficacy is time. A particular tool might have a promising approach, but if not used for enough time, it might not lead to the expected results. “Time is a real killer. Will I use this enough times to make a difference? Technology, when used once or twice a semester for an hour, may not be that effective,” highlights Roschelle.
An additional struggle is classroom management, adds Means. “If you are going to move your class to a computer lab in order for them to use [a tool], it's going to take out time to move from one place to another, to log on and get on the system.”
5) Efficacy requires systematic measurement
Finally, efficacy is not possible without accumulating evidence—positive and negative—over time. “It takes a while to make good measurements,” says Roschelle, who is optimistic about the use of technology to gather all this information. “That doesn't mean that technology solves the problem for us, but it gives us new capabilities to do much more than ever before.”
When it comes to efficacy, the only definitive answer to the question “Is this edtech tool effective?” is “It depends.” It depends on purposes, on pedagogical approach, on the chosen measurements, on time and on systematic collection of data. In times when we want the guarantee of “satisfaction or your money back,” the lack of a definitive answer might be uncomfortable. But that isn’t stopping Means, Roschelle and many other researchers from trying to find more conclusive answers.