After seven years and six million dollars, a newly released study funded by the Department of Education may give educators more fuel for their fire when choosing to light up blended learning models in schools. It won’t, however, provide the blaze some might have hoped for.
The study began back in 2006, long before blended learning became the household name it is today. John Pane and his team at the RAND Corporation set out to evaluate the effectiveness of a long-standing product, Carnegie Learning’s Cognitive Tutor Algebra. After two years and 17,000 students, the study leaves educators with some conclusions, but with even more questions about the role big research can play in informing their edtech strategy.
The Study
The study, titled “Effectiveness of Cognitive Tutor Algebra I at Scale,” is one of the largest randomized controlled studies of blended learning to date. In contrast to studies that focus on a single implementation site, RAND researchers looked across a wide range of schools and states: more than 17,000 students at 147 middle and high schools in seven states participated over two years.
Researchers divided schools into control and treatment groups. The treatment group used Carnegie Learning’s Cognitive Tutor Algebra software in a blended learning model. Prior to implementation, teachers received four days of training. For the next two years, those teachers blended their classes, using the software two days a week and spending the rest of the week on group work and other non-software learning. The control group ran their classrooms as usual, without blended learning models.
The Results
While the study had all the ingredients to be groundbreaking--it was large, methodologically sound and executed by top researchers in the field--the findings aren’t significant enough to move mountains of funding into blended classrooms or towards purchasing Cognitive Tutor. They can, however, still inform some educators’ decisions. Here’s how it breaks down.
The middle school treatment group made no significant gains in the first year. By the second year, the data trended slightly positive, but the gains still weren’t statistically significant. According to Pane, the middle school group was smaller than the high school group, which could have affected the results.
Bottom line for middle school educators: This probably isn’t the study for you. The data won’t tell you much about whether this intervention would work with your kids.
The high schoolers in the study surprised researchers in their first year by showing no significant gains, with data “trending negative.” In the second year, however, their scores on the post-test administered by researchers improved: they made twice the gains between the pre- and post-tests that the control group did. Researchers hypothesize that the first-year dip could reflect the acclimation cost of adopting new technology and the adjustment period teachers go through when teaching new materials.
Bottom line for high school educators: This study might indicate that this product could be effective in your school, if you’re willing to undertake the acclimation cost and have the technology available to implement this blended model. In this case, it might be worth taking a deeper look at the results to see how similar schools fared.
Interpreting The Results
While the study leaves high school educators with some food for thought, reaction to it has been mixed among researchers and supporters of blended learning. For some, like the Clayton Christensen Institute, which has helped shape the taxonomy around blended learning, the study is a good start. Says research fellow Julia Freeland: “we see these findings as an excellent first step in building a theory of what works, for who, and how it works.”
But Justin Reich, an edtech researcher at Harvard University, warns that educators should be wary about the conclusions from this study--especially claims made by Carnegie Learning marketers that the product “doubles math learning in one year.” He breaks down the study details in his blog post this week, concluding with a call for “broadly-accessible, readable summaries of important studies that help educators make careful decisions with scarce resources based on careful interpretation of existing evidence.”
Reich points out a common trap that even we at EdSurge can at times fall into: evaluating studies like these is hard, and requires a great deal of expertise and time. In a phone interview with EdSurge, Reich notes that “we expect doctors to read medical studies in a way that we don’t expect teachers and educational leaders” to read education research. And yet, studies like these aren’t easy for practitioners to thumb through.
The Role of Big Research for Big Decisions
As districts and school leaders spend a great deal of money adopting edtech products, the need for data to drive their decision making has never been greater. Researchers like Pane are trying to give the industry better tools for making those decisions. As he explains, “What I see in the media is that there is a lot of deployment of technology without a clear idea of how it will be used… hopefully [through big research] we will be building evidence that helps guide educators, that will be more beneficial [than] what everyone is doing now.”
So, what can big research like this RAND study do for educators trying to make tough decisions about what to implement and where to spend their money?
Big research like this won’t be a silver bullet. Because randomized controlled studies report only averages, they leave a great deal unexplored about other factors, such as implementation. For this reason, there are very few cases where one study can inform all educators across the field. According to both Freeland and Reich, the value in these studies comes from digging into the details.
“It’s tempting to give Carnegie the ‘what works’ stamp of approval. But without digging in on anomalies, it’s impossible to accurately predict the right circumstances for the product to consistently generate impressive boosts in student achievement,” Freeland explains. She advocates that researchers go beyond the randomized controlled study itself to explore the anomalies and look past the averages.
Reich sees it as important for educators to understand the interpretations behind the findings: “Educators should be aware that in interpretation of findings, there are subtle incentives throughout the system for researchers, funding agencies, publishers, journalists--pretty much everyone--to tend towards a bias in highlighting positive interpretations of findings.” He adds, “most studies like these are not going to be able to boil down to a simple slogan. We are going to have to provide people with some context for translating findings.”
Research like this can build a case over time. As more and more exhaustive studies like these are done, the patterns that emerge may demonstrate whether blended learning overall is effective. However, these efforts will likely involve more time and money--two resources that are often in short supply when it comes to education.