Learning management systems are big business, with a market size estimated to approach $8 billion by 2018. Many popular systems, such as Blackboard, Joomla, and PowerSchool, are promoted with case studies highlighting the remarkable improvements made by individual institutions.
Yet the gusto of these pronouncements can't cover up the lack of scientifically rigorous investigation into whether these systems are actually effective.
Until very recently, the majority of institutions adopting online management systems were in higher education, and the general inability to evaluate what students learn in college made it difficult to know whether these systems actually improve learning. In recent years, however, K-12 schools have begun using such systems, and annual standardized testing provides an opportunity to investigate whether they have their intended effect (at least at the K-12 level).
A new study by Royce Kimmons of the University of Idaho attempts to answer a series of important questions regarding whether online management systems improve learning. First and foremost, is adopting an online system associated with an increase in standardized test scores? Beyond that, Kimmons also sought to examine whether the effect of the systems differed by the type of system (general or education-specific), the cost of the system (free or purchased), the source-code licensing (open source or proprietary), or the specific system used (e.g. Blackboard vs. PowerSchool).
Kimmons' analysis was based on data from the websites of all 732 schools in a certain state over the course of the 2011-12 and 2012-13 school years. (For research purposes the state is presented as anonymous, but details in the paper strongly suggest it was Idaho.) To determine whether a school had adopted a particular system, Kimmons wrote a computer program that scanned each page of a school's website for keywords associated with online management systems. Overall, the program identified 931 uses of 23 different systems across the 732 schools. Kimmons then looked at how system adoption was associated with a school's test scores.
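To give a concrete sense of how such a keyword scan might work, here is a minimal sketch in Python. It is not Kimmons' actual program: the crawling logic, the keyword list, and the example school URL are all assumptions made purely for illustration.

```python
# A rough sketch of a keyword scan over a school website.
# Hypothetical example only -- not Kimmons' program.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

KEYWORDS = {"blackboard", "powerschool", "joomla", "drupal", "edmodo"}  # placeholder list

def scan_site(start_url, max_pages=200):
    """Crawl pages on one school's site and record which LMS keywords appear."""
    seen, queue, hits = set(), [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip pages that fail to load
        soup = BeautifulSoup(html, "html.parser")
        text = soup.get_text(" ").lower()
        hits.update(k for k in KEYWORDS if k in text)
        # Follow only links that stay on the same site
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if link.startswith(start_url):
                queue.append(link)
    return hits

print(scan_site("https://www.example-school.edu"))  # hypothetical URL
```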
The accuracy of the program was tested by having human researchers search a limited number of the school websites. Kimmons' program identified about twice as many systems as the human researchers, and upon further investigation the humans were able to confirm 97% of the information the program provided. In addition to potentially uncovering more information than humans, scanning websites rather than surveying school officials kept the analysis from being influenced by officials who may have been unaware of what their school was using, or motivated to hide or highlight the use of a particular system.
So what did Kimmons find? Adopting an online management system had an extremely small, albeit statistically significant, positive effect on test scores. Specifically, adoption of a system explained 1%-2% of the year-to-year variance in test scores. Though such systems surely have benefits that are not reflected in test scores, given their potential cost it seems hard to justify their adoption when they account for a mere 1%-2% of the variation in performance.
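Kimmons' paper spells out the exact models used; as a rough illustration of what "explaining 1%-2% of the variance" can mean, the sketch below compares the R-squared of a model predicting current scores from prior scores alone against one that also includes an adoption indicator. The data, variable names, and effect sizes are synthetic assumptions, not the study's.

```python
# Illustrative sketch of "incremental variance explained" on synthetic data.
# The printed number depends entirely on the made-up data below and is not
# meant to reproduce the study's results.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_schools = 732

prior_scores = rng.normal(200, 15, n_schools)       # prior-year test scores (synthetic)
adopted = rng.integers(0, 2, n_schools)             # 1 if an LMS was detected (synthetic)
current_scores = prior_scores + 4.0 * adopted + rng.normal(0, 10, n_schools)

X_base = prior_scores.reshape(-1, 1)
X_full = np.column_stack([prior_scores, adopted])

r2_base = LinearRegression().fit(X_base, current_scores).score(X_base, current_scores)
r2_full = LinearRegression().fit(X_full, current_scores).score(X_full, current_scores)

# The difference in R^2 is the share of variance attributable to adoption.
print(f"Incremental R^2 from adoption: {r2_full - r2_base:.3f}")
```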
The analysis also found that adopting general systems was associated with more improvement than adopting education-specific systems, but that neither a system's cost nor whether its code was open source made a difference. In terms of specific systems, adopting Joomla had a positive effect in both years of the study, Blackboard and SchoolStream had statistically significant effects in the second year, and the effects of PowerSchool, Drupal, Edmodo, and Wix all approached significance in at least one year. Again, it is worth repeating that these "statistically significant positive effects" were extremely small: they generally accounted for about 1% of the difference in test scores.
It is hard to say anything too conclusive about the results, and the conclusions people draw will likely adhere to their prior beliefs. One uncontroversial takeaway is that adopting online management systems is unlikely to hurt achievement, which was a real possibility given that the funds spent on the systems may be pulled from other areas. On the other hand, it also seems clear that injecting new technology into school or classroom management is not a panacea, and officials or policymakers would be mistaken to present it as such. For school districts that have already made up their minds about moving forward, the results also suggest they may be better off using free, open-source systems.
The standard caveat that this is a single study of a single state is particularly pertinent, and things may have looked quite different had Kimmons been able to examine outcomes other than test scores (e.g. graduation rates, instructional strategies, etc.). More work must be done to confirm and extend these findings, and as additional tools and information-technology infrastructure develop over time, the impact of online management systems may grow stronger or weaker.
Perhaps the most important takeaway from Kimmons' study is that, while the work is hard, it can be done, and thus it is time to move past the point where testimonials on company websites are treated as a legitimate way of determining whether these systems work. In the coming years more and more K-12 schools will move their management online, and it is crucial to learn which ways of doing it are most effective.