Picture yourself in a clearing in the middle of a forest. Although the forest is densely packed with trees, many possible paths lie before you. In your hand, you have a paper that describes some of the different paths: Some will take you over rocky terrain and test your physical strength; others are shaded and dark and will tax your ability to navigate. Others are delightful--but may circle around and around, never helping you reach your goal. Still others are some combination of these and even more exotic terrains. The paper is no map. It's up to you to figure out which paths you'll take to reach your destination. How will you decide?
Forgive the fanciful description--but somehow that classic role-playing game came to mind when we began reading the latest meta-studies on the emerging research on the role of games in education. Bottom line: there's still a lot to be learned, but researchers may be beginning to put better definitions around "games" and "game mechanics" that will make future research more structured, more consistent, and easier to digest.
Here's how: In the summer of 2012, SRI International and GlassLab, funded by the Gates Foundation, MacArthur Foundation, and others, began a four-part project, Research on Assessment in Games, which explored the following questions:
- How much research exists to validate games' effects on student learning?
- How can evidence-centered design be used to create new game-based assessments?
- How can those assessment tools be validated?
- How can their effects on student learning be measured?
GlassLab is concurrently testing, prototyping and refining these ideas in SimCityEDU, the school version of the iconic city-building game.
In May, the SRI research teams released two meta-analyses of existing research articles on simulations and game-based learning. One report, "Digital Games for Learning: A Systematic Review and Meta-Analysis," combed through five major academic journal databases (ISI Web of Science, ProQuest, PubMed, Engineering Village, IEEE Xplore) and found over 60,000 articles with the words "game" or "games" in the title or abstract. Over 58,000 were weeded out at the title level (for instance, studies on game theory), and after further screening, only 77 met the criteria of reporting quantifiable outcomes and effect sizes.
Douglas Clark, one of the report's authors, admitted he was surprised by the "dichotomy" between studies whose authors described the games extensively but offered little information about statistics and methods, and those that focused heavily on quantitative analysis but gave no description of the nature of the games being studied.
The other report, "Simulations for STEM Learning," went through a similar search-and-filter process across three databases (ERIC, PsycINFO, Scopus) and found 40 eligible studies out of 2,392 results. This smaller pool, said report author Cynthia D'Angelo, could be attributed to the narrower focus of the query and to less variability in the style of research being analyzed. "Educational games research is relatively new," she said. "But the simulation field is further along, and the studies are similar to each other in terms of implementation."
"In order to make the kinds of comparison a meta-analyses makes, you need to have certain kinds of data consistent across all studies," she added.
In their preliminary overview of the 77 game studies, the authors of the "Digital Games for Learning" report found "evidence that relative to other instruction conditions, digital games showed significant positive effects on science, math, and literacy outcomes but no evidence of significant effects on general knowledge, social science, engineering, or psychological outcomes." Diving a bit deeper, they found that games that were more "sophisticated" and had "interface enhancement" led to better literacy outcomes. In contrast, games with rudimentary structures showed significantly greater effects on math learning outcomes than those situated in virtual contexts for exploration.
The report on simulation and STEM outcomes found that "simulation treatments were shown to have an achievement advantage over non-simulation instruction." D'Angelo noted these studies relied more on "performance-based and constructivist assessments." The report's summary brief also noted that "very few of the measures used to assess student learning in the simulations were technology-based."
These initial findings offer some evidence that games lead to better student outcomes. But to be helpful for future researchers and developers, further work is needed to drill deeper into the specific features of the games and simulations, the characteristics of the classrooms where the studies took place, and the kinds of assessments that were used. That analysis will appear in follow-up reports scheduled for release by the end of the year.
Clark hopes these meta-analyses will drive future research to be more consistent and transparent in explaining both the methodology behind the experiments and the design of the games themselves. "It's no longer useful to ask, 'Are games good or bad?' Rather than looking at media comparison studies, we really need to focus on the value-added designs and features."