Thanks to the Internet, we’re fortunate to have something that all prior societies have lacked: a near-infinite number of people willing to explain why you’re doing something the wrong way.
But despite our wealth of naysayers, there is still significant value in knowing how to catch your own mistakes as they’re happening. Making accurate real-time evaluations of what you’re doing improves performance, and better performance can ultimately mean more time, more money, and more positive social interactions.
When it comes to solving math problems, students often fail to catch their own mistakes. In many of these situations the problem isn’t a lack of knowledge. Students understand the concepts involved in arriving at the correct solution; they just fail to recognize when they’ve made an error.
One explanation is that students still haven’t developed proper monitoring skills. For such students, attempting to solve a problem and monitor their work at the same time creates cognitive overload. It’s too difficult for them to reason about the solution and carefully monitor the automatic calculations they’re doing.
Might there be a way for students to develop and improve their monitoring skills? Some research suggests that students find it easier to spot the mistakes of others, and so monitoring a third party could be a good way for students to build these skills. But there are social and logistical problems with having students monitor each other. Even adults are made uncomfortable when they have to work with somebody peering over their shoulder.
Could technology provide a solution? Monitoring the work of a computer avatar would avoid the social and logistical complications of students observing each other, and it would also allow the process to be personalized.
A new study by Sandra Okita of Columbia University’s Teachers College takes a look at whether such a technological tool can be effective. In two experiments she examined 4th, 5th, and 6th graders from two low-income New York City schools as they worked on sets of math problems. The basic structure of both experiments involved one group of students who worked through problems on their own (the control group), and a second group of students who took turns solving the problems with a dinosaur avatar named “Projo.” Thus, the latter group could observe, and potentially stop and correct, the actions of somebody else.
The first experiment featured 40 students working in a learning environment called “Doodle Math.” The second experiment involved just 22 students, though they spent more than twice as much time working on problems as students in the first experiment. This second experiment used an environment called “Puzzle Math,” which was built around sub-problems that combined into a larger puzzle. In both experiments the learning environment tracked student activity with a log file.
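Okita’s write-up doesn’t describe the log format itself, but it helps to picture what such tracking amounts to. Here’s a minimal sketch in Python, with entirely hypothetical field and function names, of the kind of per-event record a learning environment like this might append to its log file:

```python
import csv
import time
from dataclasses import dataclass, asdict

@dataclass
class LogEvent:
    """One row of a hypothetical activity log for a math learning environment."""
    timestamp: float   # wall-clock time of the event (Unix seconds)
    student_id: str    # anonymized learner identifier
    problem_id: str    # which problem or sub-problem was active
    actor: str         # "student" or "avatar": whose turn produced the event
    action: str        # e.g. "answer", "evaluate", "self_correct"
    correct: bool      # whether the action's result was correct

def append_event(path: str, event: LogEvent) -> None:
    """Append a single event to a CSV log file, adding a header if the file is new."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(event).keys()))
        if f.tell() == 0:  # empty file: start with a header row
            writer.writeheader()
        writer.writerow(asdict(event))

# Example: a student catches and fixes their own mistake on problem "p3".
append_event("session.csv", LogEvent(time.time(), "s01", "p3",
                                     "student", "self_correct", True))
```

A log like this is what makes the study’s measures possible: every turn, evaluation, and correction, by either the student or the avatar, leaves a timestamped trace.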
Okita examined the frequency and accuracy with which students corrected their own work (experiment 2). For the problems done by Projo, she also looked at whether experimental group students were more likely to correctly evaluate Projo’s work (i.e. correctly say whether it was right or wrong) than control group students were to correctly solve the same problems on their own (experiment 1). In addition, in both experiments students completed pre- and post-tests to measure whether their skills improved over the course of the experiment.
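Given event records like the ones sketched above, the log-based measures reduce to straightforward counting. Here’s a hypothetical helper, assuming the LogEvent fields from the earlier sketch rather than anything from the study’s actual software, that tallies a student’s self-corrections and how often those corrections were accurate:

```python
from collections import Counter

def self_correction_stats(events):
    """From a sequence of LogEvent records, count how often a student
    corrected their own work and what fraction of those corrections
    actually produced a correct answer."""
    counts = Counter()
    for e in events:
        if e.actor == "student" and e.action == "self_correct":
            counts["total"] += 1
            if e.correct:
                counts["accurate"] += 1
    total = counts["total"]
    accuracy = counts["accurate"] / total if total else 0.0
    return total, accuracy
```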
The results suggest that technological tools can be an efficient and effective way to teach students monitoring skills. Students who monitored Projo self-corrected more often on problems they did themselves, and those corrections were more accurate than the corrections of students in the control group. Students in the experimental group also did better than those in the control group on Projo’s problems (i.e. they correctly evaluated Projo’s work more often than control group students correctly solved the same problems).
In both experiments students who reviewed Projo’s work also showed more improvement on the post-test relative to students in the control group, although in experiment 1 the difference did not quite reach statistical significance, and in experiment 2 the gap was only statistically significant when it came to problems focused on calculations rather than rules.
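The published paper would spell out the exact statistics; purely as an illustration of the kind of comparison involved, here’s how a pre-to-post gain comparison between the two groups might look, using Welch’s t-test on made-up placeholder scores (none of these numbers come from the study):

```python
from scipy.stats import ttest_ind

def gain_scores(pre, post):
    """Per-student improvement: post-test score minus pre-test score."""
    return [b - a for a, b in zip(pre, post)]

# Placeholder scores for illustration only -- not Okita's data.
experimental_gains = gain_scores(pre=[4, 6, 5, 7], post=[7, 8, 6, 9])
control_gains = gain_scores(pre=[5, 5, 6, 4], post=[6, 5, 7, 5])

# Welch's t-test avoids assuming the two groups have equal variance.
t_stat, p_value = ttest_ind(experimental_gains, control_gains, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With samples this small, a difference has to be large to reach significance, which is one reason a real effect in experiment 1 could plausibly have fallen just short of it.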
The findings are based on a small sample and they’re far from conclusive, so more work must be done to establish whether tools like Projo actually accomplish what they set out to do.
But the study provides a good illustration of how technology can fill a very small but very important niche. There’s a specific skill the conventional classroom tends to miss--real-time monitoring--and a highly focused computer program has the potential to teach it extremely efficiently.
Often there’s an all-or-nothing framing when it comes to technology in the classroom. Okita’s research is another reminder that even in situations where a large-scale blended learning environment is unfeasible or undesired, there may be a place for limited technological tools that focus on teaching neglected skills. Add up enough of these piecemeal advances and you start to make a real difference.