A fundamental lesson of childhood is the importance of swallowing your pride and asking for help. It’s a lesson that proves to be valuable in the classroom—studies have linked help-seeking to effective learning strategies, improved problem solving, and higher academic achievement.
Technology now makes help available almost anywhere, at any time, at a scale no single tutor could match. But this only makes a difference if the help is good enough. For those designing a computerized learning tool, understanding the process and outcomes of help-seeking is critical.
But where is the line drawn for “good” help—and when does help hurt? At least one study suggests that “expedient” help-seeking—requests for others to do the work or provide an answer without an explanation—is associated with declining achievement.
Another question relates to whether help-seeking patterns have different effects at different skill levels. For example, would a computer-generated hint be more or less helpful for somebody with a relatively low level of knowledge about the topic? It’s also unknown how prior knowledge influences more complex patterns—for example, asking for help when you probably could have solved the problem on your own.
A new study led by Ido Roll of the University of British Columbia attempts to identify general patterns in when computerized help actually helps. Roll and his colleagues used a study design that involved three steps:
- observing what a student did on a specific step of a problem,
- determining whether or not the student effectively sought or avoided help at that step, and
- analyzing the consequences of the student’s help-seeking behavior on subsequent learning.
The design allowed the researchers to analyze the effects of help-seeking within each student, which permitted them to better establish causation and control for potential differences between students. The study used a program called Geometry Cognitive Tutor, and a total of 38 students worked through over 6,000 problem steps in which an error was made and the student had an opportunity to improve.
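The paper's statistical machinery is more involved than this, but the logic of a within-student comparison can be sketched in a few lines of pandas. The column names and values below are invented for illustration; the idea is to compare behaviors within each student first, so that stable differences between students cancel out:

```python
import pandas as pd

# Hypothetical log of problem-step actions; column names are illustrative.
steps = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 2],
    "behavior": ["desired_help", "help_abuse", "desired_help",
                 "inappropriate_attempt", "desired_help", "help_abuse"],
    "later_success": [1, 0, 1, 1, 1, 0],  # outcome on subsequent steps
})

# Average the outcome per behavior within each student...
within = steps.groupby(["student_id", "behavior"])["later_success"].mean()

# ...then average those within-student effects across students, so the
# comparison is not driven by which students happened to seek help most.
effect = within.groupby(level="behavior").mean()
print(effect)
```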
The crux of the study was something called the “Help-Seeking Model” (HSM), an algorithm that classifies student behaviors. At each problem step, the HSM analyzed situational characteristics, including the student’s skill level, time spent on the problem, and recent behaviors, and classified each action as one of the following (a rough sketch of this kind of rule-based classifier appears after the list):
- desired help (seeking help in a situation where the problem likely couldn’t be solved without assistance),
- help abuse (requesting additional hints without taking the time to properly think about previous hints), or
- inappropriate attempt (trying to solve the problem when help-seeking or further thought would be better).
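The published HSM defines its rules and thresholds precisely; the Python sketch below only shows the general shape of such a classifier, with field names and cutoff values invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto

class HelpAction(Enum):
    DESIRED_HELP = auto()
    HELP_ABUSE = auto()
    INAPPROPRIATE_ATTEMPT = auto()
    OTHER = auto()  # catch-all for behavior the three labels don't cover

@dataclass
class StepContext:
    p_solve: float               # tutor's estimate the student can solve this step
    requested_hint: bool         # True if this action was a hint request
    seconds_on_last_hint: float  # time spent on the previous hint (inf if none)

# Illustrative thresholds -- the real HSM tunes its own cutoffs.
LOW_SKILL = 0.4
MIN_HINT_DWELL = 5.0  # seconds

def classify(ctx: StepContext) -> HelpAction:
    if ctx.requested_hint:
        if ctx.seconds_on_last_hint < MIN_HINT_DWELL:
            # Grabbing another hint before digesting the last one: abuse.
            return HelpAction.HELP_ABUSE
        if ctx.p_solve < LOW_SKILL:
            # Seeking help on a step the student likely can't solve: desired.
            return HelpAction.DESIRED_HELP
        return HelpAction.OTHER
    # The action was a solution attempt, not a hint request.
    if ctx.p_solve < LOW_SKILL:
        # Attempting when help (or further thought) would serve better.
        return HelpAction.INAPPROPRIATE_ATTEMPT
    return HelpAction.OTHER

# Example: a second hint grabbed after two seconds on a hard step.
print(classify(StepContext(p_solve=0.2, requested_hint=True,
                           seconds_on_last_hint=2.0)))  # HELP_ABUSE
```

The point is structural: each label depends both on what the student just did and on what the tutor currently believes about the student.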
The researchers were then able to look at how performing one of these behaviors on a given step influenced learning. The Cognitive Tutor also estimated how likely each student was to solve each step, which allowed the researchers to slice the data by whether the student was deemed high-skill, medium-skill, or low-skill on that particular step.
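Cognitive Tutor systems estimate this kind of mastery probability with Bayesian knowledge tracing. A minimal sketch of the update rule, plus an illustrative binning step, follows; the parameter values and cutoffs are invented, not the paper's:

```python
# Bayesian knowledge tracing: update the estimate that a skill is known
# after observing one attempt. Parameter values here are illustrative.
P_TRANSIT, P_SLIP, P_GUESS = 0.1, 0.1, 0.2

def bkt_update(p_know: float, correct: bool) -> float:
    """Posterior probability the skill is known after one observed attempt."""
    if correct:
        cond = (p_know * (1 - P_SLIP)
                / (p_know * (1 - P_SLIP) + (1 - p_know) * P_GUESS))
    else:
        cond = (p_know * P_SLIP
                / (p_know * P_SLIP + (1 - p_know) * (1 - P_GUESS)))
    # Account for the chance the student learned the skill on this step.
    return cond + (1 - cond) * P_TRANSIT

def skill_band(p_know: float) -> str:
    """Bin the mastery estimate; the cutoffs are illustrative, not the paper's."""
    if p_know < 0.4:
        return "low"
    if p_know < 0.7:
        return "medium"
    return "high"
```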
The most important finding is one that may seem obvious—abusing help is bad. The tendency to grab too many hints too quickly was associated with poorer learning across all skill levels. This suggests that designers need to be wary of making help too widely available.
A second, counterintuitive finding relates to students who were deemed low-skill (i.e., unlikely to solve) on a particular step. The researchers found that desired help-seeking did not have a positive effect on steps where students were deemed low-skill. Furthermore, inappropriate attempts (guessing at the answer before being deemed ready) did have a positive effect on these problem steps.
This seems to suggest that when knowledge is low, students may learn more from failed attempts than from help. Alternatively, the program's hints may simply be pitched at a higher minimum skill level, and thus fail to reach the low-skilled students. More work is needed to examine the interaction between help and current knowledge.
Overall, the study leaves most questions without a firm answer, but it is important for its vision of a world in which we know exactly which contexts make help-seeking beneficial. While some companies conduct user testing to gain this knowledge about their own products, it would be a boon for the edtech industry if researchers could identify general tendencies that hold across different content and different tools. That knowledge would give designers a solid foundation on which to build the help functions in their programs.
It may be a laborious process involving studies like Roll’s, which use a single tool and a single subject. But slowly we can move toward a world where tools offer, restrict, and rescind their help at exactly the right time and in exactly the right way.