A researcher tested a new weight-loss supplement. She gave it to 200 overweight adults and discovered that their average weight, as measured by a precise scale, declined after two months of taking the supplement. She concluded that the supplement promotes weight loss.
Is that claim accurate, or faulty?
Ask that question of a group of adults, and many are likely to tell you that the finding checks out. After all, it’s based on an experiment that used a “precise” scale, studied a decent number of subjects, and measured change over time.
But the correct answer is: The researcher doesn’t have enough information to support her conclusion. Her experiment didn’t include a control group of people who didn’t take the supplement, which means she can’t say for sure whether the pill caused the weight loss or whether some other factor did.
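To see why the missing control group matters, here is a minimal sketch, with invented numbers rather than the study’s data, assuming everyone’s weight drifts down slightly over two months for reasons unrelated to the pill (a seasonal trend, say). A single-group before-and-after comparison still shows an apparent effect; only the comparison against a control group reveals that the pill did nothing:

```python
# A minimal sketch with invented numbers, not the study's data: everyone's
# weight drifts down slightly over two months regardless of the supplement.
import random

random.seed(42)

def simulate_group(n=200, drift=-1.5, noise=2.0):
    """Return (pre, post) weights in kg for n hypothetical adults.
    The drift applies to everyone, so it is not a supplement effect."""
    pre = [random.gauss(95, 10) for _ in range(n)]
    post = [w + drift + random.gauss(0, noise) for w in pre]
    return pre, post

def mean(xs):
    return sum(xs) / len(xs)

supp_pre, supp_post = simulate_group()   # took the supplement
ctrl_pre, ctrl_post = simulate_group()   # the missing control group

supp_change = mean(supp_post) - mean(supp_pre)
ctrl_change = mean(ctrl_post) - mean(ctrl_pre)

print(f"Supplement group change: {supp_change:+.2f} kg")  # looks like weight loss
print(f"Control group change:    {ctrl_change:+.2f} kg")  # ...but so does this
print(f"Apparent supplement effect: {supp_change - ctrl_change:+.2f} kg")
```

Without the second group, the first line of output is the only evidence the researcher sees, and it looks like a real effect.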
This is the kind of subtle distinction that critical-thinking skills should help illuminate. Yet many people—adults and children alike—struggle to discern this type of fallacy in claims that seem, at first glance, pretty reasonable.
It turns out that cultivating critical thinking skills can be difficult, even though many educators believe “that’s the point of what we’re training our students to be able to do,” says Ben Motz, a research scientist in the Department of Psychological and Brain Sciences at Indiana University.
Perhaps education has been missing a key ingredient when it comes to teaching students to detect faulty reasoning: practice. That’s the hypothesis that Motz and other psychology researchers from Indiana University tested in a study whose findings they believe point to a promising method for strengthening critical-thinking muscles.
The research aimed to test and improve participants’ ability to identify the following common fallacies, which can lead people to draw inaccurate conclusions from information and data:
- random chance
- lack of control
- confusing correlation with causation
- overgeneralization
- experimenter bias
- confirmation bias
Each group of study participants started the experiment by taking a pre-test and receiving training about critical thinking. But only one group spent time actually putting that training into practice, through an exercise that asked them to read passages about scientific claims—like the one in the first paragraph of this article—and answer multiple-choice questions about possible problems in the logic. A second group took a different type of quiz instead, while a third group did neither activity.
At the end of the experiment, all participants took a post-test measuring their ability to identify fallacies. All three groups improved, but the group that got the extra critical-thinking practice “had significantly higher gains,” improving three times as much over their pre-test scores, according to Motz.
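For a concrete sense of what a “gain” means here, a gain score is simply post-test performance minus pre-test performance, computed per group. The sketch below uses invented numbers purely to show the arithmetic; the study’s actual data is posted online.

```python
# Hypothetical pre/post accuracy for the three groups; only the arithmetic
# mirrors the study's gain-score comparison, not the values themselves.
pre_post = {
    "training + practice":   (0.40, 0.70),
    "training + other quiz": (0.40, 0.50),
    "training only":         (0.40, 0.48),
}

for group, (pre, post) in pre_post.items():
    gain = post - pre
    print(f"{group:>22}: gain = {gain:+.2f} ({gain / pre:.0%} of pre-test score)")
```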
This suggests that training alone may improve people’s critical-thinking skills a little, but that practice in pattern recognition can make a bigger difference.
“You can’t just be told, ‘hey, this is how you evaluate the information.’ You really have to be exposed to scenarios that practice it,” Motz says. “You need to see people mess up and know how they messed up—and why.”
The researchers are still teasing out nuances in the results, but so far, the fallacy that tripped participants up the most seems to be confusing correlation with causation.
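A toy example (again invented, not drawn from the study) shows how easily that fallacy arises: two quantities that never influence each other can still correlate strongly when a third factor drives both.

```python
# Invented illustration: ice cream sales and drownings never affect each
# other, yet both rise with summer heat, producing a strong correlation.
import random

random.seed(0)

heat = [random.gauss(30, 5) for _ in range(500)]            # confounder
ice_cream = [h * 2.0 + random.gauss(0, 3) for h in heat]    # driven by heat
drownings = [h * 0.5 + random.gauss(0, 2) for h in heat]    # also driven by heat

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"r(ice cream, drownings) = {pearson(ice_cream, drownings):.2f}")
# Strong correlation, zero causation: heat drives both.
```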
The research was supported by a grant from the Reboot Foundation, which advocates for and works to improve critical-thinking skills. An article about the research is currently available as a preprint, which means it has not yet been peer-reviewed or published in a scientific journal. The data is publicly available online.
The authors hope the study inspires more educators to incorporate critical-thinking training and practice exercises into their courses.
“We ought to have an enormous, crowd-sourced test bank of critical thinking items,” Motz says. “And people could include them in lots of different disciplines.”
That would fit into one of the researchers’ broader goals: “to empower other educators to be able to conduct experiments in their classes in a relatively simple way,” says Emily Fyfe, a study author and an assistant professor in the Department of Psychological and Brain Sciences at Indiana University. “I think this is a perfect topic that people will be very interested in.”