Students may feel relief as they hand in an exam, but the moment marks the beginning of stress and anxiety for instructors and teaching assistants, who may need to grade hundreds of tests in a short time.
Arjun Singh knows the feeling well. While earning a PhD in computer science from the University of California, Berkeley, Singh was one of a handful of teaching assistants tasked with grading—by hand—hundreds of computer science exams throughout a semester. The trained programmer knew there had to be a more efficient way. And so he began building a prototype for an automated online grading tool that today has grown into his company, Gradescope.
Today, Gradescope announced that it has raised $2.75 million in a funding round led by Reach Capital. GSV AcceleraTE and Ironfire Ventures also participated, as well as the startup’s existing investors K9 Ventures, Freestyle Capital and Bloomberg Beta.
Gradescope is hardly the first group to give automated grading a shot. Other efforts, such as MOOC platform edX’s Enhanced AI Scoring Engine, have similarly tried to use automation to reduce the time educators spend on grading. Yet many have been met with skepticism that such tools can provide feedback as accurate or as personal as a human’s.
Singh’s company takes a slightly different approach, in two respects. First, Gradescope is avoiding essay grading at the moment (though Singh says that may change in the future), and instead largely focuses on applying artificial intelligence to grade responses made up of numbers, lines of code, or short snippets of text.
Second, the tool is not designed to fully automate grading, meaning instructors aren’t cut out of the process entirely. To use Gradescope, a grader scans exams or assignments into the platform, and each question is given a rubric. The grader marks what the correct answer is, and the tool groups together the responses it believes are also correct. The same is then done for common incorrect answers, and the grader can assign feedback that goes out to multiple exams at once. (For outlier responses, graders can go through and manually provide feedback.)
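Gradescope hasn’t published how its grouping works, but the general idea of batching identical short answers so one piece of feedback covers many exams can be sketched in a few lines of Python. Everything below, including the function names `normalize`, `group_answers`, and `apply_group_feedback`, is invented for illustration and is not Gradescope’s actual code:

```python
# Illustrative sketch only: not Gradescope's implementation.
# Idea: cluster identical short answers so a grader writes
# feedback once per group instead of once per exam.
from collections import defaultdict

def normalize(answer: str) -> str:
    """Canonicalize a short answer: collapse whitespace, lowercase."""
    return " ".join(answer.split()).lower()

def group_answers(answers: dict[str, str]) -> dict[str, list[str]]:
    """Map each distinct normalized answer to the students who gave it."""
    groups = defaultdict(list)
    for student_id, answer in answers.items():
        groups[normalize(answer)].append(student_id)
    return dict(groups)

def apply_group_feedback(groups, feedback_by_answer):
    """Attach one feedback string to every student in each group."""
    results = {}
    for answer, students in groups.items():
        note = feedback_by_answer.get(answer, "needs manual review")
        for student_id in students:
            results[student_id] = note
    return results

# Three submissions collapse into two groups; outliers fall back
# to manual review, mirroring the workflow described above.
answers = {"s1": "O(n log n)", "s2": "o(n log n) ", "s3": "O(n^2)"}
groups = group_answers(answers)
feedback = apply_group_feedback(groups, {
    "o(n log n)": "Correct.",
    "o(n^2)": "Partial credit: this is the worst case, not the average.",
})
print(feedback)
```

In practice, grouping short answers that are merely similar rather than character-for-character identical is the harder problem, which is where the machine learning side of a tool like this would come in.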
In short, Singh says faculty will still look at nearly all of the answers—even the ones that the machine grades. “You grade one and [the tool] grades the rest, and the instructor looks at all of it but it’s faster than writing feedback to every question,” explains Singh. “You save time in applying the feedback.”
The tool isn’t trying to replace the human element in the grading process, and that’s what Phillip Conrad, a lecturer in computer science at UC Santa Barbara, likes about it. Conrad says the tool allows him to quickly scan through the correct answers that it identifies, give partial credit to questions that were commonly missed and explain why, and then spend more time on the exams of students who showed more signs of struggle.
Gradescope doesn’t catch everything, though, Conrad adds. As he checks over the system, he routinely finds some answers marked correct when they were not, or vice versa, which he then has to score correctly. To him, it’s still worth it because it cuts down on the time spent reading the correct answers and automates some of the feedback that can be widely applied.
“If a student crosses out an answer then does the correct one, it has trouble with that,” says Conrad. “But instead of looking at all 100 [exams], I’m looking closely at 15.”
Gradescope offers a free version of its grading platform for educators, along with paid plans that add access to analytics and the AI grading features. Pricing ranges from $1 per student per course for a basic account (no AI grading or analytics) to $5 per student per course for groups of instructors at an institution. The company also has a paid plan for institutions and says the tool is already in use at New York University, Oregon State University, Harvey Mudd College and more.