With college debt soaring, there’s a rising call to better measure the value of college. New research by a scholar at Columbia University argues that the way colleges measure their performance today isn’t working, and that in some cases it can do more harm than good.
The research was published this week as an “EdWorkingPaper” by the Annenberg Institute at Brown University, authored by Christina Ciocca Eller, who just finished her doctorate in sociology at Columbia. Eller analyzed administrative data from the largest public urban higher-education system in the country.
She noted that in recent years colleges have been collecting and reporting more data than ever about student outcomes, to satisfy rankings from U.S. News and World Report and other magazines, federal requirements, or new state requirements around “performance-based funding.” But despite this surge of data, she argues in her paper, the information “conveys very little accurate or comprehensive information concerning college effectiveness, whether in an absolute sense or in relation to foundational educational goals such as equity.”
Colleges often collect information about students’ performance before they arrive, such as their grades and SAT scores, along with information about how quickly they graduate, she said. But there is little data that isolates how colleges themselves shape student outcomes.
“There’s so much time and energy going into the appearance of making these institutions accountable,” said Eller, in an interview, “when the measures don’t actually tell us very much about colleges and universities at all.”
One better way to measure, she said, was shown in a New York Times article last week, based on research done in collaboration with the Urban Institute’s Center on Education Data and Policy, which gives more fine-grained graduation rates for colleges based on the types of students they enroll.
“We maybe should start thinking about what kind of value does a college add based on who their students are,” she said. “You could have a college that has a graduation rate that’s really middling, say 65 percent. But if that college had a student body that you would imagine [graduates] at 40 percent...then actually that’s a darn good college.”
On the New York Times list based on that logic, the colleges that moved students the farthest from their expected starting line were Bethel University, the State University of New York at Alfred and the University of La Verne.
Robert Kelchen, an assistant professor of higher education at Seton Hall University, said in an interview that previous research has also pointed out that the accountability systems colleges use don’t really measure what colleges do for students.
“This fits a broader push for more fine-grained outcome data,” he said.
“What students really want to know is, ‘If I go to this college, how am I likely to do?’” he added. “Telling them there’s a 60 percent graduation rate, that doesn’t do the job as well. If you could say, ‘Here’s how people like me do,’ that could potentially change where students go to college.”
The danger of the accountability efforts colleges currently make is that they take time away from developing better measures and leave institutions feeling as though they are addressing the problems of retention and completion.
Eller said one next step she’d like to take in her research is to interview institutional-research officials and administrators at colleges to ask about their intentions and what they focus on as they do their work.
“We, the education community, haven’t been thoughtful enough about how these numbers get thrown around,” she said.