What does a “quality” education mean? What should it enable learners to do? From getting a job to getting into graduate school, programs make a variety of claims about how they help students. But as cases like Corinthian Colleges show, there can be a wide disconnect between marketing language and actual outcomes, making it hard for students to make apples-to-apples comparisons between options.
In an effort to close the gap between claims and facts, Entangled Solutions has issued a near-final draft of a new set of “quality assurance” standards for how programs report and communicate student outcomes. The San Francisco-based education consulting firm hopes these guidelines will be adopted by schools from traditional universities to bootcamps.
Crafting the standards is a “Task Force” of 25 members from think tanks (including education policy wonk Rick Hess from the American Enterprise Institute), colleges (University of Texas), coding bootcamps (Galvanize), investment banking (Tyton Partners), and accounting firms (Ernst & Young).
The draft is available here (PDF), and anyone can comment through July 31. The Task Force will review and adjust the standards based on the feedback, and aims to publish a final version by the end of the year. This committee will then create a separate nonprofit that will be the “safekeeper and guardian of these standards,” according to Michael Horn, a Principal Consultant at Entangled Solutions.
Horn points to the accounting industry as a model for how this nonprofit can economically support itself. One option is to charge accounting firms a membership fee to apply the standards in their audit reports of schools. Another option is to provide professional development to help schools implement the standards. However the revenue stream flows, Horn says his group will not financially benefit from the nonprofit: “It won’t be entangled with Entangled.”
The quality assurance standards offer explicit details on how schools and programs can define and report on learning outcomes, completion rates, placement, earnings and learner satisfaction. They call for “independent, objective, and externally validated” assessments as a more credible form of learning evidence than grades or GPA, which they deem “too subjective.”
The document gets into gory detail on other metrics, such as how completion rates for full-time, part-time and transfer students should be measured, and under what circumstances a student can be exempt. It calls for separating placement rates for students who go on to pursue further studies from those who land a new job. (And for those who got a new job: did they join a new company, stay with their previous one or start their own?) The report also recommends four measures for reporting earnings data.
These nuances may seem esoteric to the lay reader, but are important to ensuring that every student is accounted for. Particularly irksome to Horn are programs that “cherry pick from a smaller denominator of students” to boost their graduation percentages. “Schools are not playing by the same rules in terms of who’s being included in the math,” Horn tells EdSurge.
In addition, “one of the biggest challenges known about government statistics on graduation rate is that it only applies to first-time, full-time students,” he continues. “But these are not the biggest chunk of students in higher ed, so those numbers are almost meaningless.”
In calculating graduation and completion rates, “many higher-ed institutions play games with the numerators and denominators, and try to define and report only on [a] certain subset of students,” says Rick O’Donnell, CEO of Skills Fund and a member of the Task Force. He believes “schools should report on 100 percent of students.”
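To make the denominator games concrete, here is a minimal sketch in Python, using entirely hypothetical numbers, of how the same entering cohort can produce very different graduation rates depending on who gets counted:

```python
# Minimal sketch (hypothetical numbers) of how denominator choices
# change a reported graduation rate for the same entering cohort.

# A hypothetical cohort of 1,000 students:
first_time_full_time_grads = 390   # graduated on time
first_time_full_time_total = 600
part_time_grads = 90
part_time_total = 250
transfer_grads = 80
transfer_total = 150

# Rate under the federal first-time, full-time definition,
# which ignores part-time and transfer students entirely:
ftft_rate = first_time_full_time_grads / first_time_full_time_total
print(f"First-time, full-time rate: {ftft_rate:.0%}")   # 65%

# Rate when every student counts in the denominator,
# as O'Donnell advocates:
all_grads = first_time_full_time_grads + part_time_grads + transfer_grads
all_total = first_time_full_time_total + part_time_total + transfer_total
print(f"All-students rate: {all_grads / all_total:.0%}")  # 56%
```

In this invented example, the same school could advertise a 65 percent graduation rate or a 56 percent one, depending solely on which students it chooses to count.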
The standards also aim to address questionable claims from bootcamps, particularly “99 percent job placement” rates. For example, if a bootcamp hires its own graduates, as Fullstack Academy and Hack Reactor have done, should that count as having landed a job? If so, should that be clearly communicated to prospective students? In the early days of the bootcamp wave, such data came from self-reported surveys administered by the providers. These methods no longer pass muster.
Entangled is not alone in its efforts to create a new set of standards for how programs report learning outcomes. Concerns over marketing language from bootcamps spurred the creation of the Council on Integrity in Results Reporting (CIRR) standards, which 15 providers have adopted. General Assembly, one of the biggest operators, is not a party to CIRR but issued its own outcomes report, reviewed by KPMG.
The Task Force’s quality assurance standards are designed to be used for all higher-ed programs, not just bootcamps, Horn notes. “Our standards are intended to be flexible and apply across all postsecondary institutions,” he adds. “There’s been too much pressure on job placement as the only outcome. Our stance is: Just make the claims on where your graduates went, and state what your purpose was in serving them on day one.”
The report also offers examples of “ideal evidence” that each program should be able to show in order to verify its claims. Earnings data, for instance, could come from federal sources such as the Social Security Administration or the U.S. Department of the Treasury. Employers can also provide documentation attesting to a graduate’s employment and wages.
Yet these are tall orders. Even the report’s authors admit collecting this level of evidence “will likely be a difficult task.” Asks O’Donnell, rhetorically: “How often do you report back to your college how you’re doing?”