Can College Predictive Models Survive the Pandemic?

Opinion | Edtech Business

By Renee Teate | Apr 16, 2021

Though many are eager to forget 2020, data scientists will be keeping the year top of mind as we determine whether the pandemic’s impact makes 2020 data anomalous or an indication of more permanent change in higher ed. As we develop new predictive models and update existing ones with data collected in the last year, we will need to analyze the pandemic’s effects and decide how heavily to weigh that data when trying to predict what comes next.

Beyond the dramatic change in the number of students who applied and enrolled last year, even familiar data from application materials have become less available, making it tougher for colleges to anticipate how applicants and returning students are likely to behave. Because of the difficulty students had taking the SAT or ACT during the pandemic, many institutions have gone test-optional. Scarcer exam data and high variation in the number, type and timing of applications and enrollments have made the familiar annual cycles of higher ed operations less predictable.

Admissions officers and enrollment managers are asking themselves several questions. Should they expect things to return to “normal” pre-COVID patterns this year or permanently alter their expectations? Should they change admissions or scholarship criteria? Should they throw out the predictive models they trained on past data after an unprecedented year? And if they keep existing processes and tools, how can they work with data scientists to recalibrate them to remain useful?

I believe predictive models still offer a lot of value to universities. For one thing, models trained on past data can be especially useful in understanding how reality differed from expectations. But the last year has revealed just how important it is that we fully understand the “how” and the “why” of the predictions these tools make about “who” is most likely to enroll or may need additional services to help them succeed at an institution.

What Models Got Wrong, and Right

When assessing models I built pre-COVID-19, I found the pandemic catalyzed trends and correlations the models had already identified in past data. Essentially, they made sound predictions, but didn’t anticipate the rate and scale of change.

One example is the relationship between unmet financial need and student retention. Students who have need that is not covered by financial aid tend to re-enroll at lower rates. That pattern seems to have continued during the pandemic, and models often correctly identified which students were most at risk of not enrolling in the next term due to financial issues.
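
To make that concrete, here is a minimal sketch of the kind of retention model described above. The data are entirely synthetic and the column names are illustrative, not any institution’s actual features:

```python
# Minimal sketch of a retention model where unmet financial need is
# one input feature. All data below are simulated for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Synthetic student records: unmet need in dollars, GPA, credits earned.
students = pd.DataFrame({
    "unmet_need": rng.gamma(shape=2.0, scale=3_000, size=n),
    "gpa": rng.normal(3.0, 0.5, size=n).clip(0, 4),
    "credits_earned": rng.integers(0, 18, size=n),
})

# Simulate the pattern described above: higher unmet need lowers the
# probability of re-enrolling next term.
logit = 1.5 - 0.00015 * students["unmet_need"] + 0.8 * (students["gpa"] - 3.0)
students["retained"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    students[["unmet_need", "gpa", "credits_earned"]],
    students["retained"],
    random_state=0,
)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Risk score: probability of NOT re-enrolling, used to flag students
# for outreach before the next term begins.
risk = 1 - model.predict_proba(X_test)[:, 1]
flagged = X_test.loc[risk > np.quantile(risk, 0.9)]
print("Mean unmet need among flagged students:",
      round(flagged["unmet_need"].mean()))
```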

Yet in the context of the crisis, the models also may have been overly optimistic about the likelihood of other students returning. As more families’ financial futures became less certain, financial need that was not addressed by loans, scholarships, and grants may have had a larger impact than usual on students’ decisions not to re-enroll. That could help explain why overall retention rates decreased more sharply in 2020 than models anticipated at many institutions.

A model that generates retention-likelihood scores through a more “black box” (less explainable) approach, without additional context about which variables it weighs most heavily, provides fewer actionable insights to help institutions address now-amplified retention risks. Institutions relying on this type of model have less understanding of how the pandemic affected their predictions, which makes it more difficult to determine whether, and under what circumstances, to keep using them.
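
This is where explainability pays off in practice. Continuing the sketch above (so `model`, `X_test` and `y_test` carry over from there), a transparent model lets you read off which variables drive its scores directly, while a black-box model needs a post-hoc, model-agnostic tool such as permutation importance:

```python
# Continues the earlier sketch: `model`, `X_test`, `y_test` carry over.
# With a linear model, the learned coefficients can be read directly;
# permutation importance is a model-agnostic fallback that also works
# for black-box estimators.
from sklearn.inspection import permutation_importance

for name, coef in zip(X_test.columns, model.coef_[0]):
    print(f"{name:>15}: coefficient {coef:+.5f}")

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in zip(X_test.columns, result.importances_mean):
    print(f"{name:>15}: permutation importance {imp:.4f}")
```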

Just because a predictive model performs well and is explainable doesn’t mean, of course, that it and the system it represents are exempt from deep examination. It’s probably a good thing that we must take a harder look at our models’ output and determine for whom models are and aren’t performing well under our new circumstances.

If wealthy families can better “ride out” the pandemic, students from those families might enroll at close to pre-pandemic rates, and models will predict their enrollment well. But families for whom the virus presents a higher health or economic risk might make different decisions about sending their children to college during the pandemic, even if their circumstances haven’t changed “on paper” or in the datasets the model uses. Identifying the groups for which a model’s predictions are less accurate in hard times highlights factors unknown to the model that have real-world impact on students.
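
One way to surface those blind spots is to compute error rates separately for each student group rather than relying on a single overall accuracy number. A minimal sketch, where the grouping variable (an income band, a first-generation flag, or any attribute of concern) is hypothetical:

```python
# Sketch: per-group accuracy instead of one overall number. The
# grouping variable is hypothetical; substitute any attribute whose
# model performance you want to audit.
import numpy as np
import pandas as pd

def accuracy_by_group(y_true, y_pred, groups):
    """Return the model's accuracy within each subgroup."""
    frame = pd.DataFrame({
        "correct": np.asarray(y_true) == np.asarray(y_pred),
        "group": np.asarray(groups),
    })
    return frame.groupby("group")["correct"].mean()

# Hypothetical usage, with `model` and `X_test` as in the earlier
# sketch and `income_band` an invented per-student label:
# print(accuracy_by_group(y_test, model.predict(X_test), income_band))
```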

Challenging Algorithmic Bias

It’s even more vital to identify those people whom models overlook or mischaracterize at a time when societal inequities are especially visible and harmful. Marginalized communities bear the brunt of the health and financial impacts of COVID-19. There are historical social biases “baked into” our data and modeling systems, and machines that accelerate and extend existing processes often perpetuate those biases. Predictive models and human data scientists should work in concert to ensure that social context, and other essential factors, inform algorithmic outputs.

For example, last year an algorithm stood in for canceled U.K. college entrance exams, predicting how students would have scored had they been able to take them. It produced highly controversial results.

Teachers estimated how their students would have performed on the exams, and the algorithm then adjusted those human predictions based on the historical performance of students from each school. As Axios reported, “The biggest victims were students with high grades from less-advantaged schools, who were more likely to have their scores downgraded, while students from richer schools were more likely to have their scores raised.”

The article concluded: “Poorly designed algorithms risk entrenching a new form of bias that could have impacts that go well beyond university placement.” The British government has since abandoned the algorithm, after massive public outcry, including from students who performed much better on mock exams than their algorithmically generated results predicted.
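
The actual algorithm was more complicated, but a toy calculation shows why anchoring individual predictions to a school’s historical results downgrades strong students at historically lower-performing schools. All numbers and the blending weight here are invented for illustration:

```python
# Toy illustration only -- NOT the actual U.K. grading algorithm.
# If an individual prediction is blended with the school's historical
# average, a student predicted to score 90 at a school whose past
# cohorts averaged about 60 gets pulled sharply downward.
import numpy as np

def adjust_to_school_history(teacher_estimate, school_history, weight=0.7):
    """Blend a teacher's estimate with the school's historical mean.

    `weight` is how heavily the adjustment trusts school history over
    the individual estimate; 0.7 is an invented number.
    """
    return (1 - weight) * teacher_estimate + weight * np.mean(school_history)

print(adjust_to_school_history(90, [55, 60, 65]))  # -> 69.0
```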

To avoid unfair scenarios that affect the trajectory of students’ lives, predictive models should not be used to make high-impact decisions unless people with domain expertise review every result and have the power to challenge or override it. These models must be as transparent and explainable as possible, and their data and methods must be fully documented and available for review. Automated predictions can inform human decision-makers, but they should not replace them. Additionally, predictions should always be compared to actual outcomes, and models must be monitored to determine when changing circumstances mean they need to be retrained.
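
That last step, comparing predictions to actual outcomes, can be automated as a simple monitoring check. A minimal sketch, with an invented tolerance threshold:

```python
# Minimal drift check: compare the model's mean predicted retention
# rate to the rate actually observed, and flag the model for review
# or retraining when the gap exceeds a tolerance. The 5-point
# tolerance is an invented example, not a standard.
import numpy as np

def needs_retraining(predicted_probs, actual_outcomes, tolerance=0.05):
    """Flag the model when its mean prediction drifts from reality."""
    gap = abs(np.mean(predicted_probs) - np.mean(actual_outcomes))
    return gap > tolerance, gap

# A model that expected ~85% retention in a term where only ~74%
# of students actually re-enrolled:
rng = np.random.default_rng(0)
actual = rng.random(1_000) < 0.74
flag, gap = needs_retraining(np.full(1_000, 0.85), actual)
print(flag, round(gap, 3))  # -> True, a gap of roughly 0.11
```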

Ultimately, while 2020 exposed hard truths about our existing systems and models, 2021 presents an opportunity for institutions to recognize flaws, tackle biases and reset approaches. The next iteration of models will be stronger for it, and better information and insights benefit everyone.
