Gather student data, make predictions about their learning—and perhaps their future. For years, education companies have tried to apply technology to better understand students and tailor their learning experiences, or to support instructors who can intervene when human help is needed.
Today the latest buzz revolves around machine learning, which education technologists claim can support more precise tools. But there’s much more to machine learning than theory and hype. What it takes to make these products effective, and how to boost student learning equitably and ethically, remain matters of ongoing debate. EdSurge reopened the conversation on Wednesday with a group of educators and education technology entrepreneurs at a meetup in the Big Apple.
Speakers quickly placed the technology in context: a broader shift in how widely available student data has become.
“We have a discussion at Pearson about this shift in society from a digital desert to a digital ocean,” said John Behrens, vice president of advanced computing and data science at Pearson. “Before the digital world, if you wanted data on students, you had to stop the student, instrument or test them, then go on your way. In a digital ocean, you don’t have to stop that instruction, there is an interplay. The data is emerging naturally through homework, through games and play.”
Panelist Janel Grant, a software engineer and former teacher, pointed out that this vision of machine learning in education is rarely what plays out in the classroom. “There is a big disconnect I felt as a teacher between the technology we receive versus the tech we wanted,” she said.
Education technology isn’t a new idea or industry, moderator and EdSurge Managing Editor Tony Wan reminded the audience. So how does machine learning really change the status quo? (Or does it?)
Panelist Andrew Jones, a data scientist at Knewton, admitted that despite the hype, machine learning is still relatively limited in how it’s been applied, at least in the eyes of some users. “Most of what’s in the market now comes across as fancy homework or fancy textbooks,” said Jones. “To move beyond those labels is a much bigger challenge, one that we on the data science team worry about constantly. It’s the holy grail.”
Pearson’s Behrens thinks automated essay grading is one area where machine learning is starting to make progress. “In that space, we are mapping the features of essays against different labels of those essays,” he said.
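For readers curious what that feature-to-label mapping can look like in practice, here is a minimal sketch of a supervised essay-scoring model built with scikit-learn. The essays, rubric scores and model choices below are invented for illustration; they are not Pearson’s approach or data.

```python
# Minimal sketch of supervised essay scoring: map text features to score labels.
# The essays and rubric scores below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

essays = [
    "The experiment shows a clear relationship between heat and expansion.",
    "i think metal get bigger when hot because it just does",
    "Thermal expansion occurs because particles vibrate more as temperature rises.",
    "hot things big cold things small",
]
scores = [3, 1, 3, 1]  # rubric labels assigned by human graders (toy values)

# Pipeline: turn each essay into TF-IDF features, then learn the feature-to-label mapping.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(essays, scores)

new_essay = ["Particles move faster when heated, so the rod expands slightly."]
print(model.predict(new_essay))  # predicted rubric score for an unseen essay
```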
Al Essa, vice president of analytics and R&D at McGraw-Hill Education, believes that computers are getting better, faster and cheaper at making predictions based on student data. But those predictions are still only one factor in getting a full picture of learning. “When you have to make a complex decision, we make predictions. But robust decision-making is understanding causality,” said Essa.
From Grant’s perspective, machine learning still has a ways to go before it can make a demonstrable impact on teachers’ work. Drawing on her own classroom experience, she said one of the areas where machine learning could be most useful is not at the student level, but in helping teachers cut down on time-consuming tasks, like organizing a classroom library by reading level. “Machine learning would be great for leveling my classroom library. I have to read the book and determine what level it is. Predictive modeling can do that,” said Grant. “These [applications for machine learning] might seem small to a big company, but to an educator, it’s huge.”
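As a rough illustration of the kind of predictive modeling Grant has in mind (not a tool she described), a book-leveling model might turn simple text statistics into features and learn level bands from books a teacher has already labeled. The feature values and level names here are invented examples.

```python
# Rough sketch: predict a book's reading-level band from simple text statistics.
# Feature values and level labels below are invented toy examples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [average word length, average sentence length, rare-word fraction]
# for a book excerpt; labels are level bands a teacher has already assigned.
features = np.array([
    [3.8,  6.0, 0.02],   # early reader
    [4.1,  8.5, 0.05],   # early reader
    [4.9, 14.0, 0.12],   # intermediate
    [5.2, 16.5, 0.15],   # intermediate
    [5.8, 22.0, 0.25],   # advanced
    [6.1, 24.5, 0.28],   # advanced
])
levels = ["early", "early", "intermediate", "intermediate", "advanced", "advanced"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(features, levels)

# Predict the band for an unlabeled excerpt with mid-range statistics.
print(model.predict([[5.0, 15.0, 0.13]]))
```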
For predictive analytics in higher education in particular, which have been used to help determine a student’s likelihood of success, Essa said: “There are lots of models out there that try to predict who will drop out or fail a course. And there are a couple of issues with that.” He continued, “It can be highly contextual—maybe that model is applied in only one course. There is no single model that will work across the board. It is an area in need of quite a bit of development.”
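To make Essa’s point about context concrete, here is a minimal sketch of the sort of course-level dropout model he is describing: a classifier fit on one course’s engagement data, whose learned pattern may not carry over to a different course. The features and numbers are invented, and this is not any vendor’s model.

```python
# Sketch of a course-level dropout model. The caution Essa raises is that a model
# fit on one course's data may not transfer to another. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [assignments submitted, average quiz score, logins per week]
course_a_features = np.array([
    [10, 0.85, 5], [9, 0.78, 4], [3, 0.40, 1], [2, 0.35, 1],
    [8, 0.70, 3], [1, 0.30, 0], [10, 0.90, 6], [4, 0.50, 2],
])
course_a_dropped = [0, 0, 1, 1, 0, 1, 0, 1]  # 1 = dropped or failed

model = LogisticRegression().fit(course_a_features, course_a_dropped)

# The model outputs a probability, not a decision: someone still has to judge
# whether the pattern learned in course A means anything in course B.
new_student = [[5, 0.55, 2]]
print(model.predict_proba(new_student)[0][1])  # estimated risk for one student
```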
Many of the ongoing challenges around machine learning come back to questions about bias ingrained in the systems and in the algorithms embedded in the technology. “Data is easily gotten, but it has a lot of bias in it,” said Behrens.
For all the talk about data and learning, Essa offered this blunt assessment: “Pretty much all edtech sucks. And machine learning is not going to improve edtech.” So what’s missing? “It’s not about the data, but how do we apply it. The reason why this technology sucks is because we don’t do good design. We need good design people to understand how this works.”
To close out the event, panelists shared what questions parents, educators and students should be asking about machine-learning technologies in education. “Who owns the data and how will it be used against me?” said Grant. “It’s sad to say, but it’s the first question that needs to be asked. Why am I using this, why is it useful and who owns that information?”
Jones said that users should feel more empowered to ask companies directly how their algorithms work. “What is this algorithm optimizing for and why?” he suggested. “I wish people asked us this more. Asking what is being optimized for, and why, can give you a sense of [whether a tool] is focused on student outcomes, or whether it is about getting a prediction that’s right more often.”
At the end of the night, an audience member asked what the future of virtual assistants might look like in the classroom. Grant remains skeptical of the technology’s potential to wholly replace teachers in the instructional process. “I don’t see a child sitting in front of an Alexa and being taught, because there is a whole other set of cues they need to learn. I don’t see machine learning reaching that point.”
Jones agreed: “I’d rather see machine learning reach the level of chalk.”