The Education Department Outlines What It Wants From AI


By Daniel Mollenkamp     Jun 2, 2023

This article is part of the collection: How AI Is Impacting Teaching and Learning.

OpenAI, the company behind ChatGPT, predicted last year that the technology would usher in the greatest tech transformation ever. Grandiose? Maybe. But while that may sound like typical Silicon Valley hype, the education system is taking it seriously.

And so far, AI is shaking things up. The seemingly sudden pervasiveness of AI has even led to faculty workshop “safe spaces” this summer, where instructors can figure out how to work with the new algorithms.

For edtech firms, this partly means figuring out how to protect their bottom line as students swap some edtech services for AI-powered DIY alternatives, such as tutoring replacements. The most dramatic example came in May, when Chegg’s falling stock price was blamed on chatbots.

But the latest news is that the government is investing significant money to figure out how to ensure that the new tools actually advance national education goals like increasing equity and supporting overworked teachers.

That’s why the U.S. Department of Education recently weighed in with its perspective on AI in education.

The department’s new report includes a warning of sorts: Don’t let your imagination run wild. “We especially call upon leaders to avoid romancing the magic of AI or only focusing on promising applications or outcomes, but instead to interrogate with a critical eye how AI-enabled systems and tools function in the educational environment,” the report says.

What Do Educators Want From AI?

The Education Department’s report is the result of a collaboration with the nonprofit Digital Promise, drawing on four listening sessions held in June and August of last year with 700 people the department considers education stakeholders. It represents one part of a broader federal effort to encourage “responsible” use of this technology, including a $140 million investment to create national academies focused on AI research, moves that are inching the country closer to a regulatory framework for AI.

Ultimately, some of the principles in the report will look familiar. Most notably, it stresses that humans should be placed “firmly at the center” of AI-enabled edtech. In this, it echoes the White House’s earlier “blueprint for AI,” which emphasized the importance of humans making decisions, in part to allay concerns about algorithmic bias in automated decision-making. Here, the aim is also to mollify concerns that AI will lead to less autonomy and less respect for teachers.

Largely, the hope expressed by observers is that AI tools will finally deliver on personalized learning and, ultimately, increase equity. These artificial assistants, the argument goes, will be able to automate tasks, freeing up teachers’ time for interacting with students while also providing students with instant feedback, like a tireless (free-to-use) tutor.

The report is optimistic that the rise of AI can help teachers rather than diminish their voices. If used correctly, it argues, the new tools can provide support for overworked teachers by functioning like an assistant that keeps teachers informed about their students.

But what does AI mean for education broadly? That thorny question is still being negotiated. The report argues that all AI-infused edtech needs to cohere around a “shared vision of education” that places “the educational needs of students ahead of the excitement about emerging AI capabilities.” It adds that discussions about AI should not forget educational outcomes or the best standards of evidence.

At the moment, more research is needed. Some should focus on how to use AI to increase equity, by, say, supporting students with disabilities and students who are English language learners, according to the Education Department report. But ultimately, it adds, delivering on the promise will require avoiding the well-known risks of this technology.

Taming the Beast

Taming algorithms isn’t exactly an easy task.

From AI weapons-detection systems that soak up money but fail to stop stabbings to invasive surveillance systems and cheating concerns, the perils of this tech are becoming more widely recognized.

There have been some ill-fated attempts to stop specific applications of AI in their tracks, especially in connection with the rampant cheating allegedly occurring as students use chat tools to help with, or entirely complete, their assignments. But districts may have recognized that outright bans are not tenable. For example: New York City public schools, the largest district in the country, lifted its ban on ChatGPT just last month.

Ultimately, the Education Department seems to hope that this framework will set down a more subtle means for avoiding pitfalls. But whether this works, the department argues, will largely depend on whether the tech is used to empower — or burden — the humans who facilitate learning.

