In 2018 we witnessed a clash of titans as governments and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs found themselves defending big data, its use and how they were safeguarding the public.
Meanwhile, the public was amazed at technological advances like Boston Dynamics' Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and Alexa listening in on all our conversations. For better or worse, advanced technologies like artificial intelligence have captured the imaginations of the public, policymakers and big business.
I see four AI use and ethics trends set to disrupt classrooms and conference rooms. Education focused on deeper learning and understanding of this transformative technology will be critical to furthering the debate and ensuring positive progress that protects social good.
1. Companies will face increased pressure about the data AI-embedded services use.
As the public learns more about how and where AI is embedded in their personal lives, they will take greater interest and care in how and where they use those platforms. This will create greater demand for transparency in data use.
We’ve already seen this shift begin broadly with the European GDPR opt-in policy, and we’ve seen how an outraged public can push companies to explain how and where customers’ data is stored and used. As researcher Daniel Solove has found, people who object to certain uses of their data want the right to say no, but the average person doesn’t want to micromanage their privacy.
Companies can get ahead of these issues by sharing more information about their AI, educating consumers about the services they offer and how AI is used, and giving people the option to remove their data. These concerns also give educators an opportunity to discuss with their students how AI systems work.
2. Public concern will lead to AI regulations. But we must understand this tech too.
In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.
This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.
Federally funded initiatives, as well as corporate efforts (such as Google’s What-If Tool), will lead to the rise of explainable and interpretable AI, whereby the AI actually explains the logic behind its decision-making to humans. But the next step from there would be for regulators and policymakers themselves to learn how these technologies actually work. That step is overlooked right now, and it is one that Richard Danzig, former Secretary of the Navy, advises us to consider as we create “human-in-the-loop” systems, which require people to sign off on important AI decisions.
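To make the idea of explainability a little more concrete, here is a minimal sketch in Python. It trains a small decision tree on a toy dataset and prints the rules the model actually learned, which is one simple form of a model explaining its logic to a human. It is an illustrative example only, not a description of Google's tool or of any particular explainable-AI system.

```python
# A minimal sketch of interpretability: a small decision tree whose learned
# rules can be printed and read by a human. Illustrative only; real
# explainable-AI systems are far more sophisticated.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy dataset: classify iris flowers from four measurements.
data = load_iris()

# Keep the tree shallow so its decision rules stay readable.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Print the decision rules the model actually uses, in plain text.
rules = export_text(model, feature_names=list(data.feature_names))
print(rules)
```

Running this prints a short set of if-then rules (for example, thresholds on petal length and width), which is the kind of human-readable account of a decision that regulators and policymakers would need to be able to interpret.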
3. More companies will make AI a strategic initiative in corporate social responsibility.
Recently, large tech organizations, amid bias and privacy concerns, began launching initiatives to inform and educate the public about the benefits of AI. Google invested $25 million in AI for Good, and Microsoft added an AI for Humanitarian Action program to its prior commitments. While these are positive steps, the tech industry continues to have a diversity problem. The problem is deep-rooted, and solving it will require the industry to focus on the pipeline early, starting with children. More corporations will recognize this and begin investing more money and time in encouraging young people, especially girls and minorities, to learn about AI and its potential for good.
4. Funding for AI literacy and public education will skyrocket.
Ryan Calo from the University of Washington explains that it matters how we talk about technologies we don’t fully understand. News cycles about killer robots and one-sided conversations about AI replacing humans at work have been stoking fear for years.
This year we will see a greater effort to rebalance the public discourse, beginning with and reaching us through the media. Initiatives like the AI and Open News Challenge will reach us through education programs designed to help the media understand how AI works. AI courses at traditional universities like MIT, along with online courses from organizations like Coursera, will aim to prepare more people for jobs in AI. And simplified curricula for lay audiences as young as eight, led by nonprofits, will help communities as well as primary and secondary educators teach the basics, nurturing curiosity and encouraging lifelong learning as the industries and technologies using AI expand.
For educators and curriculum developers, referencing the approaches taken to explain climate change will help determine how to talk about AI and what’s at stake in our rhetorical choices.
This year the broader societal implications of AI will lead to the launch and growth of many efforts across education, corporations and government. These initiatives will translate into AI curricula and training programs, public awareness campaigns, and increased regulation of and demand for accountability from AI companies and researchers. Expect AI, its ethical concerns and its opportunities to do good to be top of mind.