Saya made headlines and history at her first substitute teaching gig, but she’s no ordinary teacher. She is a remotely controlled robot that was first introduced to Japanese students in 2014 and has since been programmed with facial expressions, responses, and minor movements.
Though social robots like Saya have been greeted with enthusiasm by students, researchers are taking a more critical stance on the ethics of introducing them as teachers. In fact, a new research study suggests that the deceptive nature of social machines could negatively impact learning.
“They are not going to provide an adequate replacement for human attention, and any suggestions that you could replace a human teacher with a robot are misguided,” says Dr. Amanda Sharkey, a researcher from the United Kingdom who specializes in the ethics of social robots. Her recently published paper sheds light on the potentially dark side of computerization, the process of replacing humans with computers. Sharkey focuses on the use of robots in social settings, looking into scenarios including elderly care, home child care, and, most recently, classroom teaching.
Sharkey’s research flags two significant ethical concerns: the invasion of privacy and the possible harm deceptive relationships can do to a child’s social development. She calls the robots “deceptive” because their human-like responses and characteristics can “trick” children into believing real bonds have formed, compromising their emotional health and privacy. Children, known to trust and befriend inanimate objects, may even confide in social robots, risking situations where a robot lacks the emotional intelligence to respond adequately. In addition, robots can store data and information about children, and in the age of data breaches, any personal information stored in computers is at risk.
However, Dr. Takuya Hashimoto, the researcher who ran the two-year experiment with Saya, noted that children were not likely to confuse robots with humans. In fact, the students were more attentive and more excited to do activities with Saya precisely because they knew she was a robot, and her limited movements and responses made it obvious that she wasn’t human. Still, he had reservations about whether robots could make good teachers. “In my opinion, it is difficult for a robot to manage or control children's behavior, particularly in [a] long-term class,” he said. “So, in our experiment, experimenters and human teachers also participated in the class to control the children’s behavior, but I think a robot can be [an] ‘assistant teacher’ to help human teachers.” As robots become more sophisticated, this perception and understanding could change.
Sharkey notes that robots can be useful in assistive roles in classrooms, such as a support tool for teaching English. She points to the effectiveness of a robot used to teach English to students in Korea and to the students’ willingness to share their mistakes and learn with the machine instructor. However, Sharkey emphasizes the need to distinguish between jobs performed well by humans and those done well by machines, stressing that robots should be used only to offer students quality educational opportunities that people cannot provide. At the moment, robots lack the emotional intelligence needed to manage a classroom, handle the complex emotions of children, serve as role models, assess students’ needs, and build meaningful relationships with them.
As technology charges into the classroom and computerization threatens to upend professions across our economy, teachers can take comfort in the fact that it may be a long time before a robot is intelligent enough to take their place. Even when sophisticated robots are built, the ethical questions surrounding their implementation might just be compelling enough to keep education human.