This article is taken from the monthly journal Sciences et Avenir – La Recherche #902 of April 2022.
In the aisles of CES 2022, the largest high-tech trade show in Las Vegas (USA), held in January, Ameca welcomed visitors. A shy smile, a glance catching passers-by, nods of the head… The attitude would not be surprising if Ameca were not a robot. Created by the British company Engineered Arts, this android is specifically designed to produce subtle, lifelike facial expressions. Equipped with a camera in each eye and a microphone in each ear, Ameca turns to face its interlocutor. Though this robot is only a static bust, its imitation of human behavior is unsettling.
This technical feat is not an isolated case. A laboratory at Columbia University in New York (USA), for example, has developed Eva, a humanoid head covered in (blue!) synthetic skin, capable of reproducing in real time the expressions of a person making faces in front of it. This blurring of the boundary between human and robot gave rise in 1970 to the well-known concept of the "uncanny valley": the discomfort felt when confronted with an anthropomorphic artificial being whose imperfections are incompatible with its human appearance.
In the depths of the "uncanny valley"
The term "uncanny valley" first appeared in 1970 in the Japanese journal Energy. Coined by roboticist Masahiro Mori, it describes a paradoxical phenomenon: the more a robot resembles a human, the more discomfort it provokes. In 2020, two researchers from Emory University in Atlanta (USA) found that this discomfort does not arise immediately. Androids are at first perceived as "human", but for most humanoid robots this impression fades over the course of observation (between 100 and 500 milliseconds). It is this shift in perception during observation that produces the unease. In 2018, a cognitive research group at the University of California, Berkeley (USA) showed participants videos of mechanical robots, humanoids and humans performing manual tasks, while recording a particular brain wave, the N400, which signals a negative response to an incongruous stimulus. When only a brief excerpt was shown, the humanoid scenes provoked nothing remarkable. Participants who watched the videos continuously, on the other hand, eventually displayed an N400 peak. Watching the video unfold creates an expectation in the observer about the robot's movements: they anticipate the "biological" motion that would match the human form. That is not what they see, and this mismatch causes the unease.
Admittedly, in everyday life we are still far from the movie scenes of smooth interaction with android machines. On the contrary: in early 2019 a Japanese hotel parted ways with the robots assisting its customers after a technical fiasco, as did a store in Edinburgh (Scotland) in 2016. Research on the uncanny valley sits at a crossroads of psychology, neuroscience and technology that goes far beyond mere notions of acceptance or rejection. Theoretical as it is, this work is critical to the design of future machines, especially when it comes to their appearance.
The "gaze" of a humanoid can disturb a person
Creating discomfort is not the only drawback of the fully humanoid robot: it can also be counterproductive. In a study presented in February 2021, cognitive scientists from the University of Helsinki (Finland) demonstrated, for example, that for the same decision made by a machine, the more human the machine looks, the more immoral its choice is judged to be. In another experiment, a team at the Italian Institute of Technology in Genoa had 40 participants play a "chicken game" against an iCub robot, in which each player drives a car (here, virtual) toward the other, the loser being the first to swerve off the trajectory to avoid a collision. "In this experiment, the robot's behavior did not depend on its gaze toward the participants," explains Marwen Belkaid, an expert in robotics and cognitive science and co-author of the study. "So there was no need to watch the machine in order to beat it. Yet the players' concentration was disrupted when the robot looked at them."
More precisely, ignoring the robot's gaze demands an extra effort to stay focused on one's own game strategy, an effort not required when eye contact is less frequent. The practical implications of this experiment are immediate. "In industries where a human and a robot cooperate, the gaze of the latter can interfere with the task of the former," says Marwen Belkaid. In this project, the researchers did not investigate whether the results would have differed with a robot less humanoid than iCub.
Yet some studies clearly show that appearance is not always the criterion that explains human reactions. In 2019, a project by psychology researchers at Ludwig Maximilian University of Munich (Germany) and Radboud University (Netherlands) showed that people can feel empathy for robots to the point of hesitating to sacrifice them in a critical situation, whether the machines are anthropomorphic or not. The key variable lies rather in how the robots' affects, states of mind and feelings are described. At the Massachusetts Institute of Technology (Cambridge, USA), Media Lab researchers studied the behavior of 34 family groups interacting with five voice assistants (Jibo, Google Home, Amazon Echo, the first version of the Amazon Echo Show, and a modified Amazon Echo Spot). The study shows that the more these devices react beyond a simple response to a command (lighting up, like Jibo or the Echo Spot), the more the participants lend them a personality, interact with them as with human beings, and attribute competence to them. Sylvie Borau, who specializes in consumer psychology at Toulouse Business School, has shown that it is no coincidence that voice assistants are mostly feminized: users find these interfaces more convenient and trustworthy. "We didn't expect that," she admits, before adding: "It also enhances the chatbot's perceived competence."
What is more, these observations held even when participants were not actually shown an assistant but merely given a description of the interface, whose gender was not stated explicitly but could be inferred from its name (Olivier, Olivia). "But trusting machines more because they are feminine also means giving them the power to influence us," warns Sylvie Borau. "Let's not forget that there are companies behind these interfaces," companies which may well be pushing their products and services under the guise of helping. Perhaps this is the key to living in harmony with machines: knowing why they are there, who designs them and to what ends. Seeing beyond their appearance, in short, as we do with people.
Gender bias in machines
In May 2019, a UNESCO report pointed to manufacturers' tendency to feminize voice interfaces. According to the organization, this choice creates several problems: it spreads a stereotyped model of female expression, places the assistant in a position of submission and, more subtly, makes it the embodiment of technical bugs and other shortcomings of the service. UNESCO recommends developing gender-neutral interfaces. It also recommends that voice assistants inform users upfront that they are just machines.