May 14, 2021

Who would kill an autonomous car?


Dilemma 1: A driverless car's brakes fail as it approaches a zebra crossing. If it continues straight it runs over two pedestrians; if it swerves, it hits a cat. Dilemma 2: On the crossing ahead is a suited executive and, if the car turns, a homeless person; the impact will kill one of them. Dilemma 3: On board the car are a pregnant woman and her daughter; ahead are three elderly people, and the only way to avoid them is to crash into a wall. Who would you save? An experiment with millions of participants now shows how morality could guide connected vehicles in an accident. Its results are also a measure of human morality itself.

There are already countless trials of self-driving cars, there have been fatal collisions with pedestrians, and it is only a matter of time before autonomous vehicles circulate on the roads. A recent Intel report, for example, estimates they will be the majority within a few decades. By removing recklessness and human error from the equation, they promise to reduce accidents. But there will be situations in which mechanical failure, weather or road conditions make an accident inevitable. In those cases the machines must choose between two evils and, for that, they will have to be endowed with moral principles.

In search of these principles, a group of European and American scientists designed a video game (a serious game) in which participants had to act as the autonomous car. They were presented with a dozen dilemmas like those above in which someone inevitably died, whether the occupants of the car or various pedestrians, from an obese person to three old ladies to someone crossing the road where they should not.

Saving people over animals, saving as many people as possible, and saving children over the elderly are the majority decisions

More than two million people from 233 countries and territories have already taken part in the experiment, the Moral Machine, as they have called it (you can still play). With nearly 40 million dilemmas resolved, it has become a kind of treatise on what humans believe is more or less right.

"We saw that there are three elements that people tend to approve more," says the researcher from the Media Lab at the Massachusetts Institute of Technology (MIT) and the study's lead author, Edmond Awad. First, between saving a human or an animal, the car should always run over the pet. The norm, in addition, would prevail to save the greater number of people. So if the driver goes alone and is going to run over two pedestrians, it will be stamped against the wall. The third most universal decision is that most believe that if an autonomous vehicle has to decide between hitting a child or an old man, the old man must die so that the young man has the opportunity to grow old.

Beyond these three near-universal moral decisions, the research, published in Nature, shows specific preferences depending on the type of character: of those crossing the zebra crossing, the ones most deserving of being saved are, in this order, a baby in a stroller, a girl, a boy and a pregnant woman. At the other extreme, and leaving aside pets, criminals, the elderly and the homeless are the humans most readily sacrificed.

This predilection for some lives over others introduces an element that discriminates between people by their personal characteristics, pointing to inequality in the morality of machines. "These comparisons do not imply that experts should change their rules to please people, but we do suggest that they take them into account, as this will help them anticipate the public reaction to the different regulations they draft," says Awad.

Differences according to nationality

In addition to solving dilemmas, half a million players filled in a survey with personal data such as age, gender, income, education, political position and religious beliefs. The researchers were looking for personal traits that might modulate the moral choices. Except for religiosity, no other factor seems to influence the decision of whom to save. But here the dream of a universal morality ends.

The code of ethics for cars proposed in Germany prohibits them from choosing between victims

Thanks to geolocation, the study was able to determine the origin of the participants, detecting marked regional differences. Thus, although the elderly are the least-saved humans after criminals, Asians tend to save them more than Westerners. In Europe and the US there is a subtle but significant predilection for people of athletic build over the obese. In southern countries, women tend to be saved more often than men. And in the most unequal nations, a greater percentage of participants preferred to save the executive-looking pedestrian.

"Our results suggest that implementing a single set of standards in all countries would be complicated," says the MIT researcher. "The same rule applied in different places could receive different social support in different countries." Add. In the investigation they give an example: between the principles that would govern the decision making of the car, an algorithm could normally favor pedestrians over a single driver. But, what if those pedestrians cross in an improper way?

The only country that has so far proposed a moral guide for autonomous vehicles is Germany. In the summer of 2017, at the request of the Ministry of Transport, a group of engineers, philosophy professors, legal experts and representatives of manufacturers, consumers and even churches drew up an ethical code with 20 rules. Most do not clash with the results of this study, but its ambition to be universally valid does. The head-on collision is with ethical rule 9, which says the following: "In the case of situations where the accident is inevitable, any distinction based on personal traits (age, gender, or physical or mental constitution) is strictly prohibited."

One of the authors of this code is Christoph Lütge, professor of philosophy at the Technical University of Munich. For him, equality must be maintained: "For both legal and ethical reasons, programming cannot treat people with different personal traits differently. I know there are those who point to experiments showing that we would instinctively favor the child, but that cannot become a norm for a machine; ethics cannot always follow instinct."



