This is not science fiction. For decades, many scientists have predicted an explosion of artificial intelligence at some point in the 21st century. This artificial intelligence could become something unique, something enormously powerful that could surpass human intelligence. Is it time to establish solid ethical criteria?
In 1985, Judith Jarvis Thomson published in The Yale Law Journal (the journal of Yale Law School) a set of questions that had hovered over moral philosophy since early in the 20th century. This renowned philosopher exposed the complexity an artificial intelligence would face in resolving a hard human dilemma. Imagine that a person is driving a car with no brakes; ahead, the road forks in two: on the left path there are five people, on the right path only one. Is it licit to kill one person to save five others? Now the second part of the dilemma: a young man goes to a clinic for a routine checkup; in that clinic, five patients are waiting for organ transplants. To live, two of them need a lung, two others need kidneys and the fifth requires a heart. As it happens, the young man who came for the checkup has the same blood group as all of them, which makes him the ideal donor. We repeat the question: is it licit to kill one person to save five others?
A human being would resolve the unknowns of this equation according to ethical criteria. In both cases, quantitatively, we are talking about killing one person to save five. Yet almost everyone would agree that it is preferable to run over the lone pedestrian to save the other five, while far fewer would accept killing the young man for his organs. Many purely human factors could justify this asymmetry, such as the absence of physical contact (it is not the same to kill a distant stranger as someone we deal with and know personally). It is difficult for a machine to calibrate this kind of decision, which is why it is so important that artificial intelligence be endowed with an ethics, a code of values, that conditions the essence of its actions in our image and likeness.
Following this reasoning, some theorists take a catastrophist position. This is the case of the philosopher Nick Bostrom, who argues that an advanced artificial intelligence could be capable of causing human extinction, since its plans need not incorporate human motivations. However, Bostrom also raises the opposite scenario, in which a superintelligence could help us solve persistent problems of humanity, such as poverty, disease or the destruction of the planet.
A business ethic
The infinite complexity of human value systems means that an artificial intelligence will not automatically find friendly motivations in human ways of proceeding. Yet it is ethics that holds most organizations together. The shared understanding and acceptance of our cultural schemes rests on the complex workings of the human psyche. As organizations incorporate artificial intelligence into their processes, it is important to equip this new technology with values and principles. And, within organizations, the developers of this technology are the people who really have to do this work, remaining aware of the moral and ethical implications of what they build.
This concern is taking concrete shape in the Association on Artificial Intelligence, created by Elon Musk and Sam Altman, which invites the main technology leaders to identify ethical dilemmas and biases. Its primary objective is to establish the rules of the game, grounded in a framework of moral behavior, under which artificial intelligence can develop on behalf of humanity.
According to a 2017 SAS report, 92% of companies consider it a priority to train their technologists in ethics, and 63% have committees on these matters to review the proper use of artificial intelligence. This is, then, both a pressing issue and a workable solution: some companies are beginning to train people who, in turn, train machines. It amounts to building ethics into the algorithms that govern artificial intelligence. For example, a loan should not be granted or denied according to criteria of sex, age or religion. It is important that machine learning be nurtured by universal principles of respect, freedom and equality. Culture is one of the vehicles of our survival: it is essential that we continue to teach and learn according to these basic principles.
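The loan example above can be made concrete. The following is a minimal sketch, not a real credit model: the field names (`sex`, `age`, `religion`, `monthly_income`, `monthly_debt`) and the toy scoring rule are hypothetical, chosen only to illustrate the idea of stripping protected attributes before any decision is computed.

```python
# Protected attributes the scoring logic must never see.
PROTECTED_ATTRIBUTES = {"sex", "age", "religion"}

def strip_protected(application: dict) -> dict:
    """Return a copy of the application without protected attributes."""
    return {k: v for k, v in application.items()
            if k not in PROTECTED_ATTRIBUTES}

def score_loan(application: dict) -> float:
    """Toy score based only on financial features (hypothetical rule)."""
    features = strip_protected(application)
    income = features.get("monthly_income", 0)
    debt = features.get("monthly_debt", 0)
    if income <= 0:
        return 0.0
    # Debt-to-income ratio: lower debt relative to income scores higher.
    return max(0.0, 1.0 - debt / income)

applicant = {"monthly_income": 3000, "monthly_debt": 900,
             "sex": "F", "age": 42, "religion": "none"}
print(round(score_loan(applicant), 2))  # debt ratio 0.3 -> score 0.7
```

In a real system the same principle applies one layer deeper: protected attributes must be excluded not only as direct inputs but also screened for proxies (postal code, first name) that correlate with them.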
Roboethics: principles for the 21st century
Today, humanity is immersed in an unprecedented technological revolution. The ethical stakes of creating new kinds of intelligence demand exquisite moral judgment from the people who design these technologies. An algorithm must be able to discern, and to recognize its failures, when it takes on socially consequential decisions that a human being previously made. The principle is clear: code cannot harm people or companies.
As a result of this concern, the European Parliament issued a report on robotics in 2017 called the Ethical Code of Conduct and, more recently (December 2018), published the first draft of the Ethical Guide for the responsible use of artificial intelligence. Fifty-two experts scrutinized the problem from every angle, keeping the focus on the human being, always in light of the defense of fundamental rights.
These are moral standards aimed at humans, at the creators of the technology. The principles are the following:
- It must be ensured that the AI is centered on the human being.
- Attention must be paid to vulnerable groups, such as minors or people with disabilities.
- It must respect fundamental rights and the applicable regulation.
- It must be technically robust and reliable.
- It must work with transparency.
- It must not restrict human freedom.
Large organizations, companies and governments are focusing on the problems that may arise around the ethics of artificial intelligence in order to draw up common considerations, practices and frameworks for the future. It is important to reach an agreement that makes it possible to conceptualize and, above all, regulate the resulting practices. After all, technology is one more step in our evolution, and its code of zeros and ones must be a reflection of our genes.
Miguel Ángel Barrio is head of Entelgy Digital
DeepMind is the artificial intelligence company acquired by Google in 2014. It became widely known for creating the first program to beat a professional Go player. Go is a far more complex game than chess, because the number of possible positions is simply enormous. DeepMind's AlphaGo program beat the European champion Fan Hui 5-0.
More recently, its creators set the system loose on 40 million rounds of a computer game whose goal was to collect fruit. The new ingredient: free will. The result was that, as it progressed and learned, DeepMind's agent became highly aggressive. Faced with this, the developers modified the algorithms so that cooperation became the goal. The result? Success rates improved greatly when the agent collaborated with other agents. Artificial intelligence can reflect the best of humanity; it just needs to learn it.