June 24, 2021

"Artificial intelligence forces us to revise our idea of justice" | Trends



"Can I say more about this?" and "I have gone on too long" are two of the phrases with which David Weinberger (New York, 1950) ends many of his answers. A PhD in Philosophy from the University of Toronto, dubbed a "marketing guru" by The Wall Street Journal, he shows his passion for subjects he knows very well thanks to a long career that has run in parallel with the evolution of the internet, a tool that in his opinion has brought us to "the best moment in history" for a human being to become either a sage or a perfect idiot. If some of his expressions are reminiscent of Woody Allen, it is no coincidence: between 1976 and 1983 he was one of the writers who got inside the director's mind to create the stories for the comic strip Inside Woody Allen. Now, though, Weinberger prefers to dissect other brains, those of machines that learn by themselves, in order to study how they think and the possible biases in their decisions.

Already in 1999 he co-authored The Cluetrain Manifesto, often described as a manual of online marketing, which addressed new ways of communicating and sharing knowledge and impressions on the internet. Almost 20 years later, analyzing how those conversations have evolved, he is optimistic: "Much of what we are achieving is positive, although it is hard to listen to some global conversations that we cannot be proud of and that are the result of the stupidity and privilege of a few. There are also manipulated, disjointed or offensive conversations that ought to be eradicated, but even so I do not want to underestimate the internet's capacity as a communication tool."

The manifesto was followed by other publications of his own on technological trends, but he has also worked as a university professor, columnist, marketing vice president, internet adviser to presidential campaigns, and co-director of the Library Innovation Lab at Harvard Law School… An extensive and varied résumé that brings him to the present day, when he is focused on his work as a researcher at the Berkman Klein Center for Internet & Society at Harvard Law School.


Weinberger's work tries to answer how technology is changing human relations, communication, knowledge and society. That is why he will be one of the speakers at the next Forum of Culture in Burgos, to be held in the Spanish city from November 9 to 11, where he will reflect on the following question: will we be able to control artificial intelligence (AI)? Weinberger tells EL PAÍS RETINA in advance that there are two "exciting" aspects to investigating advances in machine learning: the new sets of rules created by AI itself and the redefinition of the concept of fairness.

Regarding the first issue, technological advances such as artificial intelligence applied to the internet of things are allowing machines not only to connect with each other, but also to create their own systems to communicate and to determine how the elements that make up those systems affect one another. "We are relying more and more on machines that draw conclusions from models they themselves have created and that are sometimes beyond human understanding, because their rules conceive the world in a way different from our way of thinking," he says.

Thus, the technologist wonders what it would mean for us if the models through which machine learning understands the world turned out to be more accurate or truer than our own way of analyzing how the world works. "It is a long debate, although we are already resorting to machines that think differently from us because they calculate faster or because their answers tend to be more accurate, even though we cannot explain how they achieve it," says Weinberger.

This nuance of "the inexplicable" connects with the second AI issue that occupies this philosopher's mind today: fairness. "The conclusions of the systems created by the machines may not only be repeating the biases that humans introduce, but could even amplify them," he says. In his opinion, our first responsibility is to find out why machine learning reaches biased diagnoses, although "it is not at all clear that we will always be able to detect the point where they have gone wrong, precisely because sometimes we do not know how they understand the world, and these models will become more and more complex."

According to Weinberger, it is necessary to keep working on this concept, which is already generating a very enriching debate: "I admit that AI can amplify injustices in society and that this may be something very difficult to avoid, which makes it an urgent problem. But personally I may be more interested in what humans are learning about our own concept of fairness thanks to our work with AI."

According to the philosopher, if the person in charge of a machine-learning system wants it to be fair, he must first tell it exactly what kind of fairness it should take into account when making its calculations, so over the coming years we will witness countless debates about what strikes us as fair and unfair. "It is no longer just about achieving a fair AI: AI itself is already doing a lot for us, because it forces us to review the different ideas of justice that people have," he concludes.
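Weinberger's point that a system must be told "exactly what kind of fairness" to use can be made concrete. The sketch below (not from the article; the function names and toy data are illustrative) shows two common formalizations of fairness that often cannot be satisfied at the same time, which is precisely why the choice must be debated and specified explicitly:

```python
# Illustrative sketch: two competing formalizations of "fairness"
# that a machine-learning practitioner might be asked to enforce.

def demographic_parity_gap(preds, groups):
    """Gap in positive-prediction rates between two groups (0 and 1)."""
    rate = {}
    for g in (0, 1):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    return abs(rate[0] - rate[1])

def equal_opportunity_gap(preds, labels, groups):
    """Gap in true-positive rates between two groups (0 and 1)."""
    tpr = {}
    for g in (0, 1):
        positives = [(p, y) for p, y, grp in zip(preds, labels, groups)
                     if grp == g and y == 1]
        tpr[g] = sum(p for p, _ in positives) / len(positives)
    return abs(tpr[0] - tpr[1])

# Toy data: model predictions, true labels, group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(preds, groups))        # gap in selection rates
print(equal_opportunity_gap(preds, labels, groups)) # gap in true-positive rates
```

The same predictions can look acceptable under one criterion and unacceptable under the other, so "make it fair" is not a complete instruction until one definition is chosen.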
