July 23, 2021

Why it can be dangerous for an algorithm to decide whether to hire you or grant you a loan | Technology

In 2014, Amazon developed an artificial intelligence recruiting tool that learned that men were preferable and began to discriminate against women. A year later, a Google Photos user discovered that the program labeled his black friends as gorillas. In 2018 it emerged that an algorithm used to assess the likelihood of reoffending for a million convicts in the US failed as often as a person with no particular legal or criminal-justice expertise. Decisions that used to be made by humans are now made by artificial intelligence systems: hiring, the granting of loans, medical diagnoses or even judicial sentences. But the use of these systems carries a risk, because the data with which the algorithms are trained are conditioned by our knowledge and our prejudices.

"The data is a reflection of reality. If the reality is prejudiced, the data also, "explains Richard Benjamins, Ambassador of big data and artificial intelligence from Telefónica, to EL PAÍS. To avoid that an algorithm discriminates certain groups, he argues, we must verify that the training data do not contain any bias and during the testing of the algorithm analyze the ratio of false positives and negatives. "It is much more serious an algorithm that discriminates in an undesired way in the legal domains, loans or admission to education than in domains such as film recommendation or advertising," says Benjamins.

Isabel Fernández, managing director of applied intelligence at Accenture, gives the example of mortgages granted automatically: "Let's imagine that in the past most applicants were men, and that the few women who were granted a mortgage passed criteria so demanding that all of them kept up their payments. If we used this data without further ado, the system would conclude that today women are better payers than men, which is only a reflection of a prejudice of the past."
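Fernández's thought experiment can be reproduced with a small simulation. Everything below is assumed for illustration (the thresholds and sample are invented, not Accenture data): both groups repay at exactly the same underlying rate, but because historical approvals applied a stricter threshold to women, the approved records alone make women look like better payers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# An underlying creditworthiness score, identically distributed for everyone.
score = rng.normal(0.0, 1.0, n)
is_woman = rng.random(n) < 0.5

# Historical approvals: men were approved above a lax threshold,
# women only above a much stricter one (the prejudice of the past).
approved = np.where(is_woman, score > 1.0, score > -0.5)

# Repayment depends only on the score, never on gender.
repaid = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * score))

# Repayment rate among approved applicants, by gender.
for label, mask in [("men", approved & ~is_woman), ("women", approved & is_woman)]:
    print(f"{label}: approved {mask.sum()}, repayment rate {repaid[mask].mean():.2f}")
```

A model trained only on the approved records would encode the old selection policy as if it were a fact about borrowers.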

However, women are usually harmed by these biases. "Algorithms are generally developed because mostly white men between 25 and 50 years old have decided so in a meeting. On that basis, it is difficult for the opinion or perception of minority groups, or of the other 50% of the population, women, to get through," explains Nerea Luis Mingueza. This researcher in robotics and artificial intelligence at Carlos III University says that underrepresented groups will always be more affected by technological products: "For example, female or children's voices fail more often in speech recognition systems."

"The data is a reflection of reality. If reality is prejudiced, the data also "

Minorities are more likely to be affected by these biases for purely statistical reasons, according to José María Lucia, the partner in charge of EY Wavespace's center for artificial intelligence and data analysis: "The number of cases available for training will be lower." "In addition, all groups that have suffered discrimination of any kind in the past may be susceptible, because when using historical data we can include this bias in the training without realizing it," he explains.

This is the case of the black population in the US, according to Accenture senior manager Juan Alonso: "It has been proven that, for the same kind of offense, such as smoking a joint in public or possessing small amounts of marijuana, a white person is not stopped but a person of color is." As a result, he argues, black people are overrepresented in the database, and an algorithm trained on this information would carry a racist bias.

Google sources explain that it is essential to "be very careful" when giving an artificial intelligence system the power to make any decision on its own: "Artificial intelligence produces responses based on existing data, so humans must recognize that it does not necessarily give impeccable results." For that reason, the company is committed to having a person make the final decision in most applications.

The black box

Machines often end up being a black box full of secrets, even for their own developers, who are unable to understand the path the model has taken to reach a certain conclusion. Alonso notes that "normally, when you are judged, you are given an explanation in the ruling": "But the problem is that this type of algorithm is opaque. You are facing a kind of oracle that is going to hand down a verdict."

"People have the right to ask how an intelligent system suggests certain decisions and not others and companies have a duty to help people understand the decision process"

"Imagine that you are going to an open-air festival and when you get to the front row, the security officers will throw you out without giving you an explanation. You will feel outraged. But if you explain that the first row is reserved for people in wheelchairs, you will go back but you will not get angry. The same goes for these algorithms, if we do not know what is happening, a feeling of dissatisfaction may occur, "Alonso explains.

To resolve this dilemma, researchers working in artificial intelligence demand transparency and an explanation of the training model. Large technology companies such as Microsoft defend a set of principles for the responsible use of artificial intelligence and are driving initiatives to open up the black box of algorithms and explain the reasons behind their decisions.

Telefónica is organizing a challenge within LUCA, its data unit, to create new tools that detect unwanted biases in data. Accenture has developed AI Fairness, and IBM has also built its own tool that detects bias and explains how artificial intelligence reaches certain decisions. For Francesca Rossi, director of ethics in artificial intelligence at IBM, the key is for artificial intelligence systems to be transparent and trustworthy: "People have the right to ask how an intelligent system suggests certain decisions and not others, and companies have a duty to help people understand the decision process."
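Those vendor tools are not reproduced here, but one basic check they automate is easy to sketch. The snippet below, with invented data and plain Python rather than any of the products named above, computes the disparate impact ratio: the rate of favorable decisions for the group being audited divided by the rate for everyone else, where values well below 1 flag a potentially discriminatory model.

```python
import numpy as np

def disparate_impact(decisions, protected):
    """Favorable-decision rate for the protected group divided by the rate for the rest."""
    decisions, protected = np.asarray(decisions), np.asarray(protected)
    return decisions[protected].mean() / decisions[~protected].mean()

# Toy decisions: 1 = credit granted; `protected` marks the group being audited.
decisions = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1])
protected = np.array([False, False, False, False, False, True, True, True, True, True])

ratio = disparate_impact(decisions, protected)
print(f"disparate impact: {ratio:.2f}")  # the common four-fifths rule flags values below 0.8
```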
