Tue. Apr 23rd, 2019

Who makes sure the algorithms are not racist or sexist? | Trends

Imagine that you could know the chances of a newborn being abused during its first five years of life. Would it be worth investing resources to have that information? It would certainly be helpful for social services: a list of children at higher risk would allow them to monitor their situation.

That vision drove the development in New Zealand of a system that, based on 132 variables (the parents' age, mental health, criminal history, whether or not they received benefits...), assigned each newborn a score for the likelihood of suffering abuse. The program was made public in 2014 and shut down the following year, after research showed that the system was wrong in 70% of cases.

The engine of that program was an algorithm, that is, a recipe or set of instructions applied to input data to solve a problem. You may think that the closest you come to an algorithm in your daily life is when you search on Google, or when Spotify discovers a band that fits your musical tastes. Far from it: we swim among them. An algorithm often decides whether or not you deserve an interesting job offer, or whether it suits a bank to give you a loan.
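That "recipe" idea can be made concrete with a toy sketch. Everything below, the variables, the thresholds and the function name `loan_decision`, is invented for illustration and does not correspond to any real lender's model:

```python
# A toy "algorithm" in the article's sense: a fixed recipe applied
# to input data to produce a decision. All variables and thresholds
# here are invented for illustration only.

def loan_decision(income, debt, missed_payments):
    score = 0
    if income > 30_000:              # earns above an arbitrary cutoff
        score += 2
    if debt / max(income, 1) < 0.4:  # debt-to-income ratio looks healthy
        score += 2
    if missed_payments == 0:         # clean repayment history
        score += 1
    return "approve" if score >= 4 else "reject"

print(loan_decision(income=45_000, debt=9_000, missed_payments=0))   # approve
print(loan_decision(income=15_000, debt=10_000, missed_payments=3))  # reject
```

The point is not the arithmetic but the design: whoever picks the variables and thresholds has already encoded an opinion about who deserves credit.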

Algorithms are everywhere. The New Zealand example puts two weighty issues on the table. First, it is not only private companies that use them: public institutions also turn to them to make relevant decisions. Second, algorithms make mistakes, and those mistakes can ruin your life.

The mathematician and activist Cathy O'Neil says that algorithms are "opinions embedded in mathematics". Depending on who builds these models, which variables they take into account and what data feeds them, they will produce one result or another. "We usually think that algorithms are neutral, but it's not like that. The biases are structural and systemic; they have little to do with any individual decision," explains Virginia Eubanks, professor of Political Science at the University at Albany (New York) and author of Automating Inequality, a book that delves into the socioeconomic biases of algorithms with a telling subtitle: How High-Tech Tools Profile, Police, and Punish the Poor.

In her book, Eubanks discusses the aforementioned New Zealand system and many others operating in the United States. One of them determines which of Los Angeles's 60,000 homeless people are entitled to public assistance; another, supposedly designed to allocate social benefits objectively in Indiana, had to be shut down when it was discovered that it encouraged cuts, conveniently for an administration going through budgetary constraints, by leaving out taxpayers who qualified for aid.

The mathematician Cathy O'Neil worked in the financial industry... until she jumped into the Occupy Wall Street movement.

There are other even more striking examples. In Allegheny County (Pittsburgh, Pennsylvania), an algorithm from the Office of Children, Youth and Families tries to predict the future behavior of parents in order to prevent abuse or mistreatment. It does this by estimating what the people analyzed are likely to do, based on patterns of behavior shown by similar individuals in the past.

All this from public data, which, as the author shows, already entails significant socioeconomic discrimination (in the US, those who turn to the public system are those who cannot afford the private one). Public school, housing office, unemployment service, county police... The people who deal with these institutions are, of course, more often poor than rich. And in the United States, the poor include an overrepresentation of blacks, Latinos and other ethnic minorities.

Algorithms are prejudiced: men are computer scientists and women, housewives

Virginia Eubanks has spent years researching the effects that certain algorithms have on the lives of the poorest.

Rich families, Eubanks explains, may also be dysfunctional, but the Allegheny County system would never know: detox clinics or psychiatrists, for example, are outside the public system and therefore do not count for the algorithm. "Is it right that the system disproportionately punishes the most vulnerable? Would the wealthiest tolerate their data being used in this way? Obviously not," the New Yorker bursts out indignantly. "We must reflect on what we are doing, on what it says about us as a society that we automate the decision of whether parents take good or bad care of their children," she adds.

"At the highest levels of the economy, it is human beings who make the decisions, even though they use computers as useful tools. But at the intermediate levels, and especially at the lowest levels, a large part of the work is automated," O'Neil writes in Weapons of Math Destruction. If a graduate of Stanford Law School is told in an interview with a prestigious law firm that his name shows up in the system associated with an arrest for running a meth lab, the interviewer will laugh, assume the machine has made a mistake, and continue with the interview. If an algorithm determines that a parent is not responsible enough, the risks that parent runs are anything but anecdotal.

Frank Pasquale is concerned about the opacity of the algorithms and databases of large global corporations.

For those who live in the United States, there are algorithms that classify you as creditworthy or not based on the probability that you will repay a loan, calculated from your credit history, income level and other data. Your score will determine the interest rate that banks offer you, which in the case of a mortgage can mean thousands of dollars a year. It can even affect your job opportunities, since some companies distrust those who carry debt.
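The "thousands of dollars a year" claim is easy to check with the standard fixed-rate amortization formula. The loan amount and the two interest rates below are invented, illustrative figures, not data from the article:

```python
# Standard fixed-rate mortgage payment formula:
#   M = P * r * (1 + r)^n / ((1 + r)^n - 1)
# where r is the monthly interest rate and n the number of monthly payments.

def monthly_payment(principal, annual_rate, years=30):
    r = annual_rate / 12
    n = years * 12
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# Illustrative scenario: the same $300,000 loan offered at two rates,
# as might happen to borrowers with different credit scores.
good_score = monthly_payment(300_000, 0.040)
bad_score = monthly_payment(300_000, 0.055)
print(f"extra cost per year: ${(bad_score - good_score) * 12:,.0f}")
```

With these assumed numbers the gap is on the order of $3,000 a year, the scale of impact the article describes.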

The sophisticated models that weigh people's creditworthiness are opaque. "This tool is too decisive in people's success or failure to operate wrapped in secrecy," says Frank Pasquale. This lawyer, a professor of Law at the University of Maryland, published in 2015 The Black Box Society: The Secret Algorithms That Control Money and Information, a book that explores the opacity of the algorithms that most affect our lives.

Edward Snowden's leaks showed that the NSA uses data from companies such as Google or Facebook to monitor citizens. And these companies, which enjoy a quasi-monopolistic position in the market, know almost everything about us. "We may not be able to stop the collection of data about ourselves, but we can regulate how it is used," says Pasquale. "There are companies that build a certain profile of people and structure each individual's opportunities on top of it. For example, we know that certain real estate, financial or medical products are offered to the most vulnerable people with sometimes fraudulent messages," he explains.

Joy Buolamwini has identified and denounced the racist biases of certain algorithms, biases she herself has suffered.

"We are building a vast 3D representation of the world in real time. A permanent record of ourselves. But what is the meaning of this data?" asked the digital media professor Taylor Owen in his 2015 Foreign Affairs article "The Violence of Algorithms". The answer is disturbing. Many of the failures of algorithms applied to social issues stem from a basic error: algorithms work with probabilities, not certainties, and the two are often confused.
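The probability-versus-certainty confusion can be made concrete with Bayes' rule. The numbers below are invented for illustration; they are not the New Zealand system's actual figures:

```python
# Why a seemingly accurate risk model still produces mostly false
# alarms when the outcome it predicts is rare: probabilities are
# not certainties. All numbers are invented for illustration.

def precision(base_rate, sensitivity, specificity):
    """Fraction of flagged cases that are true positives (Bayes' rule)."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Assume 5% of cases are truly at risk, and the model correctly
# flags 80% of those while correctly clearing 80% of the rest.
p = precision(base_rate=0.05, sensitivity=0.80, specificity=0.80)
print(f"{p:.0%} of flagged families are genuine positives")
```

With these assumptions, barely one flag in six is correct; the rest are families wrongly marked as dangerous, which echoes the 70% error rate reported for the New Zealand system.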

If clear rules are not established around the use of data, we may soon be in for unpleasant surprises. For example, once companies have accumulated vast amounts of information about their employees' health, O'Neil asks in her book, what will prevent them from developing health scores and using them to filter job candidates? Much of the proxy data that could be used for that task, such as steps taken in a day or sleep patterns, is not protected by law, so using it is legal. If companies reject applicants because of their credit rating, it stands to reason they would also do so because of their health.

But let's not get ahead of events. What we know today is that algorithms prove to have racial and gender biases when they are entrusted with personnel selection. Given the flood of CVs that many large American multinationals receive, it is very common for each company to develop CV readers to do a first filtering. And these filters are not neutral. In 2002, when these systems were not yet widespread, a group of MIT researchers sent 5,000 CVs in response to job offers published in newspapers. Half of the invented profiles had typically white names, such as Emily Walsh, and the other half typically black names, such as Jamal Jones.

Lorena Jaume-Palasí wants to create a guide that helps developers to detect gaps and ethical conflicts.

The results were significant: the white profiles received 50% more calls. Gender biases are also common in personnel selection. O'Neil explains in her book how the introduction of screens in musicians' auditions quintupled the female presence in the orchestras that adopted the practice.

In a now-classic article signed by a group of professors at Boston University, the authors showed that machine learning systems have sexist biases because in their most common data source, the Internet, there are many associations of concepts that lead the machine to establish correlations such as housewife-she or genius-him. Put another way: the algorithm reproduces the biases that actually exist in the records.
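The mechanism behind those correlations is vector arithmetic on word embeddings. Here is a minimal sketch with hand-made three-dimensional vectors; real systems learn hundreds of dimensions from web-scale text, and all the vectors and word choices below are invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors invented for illustration: the first axis leans "male",
# the second "female", the third carries unrelated content.
emb = {
    "he":        [1.0, 0.0, 0.2],
    "she":       [0.0, 1.0, 0.2],
    "genius":    [0.9, 0.1, 0.5],
    "housewife": [0.1, 0.9, 0.5],
    "nurse":     [0.2, 0.8, 0.6],
}

# The analogy test: "he is to genius as she is to ...?"
query = [g - h + s for g, h, s in zip(emb["genius"], emb["he"], emb["she"])]
best = max((w for w in emb if w not in ("genius", "he", "she")),
           key=lambda w: cosine(query, emb[w]))
print(best)  # with these toy vectors: housewife
```

Because the gendered associations are baked into the geometry of the vectors, every downstream system that reuses them inherits the bias, exactly as the article describes.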

The American computer engineer of Ghanaian origin Joy Buolamwini realized one fine day, while studying at MIT, that a robot she was working on was unable to recognize her face. Months later the same thing happened with a social robot she tried during a visit to Hong Kong. When she put on a white mask, things changed.

Chance? It turns out that both systems used the same facial recognition software, based on machine learning. "If your face deviates too much from the patterns the system was given to learn from, it will not detect you, as happened to me," she says in a TED talk posted on YouTube that has accumulated more than a million views. "Algorithms, like viruses, can propagate biases on a massive scale and at an accelerated pace," she adds in the recording. These kinds of mistakes are all too common.

Without going any further, a few years ago Google labeled three young black women as gorillas. The researcher decided to take action: she founded the Algorithmic Justice League to denounce the biases of algorithms, to serve as a platform for people to report cases of abuse, and to develop codes of good practice for the design of these systems. In Europe we also have organizations that monitor algorithms.

Two of the most important were founded by Lorena Jaume-Palasí. A philosopher by training, the Mallorcan has long been interested in the ethical dimension of automation and digitalization. She is a co-founder of AlgorithmWatch, an organization based in Berlin (where she lives) that analyzes the ethics of algorithmic processes. She recently left that NGO and founded another, The Ethical Tech Society, which focuses more on the social relevance of automated systems. "The conversation we have been having for years about algorithmic systems is based more on fear than on real risks. It was necessary to create an NGO that would put forward normative positions based on facts," she explains.

Many of the errors could be detected and easily fixed. Because the teams that develop the algorithms are not interdisciplinary (they have no grounding in law, sociology or ethics), they create systems that, from their point of view, are very well made, even though they operate on fields as complex as education, health or other public services, in which they are not trained. "The metrics to evaluate these systems are developed by the same people who created them, so you have a vicious circle: they are measuring what they think they have to measure," she says. Jaume-Palasí wants to develop standards of good practice, to help technology fulfill its purpose rather than hinder it. At The Ethical Tech Society, she works on new methods to evaluate these systems. "I have developed an ethical penetration test. Hackers use penetration tests to probe the integrity or vulnerability of a system; mine helps me see whether there is any kind of gap or ethical conflict."

Will this help prevent the development of more algorithms like those denounced by Eubanks? Maybe. Are we in Europe safe from such algorithms, present in the US and, superlatively, in China? No. "I have found an alarming case in Spain, a system used by state institutions," says Jaume-Palasí. "I'm negotiating with them to stop it. If they do not listen to me, you will hear about it."

