Biased algorithms can be fixed (you can't)

As the saying goes, no camel sees its own hump. The same is true of decision systems based on machine learning, which are, of course, oblivious to their own shortcomings. An algorithm trained to recognize faces has, in principle, no way of noticing that it classifies the faces of certain ethnic minorities less accurately. It is up to us to fix that, and we are working on it.

But the saying was not coined with algorithms in mind. That invisible hump is the portrait of a purely human flaw. What happens when the bias is in us? “Changing algorithms is easier than changing people: the software on computers can be updated; the wiring of our brains has proven far less malleable,” says researcher Sendhil Mullainathan in a column he recently published in The New York Times.

Mullainathan, a professor of behavioral and computational science at the University of Chicago, knows both humps well. He analyzed ours more than fifteen years ago and published a study of the algorithmic one last October. His conclusion is that black boxes are not exclusive to machine learning systems. “Humans are inscrutable in a way that algorithms are not. The explanations for our behavior are shifting and constructed after the fact,” he explains.

Let's start at the end, by way of the technological revolution, with the Chicago professor's most recent work. In that study, Mullainathan and his colleagues evaluated the performance of a system designed to estimate how sick each patient is and allocate care resources accordingly. The result? The number of Black patients selected for additional care was more than fifty percent lower than that of white patients assigned the same level of risk.

The source of this imbalance, the professor explains, lies in the data used to measure that level of illness: the cost of health care. “Because society spends less on Black patients than on white ones, the algorithm underestimated the real needs of Black patients.” According to the researchers' estimates, this bias could have affected some one hundred million people in the United States alone.
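The mechanism is easy to reproduce. Below is a minimal, hypothetical sketch, not the study's code and with made-up numbers, of how a system that scores patients by spending can under-select a group that receives less spending at the same level of illness.

```python
# Hypothetical illustration of proxy bias: a "risk score" based on healthcare COST
# rather than on health itself penalizes a group that is spent on less when equally ill.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)          # two hypothetical patient groups, 0 and 1
illness = rng.gamma(2.0, 2.0, n)       # true health need: same distribution in both groups

# Assumption for illustration: at equal illness, less money is spent on group 1.
spending = illness * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.5, n)

# The score proxies cost, as the article describes; here it is simply observed spending.
risk_score = spending

# Select the top quarter of scores for extra care, as such a program might.
threshold = np.quantile(risk_score, 0.75)
selected = risk_score >= threshold

for g in (0, 1):
    mask = group == g
    print(f"group {g}: share selected = {selected[mask].mean():.1%}, "
          f"mean illness among selected = {illness[mask & selected].mean():.2f}")
# Despite identical illness distributions, group 1 is selected less often and must be
# sicker to cross the cost threshold: the proxy, not the model, drives the gap.
```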

For Miquel Seguró, a professor at the UOC and author of the book Life Is Also Thought, it is the myth of the neutrality and fairness of calculation that has brought us here. “Because 'reason' comes from the Latin ratio, which means calculation, we believe that calculations are in themselves perfect, unalterable and self-contained,” he says. “The algorithm is a way of trying to get closer to reality, to obtain a photograph, or a kind of control, of a disparity of situations and cases that will always escape total control.”

Are Emily and Greg More Employable Than Lakisha and Jamal? That is the title, and the central question, of the study Mullainathan published in the American Economic Review in September 2004. After sending fictitious résumés in response to various job postings, the researchers found that white-sounding names received 50% more interview callbacks than African-American-sounding ones. The pattern, moreover, held across industries, occupations and company sizes.
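The arithmetic behind a figure like that 50% gap is straightforward. The sketch below uses hypothetical counts, not the paper's data, to show how the callback gap and its statistical significance would be computed in an audit study of this kind.

```python
# Hypothetical audit-study arithmetic: callback rates by name group and a
# dependency-free two-proportion z-test. All counts are made up for illustration.
from math import sqrt

resumes_white, callbacks_white = 2_500, 250      # hypothetical
resumes_black, callbacks_black = 2_500, 167      # hypothetical

rate_white = callbacks_white / resumes_white
rate_black = callbacks_black / resumes_black
print(f"callback rates: white-sounding {rate_white:.1%}, Black-sounding {rate_black:.1%}")
print(f"relative gap: {rate_white / rate_black - 1:.0%} more callbacks for white-sounding names")

# Two-proportion z-test under the pooled null hypothesis of equal callback rates.
pooled = (callbacks_white + callbacks_black) / (resumes_white + resumes_black)
se = sqrt(pooled * (1 - pooled) * (1 / resumes_white + 1 / resumes_black))
z = (rate_white - rate_black) / se
print(f"z statistic: {z:.2f}")   # |z| > 1.96 would be significant at the 5% level
```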

Here there are no algorithms, only inscrutable humans. “Trying to discover what interests move us, and to photograph objectively, neutrally and aseptically everything we may think or desire, is fine as a program, so to speak, for generating knowledge. But I don't know whether it is achievable in itself,” Seguró reasons.

Looking back, Mullainathan agrees that the capacity to make unfair decisions, and their potential to cause harm, is a trait we share with algorithms, but he stresses that the list of reasonable similarities ends there: “One difference between the two studies is the work it took to uncover the bias.”

In 2004, it took him months of work to prepare the résumés, send them out and wait for the replies. The more recent study he sums up as a simple “statistical exercise”: “The work was technical and rote, requiring no stealth or resources.” The same goes for the solutions. For the algorithm, there is already a prototype tool that should neutralize the bias detected in the system. For humans, change takes longer. “None of this is meant to belittle the obstacles and measures needed to correct algorithmic bias, but compared with the intransigence of human bias, it looks a good deal simpler.”

