When Neutah Opiotennione, a 54-year-old Washington, DC resident, discovered that Facebook was hiding certain ads for financial products from her because she is an older woman, she decided to do something about it. Last October, Opiotennione joined other users of the social network in filing a class-action lawsuit over the discrimination they suffer because of the platform's advertising tool.
The plaintiffs claim that Facebook allows advertisers to discriminate by age and gender when offering ads for certain financial services, such as bank accounts, insurance, investments, and loans, since these ads are not shown to women and older people as frequently as to other users. According to the lawsuit, Facebook persists in this kind of discrimination even though the company said a few months ago that it would take action.
It is not the first time the social network has been criticized on these grounds. Last March, the federal government accused the company of discrimination in housing-related advertising. "Facebook is discriminating against people based on who they are and where they live," Ben Carson, Secretary of the Department of Housing and Urban Development, said in a statement.
The issue came to light after a 2016 investigation by the American outlet ProPublica found that a Facebook tool could be used to exclude black and Latino users from being shown certain real estate ads, something the law does not allow. Under the federal Fair Housing Act, it is illegal to publish an advertisement "regarding the sale or rental of a home that indicates any preference, limitation or discrimination based on race, color, religion, sex, disability, family status or national origin."
The social network then announced that it would take measures in this regard, but a year later new research by the same outlet showed that the control measures were still not fully effective.
Now the plaintiffs claim that the company continues to allow financial services companies to filter ads by age and gender, which would violate California's civil rights law. "The Internet is not a place where people can be discriminated against because of their age or gender, particularly in financial services opportunities," Peter Romer-Friedman, a lawyer representing the plaintiffs, told Reuters. "It would be as if General Motors refused to offer women or the elderly the same features in a car as men or young people."
Machines that learn to be racist and sexist
The lawsuit comes amid a debate over whether current US legislation protects citizens not only from direct discrimination, such as that alleged in the case of the Facebook ads, but also from discrimination generated by artificial intelligence, whose segregating effects are similar but whose origin is harder to trace.
Current US legislation holds that even unintentional exclusions, such as those generated by artificial intelligence, can be punished, so a housing rental application whose algorithm excludes minorities could be deemed illegal. However, last August a reform was proposed under which only those who directly introduce discriminatory parameters into an algorithm would be considered punishable.
The problem, according to experts, is that leaving certain parameters out of an algorithm does not prevent the result from being discriminatory. A recent study showed that an algorithm used to assess the health risks of millions of patients in the US systematically discriminates against the black population, even though the system did not know the race of those affected.
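This mechanism, sometimes called proxy discrimination, can be illustrated with a minimal sketch. The numbers and the scoring rule below are entirely hypothetical, not data from the study: the point is that a model whose only input is a feature historically correlated with gender (here, income) produces gender-skewed outcomes without ever reading the gender field.

```python
import random

random.seed(0)

# Hypothetical population: income is simulated as skewed by gender,
# reflecting a historical pay gap (illustrative numbers only).
applicants = []
for _ in range(10000):
    gender = random.choice(["f", "m"])
    income = random.gauss(42000 if gender == "f" else 50000, 8000)
    applicants.append({"gender": gender, "income": income})

def credit_limit(applicant):
    # The scoring rule reads only income -- gender is never an input.
    return max(0.0, (applicant["income"] - 30000) * 0.5)

by_gender = {"f": [], "m": []}
for a in applicants:
    by_gender[a["gender"]].append(credit_limit(a))

for g, limits in by_gender.items():
    print(g, round(sum(limits) / len(limits)))
```

Even though the model is "gender-blind," the average credit limit for women comes out markedly lower, because the proxy variable carries the historical bias into the output.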
The recent controversy over the Apple Card, whose algorithms could offer a woman up to 20 times less credit than a man in a similar economic situation, is another example of how the absence of certain input data does not mean the result is not discriminatory: Goldman Sachs, the bank with which Apple issues its credit card, has stated that the data entered into the system does not include users' gender or marital status.
According to a study on the problems generated by algorithms published earlier this year on arXiv, this discrimination in the output can occur because "the data is a product of many factors, from the historical context in which they were generated to the particular forms of measurement error they contain," so they may carry implicit biases that are not easy to identify with the naked eye.
Along the same lines, a study published in 2017 in the journal Science showed that when an artificial intelligence learns a language, it ends up absorbing the human biases implicit in that language, reproducing behaviors that can be sexist and racist.
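The intuition behind that finding can be sketched with toy word vectors. The real study measured associations in embeddings trained on web-scale text; the four hand-made vectors below are pure invention, chosen only to show the kind of cosine-similarity comparison such association tests rely on: a word that co-occurs mostly in "male" contexts ends up geometrically closer to male pronouns.

```python
import math

# Toy, hand-crafted "embeddings" (illustrative values, not trained vectors).
vectors = {
    "programmer": [0.9, 0.2, 0.1],
    "nurse":      [0.1, 0.9, 0.2],
    "he":         [0.8, 0.1, 0.3],
    "she":        [0.2, 0.8, 0.3],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A simple association score: positive means "closer to 'he' than to 'she'".
for word in ("programmer", "nurse"):
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(word, round(bias, 3))
```

In trained embeddings these geometric associations emerge from the statistics of real text, which is why a system that only "reads" language can still inherit its stereotypes.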
The problem lies not only in the more or less biased characteristics of the input data, but also in the fact that the machine learning system itself can generate biases, since "it involves a series of choices and practices, from the evaluation methodology to the definition of the model, that can lead to unwanted effects."
"Machine learning is not only based on the input sample; decisions are also made on the basis of that data," says Liliana Arroyo, a researcher at the ESADE Institute for Social Innovation. "Because it is somewhat automated we believe they are neutral, but the reality is that algorithms are ideology."
How these algorithms, which tend to reproduce human biases, are regulated is vitally important, since it is they that decide not only which videos a user might like, but also whether someone is offered a bank loan, a home, or medical insurance. "Algorithms do what we have always done, but automatically and on a much larger scale, so the impact they have is much greater," Arroyo concludes.