
How to rein in the biased algorithm | Trends


Do you deserve a loan? Are you the ideal candidate for the job you aspire to? Are you fit to pass passport control? Every day, more algorithms make the decisions that govern our lives. And every day, more alarms are raised about the limitations of their criteria and the lack of transparency around their 'thinking' processes.


Ensuring that the predictions of these systems do not end up harming groups at risk of exclusion is also the responsibility of those who create them. The scientific community is already looking for ways to make the reign of the machines a little less sinister.

One way to alleviate the tyranny of these mindless decision systems is to increase the transparency of their decision-making processes. By introducing mechanisms to question their results, we shed light on the black boxes that conceal their reasoning and can detect potential biases in time.
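To make this concrete, here is a minimal sketch, in Python, of the kind of questioning mechanism the paragraph above alludes to: for a single automated decision, list which inputs pushed the score up or down. The toy linear scorer, its weights and the feature names are illustrative assumptions, not a description of any real system.

```python
# A minimal sketch of one such questioning mechanism: for a single automated
# decision, show which inputs pushed the score up or down. The linear model
# and feature names are illustrative assumptions, not any real system.
import numpy as np

feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]
weights = np.array([0.8, -1.2, 0.5, -0.9])   # a toy, already-trained scorer
bias = -0.2

def explain_decision(x: np.ndarray) -> None:
    """Print each feature's contribution to the decision score."""
    contributions = weights * x
    score = contributions.sum() + bias
    verdict = "approved" if score > 0 else "rejected"
    print(f"decision: {verdict} (score={score:.2f})")
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda t: abs(t[1]), reverse=True):
        print(f"  {name:>15}: {c:+.2f}")

explain_decision(np.array([1.2, 0.9, 0.3, 1.0]))  # standardized applicant data
```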

In 2017, the Association for Computing Machinery (ACM) published a manifesto in defense of algorithmic transparency and accountability. "Even well-engineered computer systems can produce unexpected results and errors, either because they contain flaws or because their conditions of use change and invalidate the assumptions on which the original analysis was based," they warned. In the same statement, the ACM set out seven principles needed to know algorithms as well as we know ourselves.

  1. Awareness. The creators of these systems must be aware of the possible biases in their design, implementation and use.
  2. Access. Regulators should encourage the introduction of mechanisms so that individuals and groups negatively affected by algorithmic decisions can question them and have them rectified.
  3. Accountability. Institutions must be held responsible for the decisions of the algorithms they use, even if they cannot explain in detail how those decisions were made.
  4. Explanation. Institutions that use algorithmic systems should be encouraged to produce explanations of the procedures those systems follow and of the specific decisions they make.
  5. Data provenance. The data used for training must be accompanied by a description of how it was collected and where it came from.
  6. Auditability. Models, data and decisions must be recorded so that they can be audited when harm is suspected.
  7. Validation and testing. Institutions should establish routine tests to evaluate whether the model produces discriminatory results (see the sketch after this list).
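As a concrete illustration of principle 7, here is a minimal sketch of a routine bias test: compare the rate of positive decisions across groups and flag large gaps. The column names, the synthetic decision log and the 0.8 cut-off (the informal "four-fifths rule") are assumptions made for the example, not part of the ACM statement.

```python
# A minimal sketch of the kind of routine check principle 7 calls for:
# compare the model's positive-decision rate across groups and flag large gaps.
import pandas as pd

def disparate_impact_report(decisions: pd.DataFrame,
                            group_col: str = "group",
                            outcome_col: str = "approved",
                            threshold: float = 0.8) -> pd.DataFrame:
    """Positive-outcome rate per group, relative to the most favored group."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    report = pd.DataFrame({
        "positive_rate": rates,
        "ratio_to_max": rates / rates.max(),
    })
    report["flagged"] = report["ratio_to_max"] < threshold
    return report

# Synthetic audit log of algorithmic credit decisions.
log = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(disparate_impact_report(log))
```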

Another approach to keep algorithms from becoming machines that amplify injustice is to adjust their reward systems so that success at their tasks is measured by more benign parameters. This is what Google is trying with its drawing generator.

Scribbles drawn by an artificial intelligence trained on human-made drawings were shown to a flesh-and-blood audience whose only mission was to react to them: a smile was recorded as a positive response; a frown or a puzzled look, as a negative one.

By adding this variable to the data used to train the neural network that generates the drawings, and turning its task into optimizing our happiness, the quality of the illustrations improved. "Implicit social feedback in the form of facial expressions not only can reveal users' preferences, it can also significantly improve the performance of a machine learning model," the Google researchers say in the paper that accompanies the project.
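As a rough idea of how such audience reactions could be folded into training, here is a toy, REINFORCE-style sketch in which each generated drawing receives a reward of +1 (smile) or -1 (frown) and the generator's loss is weighted accordingly. This illustrates the general technique under stated assumptions; it is not Google's actual implementation.

```python
# A toy sketch (not Google's actual code) of folding facial-expression feedback
# into a generator's objective: drawings that drew smiles get reward +1,
# frowns get -1, and the loss pushes the model toward well-received samples.
import torch

def reward_weighted_loss(log_probs: torch.Tensor,
                         rewards: torch.Tensor) -> torch.Tensor:
    """REINFORCE-style objective: maximize expected reward of generated samples.

    log_probs : log-likelihood the generator assigned to each of its own samples
    rewards   : +1 / -1 audience reactions, one per sample
    """
    # Center the rewards to reduce variance, then raise the probability of
    # positively received drawings and lower the rest.
    advantage = rewards - rewards.mean()
    return -(advantage * log_probs).mean()

# Illustrative values: 4 generated scribbles, 3 smiles and 1 frown.
log_probs = torch.tensor([-2.3, -1.7, -3.1, -2.0], requires_grad=True)
rewards = torch.tensor([1.0, 1.0, -1.0, 1.0])
loss = reward_weighted_loss(log_probs, rewards)
loss.backward()  # gradients now favor the well-received drawings
```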

  • How to fail in the attempt

Much of the blame for this mess lies with the data. If your data is biased, your results will be biased. If the society it portrays is unequal, the results will be unequal. On paper, one possible answer to this representation problem would be to remove the sensitive attributes that skew decisions. Stijn Tonk, of GoDataDriven, tried this with a credit-granting system and found that the road to an even-handed algorithm is somewhat more complex.

"Our classifier did not have access to attributes of gender and race and still ended up with a model cut off against women and people of color", advance. The reason for this is that the problem of the data is deeper than these differences. "Our dataset It is based on the 1994 census, a time when wage inequality was as serious a problem as it is today. Unsurprisingly, most of the best paid data are white men, while women and people of color appear more frequently among low-income groups. As a result, we end up with unfair predictions, despite having eliminated the attributes of race and gender, "explains Tonk.

That the solution is not easy, the expert insists, does not mean we should not find a way to implement it: "Making fairer predictions comes at a cost: it will reduce the performance of your model. But in many cases this is a relatively low price to pay for leaving the biased world of yesterday behind and predicting our way towards a fairer tomorrow."
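To see the shape of that trade-off, here is a purely synthetic toy (not Tonk's experiment): when the historical labels themselves are unequal across groups, forcing equal approval rates costs a little accuracy measured against those labels. All numbers and thresholds below are assumptions chosen for illustration.

```python
# A toy illustration (synthetic data) of the trade-off Tonk describes:
# equalizing approval rates across groups lowers accuracy slightly when the
# historical labels are themselves unequal.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
# Historically skewed ground truth: group B was approved less often.
base_rate = np.where(group == 0, 0.5, 0.3)
label = rng.binomial(1, base_rate)
score = label + rng.normal(0, 1, n)           # an imperfect risk score

def report(thr_a, thr_b, name):
    thr = np.where(group == 0, thr_a, thr_b)
    pred = score > thr
    print(f"{name}: accuracy={np.mean(pred == label):.3f}  "
          f"approval A={pred[group == 0].mean():.3f}  "
          f"approval B={pred[group == 1].mean():.3f}")

report(0.5, 0.5, "single threshold ")   # accurate, but unequal approval rates
report(0.5, 0.28, "parity thresholds")  # roughly equal rates, slightly lower accuracy
```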




