Can fair artificial intelligence be guaranteed?


The effects that AI algorithms can have on people and society are potentially ever greater

EFE Madrid

Can algorithms be designed to respect human rights? Is there such a thing as ethics in robotics? These are some of the questions that a webinar to be held online tomorrow by the Universitat Oberta de Catalunya (UOC) will try to answer.

UOC researcher Joan Casas-Roma recalls that two decades have passed since Gianmarco Veruggio coined the term roboethics, a field of applied ethics that studies the positive and negative implications of robotics in order to prevent the misuse of robots and of products and services based on artificial intelligence (AI).

"As AI reaches more and more sectors of our lives, the effects that AI algorithms can have on people and society are potentially greater," Casas-Roma warned.

As an example, the expert points to what happened in the British education system during the COVID-19 pandemic, when an automated system was used to predict, from student data, the grade each student would have obtained in an exam that could not be held because of lockdown.


Under the slogan "Fuck the algorithm", students took to the streets en masse when the algorithm's predictions were published, which teachers considered to be deeply flawed.

"Unfortunately, there are many examples of how, if ethics are not taken into account, AI can make serious mistakes. Some are related to biases and injustices in machine learning techniques," according to the researcher.

Casas-Roma cites the case of an automated system used in a multinational company's recruitment processes that made unfavourable decisions about female candidates because the data used to train the system already reflected a gender imbalance.
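A purely illustrative sketch of the mechanism behind that case, using invented data (none of it from the article or the real system): a naive model that learns hiring rates from historically biased records simply reproduces the bias, even among equally qualified candidates.

```python
# Hypothetical records: (gender, qualified, hired).
# The 'hired' field reflects past, biased human decisions.
records = [
    ("F", True, False), ("F", True, True), ("F", True, False), ("F", False, False),
    ("M", True, True),  ("M", True, True), ("M", False, True), ("M", False, False),
]

def hire_rate(group):
    """Rate a frequency-based model would 'learn' for this group."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

# The learned rates mirror the historical disparity in the training data.
print(hire_rate("F"))  # 0.25
print(hire_rate("M"))  # 0.75
```

The point is that the model introduces no bias of its own; it faithfully encodes the inequality already present in the data it was trained on.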

"Those of us who have to incorporate ethical codes are the people who program the machines and make decisions with the data they provide us," said UOC professor Anna Clua.

"Machines don't think; they act," stresses Clua, for whom the use of AI must be ethical by definition, "whether in the applications on our mobile phones or in the algorithms with which hospitals triage emergencies. Recognizing people's rights and complying with laws, such as data protection, is a sine qua non condition for using AI in any field."

Recommendations for the good use of algorithms

The Information Council of Catalonia (CIC) has recently published recommendations for the proper use of algorithms in media newsrooms, in line with the journalistic profession's code of ethics.

The field of ethical AI has become an important area of research, and it will be discussed at the UOC's data science webinar, to be held tomorrow, Wednesday, under the title "Frequently asked questions in artificial intelligence".

Drawing on different case studies, various experts will review the ethical principles for trustworthy AI and reflect on how to apply the technology without violating human rights.

According to Casas-Roma, among the lines of research attracting the most effort is the treatment and processing of data to prevent a machine-learning-based AI system from extracting biased and unfair correlations, for example through demographic data.

Another line being explored is how to follow, understand and assess the decisions made by an AI system.

'Black box' effect

This is the field known as explainable AI (XAI), "which seeks to avoid the 'black box' effect in which, given specific input data, an AI system makes a specific decision, but no human can understand from the outside what reasoning process led the system to make that decision and not a different one," explained Casas-Roma.
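As a minimal, hypothetical sketch of what explainability looks like in practice (the model, features and weights below are invented for illustration, not from the article): a simple linear scorer can report the contribution of each input feature to its decision, which is exactly what a black-box system cannot do.

```python
# Hypothetical interpretable model: a linear scorer over named features.
weights = {"experience_years": 0.6, "test_score": 0.3, "typos_in_cv": -0.5}

def explain(candidate):
    """Return the score and a per-feature breakdown of how it was reached."""
    contributions = {f: weights[f] * v for f, v in candidate.items()}
    return sum(contributions.values()), contributions

score, why = explain({"experience_years": 4, "test_score": 8, "typos_in_cv": 2})
print(score)  # 3.8
print(why)    # each feature's signed contribution to the decision
```

With such a breakdown, a human can see why one candidate scored higher than another; XAI research aims to recover comparable explanations for far more complex models.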

Researchers are also studying the creation of artificial moral agents, which would incorporate "a way to identify, interpret and assess the moral dimension of AI decisions so that they are ethically acceptable," the expert concluded.
