October 29, 2020

Artificial intelligence makes its way into the justice system, but raises ethical doubts | Innovation


A woman appears at the reception desk of a court in Buenos Aires, Argentina. It is 10 a.m. on a cold Monday in August 2018. She is accompanied by her three children, who sit down on the floor to play while their mother talks with the clerk. “I’ve come to file an amparo claim to collect the homelessness subsidy,” the woman says quietly, somewhat embarrassed. After a few questions, whose answers go onto a paper form, the clerk asks for her identity card and goes into the office. The woman decides to sit and wait. She is tired, and she knows that months will pass before this little snowball she has just set in motion becomes the possibility of sleeping indoors.

What she does not know is that, if her case is not resolved at first and second instance, the Public Prosecutor’s Office will intervene as a last resort. And there, unlike in the previous instances, cases are resolved in minutes. How is that possible? Because they work with a computer system that uses artificial intelligence (AI). Its name is Prometea.

When a justice official of that prosecutor’s office takes up the file, they only have to answer, by voice or in writing, the questions of a WhatsApp-style chat. In exactly four minutes, they will have obtained the draft opinion, along with the relevant statistics for the case and links of interest to support the decision. Then the agency’s lawyers simply review the document, print it and sign it. They will have completed in half an hour a job that usually takes months.
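To make that workflow concrete, here is a minimal, purely illustrative sketch of a chat-style questionnaire that assembles a draft opinion from an official’s answers. The questions, template and function names are invented for illustration; this is not Prometea’s actual code, which the article does not reproduce.

```python
# Illustrative sketch only: a chat-style questionnaire that fills in a draft opinion.
# QUESTIONS, TEMPLATE and draft_opinion are hypothetical names, not Prometea's real code.

QUESTIONS = [
    ("claimant", "Claimant's full name?"),
    ("claim_type", "Type of claim (e.g. housing-subsidy amparo)?"),
    ("prior_denied", "Were the first- and second-instance rulings unfavorable? (yes/no)"),
]

TEMPLATE = (
    "DRAFT OPINION\n"
    "Claimant: {claimant}\n"
    "Matter: {claim_type}\n"
    "Prior instances unfavorable: {prior_denied}\n"
    "Suggested outcome: apply the precedent line for this group of cases.\n"
)

def draft_opinion(ask) -> str:
    """Ask each question in turn and fill the opinion template with the answers."""
    answers = {key: ask(prompt) for key, prompt in QUESTIONS}
    return TEMPLATE.format(**answers)

if __name__ == "__main__":
    # In an interactive run, answers come from the keyboard, as in a chat.
    print(draft_opinion(lambda prompt: input(prompt + " ")))
```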

A model that interests Spain

In October 2019, officials from the modernization area of the Spanish Ministry of Justice visited the offices of the Public Prosecutor’s Office of the City of Buenos Aires. The objective was to get to know Prometea, the computer system used in the Argentine capital to resolve cases that are varied in subject matter but simple to decide: minor infractions, traffic accidents or social policy claims, among others.

Sofía Duarte Domínguez, director general of the Spanish body formerly called Modernization of Justice (renamed Digital Transformation of the Administration of Justice in January 2020), told the Argentine press: “We have studied everything about Prometea; we know it is a fabulous system and we want to see whether we can bring it to Spain. Even the [then] Secretary of State for Justice, Manuel Dolz, gave us carte blanche to move forward with this, which is, without a doubt, the future of justice.”

The subject should not take us by surprise. A few days before the Spanish delegation’s visit to the Buenos Aires agency, David Martínez, professor of Law and Political Science at the Open University of Catalonia (UOC), explained in an article published by La Vanguardia that AI could well be used in Spain for cases with an “easy legal answer”, which would help decongest the backlog of judicial files. Although Duarte Domínguez stresses that the digitization of the entire Ministry is fertile ground for automating justice, she warns that one of the main obstacles to the process is the resistance of judicial workers, who fear that the technology will take their jobs away.

In favor of automating justice

Martínez’s observations are in line with the thinking of some Argentine experts committed to the task of making justice intelligent. One of them is Mario Adaro, judge of the Supreme Court of Justice of the province of Mendoza, who uses Prometea daily and recently took part in the first Ibero-American Summit on Artificial Intelligence, held at MIT (Boston). “AI has a capacity for processing information in large volumes that shortens bureaucratic lead times to almost nothing because, ordinarily, more cases and fewer decision makers mean more time per case,” he tells EL PAÍS RETINA. “Using automated processes, the judge has greater capacity for analysis.”

The Deputy Attorney General of Buenos Aires, Juan G. Corvalán, created Prometea after finding that, in half of the cases handled by judicial staff, most of the time is spent verifying personal data, repeated information and so on. Adaro illustrates this with the example of tax cases, pointing out that “they are serial rulings, in large volumes, where decisions fall into clear groups and everything is fairly mechanical and predictable. By using AI for these kinds of problems, Prometea significantly reduces the number of errors in data entry, typing and redundancy,” says the Mendoza judge.

“AI can process information in large volumes, which shortens bureaucratic lead times”

The origin: the United States

There are three emblematic cases of AI applied to justice, in addition to Prometea. The most famous is Compas (Correctional Offender Management Profiling for Alternative Sanctions), a program used in several US states. It is software that has been used since 1998 to analyze, based on a defendant’s criminal record, their likelihood of reoffending. The program presents a questionnaire to the accused. Once they have answered all the questions, the system calculates the risk of recidivism, and with that the judge decides, for example, whether or not to grant parole while the judicial process is completed.
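Compas’s actual model is proprietary and, as noted below, weighs 137 factors. Purely to fix ideas, a toy score of the same general kind could weight a handful of questionnaire answers and map the result to the risk bands a judge sees. Every field, weight and threshold here is invented:

```python
# Toy example only: a weighted "recidivism risk" score from questionnaire answers.
# Fields, weights and thresholds are invented; Compas's real 137-factor model is proprietary.

WEIGHTS = {
    "prior_convictions": 0.6,  # each prior conviction adds to the score
    "age_under_25": 1.5,       # 1 if the defendant is under 25, else 0
    "unstable_housing": 1.0,   # 1 if housing is unstable, else 0
}

def risk_score(answers: dict) -> float:
    """Linear score over questionnaire answers; higher means more predicted risk."""
    return sum(weight * float(answers.get(key, 0)) for key, weight in WEIGHTS.items())

def risk_band(score: float) -> str:
    """Map the numeric score to the low/medium/high band a judge would see."""
    if score < 1.5:
        return "low"
    return "medium" if score < 3.0 else "high"

answers = {"prior_convictions": 2, "age_under_25": 1, "unstable_housing": 0}
print(risk_band(risk_score(answers)))  # -> "medium"
```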

Compas rose to fame with the 2013 Loomis case. Accused of fleeing from the police and using a vehicle without its owner’s authorization, Eric Loomis received six years in prison and five years on parole because Compas estimated a high risk of recidivism. Loomis appealed, arguing that his defense could not challenge Compas’s methods because the algorithm was not public. The Wisconsin Supreme Court dismissed the appeal. Years later, in 2018, it emerged that the system analyzes 137 aspects of each accused person. But when Compas’s predictions were compared with those of flesh-and-blood jurists, it turned out that the AI’s accuracy was no better, and in some cases serious errors were evident.

“Statistical averages say something about common behavior patterns in a collective. They do not describe individual profiles and cannot capture the uniqueness of the human being,” explains Lorena Jaume-Palasí, an expert in ethics and technology and founder of AlgorithmWatch and The Ethical Tech Society. “With them we can understand groups with a somewhat more architectural view, but we also run the risk of forcing individuals into standards that do not fit them.”

To determine whether it is feasible to judge someone criminally using AI, one must understand which criteria the algorithm acts on (which is exactly what Loomis’s defense demanded). Jaume-Palasí argues that, after all, law is an algorithm that was being applied long before information technology existed. “[With the Loomis case] everyone set their sights on the computer system and was scandalized by its racism, but Compas allowed us to learn about the biases judges have, because the system was created by humans who had been working and deciding with the very biases the program later exposed.”

“Statistical averages help to understand groups, but are unable to capture the uniqueness of the human being”

Is Prometea like Compas?

In addition to his post in the justice system, Juan G. Corvalán is director of the Innovation and Artificial Intelligence Laboratory at the University of Buenos Aires Law School. In 2017, he created the Prometea software together with his collaborators.

Corvalán highlights, among the system’s qualities, that “Prometea does not use black-box AI techniques, or what is known as deep learning; that is, the algorithm’s entire process is open, auditable and traceable.” Compas, on the other hand, applies two neural networks whose inner workings are unknown because “it was developed by a private company that holds the intellectual property rights to the algorithm”.
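As a purely illustrative sketch of what “open, auditable and traceable” can mean in practice (not Prometea’s real rules, which the article does not detail), a decision can carry the explicit rules that produced it, so any lawyer can audit the reasoning line by line:

```python
# Sketch of one way a decision can be made auditable and traceable: each outcome
# carries the explicit rules that produced it. The rules and case fields are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str
    trace: list = field(default_factory=list)  # human-readable audit trail

def decide(case: dict) -> Decision:
    trace = []
    if case.get("claim_type") == "housing_subsidy" and case.get("prior_instances_denied"):
        trace.append("Rule 1: housing-subsidy amparo denied at both prior instances -> draft favorable opinion")
        outcome = "draft_favorable_opinion"
    else:
        trace.append("Rule 2: no matching serial-case pattern -> route to a human lawyer")
        outcome = "manual_review"
    return Decision(outcome, trace)

decision = decide({"claim_type": "housing_subsidy", "prior_instances_denied": True})
print(decision.outcome)            # draft_favorable_opinion
print("\n".join(decision.trace))   # the reasoning can be audited line by line
```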

The Argentine software, says Corvalán, only reproduces the practice of the country’s judiciary. “Prometea’s predictions are based on analyzing the history of what judges have decided; it is they who train the system. For example, in the Constitutional Court of Colombia [a country where the program is also used], it is the magistrates themselves who continuously adjust Prometea’s predictions, with our technical assistance, of course.”

Databases and biases

There is no AI worth its salt without data. And when we talk about data, the ghost of bias appears, such as the racism of which Compas has been accused. Numbers build a discourse of objectivity that sometimes discourages questioning decisions. “Algorithms are nothing more than opinions embedded in mathematics,” Cathy O’Neil wrote in her famous Weapons of Math Destruction.

“What algorithms undoubtedly allow is to standardize decisions. That is to say, to standardize criteria so that two different answers are not given to the same problem,” says Pablo Mlynkiewicz, a graduate in statistics and former head of Buenos Aires’ General Directorate of Information Sciences. “But of course, for that to translate into a real step forward for justice, the database must represent all groups. If not, there will be mistakes.”
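To illustrate Mlynkiewicz’s point (with invented data and a hypothetical minimum-share threshold, not any real court dataset), a simple check can report each group’s share of the historical rulings before they are used to train a system:

```python
# Illustrative check with invented data: report each group's share of the historical
# rulings before they are used for training, flagging under-represented groups.

from collections import Counter

def representation_report(records, group_key="group", min_share=0.10):
    """Return {group: (share, meets_minimum)} for the given records."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: (n / total, n / total >= min_share) for group, n in counts.items()}

# Hypothetical past cases, heavily skewed toward one group.
cases = [{"group": "urban"}] * 85 + [{"group": "rural"}] * 10 + [{"group": "homeless"}] * 5
for group, (share, ok) in representation_report(cases).items():
    print(f"{group}: {share:.0%} {'ok' if ok else 'UNDER-REPRESENTED'}")
```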

Mlynkiewicz thus agrees with Jaume-Palasí and Adaro in highlighting a strong point in favor of automating judicial proceedings: it avoids giving two different answers to the same problem. In other words, it gives rulings argumentative consistency. Even the most critical voice regarding these systems, the Mallorcan philosopher Jaume-Palasí, admits that automating judicial processes on the basis of statistics can help correct errors that justice today refuses to acknowledge. “We have known for some time that judges and the judicial system as we know it are not very consistent. Being able to trace and compile statistics on judicial decisions thanks to AI is not a bad thing,” she emphasizes.

Robot judges in China

In October 2019, the Internet Court was unveiled in Beijing, defined as an “online litigation center”. According to official information, it is a platform on which the parties upload the details of the dispute to be resolved and the AI does the rest: it searches case law, analyzes the matter, weighs the evidence and issues a ruling.

Technically, the system is not very different from Estonia’s, where there is also a strong commitment to automating justice: there is no human intervention anywhere in the process. But there is a great distance between the two countries in democratic standards. In the small Baltic country, considered the most digitally advanced in the world, the project is led by the young Ott Velsberg, whose intention is that the claims submitted to the digital court not exceed 7,000 euros in damages sought.

There, everything flows, because it is a society with high civic standards. But when it comes to the Asian giant, things take on another tone. “The development of virtual or cyber judges in China has followed the same line as the Social Credit System: from the bottom up,” explains Dante Avaro, a specialist in the Chinese government’s model of control, referring to the controversial citizen-scoring mechanism launched by Beijing to determine whether or not people are trustworthy. “Both began at the start of the new millennium. In the case of AI in justice, it was first tried in places like Shandong, then in Hangzhou, Beijing and Guangzhou. The objective was to bring efficiency to judicial processes involving electronic commerce, virtual payments, cloud transactions and intellectual property disputes,” he explains.

The catch is that, in the hands of a non-democratic state that seeks to order society through the cross-cutting scoring Avaro calls “citizen traceability”, the application of AI in justice is dangerous, because it is tied to the Social Credit System and the Yitu Dragonfly Eye facial recognition system. “A huge state surveillance apparatus is being built,” Avaro concludes.


