The European Union is considering tightening the conditions it imposes on artificial intelligence developers, in an effort to ensure the technology is used ethically. The European Commission, the EU's executive arm, is finalizing a series of new rules applicable to “high-risk sectors” such as health and transport, and suggests that the bloc should update its safety and liability laws. This emerges from a draft white paper on artificial intelligence obtained by the Bloomberg agency.
The European Commission plans to present the document in mid-February, although, according to the US agency, the final version is likely to change. The document is part of a broader EU effort to compete with the United States and China in artificial intelligence, a technology that increasingly underpins the innovations reaching the market. The European authorities, however, want to set themselves apart through more ethical use of the technology in areas such as user privacy.
More than a few voices object that strict data protection laws such as those in force in the EU (in addition to legal and political fragmentation) could hamper innovation around AI. EU officials argue, on the contrary, that harmonization of the kind they plan to propose will boost development.
The new president of the European Commission, Ursula von der Leyen, promised when she took office that her team would present a new legislative approach to artificial intelligence during the first 100 days of her term, which began on December 1. To that end, she entrusted the Competition Commissioner, Margrethe Vestager, who is also responsible for the sector, with coordinating these efforts. A spokesman for the Brussels-based Commission declined to comment to Bloomberg on the document, although he argued that “to maximize the benefits and address the challenges of artificial intelligence, Europe must act as one and define its own path, a human path. The confidence and security of EU citizens will therefore be at the center of the EU strategy.”
The EU, for example, intends to compel Member States to appoint authorities responsible for monitoring the application of any future rules governing the use of AI, according to the document. In addition, Brussels is also considering new obligations for public authorities around the deployment of facial recognition technologies, as well as more detailed rules on the use of such systems in public spaces.
The white paper thus suggests prohibiting the use of facial recognition, both by public administrations and by private companies, for long enough that its risks can be adequately assessed. In the draft, the EU defines these high-risk tools as “artificial intelligence applications that can have legal effects for the individual or legal entity, or represent a risk of injury, death or significant material damage to the individual or legal entity.”
Artificial intelligence is, of course, already subject to several European regulations, including rules on fundamental rights around privacy and non-discrimination, as well as product safety and liability laws. But these rules may have become obsolete and may not fully cover the specific risks posed by the new technologies, according to the Commission's assessment in the document, Bloomberg reports.
Surveilling citizens with artificial intelligence is not, however, an exclusively Chinese practice. A report by the research group Carnegie Endowment for International Peace counts at least 75 countries actively using AI tools such as facial recognition for surveillance. Spain is among them, as are Germany, France and the United Kingdom.
The EU strategy for artificial intelligence builds on previous work coordinated by the Commission, including reports published over the past year by a committee of academics, experts and executives. EU rules, of course, apply only within European territory. In practice, however, they tend to carry weight beyond it, since companies rarely go to the trouble of creating software that could be banned in a market of more than 500 million people.
At the request of the European Union itself, last April a committee of experts published guidelines for the development of artificial intelligence, based on three pillars: legal, ethical and technical. In total, the document lays out seven requirements that, among other things, seek to ensure that systems serve human beings, are safe and transparent, safeguard privacy and avoid discrimination. It is, in short, the framework needed to differentiate the EU from the models championed by China and the United States, as Zigor Aldama reports.
However, consultants like McKinsey warn that the traditional gap separating the EU from its two main competitors in the internet economy is widening with the development of artificial intelligence. “Only two of the top 30 digital companies, 25% of AI startups, and 10% of digital unicorns are European,” they stress. And they add that “if Europe developed AI in proportion to its digital weight in the world, it could add 2.7 trillion euros to its wealth by 2030.” A figure that could rise to 3.6 trillion if it caught up with the United States, which has a smaller population.
One of the reports published last year described a set of seven key requirements that AI systems should meet to be considered trustworthy. These include incorporating human oversight, respecting privacy, ensuring traceability and avoiding unfair biases in decisions. Another of those documents set out investment policies and recommendations for the EU and its member states, such as restricting the development of lethal autonomous weapons, and noted that new rules on unjustified tracking through facial recognition or other biometric technologies should be considered.
The CEO of Alphabet Inc., Sundar Pichai, is scheduled to appear in Brussels next week to deliver a speech to a group of experts on the responsible development of AI.