The European Commission has published the first draft of its ethical guidelines for "trustworthy artificial intelligence", a provisional text that calls for "human-centric Artificial Intelligence" built on guiding principles such as human oversight, respect for privacy, and transparency, reports Europa Press.
The text published this week is the first version of an initiative drafted by the European Commission's High-Level Expert Group on Artificial Intelligence (AI), composed of 52 independent experts from academia, business and civil society.
First, the EU has established a series of basic principles for the development of Artificial Intelligence, addressed to all actors in the sector, among which it stresses that AI "should be developed, deployed and used with an ethical purpose".
Five basic principles
Through what has become known as human-centric AI, the EU argues that these technologies must be grounded in fundamental rights, summarized in five principles: doing good, doing no harm, human autonomy, justice, and explicability of their actions.
These principles apply in general, but especially in situations involving vulnerable groups, such as children, people with disabilities and minorities, as well as employees and consumers. The European Commission acknowledges that "although it can bring benefits to individuals and society, AI can also have a negative impact".
Measures for the development of AI
Regarding concrete measures to develop trustworthy AI, the European Commission's draft states that these systems must be accountable, be designed to be accessible to all people, and respect human autonomy.
The remaining basic development principles include non-discrimination, the requirement that human supervision always be possible, and respect for privacy.
The text also points to the need for these technologies to be robust, safe and transparent. AI systems must therefore ensure the traceability of their actions and decisions, and be transparent about their capabilities and limitations.
With regard to business, the EU recommends the use of codes of ethics for AI, and advises that companies developing or testing this type of technology do so with diverse human teams and facilitate external audits.
Final publication, in March 2019
The European Commission's final ethical guidelines on Artificial Intelligence will be published in March 2019. Until then, a consultation period is open in which the responsible expert group will receive suggestions, until January 18, in order to draft the final version.
This plan follows the European Union's announcement on December 7 of a declaration of cooperation among member states on AI, accompanied by an investment of 7 billion euros from the Horizon Europe and Digital Europe programmes.