June 1, 2020

Sundar Pichai: “I have no doubt that artificial intelligence has to be regulated; the question is how”



In an opinion piece for the Financial Times, the CEO of Alphabet and Google writes that “history is full of examples of how the virtues of technology are not guaranteed” and that these lessons teach us that we have to “be alert to what can go wrong.” The executive admits there is “real concern” about the potential negative consequences of AI, from so-called deepfakes to the “vile uses” of facial recognition. “There has already been work in this direction to address these concerns, but there will inevitably be more challenges than any company or industry can solve on its own.”

Pichai recently assumed full control after Larry Page, co-founder of Google and until then CEO of Alphabet, stepped down in early December and handed the role to him; Pichai now holds it in addition to his post as CEO of Google. Sergey Brin, Google’s other co-founder, also left his position as president of Alphabet, a role that will disappear from the company’s structure.

In this article, written now in his capacity as chief executive of the technology giant, Pichai recalls that both the EU and the US “are beginning to develop regulatory proposals.” In fact, a white paper prepared by the European Commission was leaked last Friday that raises the possibility of banning the use of facial recognition technology in public places for up to five years, in order to allow time to develop solutions that mitigate the risks involved.

“To get there, we need agreement on key values. Companies like ours cannot just build promising new technologies and let market forces determine how they are used.”

Google, Pichai says in the article, has therefore published its own principles to ensure the ethical development of artificial intelligence. “These guidelines help us avoid bias, perform rigorous safety tests, design with privacy in mind and make the technology accountable to people,” he says. “But principles that remain on paper are meaningless. That is why we have also developed tools to put them into practice, such as testing AI decisions for fairness and conducting independent human rights assessments of new products.”


The executive states that Google has “gone even further,” making these tools and the related open-source code widely available so that others can use AI for good. “We believe that any company that develops new AI tools must also adopt guiding principles and rigorous review processes.”



