September 20, 2020

"I would not rule out that some country use 100 million genomes to create a 'supersoldier' ​​| Trends

"I would not rule out that some country use 100 million genomes to create a 'supersoldier' ​​| Trends

Futurologist by profession. That could describe Gerd Leonhard (Bonn, Germany, 1961), much to his chagrin, because he dislikes the term. After dedicating himself for decades to the entertainment industry, this musician and composer ended up refocusing his career, almost by accident, on writing and lecturing. In his latest book (Technology vs. Humanity: The Future Clash Between Man and Machine, The Futures Agency), the German explores the limits of technology and its relationship with human beings. Leonhard was one of the keynote speakers at enLIGHTED, the side event organized by Telefónica at the South Summit 2018.

Should we fear machines?

We will have to if we let them grow until they resemble us. We should also fear machines that learn to do what we do, even though that is inevitable. Some fear is useful, but I think we still have at least 50 years before machines can reason and think in a way similar to humans. That is when we should fear them. In the meantime, we must address the social, educational, and labor changes they are already causing.

Do you think, then, that we will live to see a general artificial intelligence?

Computers will have almost unlimited power. There will be data everywhere, and intelligent machines, although they will not have emotional intelligence. Will they ever? Maybe, but that is very, very far away. What we should focus on now is how our machines are changing our culture and what we should stop them from doing. Companies are exploring without restriction what can and cannot be automated. The question is deciding what we should not allow to be automated in order to remain what we are.

How should we manage these 50 years, at least, that remain until a general AI arrives?

We need a moratorium on the development of technology as a weapon, as already exists, for example, for nuclear weapons. Artificial intelligence, genome editing, and genetic engineering have much to contribute to the world, but we must ensure they are not used for evil. These technologies are currently out of control; companies can research whatever they want. Genomics can help us eradicate cancer, and the world will surely be better with AI, but it is up to us not to cross the red line.

I propose the creation of a Digital Ethics Committee, a global organization dedicated to thinking professionally about what we should and should not do, and to getting us to agree on simple rules of operation. For example: if we automate, we pay a tax, so that people can learn another job. Nor should we use machines as weapons or let them control themselves, for example to kill. I think we could all agree on that.

"We have about ten years to decide how we want the human being of the future to be"

It is true that companies currently face no restrictions on development and research. How should the limits be set?

The biggest problem we have today is that large companies and internet platforms have more power than the oil companies or banks ever had. Microsoft was a powerful company, a quasi-monopoly, but Google, Amazon, and Facebook, or Alibaba, Baidu, and Tencent, are literally ruling the world. The future is theirs, not the people's. The energy and financial sectors were regulated; theirs is completely virgin and free territory. That is extremely dangerous, because how do you regulate something that means someone stops earning tens or hundreds of billions? If we want them to self-regulate, we must undo part of what has been done. That is what ethics is about. Maybe what they do is not illegal, but it is unethical. In the end, both harm humans.

We are already seeing that algorithms have biases. What else can we expect from them?

The problem is that companies focused on complex technologies, such as the Internet of Things (IoT), test their developments and, if they work commercially, push forward. The problem is that everything connected is transparent, and therefore you become transparent too. And nobody cares about that, because it is not part of the business. We need to make companies responsible for what they do. If you develop IoT products, you must be responsible for their security. It is a great technology, but in the hands of an autocratic government it can be used to keep an entire country under surveillance, as is happening in Turkey. We must make sure we have good rules. Every politician should pass an ethics exam, a kind of technology driver's license. We have to make sure that public officials understand exactly what needs to be done to build a good future.

"We cannot sit back and watch China develop a genome-editing program and then try to negotiate with them"

To what extent do you think that robots, and technology in the broad sense, will change us as humans?

The changes are coming fast, but not that fast. Machines are not that smart. We do not have robots in our bloodstream, and that will not be approved next year… but maybe in five. We have time, but we cannot sit back and wait for China to develop a genome-editing program and then try to talk them into reconsidering. That would be a very bad idea. All of this is already happening. We have about ten years to decide how we want the human being of the future to be. Most people want to be human, and the goal of life is happiness. Technology, by itself, will not make us happier.

In about 50 years, if a general AI is developed, to what extent will we still be human?

By then we will have to draw a thick line between the machines and ourselves in order to control them, if we still can. We will be like a kind of theme park: we will be protected and the machines will do all the work. If we get to that point, we will truly be living on a different planet. It is difficult to know what rules will govern it. But it is highly unlikely that we will be able to control a general artificial intelligence. Because the first thing the computer will do is ensure its own existence, and it will have already foreseen the possibility that someone might unplug it.

"We must focus on being very human. I would rather my son go to India for three months and interact with many different people than take an MBA"

Is it realistic to think of international agreements to regulate AI? We have known for decades that we have to fight climate change, and the first agreement was only signed two years ago…

Humans respond to bad situations; we do not change things voluntarily. Indeed, we have seen it with climate change: we are in for a very rough time over the next 20 years. With AI we may witness a major accident, such as the use of 100 million genomes to create a superhuman, a kind of supersoldier. That could cause an enormous number of problems in terms of deaths. And then we would mobilize. We may even see a war over genetic engineering.

You mentioned China before. Are you thinking about that country?

Yes. It is quite probable that in seven or eight years we will have an international incident in which the Chinese government has to be forced to halt some AI development. But, fortunately, people respond when things happen. After Fukushima we saw that we did not want nuclear energy.

How should today's children, who will really live through all these changes, be prepared to face them?

Children must learn that technology is not the savior of humanity, but a tool. The skills that machines will take longest to acquire are innate to us, which is why we must nurture them: passion, understanding, intuition, imagination, creativity… We must focus on being very human. The future is not about knowing how to do a job well, but about inventing it. That is what we should teach children. The problem is that this is not learned in school, but in real life. I would rather my son go to India for three months and interact with many different people than take an MBA.
