July 29, 2021

"The leaders of the big technology think themselves smarter than the others" | Technology

"The leaders of the big technology think themselves smarter than the others" | Technology



Jonathan Zittrain is director of the Berkman Klein Center for Internet and Society at Harvard University, where his work focuses on the consequences of the Internet. Zittrain (Pittsburgh, USA, 1969) is critical of the present state of Big Tech and says that today's technological changes are experienced with a "vague sense of perplexity." In 2008 he wrote the book The Future of the Internet. And How to Stop It. "If I wrote a sequel, I would title it 'Well, we tried,'" Zittrain now says.

Question. What has gone wrong?

Answer. First, there is no longer a constant stream of new companies, of people in basements inventing new things; the players who arrived first have stayed and become entrenched. Second, ubiquitous networks and very cheap sensors mean that data can be collected at all times. Third, with tools that are not especially new but that now have the data to be much sharper, you can make judgments about people and discover how to intervene in their lives in ways that were previously unknown.

Q. Not everything has been terrible in these years.

A. At least some of the things that worried us did not happen. With copyright, for example.

Q. Napster, the file-sharing website, died.

"Today they offer me a loan at a 25% interest per week because they identify me as someone emotionally sensitive"

A. For example. In the copyright war it was interesting to see, on one side, the thesis that "with Napster everything is uncontrollable and the content industries will all go bankrupt," and on the other, "everything will be locked down by technology; nobody will be able to make a copy or lend anything." It is interesting to see how, over these 10 years, that war has dissolved. Now, for a bit of money or by watching an ad, you can see what you want. So some of our concerns about the freedom of the mind, which in the end is one way to describe copyright, did not come to pass.

Q. Among the problems, there is privacy.

A. It's something that 10 years ago did not bother me so much: that a company knew I had a dog and showed me ads for dog food instead of cat food seemed harmless. But today they offer me a loan at 25% weekly interest because they identify me as someone emotionally vulnerable who has just lost money; they can figure out when to strike, and when they strike, they strike only at me.

Q. Are we sure this is happening?

A. Absolutely. Ten years ago I supported the position of the big platforms when they said, "We are only windows onto the Internet; do not blame us if you do not like what you see." There is still some truth in that, but increasingly they are not mere windows. They have become so big that they decide which fragments they show. We search with ever less specific questions and spend more time receiving unsolicited advice from Siri, Google Assistant or Alexa, which basically present themselves as "friends." They tell us the best way from here to the cinema, and when they say "best" we do not know whether it is because the route passes by the pizzeria that paid something to bring me to its door.

Q. It could already be happening.

"Twice a week the car will appear in your house and take you to a 'sponsored adventure'

A. It is useful to think about what would happen if the online business model were transplanted into the physical world. The great challenge of autonomous cars is building one that drives well. But the time will come when they will have to ask how it all gets paid for. Under the model of the online economy it would be like this: you can ride in an autonomous car for free, but maybe it stops at a fast-food restaurant before you reach your destination and waits there for 10 minutes. You can stay in the car, or get out, stretch your legs and buy a hamburger. If, on the other hand, you are in a hurry, that's fine, no problem, but to keep your subscription, twice a week the car will appear at your house and take you on a "sponsored adventure." These are things that today have no analogue in the real world, but artificial intelligence and big data will make them possible. Do we want that world? I think it's fair to ask ourselves those questions now, instead of saying, "Let's build it and we'll see."

Q. You have raised this question: if the police issue a warrant for someone riding in an autonomous car, should the car lock its doors and drive to the police station or not?

A. It is good to ask that. It is a way of showing that this is not just a replacement for drivers; it opens up new possibilities. In that sense it's exciting. But how do we govern all this? It could be what I called in my book "the procrastination principle." The theory was not to try to solve every problem in advance, but to let the technology develop and solve problems as they appeared. That seemed to me a good starting point at the time, but now it is clear that it is very difficult, because the gap between "it is too early to know" and "it is too late to do anything" is very brief. At the very least we should have a public debate where these options are laid out. We would see that, once technology affects the physical world, many of the barriers to regulation that we saw on the Internet suddenly fall.

Q. The platforms already decide whether we see more or less misinformation. Is there any remedy?

"That a platform determines what 2 billion people experience is too big a responsibility"

A. The answers that seem most obvious are medicines so strong that they can be more dangerous than the disease. That a platform determines what 2 billion people worldwide experience is too big a responsibility, no matter how noble it tries to be. We need to change the premise of the question. It could be something as dramatic as breaking up the company into many smaller ones, or it could be that the company opens up layers of its operation so that anyone could write a recipe for building a news feed, which is what we see on Facebook. What appears on my screen should not depend only on Facebook. That could lead people to choose recipes that reinforce their own opinions, which could be a danger, but a smaller one than the danger of all of us seeing the world from the same place.

Q. And what about their algorithms?

A. Technology should be loyal to people, not offer them a burger and fries every five seconds. I have worked with Jack Balkin of Yale on a theory of "information fiduciaries." It's a grandiose term, but it means that when you entrust so much information to one of these platforms, it must be loyal to you. If its interest conflicts with yours, yours must prevail. That means they can show you ads, but not ads that harm you.

Q. You are director of the Berkman Klein Center, which is dedicated to studying the Internet. When you go to Silicon Valley, how do they treat you?

A. I do not have the feeling that the leaders there think I have anything to tell them that they have not already thought of. They sit in a meeting room and think they're smarter. But they know they have a persistent public relations problem that could grow into a wave they would like to avoid. They see European regulation as something that forces them to accept that regulation can exist and not be catastrophic. They look for ways to get ahead of these issues rather than wait to mess up and be regulated. Then they may come to realize that what is needed is not a new feature in their software but a new political, cultural and legal dimension to what they do. I have seen them more open to ideas and suggestions in the last six months than in the previous five years.

Q. The big companies?

A. Yes, the big technology companies have a hangover from their expansion. They also now have a lot of money, they want to convert that wealth into a better environment, and some of them actually tell themselves that they want to improve the world.



