September 26, 2020

Robots with remorse? | Technology

The researcher Pablo Lanillos and Tiago, the robot that recognizes itself in the mirror.

Tiago looks at itself in the mirror for the first time. After several movements with its single arm, it confirms it: "It's me," says the 1.5-meter-tall black-and-white robot, loud and clear. Two years after beginning his research on the Selfception project, the scientist Pablo Lanillos has made androids capable of recognizing themselves.

This humanoid identifies itself after making several random movements with its arm in front of a mirror. This is possible because Lanillos has adapted a mathematical model based on the functioning of the human brain: "People have a fixed model and our brain acts very fast to distinguish our movements," says the researcher. "But the robot's self-recognition is not pre-reflective (that is, knowing that I am me, with all my history); it comes from the repetition of movements," he adds. However, if we program an automaton with the same appearance as Tiago that makes the same movements at the same moment, the android becomes confused and believes it is seeing itself reflected in the mirror. "Even if the replica makes a slightly different movement, Tiago instantly distinguishes that it is not him," says Lanillos.
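The article does not include the model itself, but the mechanism it describes, predicting the visual consequences of one's own random movements and checking the prediction against what the mirror shows, can be sketched in a few lines. The Python sketch below is a toy illustration only; the forward model, noise levels, and decision threshold are assumptions, not the Selfception project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(command):
    """Hypothetical learned body map: predicted visual displacement
    of the arm for a given motor command."""
    return 0.9 * command  # assumed calibration gain

def mirror_observation(command, is_self, noise=0.02):
    """Displacement actually observed in the mirror."""
    displacement = 0.9 * command + rng.normal(0.0, noise)
    if not is_self:
        displacement += 0.3  # another robot moving on its own
    return displacement

def looks_like_me(n_moves=20, threshold=0.1, is_self=True):
    """Make random movements and compare prediction with observation."""
    errors = []
    for _ in range(n_moves):
        command = rng.uniform(-1.0, 1.0)           # random arm movement
        predicted = forward_model(command)         # what I expect to see
        observed = mirror_observation(command, is_self)
        errors.append(abs(observed - predicted))   # prediction error
    return np.mean(errors) < threshold

print("it's me" if looks_like_me(is_self=True) else "not me")   # it's me
print("it's me" if looks_like_me(is_self=False) else "not me")  # not me
```

Under such a scheme, a replica fails the test not because it looks different, but because its movements do not match the robot's own motor predictions, which is exactly the distinction Lanillos describes.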

The Selfception project is inspired by sensorimotor theories from psychology and neuroscience and is based on the idea that the development of cognitive abilities begins with learning about our own body. From the womb onward, we generate a map that relates each action to a sensory response (at that moment we begin to build the foundation for relating to the world). The program helps us understand the human brain and improves the ability of machines to interact with people, in addition to improving machine learning algorithms.
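That action-to-sensation map can be pictured as a forward model learned from experience. As a toy illustration (the two-joint arm, the linear fit, and all the numbers below are assumptions made for the example, not part of the project), random "motor babbling" data can be regressed into a map from action to expected sensation:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Motor babbling": random actions and the sensory responses they
# produce, standing in for the map we start building in the womb.
actions = rng.uniform(-1.0, 1.0, size=(500, 2))      # two joint commands
responses = np.sin(actions) @ np.array([0.6, 0.4])   # seen hand position (toy)

# Fit a simple linear sensorimotor map: action -> expected sensation.
X = np.hstack([actions, np.ones((len(actions), 1))])  # add a bias column
weights, *_ = np.linalg.lstsq(X, responses, rcond=None)

def expected_sensation(action):
    """Predict the sensory response of an action from the learned map."""
    return np.append(action, 1.0) @ weights

print(expected_sensation(np.array([0.5, -0.2])))
```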

Karl Friston, a neuroscientist specializing in brain imaging and a professor at University College London (UCL), affirms that it is essential to investigate why humans act one way or another, and to reduce the uncertainty about the causes of our sensations in order to gather evidence of our own existence. "The internal or generative models of the self must include 'where I am' and 'how I move', and a good example of this active inference is to move physically in order to generate evidence confirming the hypotheses 'I am here' and 'I did that', as the Lanillos team demonstrates," he says.
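Friston's "active inference" has a standard mathematical reading, stated here in the usual free-energy notation (an assumption of this summary, not taken from the article). An agent with sensations o, hidden states s such as "where I am", a generative model p(o, s), and an approximate posterior q(s) minimizes the variational free energy:

```latex
F = \mathbb{E}_{q(s)}\bigl[\ln q(s) - \ln p(o,s)\bigr]
  = D_{\mathrm{KL}}\bigl[q(s)\,\|\,p(s\mid o)\bigr] - \ln p(o)
```

Since the KL term is non-negative, F upper-bounds the surprise -ln p(o); perception reduces F by updating q(s), while action reduces it by changing o, which is the "moving to generate evidence that I am here" Friston refers to.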

The Selfception project is inspired by sensorimotor theories from psychology and neuroscience and is based on the idea that the development of cognitive abilities begins with learning about one's own body

The Spanish engineer, who holds a PhD in Artificial Intelligence from the Complutense University of Madrid (UCM), has developed this project thanks to a Marie Curie fellowship, intended to foster the training and development of outstanding researchers in innovation. He has carried it out at the University of Munich (Germany) and has tested his theories at PAL Robotics, a leading company in bipedal android technology based in Barcelona.

The company was responsible for launching the first autonomous bipedal humanoid in Europe (which was even capable of playing chess). Francesco Ferro, its founder, created the company so that "science and technology improve people's lives," he says. They work on robots that serve industry, support neuroscience research, and even assist older people at home. They are currently programming Talos, a 1.85-meter robot that could help with aircraft manufacturing tasks that are too arduous for a person to perform for many hours. It will join the Airbus staff when it is ready.

Several studies in recent years have examined whether robots can recognize themselves in the mirror. Nico, the robot created by Brian Scassellati's team at the Massachusetts Institute of Technology (MIT), looked at itself in the mirror in 2007. The difference is that Scassellati used only the variable related to movement (the robot recognized itself because it moved). Lanillos, however, has achieved an automaton that identifies itself through two variables: movement and vision. "The robot is programmed to realize that if it raises an arm, the limb will appear in a certain position, and to foresee that movement," explains Lanillos. It is no longer just a matter of moving: the android moves consistently with what it expects to see and with the changes in its vision.
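One way to picture the two-variable test is to score the two cues separately and combine them: did something in the mirror move when I moved (movement), and did it appear where I expected it (vision)? The weights and error scales in this sketch are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def self_score(motion_errors, vision_errors, w_motion=0.5, w_vision=0.5):
    """Combine the two cues: timing of movement and predicted appearance.
    Each cue decays toward 0 as its average prediction error grows."""
    motion_cue = float(np.exp(-np.mean(motion_errors) / 0.05))
    vision_cue = float(np.exp(-np.mean(vision_errors) / 0.05))
    return w_motion * motion_cue + w_vision * vision_cue

# Self: both cues agree.  Replica: it moves in time with me (small
# motion error) but its limb does not appear where my body map predicts.
self_trial    = self_score(np.abs(rng.normal(0.00, 0.01, 10)),
                           np.abs(rng.normal(0.00, 0.01, 10)))
replica_trial = self_score(np.abs(rng.normal(0.00, 0.01, 10)),
                           np.abs(rng.normal(0.30, 0.01, 10)))
print(f"self: {self_trial:.2f}   replica: {replica_trial:.2f}")
```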

The Spanish scientist is convinced that this is a first step toward robots that can relate to their environment and make decisions. Think of an autonomous car. The vehicle takes an action on the road that has consequences for its passengers. If androids can anticipate what may happen, they can put travelers' safety first and decide whether the best option is to swerve to one side or the other. The next step will be for the automaton to recognize itself by moving its entire body, not just its arm. "There cannot be a robot that interacts with humans without knowing at all times where its body is," concludes Lanillos.
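The decision-making Lanillos envisions follows the same predict-then-act pattern. A heavily simplified sketch (the manoeuvres, risk numbers, and obstacle encoding are invented for illustration) would anticipate each action's consequence and pick the safest:

```python
def predicted_risk(action, obstacle_side):
    """Hypothetical one-step forward model: anticipated passenger risk
    for each manoeuvre, given which side an obstacle is on."""
    return {
        "straight": 0.9,
        "swerve_left": 0.8 if obstacle_side == "left" else 0.1,
        "swerve_right": 0.8 if obstacle_side == "right" else 0.1,
    }[action]

def choose_manoeuvre(obstacle_side):
    # Anticipate the consequence of every action, then pick the safest.
    actions = ("straight", "swerve_left", "swerve_right")
    return min(actions, key=lambda a: predicted_risk(a, obstacle_side))

print(choose_manoeuvre("left"))   # -> swerve_right
```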
