Every time Stephen Hawking wanted to communicate, he had to move his cheek from side to side to steer a cursor that let him write words which, in real time, were spoken aloud by a voice-synthesis program. Amyotrophic lateral sclerosis (ALS), which he suffered from for many years, caused a severe deterioration of his motor functions. His system showed that computers could be made accessible to people with disabilities, but it was still far from turning thoughts into speech. That is why the latest project by neuroengineers at Columbia University (United States) is a genuine breakthrough: translating thoughts into intelligible, recognizable speech would remove many of the limitations these people face in their daily lives.
According to the findings, published in the journal "Scientific Reports", the technology can recreate the words a person hears with unprecedented clarity by monitoring that person's brain activity. The advance, which harnesses the power of speech synthesizers and artificial intelligence, lays the groundwork for people who cannot speak, such as those living with amyotrophic lateral sclerosis (ALS) or recovering from a stroke, to regain the ability to communicate with the outside world. "Our voice helps us connect with our friends, family and the world around us, which is why losing the power of one's voice due to injury or disease is so devastating," explains Nima Mesgarani, principal investigator at Columbia University's Mortimer B. Zuckerman Mind Brain Behavior Institute.
Mesgarani believes the team has shown that, "with the right technology, these people's thoughts could be decoded and understood by any listener." Decades of research have established that when people speak, or even imagine speaking, telltale patterns of activity appear in their brains. A distinct pattern of signals also emerges when we listen to someone talking, or imagine listening. The researchers turned to a "vocoder", an algorithm that can synthesize speech after being trained on recordings of people talking. "This is the same technology used by Amazon Echo and Apple's Siri to give verbal answers to our questions," explains Mesgarani.
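The decoding step described above can be sketched, very loosely, as a regression from recorded neural activity to the spectrogram frames that a vocoder would then turn into audible sound. The sketch below is purely illustrative and uses synthetic data: the array sizes, the linear model, and the ridge penalty are all assumptions for the example, not details taken from the study.

```python
import numpy as np

# Illustrative sketch (not the authors' model): learn a linear map from
# simulated "neural activity" vectors to audio spectrogram frames.
rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq_bins = 500, 64, 32

# A hidden linear mapping stands in for the true brain-to-sound relationship.
true_W = rng.normal(size=(n_electrodes, n_freq_bins))
neural = rng.normal(size=(n_samples, n_electrodes))
spectrogram = neural @ true_W + 0.1 * rng.normal(size=(n_samples, n_freq_bins))

# Ridge regression: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(
    neural.T @ neural + lam * np.eye(n_electrodes),
    neural.T @ spectrogram,
)

# Decode activity back into spectrogram frames; a vocoder would then
# synthesize these frames into speech audio.
pred = neural @ W
corr = np.corrcoef(pred.ravel(), spectrogram.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

In the actual study a deep neural network, not a linear regression, fed the vocoder; the point here is only the overall pipeline of mapping neural recordings to acoustic features that a synthesizer can voice.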
To teach the device to interpret brain activity, Mesgarani teamed up with Ashesh Dinesh Mehta, a neurosurgeon at the Northwell Health Physician Partners Neuroscience Institute who treats patients with epilepsy, some of whom must undergo regular brain surgery. "Working with Dr. Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by different people while we measured their patterns of brain activity," says Mesgarani. Those neural patterns were used to train the vocoder.
Next, the researchers asked the volunteers to listen to speakers reciting the digits 0 through 9 while recording their brain signals, which were then fed back through the vocoder. The results showed that what a person hears can be reproduced verbally with an accuracy of about 75%. The team plans to test more complex words and sentences, and to run the same experiments on the brain signals produced when a person speaks or imagines speaking. Ultimately, the researchers hope their system can become part of an implant, similar to those worn by some patients with epilepsy, that translates the wearer's thoughts directly into words. "This would give anyone who has lost the ability to speak, whether through injury or disease, a new opportunity to connect with the world."
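The digit experiment amounts to asking whether a brain response can be matched to the spoken digit that caused it. A toy version of that decoding step, with entirely synthetic "neural responses" and a simple nearest-centroid classifier (the study itself used deep neural networks feeding a vocoder), might look like this:

```python
import numpy as np

# Hypothetical sketch of the digit-decoding idea: classify which spoken
# digit (0-9) a synthetic "brain response" corresponds to. Illustrative
# only; sizes, noise level, and the classifier are assumptions.
rng = np.random.default_rng(1)
n_digits, n_trials, n_features = 10, 40, 50

# Each digit evokes a characteristic (synthetic) neural pattern plus noise.
prototypes = rng.normal(size=(n_digits, n_features))
X = np.repeat(prototypes, n_trials, axis=0)
X = X + 0.8 * rng.normal(size=(n_digits * n_trials, n_features))
y = np.repeat(np.arange(n_digits), n_trials)

# Use the first 30 trials of each digit for training, the rest for testing.
train_mask = np.tile(np.arange(n_trials) < 30, n_digits)
centroids = np.stack(
    [X[train_mask & (y == d)].mean(axis=0) for d in range(n_digits)]
)

# Classify held-out trials by the nearest training centroid.
test_X, test_y = X[~train_mask], y[~train_mask]
dists = ((test_X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
accuracy = (dists.argmin(axis=1) == test_y).mean()
print(f"decoding accuracy: {accuracy:.0%}")
```

The reported 75% figure in the study refers to how intelligible the reconstructed audio was to listeners, not to a classifier like this one; the sketch only shows the general shape of mapping neural responses back to discrete stimuli.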