Scientists have developed a speech neuroprosthesis for the first time to help a man with severe paralysis communicate. The researchers at UC San Francisco created a technology that allows him to communicate in complete sentences. The system translates the signals his brain sends to his vocal tract into words that appear on a screen.
More than a decade of research by UCSF neurosurgeon Edward Chang, MD, led to this groundbreaking achievement: a technology that could allow people with paralysis to communicate in new ways. The study appeared July 15, 2021, in the New England Journal of Medicine.
Dr. Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author of the study, said this:
“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak. It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”
Strokes, neurodegenerative diseases, and accidents cause anarthria – the loss of ability to speak – in thousands of people annually. However, researchers believe their technology could help these people communicate more naturally and efficiently in the future.
How research with epilepsy patients paved the way for the neuroprosthesis
Prior research in communication neuroprosthetics aimed to help patients communicate through spelling alone. These approaches relied on brain signals that control the arm to type out letters one at a time. Chang’s study, by contrast, focuses on translating the signals that control the vocal tract muscles used for speaking words. Chang says this method allows for more natural, fluid communication.
Dr. Chang noted that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious.
“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute. Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”
Over the past ten years, Chang’s research involved seizure patients with normal speech at the UCSF Epilepsy Center. They underwent neurosurgery so doctors could understand the cause of their seizures. The patients had electrode arrays placed on their brains to track electrical activity. Before surgery, they volunteered to have these recordings analyzed for speech-related activity.
Next, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the brain patterns associated with the vocal tract movements that produce speech. The team then worked on decoding full words from that brain activity. This responsibility fell to David Moses, Ph.D., a postdoctoral engineer in the Chang lab and one of the lead authors of the new study. He developed new techniques to decode the patterns in real time, along with statistical language models to improve accuracy.
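The study’s actual models are not published in this article, but the general idea of combining a real-time decoder with a statistical language model can be sketched roughly. In this illustrative toy (all vocabulary, probabilities, and function names are invented for the example), a classifier emits a probability over candidate words at each attempt, and a bigram language model rescores candidate word sequences with a small beam search:

```python
# Toy sketch: combine per-attempt word probabilities with a bigram
# language model via beam search. Everything here is illustrative.
import math

VOCAB = ["i", "am", "good", "water", "family"]

# Hypothetical bigram language model: P(next_word | previous_word).
BIGRAM = {
    ("<s>", "i"): 0.6, ("<s>", "water"): 0.2,
    ("i", "am"): 0.8, ("am", "good"): 0.7,
}

def lm_prob(prev, word):
    return BIGRAM.get((prev, word), 0.01)  # small floor for unseen pairs

def decode(frame_probs, beam=5):
    """frame_probs: one {word: P(word | brain signals)} dict per attempt."""
    paths = {("<s>",): 0.0}  # log-probability of each partial sequence
    for probs in frame_probs:
        new_paths = {}
        for path, score in paths.items():
            for w in VOCAB:
                s = (score + math.log(probs.get(w, 1e-6))
                           + math.log(lm_prob(path[-1], w)))
                if new_paths.get(path + (w,), -1e18) < s:
                    new_paths[path + (w,)] = s
        # keep only the most promising hypotheses
        paths = dict(sorted(new_paths.items(), key=lambda kv: -kv[1])[:beam])
    best = max(paths, key=paths.get)
    return list(best[1:])

# The classifier is unsure of the second word; the language model
# pulls the sequence toward the likelier "i am good".
frames = [
    {"i": 0.9, "am": 0.05},
    {"am": 0.5, "family": 0.45},
    {"good": 0.8, "water": 0.15},
]
print(decode(frames))  # -> ['i', 'am', 'good']
```

The language model acts as a prior over word order, so a marginally ambiguous classification can be resolved by context, which is one reason such models increase accuracy.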
This work proved fruitful, so Chang wanted to replicate it in people with paralysis. The team realized, however, that they faced far more difficult challenges this time. The technology had worked well in people who could speak, but no one knew how it would perform in a person whose vocal tract was paralyzed.
Dr. Moses said this:
“Our models needed to learn the mapping between complex brain activity patterns and intended speech. That poses a major challenge when the participant can’t speak.”
The researchers also didn’t know whether the brain signals controlling the vocal tract would remain intact in such patients. Would they still function in people who haven’t been able to use their vocal muscles for years?
“The best way to find out whether this could work was to try it,” said Moses.
The study showing how a speech neuroprosthesis helps paralyzed people communicate
For the study, Chang collaborated with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology. Together, they launched “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first volunteer for the study, referred to as BRAVO1, is a man in his late 30s. Over 15 years ago, he suffered a severe brainstem stroke that badly damaged the connection between his brain and his vocal tract and limbs.
Since his stroke, he’s had minimal mobility of his head, neck, and limbs. He communicates using only a pointer that presses letters on a screen.
For this study, the participant worked alongside researchers to formulate a 50-word vocabulary. Chang’s team would then recognize the words from brain activity using advanced computer algorithms. The vocabulary includes simple words such as “good,” “water,” and “family.” The algorithms can use these words to create hundreds of sentences to help BRAVO1 express his thoughts.
After that, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After BRAVO1 fully recovered, the team recorded 22 hours of brain activity over the course of 48 sessions. The recording sessions lasted several months. In each session, BRAVO1 made several attempts to say each vocabulary word. At the same time, the electrodes recorded the signals from his speech motor cortex.
How the speech neuroprosthesis converts words from brain signals
Other lead authors of the study employed custom neural network models, a form of artificial intelligence, to translate BRAVO1’s attempted speech into text. The networks learned to recognize subtle patterns in the brain activity, first detecting when he was attempting to speak and then identifying which word he was trying to say.
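The two stages described here, detecting a speech attempt and then identifying the word, can be illustrated with a deliberately simplified stand-in. The study used custom deep neural networks; the toy below substitutes a nearest-template classifier, and all signals, features, and thresholds are invented for the example:

```python
# Toy two-stage decoder: (1) flag a speech attempt when signal energy
# spikes, (2) identify the word whose learned template is closest.
# In the actual study, deep neural networks performed both stages.

def detect_attempt(window, threshold=1.0):
    """Flag a speech attempt when average signal magnitude exceeds a threshold."""
    energy = sum(abs(x) for x in window) / len(window)
    return energy > threshold

# Hypothetical per-word feature templates learned from recorded attempts
# (standing in for patterns in high-density electrode recordings).
TEMPLATES = {
    "good":   [2.0, 0.5, 1.5],
    "water":  [0.5, 2.5, 0.5],
    "family": [1.5, 1.5, 2.5],
}

def classify_word(features):
    """Return the vocabulary word whose template is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda w: dist(TEMPLATES[w], features))

window = [1.8, 0.6, 1.4]           # pretend electrode-derived features
if detect_attempt(window):
    print(classify_word(window))   # -> good
```

The real models are far more sophisticated, but the pipeline shape is the same: a detector gates the classifier so the system only emits words during genuine speech attempts.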
To test the models, the team gave BRAVO1 short sentences composed of words from the 50-word vocabulary. Then, they asked him to attempt saying them a few times. When he attempted to speak, the words decoded from his brain activity appeared on a screen.
Next, the team began asking him questions like “How are you today?” or “Would you like some water?” BRAVO1’s responses then appeared on the screen. He said, “I am very good,” and “No, I am not thirsty.”
The speech neuroprosthesis decoded words from brain activity at a rate of up to 18 words per minute. Impressively, accuracy reached as high as 93 percent (75 percent median). The language model Moses created also played a role in its success: the system had an autocorrect function, similar to those in modern smartphones and speech recognition software, which increased accuracy.
“We were thrilled to see the accurate decoding of a variety of meaningful sentences,” Moses said. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”
Final thoughts on the speech neuroprosthesis that improves communication in paralyzed people
In the future, Chang and Moses plan a follow-up trial that includes more volunteers with severe paralysis. For now, the team is working on expanding the vocabulary and improving the rate of speech.
While the study only included one participant and a 50-word vocabulary, both scientists called it a huge success. Finally, Moses said, “This is an important technological milestone for a person who cannot communicate naturally and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”