There's new hope for people who have lost their ability to speak.
In two separate cases, scientists have successfully used brain implants and machine learning to give patients back their voices after theirs were taken: one by a stroke, the other by amyotrophic lateral sclerosis (ALS). Together, the results offer hope of a new way for people living with paralysis to communicate with the world around them.
"I want patients … to see me and know their lives are not over now," writes Ann, who experienced locked-in syndrome following a stroke in 2005. "I want to show them that disabilities don't need to stop us or slow us down."
In recent years, great strides have been made in brain interface technology, but it's not a one-size-fits-all solution.
Electrodes are used to record a person's neural activity while they think about performing a certain task or action. These recordings are then used to train hardware or software to perform that task; for example, a prosthetic arm will bend in response to a person thinking about bending their arm.
Each person's brain activity is different, though, so the machinery that decodes their neural signals has to be trained anew for every patient. And because language is itself incredibly complex, it's no mean feat to build a brain interface, or neuroprosthetic, that can translate a person's thoughts into spoken words.
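The per-patient calibration described above is, at its core, a supervised learning problem: record neural activity while the patient imagines an action, then fit a model that maps new recordings to intended actions. The toy sketch below, with entirely fabricated "firing rates" standing in for real electrode data and a deliberately simple nearest-mean classifier, shows the shape of that loop; the actual studies use far richer data and models.

```python
# Toy sketch of per-patient decoder training. The 4-channel "firing rate"
# snapshots are fabricated stand-ins for real electrode recordings.
import random

random.seed(0)

ACTIONS = ["bend_arm", "rest"]

def fake_recording(action):
    """Simulate a 4-channel firing-rate snapshot for one intended action."""
    base = 10.0 if action == "bend_arm" else 2.0
    return [base + random.gauss(0, 1) for _ in range(4)]

# Collect labeled examples, as a patient would during a calibration session.
train = [(fake_recording(a), a) for a in ACTIONS for _ in range(50)]

# Fit a minimal nearest-mean classifier: one mean vector per action.
means = {}
for action in ACTIONS:
    rows = [x for x, a in train if a == action]
    means[action] = [sum(col) / len(rows) for col in zip(*rows)]

def decode(features):
    """Pick the action whose training mean is closest to the new recording."""
    def dist(m):
        return sum((f - v) ** 2 for f, v in zip(features, m))
    return min(means, key=lambda a: dist(means[a]))

print(decode(fake_recording("bend_arm")))  # decodes as "bend_arm"
```

Because the means are fitted from one person's (here, simulated) recordings, the same code trained on a different patient's data would produce a different decoder, which is exactly why calibration cannot be shared between patients.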
Neurosurgeon Edward Chang of the University of California, San Francisco, and his colleagues were responsible for restoring Ann's speech, while neuroscientist Frank Willett of Stanford University and his colleagues restored speech to Pat Bennett, who lost the ability to speak due to the motor neuron disease ALS, the same condition that affected the late physicist Stephen Hawking.
"Imagine," Bennett writes, "how different conducting everyday activities like shopping, attending appointments, ordering food, going into a bank, talking on a phone, expressing love or appreciation – even arguing – will be when nonverbal people can communicate their thoughts in real time."
Both teams employed a similar methodology. Electrode arrays were implanted into each patient's brain – 128 electrodes in Bennett's and 253 in Ann's.
They each then underwent the painstaking process of thinking about speaking different words and sentences.
Ann's repertoire consisted of 1,024 words, but she also thought about making facial expressions. In addition, the AI was trained to recognize not whole words but phonemes, the basic sound units that make up words. This dramatically reduced the number of units the AI needed to distinguish.
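The payoff of decoding phonemes is that a few dozen sound units can be recombined into an arbitrarily large vocabulary, so the decoder faces far fewer classes than it would predicting whole words, and the words are reassembled afterwards via a pronunciation dictionary. A minimal sketch of that reassembly step, using made-up ARPAbet-style entries rather than the studies' actual lexicons:

```python
# Sketch: turning a stream of decoded phonemes back into words via a
# pronunciation dictionary. Entries are illustrative, not the real lexicon.
PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def phonemes_to_words(stream, lexicon):
    """Greedily match the longest known phoneme sequence at each position."""
    words, i = [], 0
    while i < len(stream):
        for j in range(len(stream), i, -1):
            chunk = tuple(stream[i:j])
            if chunk in lexicon:
                words.append(lexicon[chunk])
                i = j
                break
        else:
            i += 1  # skip an unrecognized phoneme and keep going
    return words

decoded = ["HH", "AH", "L", "OW", "W", "ER", "L", "D"]
print(phonemes_to_words(decoded, PRONUNCIATIONS))  # ['hello', 'world']
```

Real systems pair the phoneme decoder with a statistical language model rather than this greedy lookup, but the division of labor is the same: the brain-signal model handles a small phoneme inventory, and the lexicon handles the vocabulary.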
The team used this data, and recordings of Ann speaking prior to her stroke, to create a virtual avatar that speaks in her voice.
Ultimately, through her avatar, Ann was able to communicate almost as fast as the people around her.
"When I was at the rehab hospital, the speech therapist didn't know what to do with me," she writes. "Being a part of this study has given me a sense of purpose; I feel like I am contributing to society. It feels like I have a job again. It's amazing I have lived this long; this study has allowed me to really live while I'm still alive!"
Bennett, on the other hand, underwent about 100 hours of training that was also based on phonemes, repeating sentences randomly chosen from a large dataset. The error rate of the system after this training, on a vocabulary of 50 words, is just 9.1 percent, and Bennett's speech is decoded at a rate of about 62 words per minute.
The error rate with a vocabulary of 125,000 words is 23.8 percent, but the researchers note that this is the first time such a large vocabulary has been tested with this kind of technology. The results, all agree, are extremely promising.
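Figures like "9.1 percent" are typically word error rates (WER): the number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the reference sentence, divided by the reference length. A compact implementation of that standard metric, with made-up example sentences:

```python
# Minimal word-error-rate (WER) computation: edit distance over words,
# normalized by the length of the reference transcript.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost))
        prev = cur
    return prev[-1] / len(ref)

# One substitution ("some" -> "sun") in six reference words: about 0.167.
print(wer("i would like some water please", "i would like sun water please"))
```

Note that WER can exceed the raw count of wrong words when the decoder inserts or drops words, which is part of why error rates climb so sharply as the vocabulary grows from 50 to 125,000 words.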
"These initial results have proven the concept, and eventually technology will catch up to make it easily accessible to people who cannot speak," Bennett writes.
"For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships."