
Chernovetskyi Charity Fund


Implants with AI Can Give People with Paralysis Their Voice Back

7 months ago
News

American scientists are close to restoring speech to people silenced by brain injuries and diseases. They are achieving this breakthrough with implanted brain-computer interfaces coupled with self-learning neural networks.

What's the trend?

Traumatic brain injuries, neurological disorders, strokes, and other health problems have deprived many people of the ability to talk. Even when they understand language and know what they want to say, their physical limitations prevent them from speaking.

Researchers in California have introduced innovative "brain-computer interfaces" (BCIs) capable of translating brain signals into spoken words. For two individuals who could not communicate on their own, these devices enabled them to "speak" four times faster than any previous technology.

Who the Scientists Restored Speech To

The first patient is Pat Bennett, 68 years old. A little over 10 years ago, she was diagnosed with amyotrophic lateral sclerosis (ALS). As the disease progressed, she lost the ability to move the muscles needed for clear speech. She can still type with her fingers, but doing so is becoming more and more difficult for her.

The second patient is Ann Johnson. In 2005, she suffered a stroke that left her completely paralyzed. She is now 47 years old. She uses an assistive device to spell out words at a rate of 14 words per minute; for comparison, the average speaking rate is about 150 words per minute.

Thanks to the new BCIs, both Bennett and Johnson regained the ability to "speak" at an average rate of 60-80 words per minute. These interfaces use a combination of brain implants and trained computer algorithms to convert attempted speech into text. The result sets a new record for BCIs, surpassing the previous mark of 18 words per minute.

The system for Bennett was developed by a team of specialists at Stanford University, while the system for Johnson was created by scientists at the University of California, San Francisco. Both teams published their research in the journal Nature on August 23.

How the Interface for Pat Bennett Was Trained

In 2022, four sensors were implanted into the outer layer of Pat Bennett's brain. Gold wires running through her skull connected the sensors to a computer. Over 25 sessions of about 4 hours each, Bennett attempted to repeat sentences from a large dataset. By analyzing her brain activity during these sessions, a computer algorithm learned the pattern her brain produced when she tried to articulate each of the 39 basic phonemes (sound units) of the English language.

Now, when Bennett wishes to speak, the BCI sends its phoneme predictions, decoded from her brain activity, to a language model, which assembles them into words and displays the words on a screen. With a limited vocabulary of 50 words, the error rate is 9.1%; expanded to 125,000 words, it rises to 23.8%. While this may not sound ideal, it represents a significant leap forward: the previous BCI speed record came with an average error rate of 25% on a vocabulary of just 50 words.
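The idea behind the final decoding step, turning noisy phoneme predictions into words, can be illustrated with a toy sketch. Everything below is invented for illustration (the three-word vocabulary, the phoneme spellings, and the simple match-counting score); the real Stanford system uses a trained language model over all 39 English phonemes and a far larger vocabulary.

```python
# Toy sketch: picking the most likely word from a tiny vocabulary,
# given a (possibly misdecoded) sequence of phonemes.
# Vocabulary and phoneme spellings are invented for this example.
VOCAB = {
    "hello": ["HH", "AH", "L", "OW"],
    "help":  ["HH", "EH", "L", "P"],
    "yes":   ["Y", "EH", "S"],
}

def score(candidate_phonemes, predicted):
    """Fraction of aligned positions where the phonemes match."""
    matches = sum(1 for a, b in zip(candidate_phonemes, predicted) if a == b)
    # Divide by the longer length so length mismatches are penalized.
    return matches / max(len(candidate_phonemes), len(predicted))

def decode_word(predicted_phonemes):
    """Return the vocabulary word whose spelling best fits the prediction."""
    return max(VOCAB, key=lambda w: score(VOCAB[w], predicted_phonemes))

# One phoneme was misclassified ("OW" came out as "W"),
# yet the closest vocabulary word is still recovered:
print(decode_word(["HH", "AH", "L", "W"]))  # → "hello"
```

A real language model goes further than this per-word matching: it also weighs how likely each candidate word is given the words already decoded, which is what keeps the error rate manageable even with a 125,000-word vocabulary.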

The Interface for Ann Johnson

The team working with Ann Johnson implanted just one sensor into the outer layer of her brain, but it contained approximately the same number of electrodes as the four sensors used by the Stanford team. The training followed a similar methodology. By the end of the experiment, the model could decode her intended speech at an average rate of 78 words per minute with an error rate of 25.5%.

The specialists from the University of California, San Francisco, took an additional step. Instead of simply displaying words on a screen, they integrated the system with a digital avatar of Johnson's face, animated from her brain activity, and a synthetic voice that AI recreated from videos recorded before her stroke.

Prospects for Development

The technology still has limitations. Implanting the sensors requires risky brain surgery, the speech error rate remains relatively high, and for now both systems can be used only in laboratory conditions. However, scientists continue their work on wireless BCIs with AI. Over time, these technologies, combined with digital avatars and improved synthetic voices, could help countless people communicate easily and expressively, just as they did before their injury or disease. Perhaps the best interfaces of the future will bear the names of the first patients and testers: Pat Bennett and Ann Johnson.

