Paralyzed man’s brain waves turned into sentences on computer in medical first

Health

The Guardian 15 July, 2021 - 08:23am

It will take years of additional research but the study, reported Wednesday, marks an important step toward one day restoring more natural communication for people who can’t talk because of injury or illness.

“Most of us take for granted how easily we communicate through speech,” said Dr Edward Chang, a neurosurgeon at the University of California, San Francisco, who led the work. “It’s exciting to think we’re at the very beginning of a new chapter, a new field” to ease the devastation of patients who have lost that ability.

Today, people who can’t speak or write because of paralysis have very limited ways of communicating. For example, the man in the experiment, who was not identified to protect his privacy, uses a pointer attached to a baseball cap that lets him move his head to touch words or letters on a screen. Other devices can pick up patients’ eye movements. But it’s a frustratingly slow and limited substitution for speech.

In recent years, experiments with mind-controlled prosthetics have allowed paralyzed people to shake hands or take a drink using a robotic arm – they imagine moving and those brain signals are relayed through a computer to the artificial limb.

Chang’s team built on that work to develop a “speech neuroprosthetic” – a device that decodes the brainwaves that normally control the vocal tract, the tiny muscle movements of the lips, jaw, tongue and larynx that form each consonant and vowel.

The man who volunteered to test the device was in his late 30s. Fifteen years ago he suffered a brain-stem stroke that caused widespread paralysis and robbed him of speech. The researchers implanted electrodes on the surface of the man’s brain, over the area that controls speech.

A computer analyzed the patterns when he attempted to say common words such as “water” or “good”, eventually learning to differentiate between 50 words that could generate more than 1,000 sentences.
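In machine-learning terms, the step described above is a 50-way classification problem: each attempt to say a word yields a window of recorded brain activity, and a model must decide which vocabulary word it corresponds to. As a rough, hypothetical sketch (not the study's actual pipeline or features), training and scoring such a classifier on per-trial feature vectors could look like the code below; on random data it hovers around the roughly 2 percent chance level that a 50-word vocabulary implies.

```python
# Hypothetical sketch: treating word decoding as 50-way classification.
# Data shapes and the choice of logistic regression are illustrative
# assumptions, not the study's actual model or features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_features, n_words = 500, 256, 50   # e.g. electrode features per attempt
X = rng.normal(size=(n_trials, n_features))    # stand-in for recorded activity
y = rng.integers(0, n_words, size=n_trials)    # which word was attempted

clf = LogisticRegression(max_iter=500)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.1%}")  # ~2% on random data (chance)
```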

Prompted with such questions as “How are you today?” or “Are you thirsty?” the device allowed the man to answer “I am very good” or “No I am not thirsty” – not voicing the words but translating them into text, the team reported in the New England Journal of Medicine.

It takes about three to four seconds for the word to appear on the screen after the man tries to say it, said lead author David Moses, an engineer in Chang’s lab. That’s not nearly as fast as speaking, but quicker than tapping out a response.

In an accompanying editorial, Harvard neurologists Leigh Hochberg and Sydney Cash called the work a “pioneering demonstration.”

They suggested improvements but said if the technology pans out it could help people with injuries, strokes or illnesses like Lou Gehrig’s disease whose “brains prepare messages for delivery but those messages are trapped”.

Chang’s lab has spent years mapping the brain activity that leads to speech. First, researchers temporarily placed electrodes in the brains of volunteers undergoing surgery for epilepsy, so they could match brain activity to spoken words.

Only then was it time to try the experiment with someone unable to speak. How did they know the device interpreted the volunteer’s words correctly? They started by having him try to say specific sentences such as “Please bring my glasses” rather than answering open-ended questions until the machine translated accurately most of the time.

Brain implant helps man 'speak' through a computer

CNN 15 July, 2021 - 09:10am

Updated 3:23 AM ET, Thu July 15, 2021

Severely paralyzed man communicates using brain signals sent to his vocal tract

Engadget 15 July, 2021 - 06:05am

So far, neuroprosthetic technology has only allowed paralyzed users to type out one letter at a time, a process that can be slow and laborious. It has also tapped parts of the brain that control the arm or hand, a system that's not necessarily intuitive for the subject.

The UCSF system, however, uses an implant that's placed directly on the part of the brain dedicated to speech. That way, the subject can mentally activate the brain patterns they would normally use to say a word, and the system can translate the entire word, rather than single letters, to the screen.

To make it work, patients with normal speech volunteered to have their brain recordings analyzed for speech-related activity. Researchers were then able to analyze those patterns and develop new methods to decode them in real time, using statistical language models to improve accuracy.
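The real-time aspect means the system must operate on a continuous stream of neural data rather than on pre-cut trials. Below is a minimal, hypothetical sketch of such a loop, with placeholder detector and classifier functions; the function names, feature rate, and window sizes are assumptions for illustration, not the published system.

```python
# Hypothetical sketch of a real-time decoding loop: buffer incoming neural
# feature vectors, slide a fixed-length window over them, and hand each
# window to a detector and a word classifier. Everything here is a toy
# stand-in for illustration.
from collections import deque
import numpy as np

SAMPLE_RATE_HZ = 200                      # assumed feature rate
WINDOW_SAMPLES = int(SAMPLE_RATE_HZ * 2)  # assumed 2-second analysis window
N_CHANNELS = 128

def detect_speech_attempt(window: np.ndarray) -> bool:
    """Placeholder for a model that flags windows containing attempted speech."""
    return float(window.std()) > 1.0       # toy criterion, not a real detector

def classify_word(window: np.ndarray) -> str:
    """Placeholder for a 50-word classifier."""
    return "water"                          # toy output

buffer: deque = deque(maxlen=WINDOW_SAMPLES)
decoded: list[str] = []

def on_new_sample(sample: np.ndarray) -> None:
    """Called once per incoming vector of per-channel features."""
    buffer.append(sample)
    if len(buffer) == WINDOW_SAMPLES:
        window = np.stack(buffer)           # (WINDOW_SAMPLES, N_CHANNELS)
        if detect_speech_attempt(window):
            decoded.append(classify_word(window))

# Simulated stream of random activity, for demonstration only.
for _ in range(1000):
    on_new_sample(np.random.randn(N_CHANNELS))
print(decoded[:3])
```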

However, the team still wasn't sure if brain signals controlling the vocal tract would still be intact in patients paralyzed for many years. To that end, they enlisted an anonymous participant (known as Bravo1) who worked with researchers to create a 50-word vocabulary that the team could decipher using advanced computer algorithms. That included words like "water," "family" and "good," enough to allow the patient to create hundreds of sentences applicable to their daily life. The team also used an "auto-correct" function similar to those found on consumer speech recognition apps. 

To test the system, the team asked patient Bravo1 to reply to questions like "How are you today?" and "Would you like some water?" The patient's attempted speech then appeared on the screen as "I am very good," and "No, I am not thirsty." 

The system was able to decode his speech at up to 18 words per minute, with up to 93 percent accuracy and a 75 percent median accuracy. That might not sound great compared to the 200 words per minute possible with normal speech, but it's much better than the speeds seen on previous neuroprosthetic systems.

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Edward Chang, MD, Chair of Neurological Surgery at UCSF and senior author on the study. “It shows strong promise to restore communication by tapping into the brain's natural speech machinery.”

The team said the trial represents a proof of principle for this new type of "speech neuroprosthesis." Next up, they plan to expand the trial to include more participants, while also working to increase the number of words in the vocabulary and improve the rate of speech.

Brain implant turns thoughts into words to help paralyzed man 'speak' again

CNET 14 July, 2021 - 09:59pm

Facebook's work in neural input technology for AR and VR looks to be moving in a more wrist-based direction, but the company continues to invest in research on implanted brain-computer interfaces. The latest phase of a years-long Facebook-funded study from UCSF, called Project Steno, translates attempts at conversation from a speech-impaired paralyzed patient into words on a screen.

"This is the first time someone just naturally trying to say words could be decoded into words just from brain activity," said Dr. David Moses, lead author of a study published Wednesday in the New England Journal of Medicine. "Hopefully, this is the proof of principle for direct speech control of a communication device, using intended attempted speech as the control signal by someone who cannot speak, who is paralyzed."

Brain-computer interfaces (BCIs) have been behind a number of promising recent breakthroughs, including Stanford research that could turn imagined handwriting into projected text. UCSF's study takes a different approach, analyzing actual attempts at speech and acting almost like a translator.

The study, run by UCSF neurosurgeon Dr. Edward Chang, involved implanting a "neuroprosthesis" of electrodes in a paralyzed man who had a brainstem stroke at age 20. With an electrode patch implanted over the area of the brain associated with controlling the vocal tract, the man attempted to respond to questions displayed on a screen. UCSF's machine learning algorithms can recognize 50 words and convert these into real-time sentences. For instance, if the patient saw a prompt asking "How are you today?" the response appeared on screen as "I am very good," popping up word by word.

Moses clarified that the work aims to continue beyond Facebook's funding phase and that the research still has a long way to go. Right now it's unclear how much of the speech recognition comes from recorded patterns of brain activity, from vocal utterances, or from a combination of both.

Moses is quick to clarify that the study, like other BCI work, isn't mind reading: it relies on sensing brain activity that happens specifically when attempting to engage in a certain behavior, like speaking. Moses also says the UCSF team's work doesn't yet translate to non-invasive neural interfaces. Elon Musk's Neuralink promises wireless transmission of data from brain-implanted electrodes for future research and assistive uses, but so far that tech's only been demonstrated on a monkey.

Facebook Reality Labs' BCI head-worn device prototype, which didn't have implanted electrodes, is going open-source.

Meanwhile, Facebook Reality Labs Research has shifted away from head-worn brain-computer interfaces for future VR/AR headsets, pivoting for the near future to wrist-worn devices based on the tech acquired from CTRL-Labs. Facebook Reality Labs had its own non-invasive head-worn research prototypes for studying brain activity, and the company has announced it plans to make these available for open-source research projects as it winds down its focus on head-mounted neural hardware. (UCSF received funding from Facebook but no hardware.)

"Aspects of the optical head mounted work will be applicable to our EMG research at the wrist. We will continue to use optical BCI as a research tool to build better wrist-based sensor models and algorithms. While we will continue to leverage these prototypes in our research, we are no longer developing a head mounted optical BCI device to sense speech production. That's one reason why we will be sharing our head-mounted hardware prototypes with other researchers, who can apply our innovation to other use cases," a Facebook representative confirmed via email.

Consumer-targeted neural input technology is still in its infancy, however. While consumer devices using noninvasive head or wrist-worn sensors exist, they're far less accurate than implanted electrodes right now. 

Man with severe paralysis communicates via brain waves in groundbreaking study

Yahoo News 14 July, 2021 - 08:19pm

Why it matters: This is the first known "successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak," neurosurgeon Edward Chang, senior author on the study, said in a statement from the University of California, San Francisco.

"It shows strong promise to restore communication by tapping into the brain's natural speech machinery."

The study, published in the New England Journal of Medicine on Wednesday, indicates that the approach researchers took could one day help thousands of people who are unable to speak if the method is developed further, according to U.C. San Francisco.

What they did: For the study, known as "BRAVO" (Brain-Computer Interface Restoration of Arm and Voice), researchers worked with a man in his late 30s whose paralysis resulted from a brainstem stroke more than 15 years ago that left him with limited head, neck, and limb movements. He had been communicating by using a pointer attached to a baseball cap to poke letters on a screen.

Chang surgically implanted electrodes into the part of the brain that controls speech. The man worked with researchers to create a 50-word vocabulary — words like "water," "family" and "good" — that Chang's team recognized from brain activity using advanced computer algorithms.

Researchers translated signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing.

Chang said in his statement that this approach, known as speech neuroprosthesis, tapped into the natural and fluid aspects of speech, promising more rapid and organic communication.

Of note: The study follows an international team of researchers announcing in May that they helped a person with paralysis translate their imagined handwriting into text through a brain–computer interface that was faster than other types of assistive communication.

What they're saying: Leigh Hochberg, a neurologist with Massachusetts General Hospital, Brown University and the Department of Veterans Affairs, who wasn't involved in Chang's study but co-wrote an editorial on it, said this speech neuroprosthesis research was part of a wave of innovative progress in the field, per the New York Times.

"It's now only a matter of years before there will be a clinically useful system that will allow for the restoration of communication," added Hochberg, who directs the BrainGate project, which is working on projects to help people affected by neurological disease.

In medical breakthrough, scientists turn paralyzed man’s thoughts into sentences on computer screen

pressherald.com 14 July, 2021 - 05:01pm

A California team is developing a device to decode brain waves that normally control the vocal tract, the tiny muscle movements of the lips, jaw, tongue and larynx that form each consonant and vowel.

In a medical first, researchers harnessed the brain waves of a paralyzed man unable to speak – and turned what he intended to say into sentences on a computer screen.

It will take years of additional research but the study, reported Wednesday, marks an important step toward one day restoring more natural communication for people who can’t talk because of injury or illness.

“Most of us take for granted how easily we communicate through speech,” said Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, who led the work. “It’s exciting to think we’re at the very beginning of a new chapter, a new field” to ease the devastation of patients who lost that ability.

Today, people who can’t speak or write because of paralysis have very limited ways of communicating. For example, the man in the experiment, who was not identified to protect his privacy, uses a pointer attached to a baseball cap that lets him move his head to touch words or letters on a screen. Other devices can pick up patients’ eye movements. But it’s a frustratingly slow and limited substitution for speech.

Tapping brain signals to work around a disability is a hot field. In recent years, experiments with mind-controlled prosthetics have allowed paralyzed people to shake hands or take a drink using a robotic arm – they imagine moving and those brain signals are relayed through a computer to the artificial limb.

Chang’s team built on that work to develop a “speech neuroprosthetic” – decoding brain waves that normally control the vocal tract, the tiny muscle movements of the lips, jaw, tongue and larynx that form each consonant and vowel.

Volunteering to test the device was a man in his late 30s who 15 years ago suffered a brain-stem stroke that caused widespread paralysis and robbed him of speech. The researchers implanted electrodes on the surface of the man’s brain, over the area that controls speech.

Researcher David Moses works with clinical trial participant “BRAVO 1” to record brain activity while he attempted to produce words and sentences at the University of California, San Francisco, in 2020. Fifteen years ago, the participant suffered a brain-stem stroke that caused widespread paralysis and robbed him of speech. Todd Dubnicoff/UCSF via Associated Press

A computer analyzed the patterns when he attempted to say common words such as “water” or “good,” eventually becoming able to differentiate between 50 words that could generate more than 1,000 sentences.

Prompted with such questions as “How are you today?” or “Are you thirsty?” the device eventually enabled the man to answer “I am very good” or “No I am not thirsty” – not voicing the words but translating them into text, the team reported in the New England Journal of Medicine.

It takes about three to four seconds for the word to appear on the screen after the man tries to say it, said lead author David Moses, an engineer in Chang’s lab. That’s not nearly as fast as speaking but quicker than tapping out a response.

In an accompanying editorial, Harvard neurologists Leigh Hochberg and Sydney Cash called the work a “pioneering demonstration.”

They suggested improvements but said if the technology pans out it eventually could help people with injuries, strokes or illnesses like Lou Gehrig’s disease whose “brains prepare messages for delivery but those messages are trapped.”

Chang’s lab has spent years mapping the brain activity that leads to speech. First, researchers temporarily placed electrodes in the brains of volunteers undergoing surgery for epilepsy, so they could match brain activity to spoken words.

Dr. Edward Chang, right, and postdoctoral scholar David Moses work at the University of California, San Francisco’s Mission Bay campus in 2019. “Most of us take for granted how easily we communicate through speech,” says Chang, a neurosurgeon. “It’s exciting to think we’re at the very beginning of a new chapter, a new field” to ease the devastation of patients who lost that ability. Noah Berger/UCSF via Associated Press

Only then was it time to try the experiment with someone unable to speak. How did they know the device interpreted his words correctly? They started by having him try to say specific sentences such as, “Please bring my glasses,” rather than answering open-ended questions until the machine translated accurately most of the time.

Next steps include ways to improve the device’s speed, accuracy and vocabulary size – and maybe one day allow a computer-generated voice rather than text on a screen – while testing a small number of additional volunteers.

Device taps brain waves to help paralyzed man communicate

Associated Press 14 July, 2021 - 04:20pm

His voice silenced for years, a man can now communicate using only the electrical impulses from his brain

The Washington Post 14 July, 2021 - 04:01pm

The 38-year-old man, who chose to remain anonymous but is dubbed BRAVO-1 in the study, suffered a brain stem stroke 15 years ago that severed the neural connection between his brain and his vocal cords. He is paralyzed from the neck down and has been communicating by painstakingly tapping letters on a keyboard with a pointer attached to the bill of a baseball cap.

Now, merely by trying to utter words, he has 50 at his disposal and can create short sentences that primarily concern his well-being and care. A computer decodes his brain activity and displays the sentences on a screen with a median accuracy of about 75 percent, at a rate of more than 15 words per minute. Average conversational speech occurs at about 150 words per minute.

Christian Herff, an assistant professor of neural engineering at Maastricht University in the Netherlands who was not involved in the new work, called the progress described in the study “gigantic.” Previous research had demonstrated the same technique in test subjects who still were able to speak.

“It’s actually quite a big deal,” Herff said. “This is the first study that really does it in a patient who is not able to speak.”

Edward F. Chang, chairman of UCSF’s Department of Neurological Surgery and leader of the research team, said the advance would not have been possible even five years ago. Since then, progress in artificial intelligence and the decoding of neural signals led to the result published Wednesday in the New England Journal of Medicine. The researchers described the technology as a “neuroprosthesis.”

Chang said in an interview that he has been working in this area for 10 years, motivated by the patients he saw who had lost the ability to speak. Thousands of people suffer that fate each year as a result of strokes, trauma and diseases such as amyotrophic lateral sclerosis — ALS — and cerebral palsy.

“I just see every day how devastating it is for our patients who have lost the ability to speak after a stroke or a brain injury,” Chang said. “It’s part of what makes us human. When you’ve lost it, it’s really devastating.”

Chang implanted a grid of electrodes on the sensorimotor cortex of the patient’s brain, which controls the production of speech. A wire carries the electrical signal from the electrodes to a port that is permanently attached to the top of his head and can be connected by a cable to a computer.

During 48 sessions that lasted 22 hours, the scientists recorded the brain signals BRAVO-1 produced as he attempted to say 50 words flashed on a screen. They then used “deep-learning algorithms to create computational models for the detection and classification of words from patterns in the recorded cortical activity,” according to their paper. They employed other models to generate the probable next words in sentences the man was trying to say.
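The "other models" referred to here are word-sequence models that score how likely one vocabulary word is to follow another. As a toy, hypothetical illustration of the idea (the training sentences and the simple bigram counting below are assumptions, not the study's actual language model):

```python
# Hypothetical sketch of a bigram "next word" model over a small vocabulary.
# The training sentences and the counting scheme are illustrative assumptions.
from collections import Counter, defaultdict

sentences = [
    "i am very good",
    "i am not thirsty",
    "please bring my glasses",
    "my family is outside",
]

bigram_counts: dict[str, Counter] = defaultdict(Counter)
for s in sentences:
    words = ["<s>"] + s.split()
    for prev, nxt in zip(words, words[1:]):
        bigram_counts[prev][nxt] += 1

def next_word_probs(prev: str) -> dict[str, float]:
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()} if total else {}

print(next_word_probs("i"))    # {'am': 1.0}
print(next_word_probs("am"))   # {'very': 0.5, 'not': 0.5}
```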

Chang said mere chance would have resulted in a 2 percent accuracy rate with a vocabulary of 50 words. The researchers were able to produce correct words and sentences as much as 93 percent of the time.

Most of the research in this part of the field of brain-computer interface has been conducted on patients with epilepsy who volunteer after having electrodes implanted to diagnose the source of their seizures. Chang and other scientists believed a person with anarthria — the inability to speak — still would be able to generate the same brain activity, but it wasn’t certain until his team succeeded.

A decade ago, researchers showed that sounds, or phonemes — rather than full words — could be deciphered, but with much less accuracy than was achieved in Chang’s effort. In May, Stanford University researchers published work that showed they had developed a way to allow a paralyzed man to write whole sentences with similar technology by imagining himself writing the letters.

Voice-recognition software that is ubiquitous on cellphones, computers and elsewhere was developed with many more hours of repetition and refinement than Chang’s group was able to put in with a severely disabled patient, other experts said. To expand BRAVO-1’s vocabulary and determine whether the technology works for others will require providing much more data for algorithms to decode, they said. Improving accuracy is another goal.

“While this was impressive, there still is substantial room for improvement in terms of the accuracy of single-word decoding and sentence decoding,” said Marc W. Slutzky, a professor of neurology at Northwestern University’s Feinberg School of Medicine. Another major step forward would be fully implantable devices that communicate with decoding devices wirelessly, he said.

Herff said others may want to try routing the deciphered language through a voice synthesizer rather than onto a screen, to allow for intonation and expression that make speech such an important human trait.

“This is really just the beginning,” Chang said. “We’re not saying we’ve accomplished anything. . . . It’s really just the start.”

Tapping Into the Brain to Help a Paralyzed Man Speak

The New York Times 14 July, 2021 - 04:00pm

In a once unimagined accomplishment, electrodes implanted in the man’s brain transmit signals to a computer that displays his words.

He has not been able to speak since 2003, when he was paralyzed at age 20 by a severe stroke after a terrible car crash.

Now, in a scientific milestone, researchers have tapped into the speech areas of his brain — allowing him to produce comprehensible words and sentences simply by trying to say them. When the man, known by his nickname, Pancho, tries to speak, electrodes implanted in his brain transmit signals to a computer that displays his intended words on the screen.

His first recognizable sentence, researchers said, was, “My family is outside.”

The achievement, published on Wednesday in the New England Journal of Medicine, could eventually help many patients with conditions that steal their ability to talk.

“This is farther than we’ve ever imagined we could go,” said Melanie Fried-Oken, a professor of neurology and pediatrics at Oregon Health & Science University, who was not involved in the project.

Three years ago, when Pancho, now 38, agreed to work with neuroscience researchers, they were unsure if his brain had even retained the mechanisms for speech.

“That part of his brain might have been dormant, and we just didn’t know if it would ever really wake up in order for him to speak again,” said Dr. Edward Chang, chairman of neurological surgery at University of California, San Francisco, who led the research.

The team implanted a rectangular sheet of 128 electrodes, designed to detect signals from speech-related sensory and motor processes linked to the mouth, lips, jaw, tongue and larynx. In 50 sessions over 81 weeks, they connected the implant to a computer by a cable attached to a port in Pancho’s head, and asked him to try to say words from a list of 50 common ones he helped suggest, including “hungry,” “music” and “computer.”

As he did, electrodes transmitted signals through a form of artificial intelligence that tried to recognize the intended words.

“Our system translates the brain activity that would have normally controlled his vocal tract directly into words and sentences,” said David Moses, a postdoctoral engineer who developed the system with Sean Metzger and Jessie R. Liu, graduate students. The three are lead authors of the study.

Asked “How are you today?” his answer appeared onscreen: “I am very good.”

In nearly half of the 9,000 times Pancho tried to say single words, the algorithm got it right. When he tried saying sentences written on the screen, it did even better.

By funneling algorithm results through a kind of autocorrect language-prediction system, the computer correctly recognized individual words in the sentences nearly three-quarters of the time and perfectly decoded entire sentences more than half the time.
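Those two figures correspond to two ways of scoring the output: accuracy at the level of individual words, and exact-match accuracy at the level of whole sentences. Below is a small, hypothetical scoring sketch using the uncorrected decoding examples quoted later in this article; note the study reports edit-distance-based word error rate, which can differ from the simple position-wise matching used here.

```python
# Hypothetical sketch of scoring decoded sentences against targets.
# Word accuracy here is simple position-wise matching; edit-distance-based
# word error rate (as used in the study) differs for misaligned outputs.
targets = ["how do you like my music", "hello how are you"]
decoded = ["how do you like bad bring", "hungry how am you"]

word_hits = word_total = sentence_hits = 0
for target, guess in zip(targets, decoded):
    t_words, g_words = target.split(), guess.split()
    word_hits += sum(t == g for t, g in zip(t_words, g_words))
    word_total += len(t_words)
    sentence_hits += (target == guess)

print(f"word accuracy:     {word_hits / word_total:.0%}")        # 60%
print(f"sentences correct: {sentence_hits / len(targets):.0%}")  # 0%
```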

“To prove that you can decipher speech from the electrical signals in the speech motor area of your brain is groundbreaking,” said Dr. Fried-Oken, whose own research involves trying to detect signals using electrodes in a cap placed on the head, not implanted.

After a recent session, observed by The New York Times, Pancho, wearing a black fedora over a white knit hat to cover the port, smiled and tilted his head slightly with the limited movement he has. In bursts of gravelly sound, he demonstrated a sentence composed of words in the study: “No, I am not thirsty.”

In interviews over several weeks for this article, he communicated through email exchanges using a head-controlled mouse to painstakingly type key-by-key, the method he usually relies on.

The brain implant’s recognition of his spoken words is “a life-changing experience,” he said.

“I just want to, I don’t know, get something good, because I always was told by doctors that I had 0 chance to get better,” Pancho typed during a video chat from the Northern California nursing home where he lives.

Later, he emailed: “Not to be able to communicate with anyone, to have a normal conversation and express yourself in any way, it’s devastating, very hard to live with.”

During research sessions with the electrodes, he wrote, “It’s very much like getting a second chance to talk again.”

Pancho was a healthy field worker in California’s vineyards until a car crash after a soccer game one summer Sunday, he said. After surgery for serious damage to his stomach, he was discharged from the hospital, walking, talking and thinking he was on the road to recovery.

But the next morning, he was “throwing up and unable to hold myself up,” he wrote. Doctors said he experienced a brainstem stroke, apparently caused by a post-surgery blood clot.

A week later, he woke up from a coma in a small, dark room. “I tried to move, but I couldn’t lift a finger, and I tried to talk, but I couldn’t spit out a word,” he wrote. “So, I started to cry, but as I couldn’t make any sound, all I made were some ugly gestures.”

It was terrifying. “I wished I didn’t ever come back from the coma I was in,” he wrote.

The new approach, called a speech neuroprosthesis, is part of a surge of innovation aimed at helping tens of thousands of people who lack the ability to talk, but whose brains contain neural pathways for speech, said Dr. Leigh Hochberg, a neurologist with Massachusetts General Hospital, Brown University and the Department of Veterans Affairs, who was not involved in the study but co-wrote an editorial about it.

That could include people with brain injuries or conditions like amyotrophic lateral sclerosis (A.L.S.) or cerebral palsy, in which patients have insufficient muscle control to speak.

“The urgency can’t be overstated,” said Dr. Hochberg, who directs a project called BrainGate that implants tinier electrodes to read signals from individual neurons; it recently decoded a paralyzed patient’s attempted handwriting motions.

“It’s now only a matter of years,” he said, “before there will be a clinically useful system that will allow for the restoration of communication.”

For years, Pancho communicated by spelling out words on a computer using a pointer attached to a baseball cap, an arduous method that allowed him to type about five correct words per minute.

“I had to bend/lean my head forward, down, and poke a key letter one-by-one to write,” he emailed.

Last year, the researchers gave him another device involving a head-controlled mouse, but it is still not nearly as fast as the brain electrodes in the research sessions.

Through the electrodes, Pancho communicated 15 to 18 words per minute. That was the maximum rate the study allowed because the computer waited between prompts. Dr. Chang says faster decoding is possible, although it’s unclear if it will approach the pace of typical conversational speech: about 150 words per minute. Speed is a key reason the project focuses on speaking, tapping directly into the brain’s word production system rather than hand movements involved in typing or writing.

“It’s the most natural way for people to communicate,” he said.

Pancho’s buoyant personality has helped the researchers navigate challenges, but also occasionally makes speech recognition uneven.

“I sometimes can’t control my emotions and laugh a lot and don’t do too good with the experiment,” he emailed.

Dr. Chang recalled times when, after the algorithm successfully identified a sentence, “you could see him visibly shaking and it looked like he was kind of giggling.” When that happened or when, during the repetitive tasks, he’d yawn or get distracted, “it didn’t work very well because he wasn’t really focused on getting those words. So, we’ve got some things to work on because we obviously want it to work all the time.”

The algorithm sometimes confused words with similar phonetic sounds, identifying “going” as “bring,” “do” as “you,” and words beginning with “F” — “faith,” “family,” “feel” — as a V-word, “very.”

Longer sentences needed more help from the language-prediction system. Without it, “How do you like my music?” was decoded as “How do you like bad bring?” and “Hello how are you?” became “Hungry how am you?”

But in sessions that the pandemic interrupted for months, accuracy improved, Dr. Chang said, both because the algorithm learned from Pancho’s efforts and because “there’s definitely things that are changing in his brain,” helping it “light up and show us the signals that we needed to get these words out.”

Before his stroke, Pancho had attended school only up to sixth grade in his native Mexico. With remarkable determination, he has since earned a high school diploma, taken college classes, received a web developer certificate and begun studying French.

“I think the car wreck got me to be a better person, and smarter too,” he emailed.

With his restricted wrist movement, Pancho can maneuver an electric wheelchair, pressing the joystick with a stuffed sock tied around his hand with rubber bands. At stores, he’ll hover near something until cashiers decipher what he wants, like a cup of coffee.

“They place it in my wheelchair, and I bring it back to my home so I can get help drinking it,” he said. “The people here at the facility find themselves surprised, they always asked me, ‘HOW DID YOU BUY THAT, AND HOW DID YOU TELL THEM WHAT YOU WANTED!?’”

He also works with other researchers using the electrodes to help him manipulate a robotic arm.

His twice-weekly speech sessions can be difficult and exhausting, but he is always “looking forward to wake up and get out of bed every day, and wait for my U.C.S.F. people to arrive.”

The speech study is the culmination of over a decade of research, in which Dr. Chang’s team mapped brain activity for all vowel and consonant sounds and tapped into the brains of healthy people to produce computerized speech.

Researchers emphasize that the electrodes are not reading Pancho’s mind, but detecting brain signals corresponding to each word he tries to say.

“He is thinking the word,” Dr. Fried-Oken said. “It’s not random thoughts that the computer is picking up.”

Dr. Chang said “in the future, we might be able to do what people are thinking,” which raises “some really important questions about the ethics of this kind of technology.” But this, he said, “is really just about restoring the individual’s voice.”

In newer tasks, Pancho mimes words silently and spells out less common words using the military alphabet: “delta” for “d,” “foxtrot” for “f.”

“He is truly a pioneer,” Dr. Moses said.

The team also wants to engineer implants with more sensitivity and make the system wireless for complete implantation, to avoid infection, said Dr. Chang.

As more patients participate, scientists might find individual brain variations, Dr. Fried-Oken said, adding that if patients are tired or ill, the intensity or timing of their brain signals might change.

“I just wanted to somehow be able to do something for myself, even a tiny bit,” Pancho said, “but now I know, I’m not doing it just for myself.”

“Neuroprosthesis” Restores Words to Man with Paralysis

UCSF News Services 01 July, 2021 - 02:14pm

Technology Could Lead to More Natural Communication for People Who Have Suffered Speech Loss

Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.

The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own. The study appears July 15 in the New England Journal of Medicine.

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain's natural speech machinery.”

Each year, thousands of people lose the ability to speak due to stroke, accident, or disease. With further development, the approach described in this study could one day enable these people to fully communicate.

Previously, work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches to type out letters one-by-one in text. Chang’s study differs from these efforts in a critical way: his team is translating signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing. Chang said this approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication.

“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”

Previously, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the cortical activity patterns associated with vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in the Chang lab and one of the lead authors of the new study, developed new methods for real-time decoding of those patterns and statistical language models to improve accuracy.

But their success in decoding speech in participants who were able to speak didn’t guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex brain activity patterns and intended speech,” said Moses. “That poses a major challenge when the participant can’t speak.”

In addition, the team didn’t know whether brain signals controlling the vocal tract would still be intact for people who haven’t been able to move their vocal muscles for many years. “The best way to find out whether this could work was to try it,” said Moses.

To investigate the potential of this technology in patients with paralysis, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first participant in the trial is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.

The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary – which includes words such as “water,” “family,” and “good” – was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.

For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, his team recorded 22 hours of neural activity in this brain region over 48 sessions spanning several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech cortex.

To translate the patterns of recorded neural activity into specific intended words, the other two lead authors of the study, Sean Metzger, MS, and Jessie Liu, BS, both bioengineering doctoral students in the Chang Lab, used custom neural network models, which are forms of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.
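The paper describes these models in detail; as a rough, hypothetical sketch of the general shape such networks can take (the layer sizes and the shared encoder with two output heads are illustrative assumptions, not the published architecture), a small PyTorch module might map a window of multichannel cortical activity to both a speech-attempt score and a distribution over the 50 vocabulary words:

```python
# Hypothetical sketch (PyTorch): one network body with two heads, one to
# detect a speech attempt and one to classify which of 50 words was attempted.
# Sizes and the shared-body design are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS, N_TIMESTEPS, N_WORDS = 128, 200, 50

class SpeechDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(input_size=N_CHANNELS, hidden_size=128, batch_first=True)
        self.detect_head = nn.Linear(128, 1)       # speech attempt vs. rest
        self.word_head = nn.Linear(128, N_WORDS)   # which vocabulary word

    def forward(self, x: torch.Tensor):
        # x: (batch, N_TIMESTEPS, N_CHANNELS) window of neural features
        _, h = self.encoder(x)
        h = h.squeeze(0)                            # (batch, 128)
        attempt_logit = self.detect_head(h)         # (batch, 1)
        word_logits = self.word_head(h)             # (batch, N_WORDS)
        return attempt_logit, word_logits

model = SpeechDecoder()
window = torch.randn(4, N_TIMESTEPS, N_CHANNELS)    # toy batch of 4 windows
attempt_logit, word_logits = model(window)
print(attempt_logit.shape, word_logits.shape)       # torch.Size([4, 1]) torch.Size([4, 50])
```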

To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity, one by one, on a screen.

Then the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I am very good,” and “No, I am not thirsty.”

The team found that the system was able to decode words from brain activity at a rate of up to 18 words per minute with up to 93 percent accuracy (75 percent median). Contributing to the success was a language model Moses applied that implemented an “auto-correct” function, similar to what is used by consumer texting and speech recognition software.
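A hypothetical sketch of how such an "auto-correct" step can work: take the classifier's per-position word probabilities, combine them with a word-sequence model, and keep the sentence with the best overall score. The vocabulary, probabilities, toy bigram model, and brute-force search below are all illustrative assumptions, not the study's actual models.

```python
# Hypothetical sketch of language-model "auto-correct": rescore classifier
# outputs with word-sequence probabilities and keep the best overall sentence.
# All numbers below are made up for illustration.
from itertools import product
import math

# Classifier output: for each position in the attempted sentence, a
# probability over candidate words (only the top few kept here).
classifier_probs = [
    {"i": 0.7, "am": 0.3},
    {"am": 0.6, "not": 0.4},
    {"very": 0.5, "not": 0.5},
    {"good": 0.6, "thirsty": 0.4},
]

# Toy bigram language model: probability of a word given the previous word.
bigram = {
    ("<s>", "i"): 0.9, ("i", "am"): 0.9, ("am", "very"): 0.5,
    ("am", "not"): 0.4, ("very", "good"): 0.9, ("not", "thirsty"): 0.9,
}

def score(sentence: list[str]) -> float:
    total, prev = 0.0, "<s>"
    for pos, word in enumerate(sentence):
        emission = classifier_probs[pos].get(word, 1e-6)
        transition = bigram.get((prev, word), 1e-6)
        total += math.log(emission) + math.log(transition)
        prev = word
    return total

candidates = [list(c) for c in product(*[p.keys() for p in classifier_probs])]
best = max(candidates, key=score)
print(" ".join(best))   # "i am very good"
```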

Moses characterized the early trial results as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”

Looking forward, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as improve the rate of speech.

Both said that while the study focused on a single participant and a limited vocabulary, those limitations don’t diminish the accomplishment. “This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”

Authors: The full author list is David A. Moses, PhD*; Sean L. Metzger, MS*; Jessie R. Liu, BS*; Gopala K. Anumanchipalli, PhD; Joseph G. Makin, PhD; Pengfei F. Sun, PhD; Josh Chartier, PhD; Maximilian E. Dougherty, BA; Patricia M. Liu, MA; Gary M. Abrams, MD; Adelyn Tu-Chan, DO; Karunesh Ganguly, MD, PhD; and Edward F. Chang, MD, all of UCSF. Funding sources included the National Institutes of Health (U01 NS098971-01), philanthropy, and a sponsored research agreement with Facebook Reality Labs (FRL), which was completed in early 2021. * Denotes equal contribution.

Funding: Supported by a research contract under Facebook’s Sponsored Academic Research Agreement, the National Institutes of Health (grant NIH U01 DC018671-01A1), Joan and Sandy Weill and the Weill Family Foundation, the Bill and Susan Oberndorf Foundation, the William K. Bowes, Jr. Foundation, and the Shurl and Kay Curci Foundation. UCSF researchers conducted all clinical trial design, execution, data analysis and reporting. Research participant data were collected solely by UCSF, are held confidentially, and are not shared with third parties. FRL provided high-level feedback and machine learning advice.

About UCSF: The University of California, San Francisco (UCSF) is exclusively focused on the health sciences and is dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care. UCSF Health, which serves as UCSF’s primary academic medical center, includes top-ranked specialty hospitals and other clinical programs, and has affiliations throughout the Bay Area. UCSF School of Medicine also has a regional campus in Fresno. Learn more at ucsf.edu or see our Fact Sheet.
