
BCI milestone: New research from UCSF with support from Facebook shows the potential of brain-computer interfaces for restoring speech communication

July 14, 2021

TL;DR: Today, we’re excited to celebrate new milestone results published by our UCSF research collaborators in The New England Journal of Medicine, demonstrating the first time someone with severe speech loss has been able to type out what they wanted to say almost instantly, simply by attempting to speak. In other words, UCSF has restored a person’s ability to communicate by decoding brain signals sent from the motor cortex to the vocal tract. This study marks an important milestone for the field of neuroscience, and it concludes Facebook’s years-long collaboration with UCSF’s Chang Lab.

These groundbreaking results show what’s possible — both in clinical settings like Chang Lab, and potentially for non-invasive consumer applications such as the optical BCI we’ve been exploring over the past four years.

To continue fostering optical BCI explorations across the field, we want to take this opportunity to open source our BCI software and share our head-mounted hardware prototypes with key researchers and other peers to help advance this important work. In the meantime, Facebook Reality Labs will focus on applying BCI concepts to our electromyography (EMG) research to dramatically accelerate wrist-based neural interfaces for intuitive AR/VR input.


The room was full of UCSF scientists and equipment — monitors and cables everywhere. But his eyes were fixed on a single screen displaying two simple words: “Good morning!”

Though unable to speak, he attempted to respond, and the word “Hello” appeared.

The screen went black, replaced by another conversational prompt: “How are you today?”

This time, he attempted to say, “I am very good,” and once again, the words appeared on the screen.

A simple conversation, yet it amounted to a significant milestone in the field of neuroscience. More importantly, it was the first time in over 16 years that he’d been able to communicate without a cumbersome head-mounted apparatus to type out what he wanted to say, after a series of strokes left him with near-full paralysis of his limbs and vocal tract. Now he simply had to attempt speaking, and a computer could share those words in real time — no typing required.

Established in 2017, Facebook Reality Labs’ (FRL) Brain-Computer Interface (BCI) project began with an ambitious long-term goal: to develop a silent, non-invasive speech interface that would let people type just by imagining the words they want to say.

The team has made great progress on this mission over the course of four years, investing deeply in the exploration of head-mounted optical BCI as a potential input method for the next computing platform — in other words, a way to communicate in AR/VR with the speed of voice and the discreetness of typing. In addition to our internal efforts, we’ve supported a team of researchers at the University of California, San Francisco (UCSF) who are developing an implantable communications prosthesis for people who have lost the ability to speak. Facebook’s goal in funding this research has been to determine whether a silent interface capable of typing 100 words per minute is possible, and if so, what neural signals are required — a goal that is well aligned with UCSF’s work.

UCSF published the first results two years ago in Nature Communications, showing for the first time that a small set of spoken words and phrases can be decoded from brain activity in real time. Since then, UCSF also demonstrated full-sentence decoding from brain to text using machine learning.

Today we’re excited to celebrate the next chapter of this work and a new milestone that the UCSF team has achieved and published in The New England Journal of Medicine: the first time someone with severe speech loss has been able to type out what they wanted to say almost instantly, simply by attempting speech. In other words, UCSF has restored a person’s ability to communicate by decoding brain signals sent from the motor cortex to the muscles that control the vocal tract — a milestone in neuroscience. These results mark the culmination of a decade of Dr. Edward Chang’s research at UCSF.

“My research team at UCSF has been working on this [speech neuroprosthesis] goal for over a decade. We’ve learned so much about how speech is processed in the brain during this time, but it’s only in the last five years that advances in machine learning have allowed us to get to this key milestone,” says Edward Chang, Chair of Neurosurgery, UCSF. “That combined with Facebook’s machine learning advice and funding really accelerated our progress.”

A new milestone for the field of BCI

This final phase of the project, which we call Project Steno, kicked off in 2019 in the Chang Lab at UCSF and involved a research participant who had lost the ability to speak normally following a series of strokes. The participant underwent elective surgery to place electrodes on the surface of his brain. During the course of the study, the participant worked directly with the UCSF team to collect dozens of hours of attempted speech with a BCI. This data was in turn used by UCSF to create machine learning models for speech detection and word classification. Through this study, the participant was able to truly communicate in real time despite the strokes that paralyzed him over 16 years ago.
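
For readers curious what such a pipeline can look like in practice, here is a minimal, hypothetical sketch of a two-stage decoder of the kind described above: one model detects windows of attempted speech from neural features, and a second classifies which word was attempted. The feature dimensions, vocabulary size, and models below are illustrative assumptions, not the actual UCSF system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical two-stage sketch (speech detection, then word classification);
# the data here is random and stands in for real neural features.
rng = np.random.default_rng(1)

# Fake training data: windows of neural features (e.g., band power across
# electrodes), with labels for "is the participant attempting speech?" and,
# for speech windows, which word from a small vocabulary was attempted.
n_windows, n_features = 500, 128
features = rng.normal(size=(n_windows, n_features))
is_speech = rng.integers(0, 2, size=n_windows)   # 0 = silence, 1 = attempted speech
words = rng.integers(0, 50, size=n_windows)      # assumed 50-word vocabulary

detector = LogisticRegression(max_iter=1000).fit(features, is_speech)
word_classifier = LogisticRegression(max_iter=1000).fit(
    features[is_speech == 1], words[is_speech == 1])

def decode_window(window):
    """Stage 1: detect attempted speech. Stage 2: classify the attempted word."""
    if detector.predict(window[None, :])[0] == 0:
        return None  # no attempted speech in this window
    probs = word_classifier.predict_proba(window[None, :])[0]
    return word_classifier.classes_[int(probs.argmax())]

print(decode_window(features[0]))
```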

Previous studies from UCSF, such as the Nature Communications study, have successfully decoded a small set of full, spoken words and phrases from brain activity in real time, and other Chang Lab studies showed that their system is able to recognize a significantly larger vocabulary with extremely low word error rates. However, these results were all achieved while people were actually speaking out loud, and we didn’t yet know whether it would be possible to decode words in real time when people simply intended to speak. The results from the study published today bring this all together, demonstrating the successful decoding of attempted conversational speech in real time. We’ve learned a lot from Project Steno, particularly as it applies to how algorithms can use language models to improve accuracy for brain-to-text communication.

“Project Steno is the first demonstration of attempted speech combined with language models to drive a BCI,” notes FRL Neural Engineering Research Manager Emily Mugler. “The result is a beautiful example of how we can leverage the statistical properties inherent in language — how one word can lead to another in the construction of a sentence — to dramatically increase the accuracy of a BCI.”

Just like your phone can use auto-correct and auto-complete to improve the accuracy of what you type in a text, we can use the same techniques with a BCI to improve the accuracy of the algorithm’s prediction of what someone wants to communicate.
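
To make that concrete, here is a minimal sketch of language-model rescoring, assuming a toy word classifier and a hypothetical bigram language model (none of the probabilities or vocabulary come from the actual system). A beam search combines the classifier’s per-word probabilities with the language model’s estimate of how likely each word is to follow the previous one.

```python
import math

# Classifier posteriors for each attempted word (one dict per time step);
# values are made up for illustration.
classifier_probs = [
    {"I": 0.6, "am": 0.3, "good": 0.1},
    {"am": 0.5, "I": 0.3, "very": 0.2},
    {"good": 0.4, "am": 0.4, "very": 0.2},
]

# Hypothetical bigram language model: P(next word | previous word).
bigram_lm = {
    ("<s>", "I"): 0.5, ("<s>", "am"): 0.1, ("<s>", "good"): 0.05,
    ("I", "am"): 0.6, ("am", "very"): 0.4, ("am", "good"): 0.3,
    ("very", "good"): 0.7, ("good", "very"): 0.05,
}

def lm_prob(prev, word, floor=1e-3):
    # Unseen word pairs get a small floor probability instead of zero.
    return bigram_lm.get((prev, word), floor)

def decode(steps, lm_weight=1.0, beam_size=4):
    """Beam search that combines classifier and language-model log scores."""
    beams = [(["<s>"], 0.0)]  # (word sequence, cumulative log score)
    for step in steps:
        candidates = []
        for words, score in beams:
            for word, p_cls in step.items():
                new_score = (score + math.log(p_cls)
                             + lm_weight * math.log(lm_prob(words[-1], word)))
                candidates.append((words + [word], new_score))
        # Keep only the best few hypotheses at each step.
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_size]
    best_words, _ = beams[0]
    return " ".join(best_words[1:])  # drop the <s> start token

# The classifier ties "good" and "am" at the last step; the language model
# breaks the tie, yielding "I am good".
print(decode(classifier_probs))
```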

Facebook’s contribution to Project Steno

Facebook provided high-level feedback, machine learning advice, and funding throughout Project Steno, but UCSF designed and oversaw the study, and worked directly with the participant. Facebook was not involved in data collection with the research participant in any way; all the data remains onsite at UCSF and under the control of UCSF at all times. To be clear, Facebook has no interest in developing products that require implanted electrodes. Facebook’s funding enabled UCSF to dramatically increase their server capacity, allowing them to test more models simultaneously and achieve more accurate results.

Finally, Mugler led the FRL Research BCI team’s technical feedback, advising on the methods used to help the participant learn how to use the BCI. How do you train someone to speak with only their brain? This is a non-trivial feat, given that nothing like it had ever been done before. Mugler, who joined Facebook in the early days of our BCI program in 2017, has spent much of her career focused on restorative communication BCIs for patients who have lost the ability to speak due to conditions such as ALS.

“To see this work come to fruition has been a dream for the field for so long, and for me personally,” says Mugler. “As a BCI scientist, a core pursuit for me throughout my career has been to demonstrate that the neural signals that drive speech articulation can be decoded for a more efficient BCI that can be used for communication. These results have unlocked a lot of possibilities for assistive technologies that could significantly improve quality of life for those with speech impairment.”

The results published today by UCSF carry crucial implications for the future of assistive technology, as this has the potential to help unlock conversational communication for patients with similar injuries. We’re excited to see Project Steno’s impact manifest across the field of neuroscience long into the future.

Exploring high-bandwidth interaction for AR/VR

Reaching this milestone is a natural point to reevaluate FRL’s objectives for the BCI program as a whole — as well as a point at which this work is ready to share with the broader neuroscience community. We’ve always known that a silent speech BCI would be a long-term research endeavor, and we’ve made substantial progress toward this goal, developing a wearable prototype that uses near-infrared light to measure blood oxygenation in the brain from outside the body, indirectly measuring brain activity in a safe, non-invasive way. In the process, we’ve also explored novel methods for sensing tissue movement that could potentially redefine the boundaries of what’s possible to sense from the brain non-invasively.
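
As background on how this kind of optical measurement is typically interpreted, the sketch below applies the modified Beer-Lambert law, a standard way to convert changes in detected near-infrared light at two wavelengths into changes in oxy- and deoxy-hemoglobin concentration. All coefficients and distances below are placeholder values, not parameters of FRL’s prototype.

```python
import numpy as np

# Molar extinction coefficients [1/(mM*cm)] for oxy- and deoxy-hemoglobin at
# two hypothetical wavelengths (rows: wavelengths, columns: [HbO, HbR]).
# These are placeholder numbers for illustration only.
extinction = np.array([
    [1.5, 3.8],   # ~760 nm (placeholder)
    [2.5, 1.8],   # ~850 nm (placeholder)
])
source_detector_distance_cm = 3.0       # assumed optode spacing
differential_pathlength_factor = 6.0    # assumed scattering correction

def hemoglobin_changes(delta_optical_density):
    """Solve the 2x2 modified Beer-Lambert system for [dHbO, dHbR] in mM."""
    path = source_detector_distance_cm * differential_pathlength_factor
    return np.linalg.solve(extinction * path, delta_optical_density)

# Example: measured change in optical density at the two wavelengths.
d_od = np.array([0.012, 0.018])
d_hbo, d_hbr = hemoglobin_changes(d_od)
print(f"ΔHbO ≈ {d_hbo:.4f} mM, ΔHbR ≈ {d_hbr:.4f} mM")
```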

While we still believe in the long-term potential of head-mounted optical BCI technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market: wrist-based devices powered by electromyography. Here’s how EMG works: When you decide to move your hands and fingers, your brain sends signals down your arm via motor neurons, telling them to move in specific ways in order to perform actions like tapping or swiping. EMG can pick up and decode those signals — the hand and finger movements you’ve already decided to make — at the wrist and translate them into digital commands for your device. In the near term, these signals will let you communicate with your device with a degree of control that’s highly reliable, subtle, personalizable, and adaptable to many situations. As this area of research evolves, EMG-based neural interfaces have the potential to dramatically expand the bandwidth with which we can communicate with our devices, opening up the possibility of things like high-speed typing.
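
As a rough illustration of the idea (and not FRL’s actual pipeline), the sketch below band-pass filters a few channels of surface EMG, extracts a smoothed amplitude envelope per channel, and maps the most active channel to a toy command. The sampling rate, filter band, and command mapping are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sampling rate in Hz

def bandpass(signal, low=20.0, high=450.0, fs=FS, order=4):
    """Keep the frequency band where most surface-EMG power lives."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal, axis=-1)

def envelope(signal, window=50):
    """Rectify and smooth to get a per-channel activity envelope."""
    rectified = np.abs(signal)
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"),
                               -1, rectified)

def classify(envelopes, threshold=0.1):
    """Toy rule: the most active channel decides the command."""
    mean_activity = envelopes.mean(axis=-1)
    if mean_activity.max() < threshold:
        return "rest"
    return ["tap", "swipe_left", "swipe_right", "pinch"][int(mean_activity.argmax())]

# Example with synthetic data: 4 EMG channels, 1 second of samples.
rng = np.random.default_rng(0)
raw = rng.normal(scale=0.02, size=(4, FS))
raw[0] += rng.normal(scale=0.5, size=FS)   # pretend channel 0 fired for a "tap"
print(classify(envelope(bandpass(raw))))   # -> "tap"
```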

“We’re developing more natural, intuitive ways to interact with always-available AR glasses so that we don’t have to choose between interacting with our device and the world around us,” says FRL Research Director Sean Keller. “We’re still in the early stages of unlocking the potential of wrist-based electromyography (EMG), but we believe it will be the core input for AR glasses, and applying what we’ve learned about BCI will help us get there faster.”

Speech was the focus of our BCI research because it’s inherently high bandwidth — you can talk faster than you can type. But speech isn’t the only way to apply this research — we can leverage the BCI team’s foundational work to enable intuitive wrist-based controls, too. Given this, we’re no longer pursuing a research path to develop a silent, non-invasive speech interface that would allow people to type just by imagining the words they want to say. Instead of a speech-based neural interface, we’re pursuing new forms of intuitive control with EMG.

“As a team, we’ve realized that the biofeedback and real-time decoding algorithms we use for optical BCI research can accelerate what we can do with wrist-based EMG,” Mugler explains. “We really want you to be able to intuitively control our next-generation wristbands within the first few minutes of putting them on. In order to use a subtle control scheme with confidence, you need your device to give you feedback, to confirm it understands your goal. To add another layer of accuracy, we can use real-time decoding algorithms that leverage the statistical properties of language. Applying these BCI research concepts to EMG can help wrist-based control feel intuitive and useful right from the start.”
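
One way to picture the feedback piece of that idea is a confidence-gated control loop: the device acts immediately when the decoder is confident, and otherwise surfaces its best guess so the user can confirm. The threshold and function names below are hypothetical, not details of FRL’s system.

```python
CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff for acting without confirmation

def step(decoded_command, confidence, show_feedback, execute):
    """One tick of a confidence-gated control loop."""
    if confidence >= CONFIDENCE_THRESHOLD:
        execute(decoded_command)                   # act immediately
        show_feedback(f"done: {decoded_command}")  # confirm what happened
    else:
        # Low confidence: show the guess as feedback instead of acting,
        # so the user can repeat or adjust the gesture.
        show_feedback(f"guess: {decoded_command} ({confidence:.0%}), repeat to confirm")

# Example usage with print standing in for the device's UI and actuator.
step("swipe_left", 0.92, show_feedback=print, execute=lambda c: print(f"executing {c}"))
step("pinch", 0.55, show_feedback=print, execute=lambda c: print(f"executing {c}"))
```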

The road ahead

We want to continue supporting the exploration of head-mounted optical BCI technologies that our external collaborators are developing, even as we focus on wrist-based input devices for AR/VR internally at FRL. To this end, the team has open sourced its BCI software and will share its head-mounted hardware prototypes with key researchers and other peers to help advance new use cases, such as assistive technologies. We’ll share more details about our optical BCI open-source collaborations as this project evolves.

As a research organization, we believe that sharing aspects of our work through open source can be a good way to move the whole research community forward, benefiting everyone. That’s why we frequently publish our research, share code publicly, and invest in academic studies such as our collaboration with UCSF. The benefits of external collaboration are especially pronounced for BCI research, which requires deep cross-disciplinary investigation at the intersection of machine learning and neuroscience. As part of our collaboration with Chang Lab, for example, Facebook AI Research (FAIR) helped UCSF use Facebook’s open source code “Wav2letter” to improve their language models in real-time demonstrations.

“It’s become evident through conversations with our academic collaborators that sharing this work with our peers in the public sphere will yield more impactful results for the neuroscience world at large,” says Mugler. “In the spirit of collaboration, to advance the BCI field, we want to give other BCI researchers access to our tools. The work we’ve done in our lab is state of the art, and we know we can go further together than we can alone.”

We also aim to foster open public dialogue on neuroethics, an interdisciplinary field that examines how emerging neuroscience impacts society and the individual. Our goal when we announced our Responsible Innovation Principles in 2020 was to guide our work as we build the next computing platform in a responsible, privacy-centric way. But we know we can't do it alone. That’s why we’re committed to working with third parties and enlisting the help of experts and academics in ethics, privacy, safety, and security to collaborate on building future technology platforms, including neural interfaces.

We’re also deepening our investment in neuroethics both internally as a team and externally with the research community. This includes a new Request for Proposals (RFP) — Engineering Approaches to Responsible Neural Interface Design — focused on uncovering new methods to ensure privacy and inclusion on future technology platforms. Areas of interest include optical BCIs, with a focus on functional near-infrared spectroscopy (fNIRS) and EMG. In addition to this RFP, we regularly participate in dialogue with organizations like the NeuroRights Initiative, which focuses on fostering ethical innovation in the fields of neurotechnology and AI.

Our dedication to developing the interface of the future for AR/VR has only grown over the years, and we’ll continue sharing more about these challenges as the work progresses. Later this year, we’ll share more about how haptic wearables will add another dimension to the next computing platform, enhancing our ability to establish presence and learn new interaction paradigms.
