Scientists have decoded the silent monologue inside people’s heads, in research that could help people who are unable to speak audibly.
Researchers from Stanford University in the USA translated what participants were saying inside their heads using brain-computer interface (BCI) technology.
BCIs have recently emerged as a tool to help people with disabilities communicate more easily. For example, current BCIs are implanted in regions of the brain that control movement and decode neural signals that can then be translated into prosthetic hand movements.
Last year, a study published in the New England Journal of Medicine demonstrated that BCIs could decode the words a 45-year-old man with amyotrophic lateral sclerosis (ALS) was attempting to speak.
ALS, also known as motor neurone disease (MND), is a neurodegenerative disease that progressively damages nerve cells and weakens muscle control. In many patients, ALS affects the muscles involved in speech production, making it difficult to talk audibly.
The Stanford team was inspired to see whether BCIs could decode not just attempted audible speech but inner speech too.
“For people with severe speech and motor impairments, BCIs capable of decoding inner speech could help them communicate much more easily and more naturally,” says Erin Kunz, the lead author of the study.
“This is the first time we’ve managed to understand what brain activity looks like when you just think about speaking.”
The team used microelectrodes implanted in the motor cortex to record the neural activity of four participants with severe paralysis from a brainstem stroke or ALS. The participants were then asked to either attempt to speak or imagine saying a set of words.
The researchers found that attempted and imagined speech evoked similar patterns of neural activity in the motor cortex, the region of the brain that controls the movements involved in speaking. While inner speech produced weaker activation, both forms of speech engaged overlapping regions of the brain.
“If you just have to think about speech instead of actually trying to speak, it’s potentially easier and faster for people,” says Benyamin Meschede-Krasa, co-first author of the study.
The team then used these recordings to train an AI model to interpret the imagined words. The BCI decoded imagined sentences drawn from a vocabulary of up to 125,000 words with an accuracy as high as 74%. The results are published in Cell.
To mitigate any privacy concerns, the team also demonstrated a password-controlled mechanism that would prevent the BCI from detecting inner speech unless the participant unlocked it with a chosen keyword.
A participant uses the inner-speech neuroprosthesis: the text above is the cued sentence, and the text below is decoded in real time as she imagines speaking the sentence. Credit: Emory BrainGate Team.
In this version of the study, the system began decoding inner speech only when the participant thought of the phrase “chitty chitty bang bang”. The BCI detected the password with 98.75% accuracy.
The team writes that current BCI systems cannot decode free-form inner speech without making substantial errors. However, the researchers say that more advanced devices and sensors, alongside better algorithms, could achieve free-form inner-speech decoding with a lower word error rate.
“This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech,” says senior author Frank Willett.
“The future of BCIs is bright.”