In the new study, the Stanford team wondered whether neurons in the motor cortex also contained useful information about speech movements. That is, could they detect how “Subject T12” tried to move her mouth, tongue, and vocal cords as she attempted to speak?
These are small, subtle movements, and according to Sabes, a key discovery was that just a few neurons contained enough information for a computer program to predict, with high accuracy, the words the patient was trying to say. Shenoy’s team transmits this information to a computer screen, where the patient’s words appear.
The new results build on previous work by UCSF’s Edward Chang, who has described speaking as the most complex movement people make. We push out air, add vibration to make it audible, and shape it into words with our mouth, lips, and tongue. To make the “f” sound, you need to place your upper teeth on your lower lip and squeeze out air, just one of dozens of mouth movements required to speak.
The way forward
Chang has previously used electrodes placed on the surface of the brain to allow volunteers to speak through a computer, but in their preprint, the Stanford researchers say their system is more accurate and three to four times faster.
“Our results show a viable pathway to restore communication at the speed of a conversation in paralyzed patients,” wrote the researchers, including Shenoy and neurosurgeon Jaimie Henderson.
David Moses, who worked with Chang’s group at UCSF, said the current work reached “impressive new performance benchmarks.” However, even if records keep being broken, he said, “demonstrating robust and reliable performance over multi-year timescales will become increasingly important.” Any commercial brain implant could struggle to get past regulators, especially if it degrades or records less accurately over time.
The path forward may include more sophisticated implants and closer integration with artificial intelligence.