Being able to produce accurate speech in real time could change the lives of those with severe speaking difficulties
Spotted: Neuroscientists at the University of California San Francisco have created a speech synthesiser that is controlled by brain activity. This promising development could mean that a brain-controlled virtual vocal tract may not be far away.
The researchers realised that distinct regions of the brain contain the instructions for the mouth and throat movements used in speech. Using this insight, they mapped brain activity in epilepsy patients who already had electrodes implanted in their brains. As the patients read sentences out loud, the researchers were able to reverse-engineer the exact timing of the mouth, lip, tongue and throat movements needed to reproduce the sounds.
A virtual vocal tract was then created for each participant. This was made up of a decoder that used a machine-learning algorithm to transform brain activity into movements, and a synthesiser that converted the movements into speech. The system was even able to produce complete sentences that sounded like the person who was “speaking.”
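The two-stage design described above can be sketched in code. This is a minimal illustration only, not the UCSF system: the array dimensions, the linear maps standing in for the trained decoder and synthesiser models, and all function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented dimensions for the sketch: neural recording channels,
# articulatory (vocal-tract) features, and acoustic features per frame.
N_NEURAL, N_ARTIC, N_AUDIO = 256, 33, 80

# Stage 1: decoder mapping brain activity -> vocal-tract movements.
# The real system used a trained machine-learning model; a random
# linear map keeps this sketch self-contained.
W_decode = rng.standard_normal((N_NEURAL, N_ARTIC)) * 0.01

# Stage 2: synthesiser mapping movements -> acoustic features.
W_synth = rng.standard_normal((N_ARTIC, N_AUDIO)) * 0.01

def decode_movements(neural_frames):
    """Map each frame of neural activity to articulatory kinematics."""
    return neural_frames @ W_decode

def synthesise_audio(articulatory_frames):
    """Map articulatory kinematics to acoustic features for playback."""
    return articulatory_frames @ W_synth

# One second of simulated activity at 100 frames per second.
neural = rng.standard_normal((100, N_NEURAL))
movements = decode_movements(neural)        # shape (100, 33)
audio_features = synthesise_audio(movements)  # shape (100, 80)
```

The key design point the article describes is this intermediate articulatory representation: rather than decoding sound directly from neural activity, the system first recovers the movements of the mouth and throat, then renders those movements as audio.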
“This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss,” UCSF professor Edward Chang told The Engineer.