
Brain-Machine Interface Could Give Voice to the Voiceless

A speech prosthetic could give voice to people who can't speak by converting their brain activity into words.

By Gordy Slack
Mar 17, 2014 4:30 PM (updated Nov 14, 2019 8:18 PM)
Edward Chang studies exactly what our brains do when we're trying to speak. | Eric Millette

Mind reading usually conjures images of scam artists with crystal balls, but a group of San Francisco neuroscientists and engineers is developing a device that can do it sans crystal ball. Their research aims to figure out what people with paralysis or brain injury are trying to say by studying how they attempt to move their mouths. By decoding patterns in the part of the brain that orchestrates the movement of the lips, tongue, jaw and larynx, the mechanical mind reader — a speech prosthetic — will give these people a voice through a computer-driven speech synthesizer.

In the short term, the device would help patients whose brains can’t drive the vocal machinery in their mouths. That includes the thousands of people with brain trauma, spinal cord injury, stroke or ALS who are fully conscious but unable to express their thoughts. (Most now rely on devices that require physical input.) The team published a paper in Nature mapping the relevant brain activity with unparalleled precision and has developed a general design for the device. After fixing bugs and securing funding, the researchers expect to start human trials in two or three years.

In the long term, the technology underlying this prosthetic could advance the broader field of brain-machine interfaces. The key to this device is not so much in its physical mechanisms but in the algorithms behind it, says neurosurgeon Edward Chang of the University of California, San Francisco. They’re what gives the device its ability to decode the complex “language” of the brain, expressed through the electrical signals of large groups of neurons.

Learning to Speak Brain

Chang — co-director of the Center for Neural Engineering and Prostheses, a UC Berkeley-UC San Francisco collaboration — is both a brain surgeon and a neuroscientist familiar with the field’s deep computational frontiers. He says he works in “the most privileged research environment in the world: the inside of the human brain.”

This environment is complicated, but a speech prosthetic isn’t actually as tricky as you might think. “The signals generated in the part of the motor cortex that controls and coordinates the lips, tongue, jaw and larynx as a person speaks are already intended to control an external device: the mouth,” says Keith Johnson, a UC Berkeley professor of linguistics and a co-author with Chang on last year’s Nature paper, which describes, for the first time, the neuronal mechanisms controlling speech. “Having those same signals control a different physical device, the speech prosthetic, is a much more tractable problem than trying to figure out what thought a person would like to express and trying to give voice to that,” Johnson says.

Reading brain commands for mouth movements may be simpler than reading cognitive content, but it is hardly easy. “As far as motor activities go, human speech is as complex as it gets,” says Chang. Even a simple phrase is the equivalent of an Olympic gymnastics routine for the speaker’s tongue, lips, jaw and larynx. Just as a gymnast’s twists, flips, jumps and landings all require precise muscle control and perfect timing, so does fluent speech: a fraction of a second too long before curling the tongue or engaging the larynx can mean the critical difference between saying “snappy” and “crappy.”

Beyond mapping the precise locations of the brain areas controlling these movements for the first time, Chang and colleagues also recorded and analyzed patterns of neuron activity in those areas. By cataloging these dynamic “higher-order” patterns showing when, and how intensely, each set of neurons turns on, Chang’s lab learned how to read intended speech directly from the brain.

Inside Man

“Speech is one of the most defining human behaviors,” says Chang, “but until now we just haven’t been in a position to study how the human brain orchestrates it.” Functional MRI and other noninvasive imaging techniques, which have revealed much about other parts and functions of the brain, are too imprecise to measure the exact neurological activity responsible for speech. Animal studies don’t help either; we’re the only species that really talks. To find out how the human brain coordinates speech, the researchers had to study it in people.

As a neurosurgeon specializing in epilepsy, Chang already has access to living brains. To study both the nature of seizures and the best surgical path to the part of the patient’s brain causing them, Chang removes a large piece of each patient’s skull to reveal the brain beneath. He then places thin, dense grids of sensitive electrodes directly onto the brain’s exposed surface; these sensors read changes in the activation behavior of columns of neurons beneath them.

The grids stay in place for up to two weeks, while the patient experiences a number of seizures, and are then removed. While patients wait for those seizures, with the grids installed and already collecting data, many volunteer to let Chang gather more information about the brain’s functions, as was the case with this study of speech and language.

To map the speech centers, Chang and his colleagues recorded electrical activity from about 12 patients while they pronounced various sounds, such as “bah,” “dee” and “goo.” They analyzed the data to determine exactly what was happening in the ventral sensorimotor cortex (vSMC): how that brain region was laid out, and in what order its neurons activated.

Kristofer Bouchard, a postdoctoral fellow in Chang’s lab, then applied state-space analysis, a mathematical technique for understanding the structure of a high-dimensional, complex system like the vSMC. The analysis allowed the researchers to identify and isolate the specific patterns of neural activity that play out in this part of the brain when someone speaks, effectively enabling them to decode the brain’s signals. Put another way, they can now turn intended speech into words through mechanized mind reading.
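For readers curious what “state-space analysis” looks like in practice, the sketch below shows only the general idea: project high-dimensional recordings onto a few components and compare the trajectory each syllable traces through that reduced space. The data shapes, syllable labels and the use of PCA as the dimensionality-reduction step are illustrative assumptions, not details taken from the Nature study.

```python
# Minimal sketch of a state-space view of multichannel neural recordings.
# All shapes and labels below are hypothetical; PCA stands in for whatever
# dimensionality-reduction step a real analysis would use.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical grid recordings: trials x electrodes x time samples
n_trials, n_electrodes, n_samples = 60, 85, 200
recordings = rng.standard_normal((n_trials, n_electrodes, n_samples))
syllables = np.repeat(["bah", "dee", "goo"], n_trials // 3)

# Build the low-dimensional "state space" from trial-averaged activity,
# treating each time point as an observation and each electrode as a feature.
averaged = np.concatenate(
    [recordings[syllables == s].mean(axis=0).T for s in ("bah", "dee", "goo")]
)
pca = PCA(n_components=3)
pca.fit(averaged)

# Project each syllable's average activity into that space: a trajectory
# of 3-D "states" over time, one per syllable.
state_trajectories = {
    s: pca.transform(recordings[syllables == s].mean(axis=0).T)
    for s in ("bah", "dee", "goo")
}
print({s: traj.shape for s, traj in state_trajectories.items()})
```

Random numbers carry no real structure, of course; with actual recordings, the separation between those trajectories is exactly what makes decoding the intended syllable possible.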

Soon a permanently embedded grid of electrodes will identify those patterns and simultaneously send them to an external processor that will convert the brain signals into synthesized words.

Alison Mackey/Discover; Brain and computer images reprinted with permission from Macmillan Publishers Ltd: Nature Vol 495 Pg 327-332 Copyright 2013
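To make the signal flow in that diagram concrete, here is a deliberately simplified sketch of the pipeline it describes: windows of grid activity reach an external processor, are matched against stored patterns, and the best-matching word is handed to a synthesizer. The class, the function names and the nearest-template decoding rule are all invented for illustration; the real device’s decoding algorithms are far more sophisticated than anything shown here.

```python
# Toy end-to-end pipeline: implanted grid -> external processor -> synthesizer.
# Every name here is hypothetical; none comes from the study or device design.
import numpy as np


class TemplateDecoder:
    """Nearest-template classifier over short windows of multichannel activity."""

    def __init__(self, templates: dict[str, np.ndarray]):
        # templates: word -> reference pattern (electrodes x samples),
        # imagined as having been learned during a calibration session.
        self.templates = templates

    def decode(self, window: np.ndarray) -> str:
        # Pick the word whose stored pattern is closest to the incoming window.
        distances = {
            word: np.linalg.norm(window - pattern)
            for word, pattern in self.templates.items()
        }
        return min(distances, key=distances.get)


def synthesize(word: str) -> None:
    # Stand-in for a real speech-synthesizer backend.
    print(f"[synthesizer] {word}")


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_electrodes, n_samples = 85, 50

    # Pretend calibration produced one template per word.
    templates = {
        w: rng.standard_normal((n_electrodes, n_samples))
        for w in ("yes", "no", "water")
    }
    decoder = TemplateDecoder(templates)

    # Simulate one streamed window of grid activity that resembles "water".
    window = templates["water"] + 0.1 * rng.standard_normal((n_electrodes, n_samples))
    synthesize(decoder.decode(window))  # prints "[synthesizer] water"
```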

Talking to Yourself

Helping voiceless patients speak their minds is already a laudable goal, but this device could do more. The technology may one day let healthy people control electronic gadgets with their thoughts.

UC San Diego neuroscientist Bradley Voytek suggests such speech-reading brain-machine interfaces (BMIs) could make an excellent control interface for all kinds of devices beyond voice synthesizers because speech is so precise. We have much better control over what we say (even what we say to ourselves in our own minds) than over what we think.

The possibilities are tantalizing. You could silently turn off your phone’s ringer in the theater just by thinking the words, “Phone controls: turn off ringer.” Or compose and send an email from the pool without interrupting your stroke: “Email on: Toni, I’m still swimming and will be 15 minutes late for dinner. Send. Email off.” Voytek dreams even bigger: “Pair this [technology] with a Google self-driving car, and you can think your car to come pick you up. Telepathy and telekinesis in the cloud, if you will.”

Here’s the rub. Even if Chang’s speech prosthetic 1.0 is ready to roll in two or three years, implanting the device would require serious brain surgery, something that would make even committed early adopters balk. For commercial speech-reading BMIs to become mainstream, one of two things would need to happen: Brain implant surgery would have to become much safer, cheaper and more routine, or noninvasive sensing devices would have to become much more powerful.

But triggering a wave of new, convenient gadgets for the masses would just be a bonus. This technology already promises to make a difference for patients who’ve lost their voices. It doesn’t take a mind reader to see that.

[This article originally appeared in print as "What's on Your Mind?"]
