“My dad was like a little kid in a toy store. He would go around asking, ‘What’s that noise? Where is it coming from?’ He was exploring a whole new world of sound that had long been forgotten.” So wrote Vernon Hise last October after his father, Bobby, had received an experimental implant designed to help him hear after years of total deafness.
This testimonial is for a new type of cochlear implant--a hearing aid, surgically implanted in the inner ear, designed to restore partial hearing to completely deaf people--invented by Blake Wilson, director of the Center for Auditory Prosthesis Research at the Research Triangle Institute in North Carolina. A cochlear implant consists of an external speech processor that converts sound to electric signals and transmits them to several electrodes; the electrodes, in turn, directly stimulate auditory nerves in the ear. Although each electrode is tuned to a different frequency, the electric signals the electrodes emit can interfere with one another, muffling the sound the patient hears. To improve the quality of sound, Wilson devised a method, called continuous interleaved sampling, that uses brief pulses, each delivered at a different time, rather than continuous signals. This way the pulses don’t have a chance to interfere with one another.
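The timing idea behind interleaved pulses can be sketched in a few lines. This is only an illustration of the scheduling principle, not Wilson’s actual implementation; the channel count and pulse rate are invented figures.

```python
# A minimal sketch of the timing idea behind continuous interleaved
# sampling: each electrode channel fires brief pulses in its own time
# slot, so no two channels stimulate the nerve at the same instant.

N_CHANNELS = 6          # electrodes, each tuned to a different band (illustrative)
PULSE_RATE = 800        # pulses per second per channel (illustrative)

def pulse_times(channel, n_pulses):
    """Return the times (in seconds) at which `channel` fires.

    Each frame is divided into N_CHANNELS slots; channel k always fires
    in slot k, offset from the others, so pulses are interleaved rather
    than simultaneous and cannot interfere.
    """
    frame = 1.0 / PULSE_RATE            # one full round of all channels
    slot = frame / N_CHANNELS           # time slot reserved for each channel
    return [i * frame + channel * slot for i in range(n_pulses)]

# No two channels ever fire at the same moment:
all_times = [t for ch in range(N_CHANNELS) for t in pulse_times(ch, 100)]
assert len(all_times) == len(set(all_times))
```

Because each channel owns a fixed slot within the frame, the pulse trains stay separated in time no matter how the incoming sound varies.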
Soon after Wilson got the idea in 1989, he rigged up a demonstration. “The first patient who tried it noticed an immediate improvement in performance,” says Wilson. “We then worked to improve it, and the patient again noticed a large difference. That suggested we should test the idea with many people. One of the biggest advantages of this approach is that it can be tailored to a person’s individual hearing deficiency.”
Several foreign companies, including Med-El in Austria and Bionic Systems in Belgium, are already using a version of the processor in their cochlear implants. In this country, Advanced Bionics in Sylmar, California, recently received FDA approval for an implant based on the technology. Wilson’s new challenge is to reduce the manufacturing costs so that the implants can be sold in China.
Arkenstone’s PC-Based Navigation Systems for the Blind
Innovator: James Fruchterman
Venturing off the beaten path is always difficult, but for blind people, who cannot read a map, the challenge can be downright onerous. “Blind people want two things: to get information and to get around,” says James Fruchterman, an electrical engineer. “When the idea of combining digital maps with talking software came up, a lightbulb went on.”
Fruchterman invented two devices that give blind people the means to navigate unfamiliar areas. Because most visually impaired people can type, he designed the devices as attachments to a personal computer. One of them, Atlas Speaks, enables blind people to plan their travel around a city. You begin by typing in your starting location and your destination. Then, using the up, down, left, and right keys on your PC, you scout out a route under the helpful guidance of a digital map and software, which tells you through a speaker whether at each point you are moving toward or away from your destination. By trial and error, it is possible to find the best route.
The other device, Strider, allows a blind person to find his or her exact location outdoors using a laptop computer. In addition to the digital map, it contains a radio receiver tuned for signals from the satellites of the Global Positioning System, so it always knows to within a few feet where it is. By typing in commands, the user can elicit helpful advice, such as: “You’re at the corner of First and Main; the subway entrance is 300 feet away at three o’clock.”
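Turning a compass bearing into a clock-face direction like “three o’clock” is a small piece of arithmetic. The sketch below is hypothetical (the function name and the assumption that the system knows the user’s heading are mine, not Arkenstone’s), but it shows the conversion.

```python
# A hypothetical sketch of how a system like Strider might announce a
# landmark's direction as a clock position. 12 o'clock means straight
# ahead of the user; each "hour" spans 30 degrees of the compass.

def clock_direction(heading_deg, bearing_deg):
    """Convert an absolute compass bearing to a clock position relative
    to the direction the user is facing."""
    relative = (bearing_deg - heading_deg) % 360
    hour = round(relative / 30) % 12     # 360 degrees / 12 hours = 30 per hour
    return 12 if hour == 0 else hour

# Facing north (0 degrees), a landmark due east (90 degrees) lies at
# three o'clock:
print(clock_direction(0, 90))   # -> 3
```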
Fruchterman’s company, Arkenstone, in Sunnyvale, California, began selling Atlas Speaks for $995 in January with a map of the United States, and plans to release Strider this summer.
Now Hear This
Madah-Com’s Clear Subway Speakers
Innovator: David Manela
One day this past winter, people assembled in a New York City subway station heard yet another announcement coming over the loudspeaker. Expecting the typical garbled output that sounds like muffled Serbo-Croatian, they were surprised to hear intelligible English speech coming through loud and clear.
What made this clarity possible was a wireless digital communications network called WAVES (for Wireless Audio/Visual and Emergency System). It was invented by electrical engineer David Manela, who first got the idea three years ago while reading in the newspaper about a plan for a new public-address system for the New York subway that involved laying a hundred miles of wire. “I thought, ‘These guys don’t know how to do it,’” he recalls.
Most public-address systems consist of speakers connected by copper wires to amplifiers. Even though the wires are insulated, they are vulnerable to interference from the electromagnetic waves given off by radios, the motors of passing trains, and the upper atmosphere. And the expensive and cumbersome wires make it difficult to place speakers where passengers can best hear them. Manela eliminated wires entirely. In his WAVES setup, a base station consisting of a modified PC sends radio signals to transceivers placed at strategic points throughout the subway. The transceivers amplify the signal and send it by wire to nearby speakers. To make the radio waves virtually immune to interference, Manela used a technique known as frequency hopping, in which the wireless transmissions jump from one frequency to another as often as needed, always making sure that the clearest channel is being used.
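The frequency-hopping idea can be sketched simply: both ends derive the same pseudo-random hop sequence from a shared seed and skip any channel known to be noisy. This is a toy illustration of the general technique, not Madah-Com’s actual protocol; the channel count and seed are invented.

```python
import random

# A simplified sketch of frequency hopping: transmitter and receiver
# share a seed, reproduce the same pseudo-random hop sequence, and skip
# channels currently observed to be noisy in favor of clearer ones.

CHANNELS = list(range(16))          # available radio channels (illustrative)

def hop_sequence(seed, length, noisy=()):
    """Generate the hop sequence both ends can reproduce from the shared
    seed, omitting channels currently known to be noisy."""
    rng = random.Random(seed)
    clear = [c for c in CHANNELS if c not in set(noisy)]
    return [rng.choice(clear) for _ in range(length)]

# Both ends derive the same sequence and avoid the interfered channels:
tx = hop_sequence(seed=42, length=8, noisy={3, 7})
rx = hop_sequence(seed=42, length=8, noisy={3, 7})
assert tx == rx and not {3, 7} & set(tx)
```

Because the sequence changes channel many times a second, a burst of interference on any one frequency corrupts only a brief slice of the audio.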
WAVES can be used for transmitting not only sound but also still images, video, and text for signs, which would allow municipalities to comply with the Americans With Disabilities Act and address passengers through both visual and audio media.
Manela formed Madah-Com in New York City in 1993 to build and market WAVES. “We are bringing the revolution of the personal computer to the audio business,” he says.
Rutgers’s Computer-Assisted Speech Training
Innovator: Paula Tallal
For the past two decades, Rutgers University cognitive neuroscientist Paula Tallal has worked with children with severe speech and language problems who cannot, for instance, differentiate among simple sounds such as ba, da, and ga. Several years ago, Tallal identified the source of the problem. “These children hear well and can sequence sounds,” she explains. “But they need orders of magnitude more processing time in their brains.” While most seven-year-olds process the signals in tens of milliseconds, children with so-called language learning impairment, or LLI, take hundreds of milliseconds.
Teachers typically drill these children repeatedly to get them to recognize the sounds, but this approach takes years, if it works at all. In 1994 Tallal took a different approach: she developed a computer program that stretches the troubling speech signals over a longer period than normal and amplifies them, making them comprehensible to LLI children listening on headphones. By wearing these “glasses for the ears,” the children improved their ability to understand normal speech. In one month they registered a two-year improvement.
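The core signal transformation, stretching speech in time and boosting its amplitude, can be sketched crudely. Real speech-stretching software preserves pitch (for example with a phase vocoder); the plain linear-interpolation resample below is only meant to show the idea, and all parameters are illustrative.

```python
# A crude sketch of the processing described above: resample a recorded
# waveform so it plays over a longer interval, and boost its amplitude.
# (A naive stretch like this also lowers pitch; Tallal's actual program
# would need pitch-preserving stretching.)

def stretch_and_amplify(samples, stretch=2.0, gain=1.5):
    """Return `samples` resampled to `stretch` times its length, with
    each value multiplied by `gain`."""
    n_out = int(len(samples) * stretch)
    out = []
    for i in range(n_out):
        pos = i / stretch                    # position in the original signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        value = samples[lo] * (1 - frac) + samples[hi] * frac
        out.append(value * gain)
    return out

slow = stretch_and_amplify([0.0, 1.0, 0.0, -1.0], stretch=2.0, gain=1.5)
assert len(slow) == 8 and slow[2] == 1.5     # twice as long, louder peak
```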
Last year Tallal began the even more ambitious project of weaning her patients off the ear glasses. In collaboration with Michael Merzenich of the University of California at San Francisco, she devised several computer games that help the children speed up the rate at which their brains can process speech. Each time a child achieves a new target speed, the computer nudges the speed a bit higher. “We got tremendous results with this approach, which just blew my socks off,” reports Tallal. “It worked for every single child in the program.”
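The adaptive loop described above, nudging the target speed higher after each success, is essentially a staircase procedure. This sketch is my own hedged rendering of that idea, not the actual game logic; the step size and starting point are invented.

```python
# A hedged sketch of an adaptive staircase like the one described above:
# after each success the time-stretch factor moves one step toward
# normal-speed speech (1.0); after a failure it backs off.

def next_stretch(stretch, succeeded, step=0.1, floor=1.0):
    """Return the next time-stretch factor. Succeeding shrinks the
    stretch toward `floor` (normal speech); failing relaxes it again."""
    if succeeded:
        stretch = max(floor, stretch - step)
    else:
        stretch = stretch + step
    return round(stretch, 2)

# Starting from double-length speech, five successes approach normal speed:
s = 2.0
for _ in range(5):
    s = next_stretch(s, succeeded=True)
print(s)   # -> 1.5
```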
Tallal and her collaborators are now polishing their computer games software and hope eventually to market it. They have also begun working on ways of diagnosing LLI in children as young as six months and are trying to identify a gene responsible for the impairment.
Tuning Carnegie Hall
Sabine’s Invisible Soundman
Innovator: Doran Oster
Preparing an auditorium’s acoustics for a musical performance is a time-consuming task for sound engineers--and one often doomed to failure. Acoustics are incredibly finicky, dependent on such unpredictable factors as whether the audience is wearing sound-damping furs or sound-reflecting plastic raincoats. Even a change in temperature or humidity wreaks havoc on the most carefully tuned hall. The only way of coping with this uncertainty is to monitor the hall’s acoustics while a performance is going on, but to do so engineers would have to disrupt the performance.
In 1993 Doran Oster, an engineer who founded the company Sabine in Alachua, Florida, had a better idea: measure the acoustics with tones too quiet to be heard by the human ear. That way engineers could tinker with the acoustics unbeknownst to the audience. He set to work developing a device that emits a series of extremely quiet tones, covering the full range of audible frequencies, through the amplifiers and speakers of the public-address system into the hall. The device measures the tones as they enter the hall and compares them with the same tones recorded earlier in the empty hall. Engineers monitoring the tones can then make adjustments to restore a perfect frequency response--making sure that low, middle, and high sounds are neither too loud nor too soft. “We came up with a solution that the audience wouldn’t hear,” says Oster.
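The comparison step at the heart of the device is simple arithmetic: for each frequency band, the difference between the empty-hall reference level and the level measured during the performance is the gain correction to apply. The band names and decibel figures below are invented for illustration.

```python
# A minimal sketch of the comparison described above: measure each
# inaudible test tone in the occupied hall, compare with the empty-hall
# reference, and compute the gain change that restores a flat response.
# (Band names and dB figures are illustrative, not real measurements.)

reference = {"low": -20.0, "mid": -20.0, "high": -20.0}   # empty-hall levels, dB
measured  = {"low": -23.0, "mid": -19.0, "high": -26.0}   # with audience present

def eq_corrections(reference, measured):
    """Return the per-band gain change (in dB) that makes the measured
    response match the reference: positive means boost, negative cut."""
    return {band: reference[band] - measured[band] for band in reference}

print(eq_corrections(reference, measured))
# -> {'low': 3.0, 'mid': -1.0, 'high': 6.0}
```

With the audience absorbing the lows and highs in this example, the equalizer would boost those bands and slightly cut the mids.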
Building the device entailed designing sound filters that would pick out the quiet tones from the music. Last summer Oster unveiled a prototype, in development for two years, called the Real-Q real-time adaptive equalizer.