David Stewart remembers when he was the only deaf child in his elementary school, and the shame he felt in being different. He could speak but not always clearly enough for people to understand him. To avoid the inevitable taunts of schoolmates, he would keep silent, sometimes for a year at a stretch.
Now an associate professor of educational psychology and special education at Michigan State University in East Lansing, Stewart has helped devise a computer program that gives deaf students more options to fall back on should the spoken word elude them. Called the Personal Communicator, it combines an English dictionary, a dictionary of American Sign Language, and a voice synthesizer. With this program, a nervous student can type answers at a keyboard and let the computer speak them aloud to the class.
Carrie Heeter, director of Michigan State’s Communication Technology Laboratory, wrote the software and led a team that included Stewart and Patrick Dickson, a professor of education at the university. They designed the program to supplant conventional ASL dictionaries, which convey the gestures of signing with static drawings that are usually difficult to interpret. Instead the program translates English words into short video clips of ASL signs, a format that is easier to use and thus better suited to fast-paced classroom discussions. It can also translate whole stories.
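The core lookup the program performs can be pictured as a simple table from English words to stored video clips. The sketch below is a toy illustration under that assumption; the file names and the tiny vocabulary are invented for illustration, not taken from the actual product.

```python
# Hypothetical word-to-clip table; the real program holds about 1,000 signs.
ASL_CLIPS = {
    "hello": "asl/hello.mov",
    "teacher": "asl/teacher.mov",
    "homework": "asl/homework.mov",
}

def look_up_sign(word):
    """Return the video clip for a word, or None if it's outside the vocabulary."""
    return ASL_CLIPS.get(word.lower())

def translate_story(text):
    """Translate a passage word by word, pairing each word with its clip (or None)."""
    return [(w, look_up_sign(w)) for w in text.split()]
```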
In addition, deaf students and their teachers can interact on chat screens and save the conversations for further study. This allows a deaf child to take the Personal Communicator home and go over the day’s classroom activities or homework assignments with his or her parents, says Heeter. The team is now working on increasing the program’s repertoire of ASL signs from the 1,000 it currently has to 2,500 by the summer of 1996.
Heeter hopes that the $99 program, which runs on an Apple PowerBook laptop computer, will catch the fancy of hearing children and pique their interest in ASL. It may encourage them to interact more freely with their deaf classmates, she says.
University of California at Santa Cruz’s Animated Talking Face
Innovators: Michael Cohen and Dominic Massaro
Part of the fun of old Godzilla movies lies in hearing the actors speak in English but seeing their lips move to Japanese. Such lip-synch comedy may soon be a thing of the past. Psychologists Dominic Massaro and Michael Cohen at the University of California at Santa Cruz have created a computer program that reproduces the facial expressions people make when they talk.
Cohen and Massaro originally intended to design a computer-generated talking face that deaf and hard-of-hearing people could use to practice their lip-reading skills. For more than four years, they videotaped people as they talked, converted the videotape to digital images, and broke down the images into segments corresponding to the component sounds of speech. Then they reassembled the segments to create a computer program that can convert typewritten words into a computer-generated talking face whose facial muscles, jaw, and tongue move with an accuracy unprecedented for an animated figure. "Our lip-reading subjects tell us that ours is the only animated face they can understand," says Massaro.
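The pipeline described above (speech broken into segments, each segment mapped to a stored facial pose) can be caricatured in a few lines. Everything here, the mouth-shape names and the mapping alike, is hypothetical and purely illustrative of the idea, not of the researchers' actual model.

```python
# Hypothetical table from speech sounds to stored mouth shapes ("visemes").
VISEMES = {"h": "open", "eh": "mid", "l": "tongue-up", "ow": "rounded"}

def word_to_mouth_shapes(phonemes):
    """Map a sequence of speech sounds to the poses an animated face would show."""
    return [VISEMES.get(p, "neutral") for p in phonemes]
```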
The inventors believe that the talking face has potential for a wide range of uses, such as computer games with more lifelike characters, friendlier computers that can talk, and even the dubbing of foreign films so the actors' mouths move in a way that is consistent with their speech. The program can also help students learn foreign languages by showing them how the mouth is supposed to move during speech. "It turns out that visual cues--the position of the lips and jaw of the speaker, for example--can be very important to understanding what a person says," says Massaro.
A Robot Servant for the Internet
University of Washington’s Softbot
Innovator: Oren Etzioni
The collection of computer networks known as the Internet is a high-tech bazaar, but its exciting diversity can be a big headache when there's something you need to find in a hurry. Software programs that purport to make it easier to navigate the Internet don't tell you where to look among its thousands of chat forums, databases, mailing lists, and shopping services.
With this in mind, Oren Etzioni set out in 1991 to invent a software robot that would help make the Internet accessible to ordinary folks. His Softbot, unveiled last year, acts as a personal electronic servant that ventures onto the Internet and returns with the sought-after nuggets of information. Etzioni gave his creation an intimate working knowledge of the Internet and enough artificial intelligence to interpret instructions and evaluate the information it finds. Say you want the phone number of a long-lost colleague. You know only that she's a sociologist who might hold an academic post somewhere in North America. You power up your personal Softbot--Etzioni named his Rodney--and tell it what little you know. The Softbot then logs on to the Internet and tries to find the number. "It may need to come back to you with questions," says Etzioni, "but it does its best with what you've provided."
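A drastically simplified sketch of the agent loop the article describes: try each known directory service in turn, return a unique match, and hand ambiguity back to the user. All names and data here are invented for illustration; the real Softbot's planning machinery is far more sophisticated.

```python
def find_phone_number(name, field=None, sources=None):
    """Query each directory service until one yields a unique answer."""
    sources = sources or []
    for source in sources:
        matches = source(name, field)
        if len(matches) == 1:
            return matches[0]
        if len(matches) > 1:
            # A real Softbot would come back to the user with a question here.
            return matches
    return None

# A stand-in "directory service" with one made-up listing, for illustration.
def campus_directory(name, field):
    listings = {("Jane Doe", "sociology"): "+1 206 555 0100"}
    return [v for (n, f), v in listings.items() if n == name and f == field]
```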
Etzioni and colleague Daniel Weld, both at the University of Washington in Seattle, designed the Softbot so that the user can easily modify it to handle new Internet services as they become available. Apple Computer has licensed the technology.
Digging Up Electronic Cadavers
National Library of Medicine’s Visible Human
Innovators: Michael Ackerman and Donald Lindberg
Say good-bye to B movies about mad scientists digging up corpses for dubious experiments--that plot is now obsolete. The National Library of Medicine in Bethesda, Maryland, is making a pair of corpses available free to anybody who can muster a vaguely respectable reason for wanting to examine them.
The lucky couple are actually digital versions of recently deceased specimens who have been scanned, frozen, sliced, and photographed by the library’s Visible Human Project. These digital cadavers form the basis for lifelike simulations of surgical and radiation treatments and may also prove valuable as an educational tool for students at all levels. By providing this electronic yardstick for the normal male and female physique, the project is intended to simplify and standardize the study of human anatomy.
When the project got started in 1989, the biggest obstacle was finding suitable volunteers. Project officers Michael Ackerman and Donald Lindberg, who came up with the idea and put together the technologies needed to create the images, searched for freshly deceased candidates between 21 and 60 years of age with no visible abnormalities. They finally found a 39-year-old Texas man convicted of first-degree murder and executed by lethal injection in 1993. Researchers scanned the body using a variety of techniques including computed tomography, magnetic resonance imaging, and X-ray imaging. Then they froze it in gelatin, shaved it into one-millimeter slivers--1,870 of them from head to toe--and photographed each slice with a digital camera. Finally they merged all of the images to form one three-dimensional computer model.
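The slicing numbers above can be checked with a line of arithmetic: 1,870 slices at one millimeter each span 1.87 meters head to toe, and cutting three times finer for the female data set implies roughly three times as many slices.

```python
# Arithmetic check on the slicing figures quoted in the article (male cadaver).
SLICE_THICKNESS_MM = 1.0
NUM_SLICES = 1870

body_length_m = SLICE_THICKNESS_MM * NUM_SLICES / 1000  # 1.87 m, head to toe

# The female data set, cut at one-third the thickness, needs about
# three times as many slices to cover the same length.
female_slices = round(body_length_m * 1000 / (SLICE_THICKNESS_MM / 3))
```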
The curious are now able to peruse the male cadaver to their heart's content, but it occupies a whopping 15 gigabytes of data--100 times more storage than an average hard drive--and would take two full weeks, day and night, to download from the library via the Internet. When the female companion comes out this autumn, she'll be even bigger: Ackerman's group is cutting her into gossamer-thin 1/3-millimeter slices to provide greater detail. "It's an overwhelming amount of information," Ackerman says. So far, more than 100 publishers, software companies, universities, and other organizations have signed on to use the data to make interactive encyclopedias, CD-ROM references, and other products.
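A quick back-of-the-envelope check on the two-week figure: sustaining a 15-gigabyte download over 14 days implies a link of roughly 100 kilobits per second, plausible for a well-connected site of the period. The arithmetic below is ours, not the library's.

```python
# What sustained link speed does "15 GB in two weeks" imply?
DATA_BITS = 15e9 * 8                 # 15 gigabytes, expressed in bits
DOWNLOAD_SECONDS = 14 * 86_400       # two weeks, day and night

implied_bps = DATA_BITS / DOWNLOAD_SECONDS   # roughly 99,000 bit/s
```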
3-D Models in an Instant
University of Washington’s Surface Modeling Technology
Innovator: Tony DeRose
Computers are now an indispensable tool for designing everything from chairs and lamps to carburetors and airplane wings, but the machines have been as thick as a brick when it comes to copying objects that already exist. A designer wanting to reproduce the Venus de Milo or the curve of a competitor’s aircraft would have no choice but to make a line-by-line, curve-by-curve rendering using cumbersome software tools.
No longer. Tony DeRose, professor of computer science and engineering at the University of Washington in Seattle, and his colleagues have developed computer software that uses a laser beam to scan the surface outline of any physical object--even complex ones with curves and multiple edges or planes. It then reconstructs the object as a three-dimensional computer model, which designers and engineers are free to manipulate at will. With this device, what used to take hundreds--if not thousands--of hours of manual labor can be done in a matter of minutes.
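Reconstructing a surface from unorganized laser samples is the hard problem the software solves. As a much simpler stand-in, the sketch below triangulates height samples that happen to lie on a regular grid; it illustrates mesh-building in general, not the actual reconstruction algorithm.

```python
def grid_to_triangles(heights):
    """heights[i][j] is a scanned z value at grid cell (i, j).

    Returns triangle index triples over the flattened vertex list,
    two triangles per grid quad.
    """
    rows, cols = len(heights), len(heights[0])
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j        # upper-left corner of the quad
            b = a + 1               # upper-right
            c = a + cols            # lower-left
            d = c + 1               # lower-right
            tris.append((a, b, c))  # upper-left triangle
            tris.append((b, d, c))  # lower-right triangle
    return tris
```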
Manufacturing firms such as Boeing, General Motors, and Westinghouse may use the software to eliminate warehouses full of old spare parts and templates, converting them into computer data. DeRose also expects engineers to use it to reverse-engineer competitors' products. In addition, the technology will allow programmers to scan objects for use in so-called virtual reality software, the computer-created environments viewed through special goggles. That way they can include a far richer assortment of objects and textures to better trick the user into accepting this make-believe world. In a similar way, engineers could use the technology to create a more realistic airplane cockpit for flight simulation.
DeRose has begun fielding inquiries from computer software companies interested in incorporating his technology into new 3-D imaging products. He is also trying to find a way to bring down the cost of the laser scanner, which today is at least $50,000. "Our ultimate goal is to get rid of these expensive machines altogether, using software techniques and handheld 3-D cameras to make this technology more generally accessible," he says.