Virtual Jack

He looks like a human, he's shaped like a human, he moves, works, and reacts like a human. He's the avant-garde of simulated us, and the future of virtual reality.

By Gary Taubes | Jun 1, 1994



In early incarnations Jack was never seen without sunglasses--a convenient ploy to hide his creators' failure to provide him with eyes. The lack, however, posed no problem for Jack, who technically couldn't see in any case; he still managed to walk around his virtual world with a single-minded intensity and stiff-legged gait reminiscent of John Wayne on the corners and Clint Eastwood on the straightaways.

In those early days Jack--whose name is now a registered trademark--was built like a high-tech, computerized crash dummy. He had stubby cylindrical fingers like Dorothy's Tin Man and wore tight-fitting blue slacks and a green John Deere baseball cap in honor of the company for which he was first commercially employed. That job entailed sitting in the virtual three-dimensional cabin of a virtual John Deere bulldozer and establishing that a real, flesh-and-blood driver of a real John Deere vehicle would be able to see the blade tips, front and rear--assuming, of course, he had eyes. For meritorious service rendered, Jack was made the centerfold of the John Deere annual report of 1991.

Today, with a set of sparkling blue eyes and a new physique that makes him look more like a department-store mannequin than a crash dummy, Jack is heading toward a red cube that sits on the opposite side of a simple three-dimensional maze of corridors displayed on a computer terminal. He methodically winds his way through these passageways, turning corners, always heading closer to the cube. When he arrives, he stops and stands before the cube with supreme patience. His operator--Trip Becket, a young doctoral student at the University of Pennsylvania--then centers his cursor on the cube and clicks his mouse once, promptly removing the cube to the far side of the maze. Without so much as a glimmer of dismay, Jack turns in a tight circle and walks off after it. "Jack doesn't get angry," comments Norm Badler, Jack's creator and a computer scientist at Penn. "He's not real smart yet."

Badler's monumental understatement of Jack's intellectual capacity aside, this little man wandering the screen represents the future of virtual reality, the still-to-be-determined nature, that is, of computer-created 3-D worlds that can be experienced from the inside, that can be seen, heard, touched, and manipulated. Think of Jack as a robot--or in this case, an android--existing only in the phase space of a computer. In one of his more technical moments, Badler calls him "a biomechanically reasonable substitute for a human in a task-oriented virtual environment."

Virtual reality has already been employed in architecture--where virtual offices, for instance, are created and inspected before their designs are committed to reality--and in "human factors design," the attempt to make things like dashboards and control panels comfortable and accessible to the humans who have to use them. But if we are ever to explore these worlds fully--to really get inside them and experience them--we'll need simulated humans like Jack and a handful of other such programmed creations that can move as we do, behave as we do, and respond to stimuli as we do.

These virtual worlds will be no more real, of course, than the two-dimensional graphic creations of television and computers today, but they will be fully accessible to us. With the help of a helmet and viewing goggles, you can see what Jack sees. Turn your head, and Jack turns his head; your view of his virtual world changes as his does. Add "data gloves" or even a skintight bodysuit with appropriate sensors, and your movements become Jack's movements; his sensations become your sensations.

Admittedly, some of this wizardry is not yet fully refined--and most of it is still extremely expensive--but as time goes by and computing power improves, so will Jack and his computerized peers. Eventually, simulated humans will be able to hear, feel, and sense the environment and learn from it. Multiple Jacks will be able to coordinate their own group activities and explain or comment on their own behavior. They will become the surrogate eyes and the surrogate bodies of humans in the 3-D worlds of virtual reality. If virtual reality takes over the world, so to speak, Jack will be there waiting for you. "Jack will do anything you want or go anywhere you want," says Badler. "Nothing is predetermined."

Jack's life so far--he was born in the mid-1970s as an offshoot of Badler's doctoral thesis on programming computers to describe complex motions--has been spent in a room at the Computer Graphics Lab at Penn, which was recently renamed the Center for Human Modeling and Simulation. The room, which happens coincidentally to have once housed the famous ENIAC, the world's first general-purpose electronic computer, is now ringed by a dozen computer workstations and an equal number of students. One student works on Jack's facial expressions, so that when some future Jack speaks or smiles, the muscles in his face will respond appropriately. ("Lips are very challenging," notes Badler, "and just this week I saw a paper on tongue modeling. It's hard to do, but important. If you're going to have a simulated person, the tongue will occasionally pop out and show up, and so it will have to be modeled.") One student works on Jack's new physique. One works on programming simple models of the mechanics of crowd motion, because once you have one Jack, you'll want to have crowds of them.

Another student works on refining Jack's spinal column--the next-generation Jack will have simulated internal organs, and if Jack is to have internal organs, he'll need a working spinal column and rib cage to hold them. Jack needs organs because one day he may become a patient for surgical training. Future young surgeons will be able to hone their initial abilities on virtual patients rather than real ones. "If you screw up," notes Badler, "repair is instantaneous in a virtual patient."

Giving Jack a spine is no easy task. The earliest, organless Jack was little more than an idealized skeleton, hinged at the joints, on which hung a concatenation of 3-D polygons; each body part--an upper arm, a calf, a hand--looked as if it were built out of a child's building blocks. Jack's torso was a single flat-sided block that expanded from a relatively small waist upward and outward into a chest. It sat on the flat top of the pelvis, balanced on a single point, and could rotate and pivot and little more.

Jack's current need for a spine required breaking down that single block. Now Jack has a torso that's a stack of 17 thin, flat polygons, one for each of his 17 vertebrae. The polygons can move forward, backward, or sideways and rotate or even slide over one another. "If you imagine the pelvis is a table," Badler explains, "then you have this column of vertebrae coming up from the table and you have a handle at the neck. Depending on how you move that handle around, all the vertebrae have to follow according to a very limited range of motion. The relationship is built-in in advance, based on medical data that describe the relative change of each vertebra with respect to its neighbors. You don't want to break the back, you only want it to bend."
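The rule Badler describes--move the "handle" at the neck, and the vertebrae below follow within a limited range of motion--can be sketched in a few lines. This is a minimal illustration, not Penn's actual code; the 6-degree per-vertebra limit is an invented number standing in for the medical data Badler mentions.

```python
# Distribute a desired torso bend across a chain of 17 vertebrae,
# clamping each joint so the back bends but never "breaks."

NUM_VERTEBRAE = 17
MAX_PER_JOINT_DEG = 6.0  # hypothetical per-vertebra limit


def distribute_bend(total_bend_deg):
    """Split a bend evenly across the vertebrae, clamping each joint."""
    per_joint = total_bend_deg / NUM_VERTEBRAE
    clamped = max(-MAX_PER_JOINT_DEG, min(MAX_PER_JOINT_DEG, per_joint))
    return [clamped] * NUM_VERTEBRAE


# A 45-degree bend fits within the limits; a 170-degree bend is clamped,
# so the torso stops short rather than folding in half.
print(sum(distribute_bend(45.0)))
print(sum(distribute_bend(170.0)))
```

The real system drives each vertebra by tabulated relative changes rather than an even split, but the clamping idea is the same: the built-in relationship keeps every joint inside its anatomical range no matter where the handle goes.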

The same theory lies behind the motions of the rest of Jack's joints. Each is programmed to have a range of motion equivalent to that of a generic human. "You don't have to worry about an elbow bending the wrong way, for instance, so that Jack looks double-jointed," says Badler. When Jack moves or is moved, the program calculates what happens to each relevant polygon and joint. If Jack waves good-bye, for example, only the hand, wrist, forearm, and elbow joint positions have to be calculated and recalculated. But if Jack is asked to squat, the computer recalculates the position of nearly every polygon and joint in his body.

Badler and his students can make Jack move in several ways: they can give him a whole program on how to walk and then just let him go; they can put a cursor on his body and slide him around; or they can put the cursor on his legs and move them one at a time. Badler takes particular pleasure in showing visitors how Jack can do the limbo, keeping his balance even when bending over backward to satisfy the demands of his creator. Even here, however, there are limits: Badler and his crew have given Jack a center of gravity, a program that seeks to keep him positioned over a quadrilateral bounded on its sides by Jack's feet, at its front by a line connecting his toes, and at its back by a line connecting his heels. Take the cursor, put it on Jack's shoulders and tug him around, and he'll take a step or two to stay balanced, just as any human would. "He does this funny crab walk, but it's his only option," says Badler. "Instead of falling over he tries to maintain a reasonable human posture." If he were tugged on too hard--and if Badler's students were to take the trouble to program in a force of gravity--Jack would take the fall, and do so realistically.

Lately, Badler's students have been working on extending Jack's locomotive ability so he can take steps in any direction--forward, backward, sideways, or along curved paths. To do so requires expanding not just his physical capabilities but his intellectual capabilities as well. Take Jack's mission to find the red cube. He has two simple, programmed behavioral rules to follow: he is to be repelled by the walls of the maze and attracted to the cube. Thus, with each step, Jack has to calculate the repulsive influence of the walls and the attractive influence of the cube, then select the direction of motion. "The reason he walks down corridors," says Trip Becket, "is that the negative influence on each side is the same and the only way to go is forward. That's the same reason he'll walk around corners."

Jack's decision-making process can be seen beneath his feet in the shape of a fanned-out array of blue, green, and yellow dots, all surrounding a single red dot. The blue dots, says Becket, are those possible foot positions that Jack considers undesirable because they head him away from the cube. The yellow dots are those positions Jack has eliminated because they are geometrically impossible and would have him stepping on his own feet. The green dots are closer to the goal, and the single red dot is the position that Jack considers the best choice--that's where he'll put his foot down next. "He actually pretends that he steps on each one of the dots," says Becket, "and then decides what the new influences will be and picks the one that has the most favorable situation."
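The foot-selection rule Becket describes--score each candidate position by its attraction to the cube and its repulsion from the walls, then pick the best--can be sketched as a toy potential field. The scoring function and numbers below are my own invention, not the lab's code.

```python
# Score candidate foot positions: attraction to the goal plus a
# repulsive penalty that grows near each wall sample point.

import math


def score(pos, goal, walls):
    """Lower is better: distance to goal plus wall penalties."""
    attract = math.dist(pos, goal)
    repel = sum(1.0 / max(math.dist(pos, w), 0.1) for w in walls)
    return attract + repel


def choose_step(candidates, goal, walls):
    """Pretend to step on each candidate; keep the most favorable."""
    return min(candidates, key=lambda p: score(p, goal, walls))


goal = (10.0, 0.0)
walls = [(2.0, 1.0), (2.0, -1.0)]  # wall points on either side of a corridor
candidates = [(1.0, 0.0), (1.0, 0.8), (1.0, -0.8)]
print(choose_step(candidates, goal, walls))
```

With equal walls on both sides, the centerline candidate wins--which is exactly why, as Becket says, Jack walks straight down corridors.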

Though it sounds high tech, the program is actually quite primitive. When Becket takes the cube and places it behind one of the walls, outside the confines of the maze, Jack promptly walks over to the wall and starts pacing back and forth endlessly. "The problem is, he can get stuck," says Becket. "He doesn't remember where he's been before, just like a housefly that will go bashing against a window over and over and not see that there's an opening on the side. It's because Jack's using animal-level behaviors."

Eventually, using the techniques of artificial intelligence, Badler and his colleagues hope to bestow on Jack the ability to plan more than one step ahead. "What he might do," says Becket, "is use a high-level robotic reasoning technique that comes up with the path to the goal all at once. What we want to do is use that as another influence in this potential field. He tries to stay along the path, but if something falls in front of him, for example, he'll go around and get back to the path."
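The "path to the goal all at once" idea Becket mentions could be as simple as a breadth-first search over the maze, with the resulting path fed back into the potential field as one more attractive influence. The grid maze and function below are a hypothetical sketch of that planning step, not the lab's technique.

```python
# Breadth-first search over a grid maze: returns a whole path to the
# goal in advance, unlike the one-step-at-a-time potential field.

from collections import deque


def bfs_path(grid, start, goal):
    """grid: list of strings, '#' = wall. Returns a list of (row, col)
    cells from start to goal, or None if the goal is walled off."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path = []
            node = goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # e.g. the cube is behind a wall


maze = ["....",
        ".##.",
        "...."]
print(bfs_path(maze, (0, 0), (2, 3)))
```

Crucially, when the goal is unreachable the planner says so--instead of pacing against the wall forever like the housefly-level Jack.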

As if teaching Jack to walk isn't hard enough, Badler's other plans for Jack include teaching him to respond to language, first written and eventually spoken. It's concepts like language more than anything that prove what a piece of work is man, and what a tabula rasa is Jack. "We have these animated agents out there," says Bonnie Webber, a computer scientist working with Badler, "and we want to be able to say things to them. But when you start looking at how we tell other people to do things, then you start seeing how rich those communications are and how much we expect others to understand from them. It's really amazing because we're presuming a lot about the context in which the instructions are given."

Badler and Webber have been working on a program they call Soda Jack, which puts Jack to work behind the counter at a rather idealized representation of an ice cream parlor. When you type a command telling Soda Jack to, say, get you a virtual ice cream cone, you're asking him not only to recognize ice cream, to know to look in his virtual freezer and be able to find it there, but also to know how to scoop it out and put it in the cone--which means knowing, among other things, where the scoop is, how it works, how to hold it, and even that if the cone is not held right side up, and the scoop with the ice cream is not turned upside down over the cone, then the entire task is not going to proceed smoothly. Whereas humans can assimilate this information with evolutionary ease, somebody has to program all this into Jack. (His programmers make the task easier, for now, by cheating a little. When Jack dips his scoop into his vat of virtual ice cream, he pulls it out with a perfect double dip on the end.)

Soda Jack accomplishes his task by following what Webber calls an intention structure for each command. The intention structure takes an utterance--"Get me ice cream," for instance--and breaks it down for Jack, telling him the things he has to do, like walking, looking, grasping an object, and opening a door. Opening a door requires even more primitive actions, like grasping a handle or twisting a knob. "You move to open a sliding door differently from how you move to open a swing door," Webber notes. "The whole notion of where you go and what you do with your hands has to be planned before it can be sent to the simulator, because it's not making those decisions."

For Soda Jack, Badler and Webber will need to program intention structures for a number of tasks. You should be able to ask Soda Jack to make you a malted or an egg cream, or to count out your change. Even then, of course, these intention structures will work only within their specific virtual environments. Soda Jack, for instance, will be able to get you some ice cream, but washing your car will be beyond him. For that, you'll need Car Wash Jack and a whole new set of programs.

Depending on how you look at it, progress with Jack has been either inexorably slow or remarkably fast. The catch is that getting Jack to look, move, and act more and more like a real human takes more and more computer power. For one thing, it takes ever more polygons to approach the smoothness and shape of a human body, and each of those new polygons has to be repositioned with each movement. The late-model 1993 Jack, for instance, has 40,000 polygons. "The old Jack was only 2,500 polygons," says Badler, "so that's an order of magnitude right there. Give me another order of magnitude more computer speed and I'll eat it up having a more detailed model." Modeling clothing, for instance, can add a factor of ten to the computational space needed for Jack. Internal organs may double or triple or increase tenfold the required computer speed. Facial expressions increase the demand another ten to a hundred times. Understanding spoken language, another ten thousand to a million times.

Not that Badler and his colleagues are pessimistic. After all, computer capacities and capabilities are constantly increasing. Within a decade or two, says Badler, "you should be able to have a computer screen with agents that will do your bidding, and they should appear no less realistic than what you see on prime-time television." But there is one problem with creating a full-fledged, never-to-be-flesh-and-bone synthetic human. "As soon as we make a 3-D model that starts to look like a person," Badler says, "we start to demand that it behave like a person. And that's the hard nut to crack."
