You wouldn’t fly on a commercial jet unless you were confident that the pilot had logged some serious time in a flight simulator, preparing for every eventuality. Someday it may be just as inconceivable to undergo delicate surgery without assurances that your doctor has taken a few practice runs on a three-dimensional, interactive simulation of your own anatomy. Researchers at Stanford University are hastening that day by developing a training technology that allows doctors to rehearse surgical procedures before the patient reaches the operating room.
The demonstration project, called the Stanford Rhinological Virtual Surgical Environment (VSE), uses a haptic interface—mechanical feedback that simulates the sense of touch—developed by SensAble Technologies of Woburn, Massachusetts. The VSE system combines that interface with a set of detailed CT scans, taken before the operation, to create a digital “body double” of the patient. Using the patient’s own scans in the simulation could greatly assist doctors performing surgery near critical structures such as the optic nerve and the carotid artery, where damage could cause permanent disability or death. In such operations, knowing the precise quirks of an individual’s anatomy is crucial to a successful outcome.
Kenneth Salisbury, a professor in Stanford’s departments of computer science and surgery, says that tactile feedback combined with this personalized information gives the VSE system a big advantage over existing virtual-surgery training systems. “Existing systems allow you to move surrogate instruments around, watch how they look on the screen, and learn to make movements in the correct direction,” he says. “It starts to get more interesting when you add the feeling and the reaction of tissue.”
The Stanford team has developed an enhanced haptic interface that can re-create essentially all of the physical challenges a surgeon would encounter during a procedure. From a clinical point of view, though, plastic training mannequins will probably always be useful. “It’s the same with an airplane,” Salisbury says. “You want a simulated plane that looks and feels like the one you’ll be flying.” Clinical trials of the VSE system are slated to take place over the next couple of years.
HOW IT WORKS
A haptic-feedback device (A) operates in a way analogous to the graphics card in your computer, but instead of creating an image, it renders the feeling of a physical object—in this case bone, cartilage, or a tumor. “Rather than controlling red, green, and blue pixels that are visible to the eye, the device controls the three-dimensional forces felt by the hand,” says Daniel Chen, chief technology officer at SensAble Technologies.
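In software terms, each tick of the device’s control loop plays the role of a rendered frame: instead of a color per pixel, it produces one 3-D force vector per tick, typically at around a thousand times per second. Here is a minimal sketch of that idea, assuming a simple penalty-based contact model; the function name and the constants are illustrative, not SensAble’s actual API.

```python
import numpy as np

def render_force(stylus_pos, surface_point, surface_normal, stiffness):
    """Penalty-based haptic 'rendering': one 3-D force vector per servo
    tick, in place of one color value per pixel."""
    # How far the stylus tip has sunk past the surface, measured along
    # the outward surface normal (negative means free space).
    penetration = np.dot(surface_point - stylus_pos, surface_normal)
    if penetration <= 0.0:
        return np.zeros(3)                       # no contact, no force
    return stiffness * penetration * surface_normal  # Hooke's-law push-back

# Example: a bony wall at x = 0 with its normal facing the stylus.
# A tip 1 mm inside the wall at 2000 N/m yields a 2 N restoring force.
force = render_force(np.array([-0.001, 0.0, 0.0]),
                     np.array([0.0, 0.0, 0.0]),
                     np.array([1.0, 0.0, 0.0]),
                     2000.0)
print(force)  # [2. 0. 0.]
```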
To create a model of a patient’s sinuses, multiple two-dimensional CT scans of the sinus cavities are combined into a composite 3-D display (B) that is viewable on a standard PC. From the same data, physical mass, friction, and compliance properties are assigned to the anatomical structures within the sinus. The surgeon then probes the virtual sinus with the haptic device, an armlike series of joints containing lightweight motors and connected to an endoscope-tipped stylus (C). Optical encoders in the device link the stylus’s movements to the image on the screen.
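A rough sketch of how such a model might be assembled, assuming the CT slices arrive as equal-sized 2-D intensity arrays; the Hounsfield-style thresholds and property values below are placeholders, not the Stanford team’s actual parameters.

```python
import numpy as np

def build_volume(slices):
    """Stack 2-D CT slices into a 3-D voxel volume
    of shape (depth, height, width)."""
    return np.stack(slices, axis=0)

def assign_properties(volume):
    """Map voxel intensity to per-voxel stiffness and friction using
    rough Hounsfield-style thresholds (illustrative, not clinical)."""
    props = np.zeros(volume.shape, dtype=[("stiffness", "f4"),
                                          ("friction", "f4")])
    soft = (volume >= -500) & (volume <= 300)  # mucosa, cartilage
    bone = volume > 300                        # below -500 stays zero: air
    props["stiffness"][soft], props["friction"][soft] = 200.0, 0.3
    props["stiffness"][bone], props["friction"][bone] = 2000.0, 0.6
    return props

# Example: two tiny 2x2 slices become a 2x2x2 volume with properties.
volume = build_volume([np.array([[-800, 50], [400, -900]]),
                       np.array([[100, 500], [-700, 0]])])
props = assign_properties(volume)
print(props["stiffness"][0])  # [[   0.  200.]
                              #  [2000.    0.]]
```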
When the encoder data indicate that the stylus has “bumped” into something, the software engages the device’s motors to push back with the appropriate force. A surgeon who grazes the wall of the virtual nose feels soft resistance; pressing harder, the resistance increases to mimic the feel of the underlying cartilage or bone.
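That layered response can be modeled as a piecewise spring: low stiffness through the first millimeters of soft tissue, then a much stiffer one once the underlying cartilage or bone is engaged. A minimal sketch, with illustrative constants rather than clinical values:

```python
# Illustrative constants (not clinical values): soft mucosa over bone.
MUCOSA_STIFFNESS = 150.0     # N/m of resistance in the soft layer
BONE_STIFFNESS = 2500.0      # N/m once the hard layer is engaged
MUCOSA_THICKNESS = 0.002     # meters of "give" before bone is felt

def wall_force(depth):
    """Resistance that grows gently through the soft layer, then
    stiffens sharply at the underlying cartilage or bone."""
    if depth <= 0.0:
        return 0.0                          # not touching the wall
    if depth <= MUCOSA_THICKNESS:
        return MUCOSA_STIFFNESS * depth     # soft resistance
    # Past the mucosa: add the bone's much stiffer response on top.
    return (MUCOSA_STIFFNESS * MUCOSA_THICKNESS
            + BONE_STIFFNESS * (depth - MUCOSA_THICKNESS))

print(wall_force(0.001))   # 0.15 N: a light graze
print(wall_force(0.004))   # 5.3 N: pressing into bone
```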