This Eophrynus prestvicii might look like a spider you'd see setting up shop in your basement, but it's actually a fossil that predates the dinosaurs, having roamed the Earth about 300 million years ago.
Scientists found the proto-spider frozen in the mineral siderite (no, not carbonite) in England and used an X-ray version of a CT scan to image it. The X-rays picked up details as fine as 20 micrometers, revealing much about the animal's natural history.
For instance, researchers say Eophrynus was probably an ambush predator that pounced on passing insects and grabbed them with those nasty-looking, forward-angled front legs.
When used in conjunction with photogrammetry--a technique for establishing spatial coordinates using visual reference points, shown in the left photo--laser scanning works on objects as large as airplanes, sailboats, and even fusion reactors.
At the Princeton Plasma Physics Laboratory, a team of metrologists scanned the plasma containment chamber, or stellarator (as in, "this thing works like a star"), which uses magnetic fields to confine the super-hot plasma within the reactor. They scanned the chamber in sections, then stitched more than 20 of them together into a seamless mosaic. The scans were accurate to 50 microns; the false color in the right image indicates how much the stellarator deviates from its ideal shape.
This 3-D contact scanner illustrates two principles: first, you really can build anything with Legos; and second, radiation-based imaging is not the only route to 3-D glory.
As the scanner pokes its prey (here, a small green plastic frog) with a needle-like probe driven by a tiny motor, a light sensor detects contact between probe and object to within 30 micrometers. A linear actuator then translates the rotation of the Lego gears into linear distance at a resolution of 6.25 micrometers.
A computer records the resultant three-dimensional coordinate data and uses them to construct a 3-D grayscale model.
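The gear-to-distance conversion is simple enough to sketch in code. Here's a hypothetical Python example, assuming a turntable-style scan (the Lego rig's actual geometry isn't described here); only the 6.25-micrometer linear resolution comes from the text:

```python
import math

LINEAR_RES_MM = 0.00625  # 6.25 micrometers per encoder tick (from the text)

def probe_to_point(theta_deg, z_ticks, depth_ticks):
    """Convert a turntable angle plus two encoder counts into an (x, y, z)
    coordinate in millimeters. theta_deg is the turntable rotation,
    z_ticks the probe's vertical position, and depth_ticks how far the
    probe traveled inward before the light sensor registered contact."""
    r = depth_ticks * LINEAR_RES_MM      # radial distance from the axis
    z = z_ticks * LINEAR_RES_MM          # height of the contact point
    theta = math.radians(theta_deg)
    return (r * math.cos(theta), r * math.sin(theta), z)
```

Collecting one such point per probe contact and handing the list to a surface-reconstruction routine yields the kind of grayscale model described above.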
3-D films have come a long way since Creature from the Black Lagoon, thanks in large part to technical innovations like the new virtual camera pictured here. The device is basically a monitor that translates motion-capture data into computer rendering in real time. (Imagine watching an actor playing a scene against a green screen--at the same time, the monitor shows a CGI doppelganger imitating her movements exactly.)
But that's not all the virtual camera does. Motion-capture cameras ringing the soundstage track its position and use that data to compute a realistic virtual image displayed on the monitor. The game controller-style joysticks and buttons allow the camera operator to change perspective within the virtual 3-D space; move the camera down and tilt it up and the monitor displays a CGI character from below, just as if you were pointing a real camera at a real actor.
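The core trick, applying the tracked camera's pose to the virtual scene, can be sketched in a few lines. This simplified Python example is our illustration, not the production system, and tracks only position and yaw:

```python
import math

def world_to_camera(point, cam_pos, yaw_deg):
    """Express a world-space point in the tracked camera's frame:
    translate by the camera position, then rotate by the negative yaw
    about the vertical (y) axis. Rendering every scene point this way
    shows the CGI world from the physical camera's vantage."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    c = math.cos(math.radians(-yaw_deg))
    s = math.sin(math.radians(-yaw_deg))
    return (c * x + s * z, y, -s * x + c * z)
```

A real virtual camera would track full six-degree-of-freedom pose at frame rate, but the principle is the same: the render transform follows the operator's hands.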
The camera is currently being used to shoot James Cameron's Avatar and Peter Jackson's Tintin.
Creating 3-D images is one thing. Viewing them is another. A company called Spatial View has developed software and hardware that allows users to both create and view stereoscopic images on their iPhones.
You can view the 3-D content through traditional red-and-cyan glasses if you like, but Spatial View also makes autostereoscopic lenticular lenses that fit over the iPhone screen to create the illusion of depth without the kooky spectacles. Adult entertainment company Pink Visual is already filming 3-D porn for your viewing, uh, pleasure.
Transparent face masks like the one pictured here help to speed healing and minimize scarring among patients suffering from severe burns by applying gentle, consistent pressure to inflamed facial tissues. But for patients who have recently sustained third-degree facial burns, schlepping across town to a scanning facility might not be a great option.
So Seattle-based CimMed makes bedside scans using handheld laser scanners. The mobile technicians email the data to headquarters, where it is used to construct a three-dimensional model of the patient's face. A rapid prototyping machine then builds the mask out of soft, flexible silicone gel. The entire process takes less than four hours.
The speed and accuracy of laser scanning make it ideal for a wide variety of medical applications. In this case, a company that designs active-function artificial finger prostheses for partial-finger amputees needed high-resolution, 3-D scans of two small hand casts.
The scans, which were accurate to a twentieth of a millimeter, were used to generate virtual models that could be examined from any angle, allowing for highly precise machining--and a very close fit.
You don't need a $25,000, industrial-grade laser scanner to get jiggy in three dimensions. Two German scientists have developed a system called DAVID that runs off a handheld line-laser, a webcam, and some free software.
Simply position the object you wish to scan against a background containing calibration marks and sweep the laser across its surface like a paintbrush. (Remember the laser scanner that examined Ripley's ship at the beginning of Aliens?)
Once DAVID's manual scan is complete, the software uses the calibration marks and the reflected laser light to calculate the three-dimensional coordinates of the object, and voila--you have yourself a quick and dirty 3-D model.
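The geometry behind that calculation is plane-ray intersection: calibration tells the software where the laser plane sits, each lit camera pixel defines a ray, and their intersection is a point on the object's surface. A bare-bones Python sketch of the principle (our illustration, not DAVID's actual code):

```python
def dot(a, b):
    """Dot product of two 3-D vectors given as sequences."""
    return sum(x * y for x, y in zip(a, b))

def laser_point(plane_point, plane_normal, cam_origin, ray_dir):
    """Intersect a camera ray with the known laser plane.
    The ray is cam_origin + t * ray_dir; solve for the t at which
    it meets the plane, then return that 3-D point."""
    diff = [p - o for p, o in zip(plane_point, cam_origin)]
    t = dot(plane_normal, diff) / dot(plane_normal, ray_dir)
    return tuple(o + t * d for o, d in zip(cam_origin, ray_dir))
```

Sweep the laser, repeat for every lit pixel in every frame, and the points accumulate into the quick and dirty model.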
Plain old white light can be used for scanning purposes, too. A Canadian startup has developed a system that requires nothing more than a digital camera hooked up to a laptop and a common presentation projector. The projector spits a series of computer-generated black-and-white patterns against an object, and software algorithms analyze the deformation of those patterns to construct a three-dimensional model.
Orthotics manufacturers like it because they can measure feet without having to bother with specialized equipment or messy casts. In the image shown here, color indicates depth, allowing easy visualization of arch shape.
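One common way such projected patterns work, sketched here as an assumption (the startup's actual algorithm isn't described), is binary stripe coding: across successive frames, the black-and-white sequence each camera pixel sees spells out which projector column lit it, and triangulating that column against the pixel gives depth. The decoding step in Python:

```python
def decode_column(observations):
    """observations: 0/1 brightness readings for one camera pixel across
    the projected stripe patterns, coarsest pattern first. The sequence
    is read as a binary number naming the projector column that lit
    this pixel."""
    col = 0
    for bit in observations:
        col = (col << 1) | bit
    return col
```

With ten patterns, a pixel's ten readings distinguish 1,024 projector columns, which is why a plain presentation projector suffices.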
Visible light isn't the only kind of radiation used in 3-D scanning. Computed tomography (CT) employs high-energy X-rays to penetrate the surfaces of objects, allowing researchers to construct multidimensional models of hidden structures. The technique was used to image the bones of our primate ancestor, Lucy, for clues to her distinctive physical traits and lifestyle.
The false-color 3-D rendering of her mandible was constructed from 801 cross-sectional slices, each 129 microns thick. A portion of the rendering was cut away to reveal the roots of her teeth.
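Stacking those slices into a volume is straightforward: with each slice 129 microns thick, a voxel's slice index maps directly to physical depth. A minimal Python sketch (the in-plane pixel spacing isn't given in the text, so it's left as a parameter):

```python
SLICE_THICKNESS_MM = 0.129  # 129 microns per slice, from the text

def voxel_to_mm(col, row, slice_index, pixel_spacing_mm):
    """Map a (col, row, slice) voxel index to a physical (x, y, z)
    position in millimeters within the scanned volume."""
    return (col * pixel_spacing_mm,
            row * pixel_spacing_mm,
            slice_index * SLICE_THICKNESS_MM)
```

At that thickness, the 801 slices span roughly 103 millimeters of mandible.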
Depth perception is crucial to our ability to navigate a three-dimensional world. It's important for robots, too. Here we see the SVS, or Stereo Vision System, a stereoscopic video platform for robot navigation and surveillance.
The data from two separate video cameras are combined to form 3-D images, or anaglyphs, that can be viewed with red-and-cyan glasses. The system is shown mounted on a YARB (Yet Another Robot Blimp).
When viewed side by side, the difference in perspective between the images captured by each of the video cameras becomes apparent. (Note the shift in relative position between the foreground birdhouse and the background window.) The SVS circuitry color-codes and merges the images.
When the resulting anaglyphs are viewed through red-and-cyan glasses, the left eye sees one image while the right eye sees the other, and the brain interprets the disparity as depth of field.
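The color-coding step is easy to illustrate. In Python (a minimal sketch of the principle, not the SVS circuitry): keep the red channel from the left image and the green and blue channels from the right, so the glasses' red filter passes only the left view and the cyan filter only the right.

```python
def make_anaglyph(left, right):
    """left, right: same-size lists of (r, g, b) pixels from the two
    cameras. The left view rides the red channel; the right view rides
    green and blue, which together read as cyan."""
    return [(l[0], r[1], r[2]) for l, r in zip(left, right)]
```

Wherever the two views disagree, the red and cyan fringes diverge on screen, and the brain reads that offset as depth.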
From old-school B-movies to laser-generated holograms, 3-D images have long been a source of fascination. And why not? It would be a shame to lose that precious third dimension.
Alas, creating and manipulating realistic 3-D images has not always been easy. But advances in digital technology are enabling everyone from laboratory researchers to Hollywood filmmakers to expand the range of 3-D imagery available to us; witness the great scans released last week showing two 300-million-year-old proto-spiders (see the next slide).
Some 3-D imaging applications are helping to save lives; others are purely frivolous. Some are quite sophisticated; others, surprisingly simple. None will leave you flat.