

Digital Puppets Mimic Celebrities With Chilling Accuracy

D-brief | By Carl Engelking | December 9, 2015 2:05 AM


Computer researchers built highly accurate 3-D models of celebrity faces and expressions using images from Google, no scanning required. (Credit: University of Washington)

No job, it seems, is safe from the machines, and that includes actors. Computer researchers from the University of Washington combed the Internet to gather vast photo collections of celebrities’ faces and built 3-D, digital imitations of their likenesses. The resulting digital puppets not only looked the part, they also conveyed the facial expressions and mannerisms of their real-world counterparts with chilling accuracy. And in this strange new world the team created, former President George W. Bush controlled digital puppets of other well-known figures, including President Barack Obama, to make it look like they gave an interview they never did. Watch the video below and see for yourself.

Pictures Worth More Than 1,000 Words

The team’s demonstration is the latest advancement in a five-year effort to improve 3-D face reconstruction, tracking and texture modeling. The team, led by Ira Kemelmacher-Shlizerman, combined Google image searches and machine learning algorithms to re-create faces and expressions without actually scanning a real person’s face. To do this, researchers searched the Web and gathered at least 200 photos of each celebrity displaying various expressions and poses. Machine learning algorithms mapped 49 facial anchoring points (the corners of the eyes, mouth and nose, for example) to assemble a 3-D model of, say, Tom Hanks’ face with a standard expression. Then, algorithms overlaid Tom Hanks’ various expressions on the standard model to capture the way his face changed as he smiled or frowned.
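The pooling step described above can be sketched very roughly in code. This is an illustrative toy, not the team's actual pipeline: the landmark coordinates are synthetic, and the 49-point layout is simply borrowed from the article's description.

```python
import random

# Rough sketch of the averaging idea: pool landmark positions from many
# photos of one person into a neutral "template" face. All landmark data
# is synthetic; real systems detect landmarks with trained models.
random.seed(0)

NUM_PHOTOS = 200     # the study gathered at least 200 photos per celebrity
NUM_LANDMARKS = 49   # eye corners, mouth, nose, and so on

# Pretend each photo yields 49 (x, y) landmark coordinates already aligned
# to a common frame; Gaussian noise stands in for expression and pose.
base_face = [(random.random(), random.random()) for _ in range(NUM_LANDMARKS)]
photos = [
    [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05)) for x, y in base_face]
    for _ in range(NUM_PHOTOS)
]

# The neutral model is the per-landmark mean over all photos.
neutral_model = [
    (sum(p[i][0] for p in photos) / NUM_PHOTOS,
     sum(p[i][1] for p in photos) / NUM_PHOTOS)
    for i in range(NUM_LANDMARKS)
]

# Each photo's expression can then be stored as an offset from neutral,
# which is what lets the model replay smiles and frowns later.
offsets = [
    [(p[i][0] - neutral_model[i][0], p[i][1] - neutral_model[i][1])
     for i in range(NUM_LANDMARKS)]
    for p in photos
]
print(len(neutral_model))  # 49
```

The key point the sketch captures is that no 3-D scanner is involved: the geometry emerges purely from averaging over a large, varied photo collection.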

The result was a digital model that was the spitting image of Hanks, one that could also capture the subtle eye wrinkles and mouth creases of the actor's smile. And to take it a step further, researchers used YouTube videos of another person talking to drive the digital puppets. Although, say, Daniel Craig was doing the talking and smiling in a YouTube video, the Hanks digital puppet conveyed the same facial expressions and mannerisms, but with a uniquely Tom Hanks twist. Researchers will present their study at the International Conference on Computer Vision in Chile later this month.
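In the simplest reading, driving one face with another amounts to transferring landmark offsets from the driver to the target. A toy illustration, with all coordinates invented (the actual system also transfers texture detail such as wrinkles, which this ignores):

```python
# Toy "puppeteering" sketch: measure how a driver's landmarks deviate from
# the driver's own neutral face in a given video frame, then add those
# deviations to the target's neutral face. All coordinates are fabricated;
# three landmarks stand in for the full 49-point set.
driver_neutral = [(0.30, 0.30), (0.70, 0.30), (0.50, 0.70)]   # eyes, mouth
driver_frame   = [(0.30, 0.30), (0.70, 0.30), (0.50, 0.75)]   # mouth opens

target_neutral = [(0.32, 0.28), (0.68, 0.31), (0.50, 0.68)]

# Per-landmark offsets measured on the driver in this frame...
offsets = [(fx - nx, fy - ny)
           for (fx, fy), (nx, ny) in zip(driver_frame, driver_neutral)]

# ...applied to the target, so the target "performs" the same expression
# on its own geometry.
target_frame = [(tx + dx, ty + dy)
                for (tx, ty), (dx, dy) in zip(target_neutral, offsets)]
print(target_frame)
```

Because the offsets are added to the target's own neutral face rather than copied outright, the transferred expression keeps the target's underlying features, which is the "uniquely Tom Hanks twist" the researchers describe.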

New Memories

Right now you need a lot of photos spanning a variety of facial expressions and lighting conditions to create a 3-D model as accurate as the team's Tom Hanks puppet. However, the researchers envision a day when people will use their technology to interact with 3-D digital personas lifted from family albums or historic photo collections. For example, in the future we might put on a pair of VR goggles and enjoy computer-simulated coffee with an interactive model of a loved one who is thousands of miles away, or we'll finally get that opportunity to converse with a virtual Albert Einstein.
