Computer Algorithm Turns Videos into Living Van Goghs

D-brief | By Carl Engelking | May 12, 2016 11:25 PM

Computers are becoming rather versatile copycats, thanks to deep-learning algorithms. Just last year, researchers “trained” machines to transfer the brushstrokes of iconic artists onto any still image. Now, Manuel Ruder and a team of computer scientists from the University of Freiburg in Germany have taken the technology a step further: They're altering videos. The team’s style transfer algorithm makes clips from Ice Age or the television show Miss Marple appear as living paintings crafted by the likes of Van Gogh, Picasso or any other artist. And the results speak for themselves.

Cutting Through the Layers

Deep-learning algorithms rely on artificial neural networks that operate similarly to the connections in our brains. They allow computers to identify complex patterns and relationships in data by parsing it layer by layer, with deeper layers extracting progressively finer-grained information. Last year, researchers at the University of Tübingen demonstrated that these deep-learning algorithms could separate the content of an image from its artistic style. In essence, an artist’s “style” becomes a filter that can be applied to any image regardless of its content: you can now add a Starry Night twist to your own photos. Ruder and his team built upon that work and applied it to videos, frame by frame.
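
For the curious, here is roughly what that content/style separation looks like in code: a minimal sketch in Python with PyTorch and torchvision, in the spirit of the Tübingen group's approach. The layer indices, learning rate, and loss weights are illustrative assumptions rather than the published settings, and image loading and normalization are omitted.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def gram_matrix(feat):
    # Style lives in feature correlations: the Gram matrix of a layer's
    # activations keeps texture statistics but discards spatial layout.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# A pretrained VGG-19 serves as a fixed feature extractor.
vgg = vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}              # a deep layer: encodes what is in the image
STYLE_LAYERS = {0, 5, 10, 19, 28}  # several layers: texture at multiple scales

def extract(img):
    content, style = {}, {}
    x = img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content[i] = x
        if i in STYLE_LAYERS:
            style[i] = gram_matrix(x)
    return content, style

def transfer(content_img, style_img, steps=300, style_weight=1e6):
    # Inputs are assumed to be ImageNet-normalized (1, 3, H, W) tensors.
    target_c, _ = extract(content_img)
    _, target_s = extract(style_img)
    x = content_img.clone().requires_grad_(True)  # optimize the image itself
    opt = torch.optim.Adam([x], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        c, s = extract(x)
        loss = sum(F.mse_loss(c[i], target_c[i]) for i in CONTENT_LAYERS)
        loss = loss + style_weight * sum(
            F.mse_loss(s[i], target_s[i]) for i in STYLE_LAYERS)
        loss.backward()
        opt.step()
    return x.detach()
```

The key design trick is that nothing in the network is trained here: the pixels of the output image are themselves the optimization variables, nudged until the image matches one photo's content features and another image's style statistics.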

Making it Smooth

However, Ruder and company discovered that the algorithm stylized each frame independently, so near-identical frames came out looking slightly different; string those frames together and the resulting videos flickered unwatchably. As a workaround, they designed a constraint that penalizes deviations between consecutive frames (see the sketch below), which reduced the jitter and yielded smooth, eye-pleasing clips.
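
In code, that frame-to-frame penalty can be sketched along the following lines (Python/PyTorch). This is a simplified illustration, not the authors' implementation: the optical flow and occlusion mask are assumed to come from an external flow estimator, and the published method adds refinements, such as longer-range consistency, that this sketch omits.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(prev_stylized, flow):
    # Backward-warp the previous stylized frame along the optical flow so it
    # lines up pixel-for-pixel with the current frame. flow is (B, 2, H, W)
    # in pixel units; prev_stylized is (B, C, H, W).
    b, _, h, w = prev_stylized.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32, device=flow.device),
        torch.arange(w, dtype=torch.float32, device=flow.device),
        indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0) + flow.permute(0, 2, 3, 1)
    # grid_sample expects coordinates normalized to [-1, 1].
    gx = 2 * grid[..., 0] / (w - 1) - 1
    gy = 2 * grid[..., 1] / (h - 1) - 1
    return F.grid_sample(prev_stylized, torch.stack((gx, gy), dim=-1),
                         align_corners=True)

def temporal_loss(stylized_t, stylized_prev, flow, valid_mask):
    # Penalize the current stylized frame wherever it strays from the warped
    # previous one. valid_mask (B, 1, H, W) zeroes out occluded pixels and
    # regions where the flow estimate is unreliable.
    warped = warp_with_flow(stylized_prev, flow)
    return ((valid_mask * (stylized_t - warped)) ** 2).mean()
```

Adding a term like this to the usual content and style losses ties each frame's stylization to its predecessor, which is what suppresses the flicker.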

Although the algorithm still struggles with large, fast movements between frames, the preliminary results are quite beautiful. They recently submitted their findings to the preprint server arXiv. Now that computer scientists appear to have nailed down video, it’s only a matter of time before we find ourselves inside works of art via a virtual reality headset.
