|Researchers at MIT have developed an algorithm that can automatically separate window reflections from a digital image (left) and remove them (top right). Credit: MIT|
We have all tried to take a picture of a gorgeous view through a window, only to have our own mug reflected back in the photo. Photographers are often stymied by their own reflection, or that of their camera, when shooting through glass, but researchers at MIT and Google Research have developed new software that can remove the reflections, dust, and raindrops that make their way into your pictures.
The algorithm, which builds on work by Daniel Zoran and Yair Weiss of the Hebrew University of Jerusalem, divides images into 8-by-8 blocks of pixels and measures how the pixels within each block correlate with one another. The technique can then identify which minuscule sections of the photo belong to the reflection and which belong to the actual scene behind the glass.
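The block-based idea can be sketched roughly as follows: carve each frame into 8-by-8 patches and measure how each patch position correlates across a burst of frames, since the static background stays consistent while a reflection shifts. This is only an illustrative sketch under assumed details; the function names and the use of Pearson correlation are our own, not the researchers' actual pipeline.

```python
import numpy as np

def extract_patches(image, size=8):
    """Split a greyscale image into non-overlapping size x size blocks,
    returned as rows of flattened patches."""
    h, w = image.shape
    h -= h % size  # drop any ragged border so the grid divides evenly
    w -= w % size
    return (image[:h, :w]
            .reshape(h // size, size, w // size, size)
            .swapaxes(1, 2)
            .reshape(-1, size * size))

def patch_correlation(frames):
    """For each patch position, average the Pearson correlation between
    the first frame and every later frame. High values suggest static
    background; low values suggest a moving reflection or occluder."""
    stacks = [extract_patches(f) for f in frames]
    ref = stacks[0] - stacks[0].mean(axis=1, keepdims=True)
    corrs = []
    for other in stacks[1:]:
        b = other - other.mean(axis=1, keepdims=True)
        num = (ref * b).sum(axis=1)
        den = np.sqrt((ref ** 2).sum(axis=1) * (b ** 2).sum(axis=1)) + 1e-9
        corrs.append(num / den)
    return np.mean(corrs, axis=0)
```

A real decomposition would go much further, using learned patch priors over an aligned image sequence, but the correlation map above conveys why multiple frames let the two layers be told apart.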
“Photographers are often forced to take images through obstructing elements,” the researchers said in a video explaining the technology behind the software. “For instance, when taking images through a glass window, reflections from indoor objects often obstruct the scene we wish to capture. Our algorithm automatically decomposes the sequence of images into a background and a reflection component to produce a new, clean image where the reflection has been removed.
Similarly, when photographing a scene through a fence, we would like to remove the occluding fence from our image. Again, our algorithm is able to decompose the sequence into the background and foreground to produce the desired de-fenced image.”
Sadly, we’ll be waiting quite a while, however, as photos are currently processed on a beefy eight-core Intel Xeon CPU with 64GB of RAM.
A non-optimised implementation takes around 20 minutes per photo, while a prototype Windows Phone app analyses a low-resolution photo in two minutes.
The researchers will present their paper at the SIGGRAPH computer graphics and interactive techniques conference in Los Angeles later this month.
See the video for more information: