How did Facebook build auto-enhance for iOS?


Your iPhone’s camera sometimes stinks at taking a good, clear photo. But Facebook wants to help: it has introduced an auto-enhance feature for photos uploaded through the iOS app.

In a new engineering blog post, Facebook’s Director of Engineering, Brian Cabral, explained how the feature came to be:

At first blush it seems like an intractable problem: How do we recover light in the dynamic range missed by the camera to create a noise-free image? It turns out this is an age-old problem in photography that confronted the 20th century masters such as Ansel Adams and Ernst Haas. Photographic film and paper also had these same problems, arguably to a greater degree. The masters evolved a set of darkroom techniques that managed local and global tone through the use of dodging-and-burning techniques along with chemical recipes and various colored filters. These techniques required a huge amount of time to execute and years to master. Later, in the digital age, desktop tools followed suit, providing similar techniques in the digital domain. While not quite as time consuming, it still required a level of mastery and patience that inhibited all but the most avid enthusiasts and professionals.

Our approach was to adapt ideas from the masters and figure out automated algorithms – collectively known as computational imaging – that would apply these techniques in the right amount and at the right time. We developed three computational imaging technologies drawn directly from these historical techniques: adaptive Global Tone Mapping (GTM), Local Tone Mapping (LTM), and Noise Suppression (NS). Applied together, these manage the dynamic range in the way our visual system remembers them and the way in which the 20th century masters brought images to life.
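To make the three stages concrete, here is a minimal sketch of such a pipeline in Python with NumPy. The specific operations (a box blur for noise suppression, a gamma curve for global tone mapping, and a local-contrast boost for local tone mapping) are simple illustrative stand-ins, not Facebook's actual algorithms, and all function names are hypothetical:

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur with edge padding (stand-in for a real filter)."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def auto_enhance(image):
    """Toy illustration of the three stages named in the post: noise
    suppression (NS), global tone mapping (GTM), and local tone
    mapping (LTM). `image` is a float grayscale array in [0, 1]."""
    # NS: smooth away sensor noise (a real system would use a
    # smarter, edge-preserving denoiser).
    denoised = box_blur(image, radius=1)

    # GTM: a global gamma curve lifts shadows across the whole frame,
    # loosely analogous to choosing an overall exposure in the darkroom.
    gtm = denoised ** 0.8

    # LTM: push each pixel away from its neighborhood mean to boost
    # local contrast -- a crude digital cousin of dodging and burning.
    local_mean = box_blur(gtm, radius=4)
    return np.clip(gtm + 0.5 * (gtm - local_mean), 0.0, 1.0)
```

Applying `auto_enhance` to a dark image brightens mid-tones globally while preserving (and slightly amplifying) local detail, which captures the spirit of the darkroom techniques the post describes.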

Readers: What do you think of the new auto-enhance feature?