Google's Pixel phone has a great camera, and one of the reasons is AI. Google has used its machine learning expertise to coax better shots and shooting modes out of a small smartphone lens. Now the company has opened up access to one of these artificial intelligence tools: the software that powers the Pixel's portrait mode.
As announced in a blog post earlier this week, Google has open-sourced DeepLab-v3+, an image segmentation tool built with convolutional neural networks, or CNNs: a machine learning method that is particularly good at analyzing visual data. Image segmentation analyzes the objects within an image and partitions them, separating foreground elements from background elements.
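One way to picture a segmentation model's output is as a per-pixel label map. A minimal sketch with toy values (this is an illustration, not actual DeepLab-v3+ output):

```python
import numpy as np

# Hypothetical toy label map: a segmentation model assigns every pixel a
# class ID. Here 0 = background and 1 = the subject (e.g. a person).
mask = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])

# Splitting foreground from background is just a comparison on the label map.
foreground = mask == 1
background = mask == 0
print(foreground.sum())  # → 6 pixels belong to the subject
```

The hard part, which the neural network handles, is producing that label map from raw pixels; consuming it afterwards is simple array logic.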
A diagram that shows how image segmentation works for a typical photograph. Image: Google
This may sound a bit dry, but it's a very useful skill for cameras, and Google uses it to power the Pixel's portrait mode images. These are the bokeh-style photos that blur the background of a shot while leaving the subject sharp. The iPhone popularized the effect, but it's worth noting that Apple uses two lenses to create it, while Google manages with just one. (Is Apple's portrait mode better than Google's? I'll leave that debate to the commenters.)
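Once a segmentation mask is in hand, the bokeh-style compositing step is conceptually straightforward: blur everything, then paste the sharp foreground back on top. A minimal sketch of that idea, assuming a binary mask (the `box_blur` and `portrait_blur` helpers are hypothetical, not Google's actual pipeline, which uses far more sophisticated depth-aware blurring):

```python
import numpy as np

def box_blur(channel, k=5):
    """Naive box blur: average each pixel over a k x k neighborhood (edge-padded)."""
    pad = k // 2
    padded = np.pad(channel, pad, mode="edge")
    h, w = channel.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def portrait_blur(image, mask, k=5):
    """Composite: keep foreground (mask == 1) sharp, blur background (mask == 0)."""
    blurred = np.stack(
        [box_blur(image[..., c], k) for c in range(image.shape[-1])], axis=-1
    )
    m = mask[..., None].astype(float)
    return m * image + (1.0 - m) * blurred

# Toy usage: random 8x8 RGB image with a pretend subject in the center.
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1
out = portrait_blur(img, mask)
```

The composite leaves subject pixels untouched while background pixels take on the averaged (blurred) values, which is the visual signature of portrait mode.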
As Google software engineers Liang-Chieh Chen and Yukun Zhu explain, image segmentation has improved rapidly with the recent deep learning boom, reaching "accuracy levels that were hard to imagine even five years [ago]." The company says it hopes that by publicly sharing the system, "other groups in academia and industry [will be able] to reproduce and further improve" upon Google's work.
At the very least, opening up this piece of software to the community should help app developers who need to slice and dice images, just like Google does.