New In-Depth Video Highlights How Pixel 2's Camera Works

Google took to the stage today to finally reveal its long-rumored Pixel 2 and Pixel 2 XL, and one of the most interesting aspects of those devices is the new camera. As revealed during the unveiling, the camera scored 98 points under DxOMark’s recently reworked testing protocols, making it the highest-rated smartphone camera to date. That may come as a surprise, even accounting for the high rating earned by the predecessor to Google’s new Pixel devices. Thankfully, a new video from YouTuber “Nat and Friends” breaks down exactly how Google was able to achieve that score using only a single-camera setup.

For starters, the camera hardware in both variations of the Pixel 2 is made up of six lenses in a stacked configuration, with a wide variety of shapes, allowing for minor corrections to be made. Those corrections are handled by the camera’s software, which adjusts the lenses as needed. Beyond that, the camera features optical image stabilization, electronic image stabilization, and mechanical zoom functionality. Both zooming and optical image stabilization (OIS) are controlled by a miniaturized array of motorized parts surrounding the camera: the second-generation Pixel zooms by moving the lenses forward and backward, while the whole array is shifted left, right, up, or down to offset camera shake and other common problems that often lead to blurry photos and videos. Meanwhile, electronic image stabilization (EIS) is handled in software to further reduce the effects of camera movement during shooting. Both OIS and EIS work simultaneously in the new Pixel devices, which Google says is a first. Behind those lenses, the new smartphones also take advantage of a dual-pixel sensor. Each pixel on the 12-megapixel sensor is split into left and right halves that capture slightly different perspectives, which allows the machine learning-driven camera software to get a better sense of depth and to improve autofocus.
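To make the dual-pixel idea a little more concrete, here is a rough, hypothetical sketch (in Python with NumPy, not anything taken from Google’s actual camera stack) of how the small offset between the left and right half-pixel views could be turned into a coarse depth cue: patches that shift more between the two views sit farther from the plane of focus.

```python
import numpy as np

def dual_pixel_disparity(left, right, patch=8, max_shift=4):
    """Estimate per-patch disparity between the left and right half-pixel
    views of a dual-pixel sensor. A larger shift roughly means the subject
    is farther from the focal plane, which is the kind of cue camera
    software can use for depth estimation and faster autofocus.

    `left` and `right` are 2-D grayscale arrays of the same shape.
    All names and parameters here are illustrative assumptions.
    """
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    h, w = left.shape
    disparity = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            ref = left[i:i + patch, j:j + patch]
            best_shift, best_err = 0, np.inf
            # Search a small horizontal window for the best-matching patch.
            for s in range(-max_shift, max_shift + 1):
                lo, hi = j + s, j + s + patch
                if lo < 0 or hi > w:
                    continue
                err = np.sum((ref - right[i:i + patch, lo:hi]) ** 2)
                if err < best_err:
                    best_err, best_shift = err, s
            disparity[i // patch, j // patch] = best_shift
    return disparity
```

In practice the phone’s software layers machine learning on top of a cue like this, but the block-matching search above captures the basic principle of reading depth out of two slightly offset views.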

Finally, the image processing in Google’s new devices involves roughly 30 steps. The sensor’s pixels are set up to absorb only specific types of light so that colors are picked up more accurately, and software algorithms then process the image further to adjust white balance, reduce noise, and more. Google’s camera differs from most in that it relies on algorithms rather than dedicated hardware to accomplish those processes. That computational photography carries over into the new Pixels’ HDR Plus mode as well. Instead of taking a single image, the camera takes up to ten images in rapid succession and, with no shutter lag to speak of, combines them. Each image is deliberately under-exposed to keep the brightest portions of a scene from blowing out, and the frames are merged on a per-pixel basis, meaning that only the pixels which best fit the final image are chosen for the final photo. That also serves to eliminate artifacts caused by anything moving in the scene while the image is being captured. The algorithm itself was created using millions of photos. Better still, because all of these enhancements are software-related, they also apply to the new Pixel devices’ front-facing camera.
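For a rough sense of how that per-pixel merging of a burst might work, the short Python/NumPy sketch below combines several deliberately under-exposed frames by down-weighting pixels that disagree with the rest of the stack. The function name, the weighting scheme, and the simple gain at the end are all illustrative assumptions, not Google’s actual HDR Plus pipeline.

```python
import numpy as np

def merge_burst(frames):
    """Merge a burst of aligned, under-exposed frames into one image.

    For every pixel, frames that disagree strongly with the per-pixel
    median (for example because something moved) are down-weighted,
    so the merge suppresses both noise and motion artifacts. This is a
    simplified stand-in for an HDR-style burst merge, not Google's code.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])  # (N, H, W)
    reference = np.median(stack, axis=0)
    # Weight each frame's pixel by how closely it matches the reference.
    deviation = np.abs(stack - reference)
    weights = 1.0 / (1.0 + deviation)
    merged = np.sum(stack * weights, axis=0) / np.sum(weights, axis=0)
    # Brighten the deliberately dark exposure back up (a flat gain here;
    # a real pipeline would apply much more sophisticated tone mapping).
    return np.clip(merged * 2.0, 0, 255)


# Example: ten noisy, under-exposed captures of the same scene.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 100, size=(64, 64))
burst = [scene + rng.normal(0, 5, size=scene.shape) for _ in range(10)]
result = merge_burst(burst)
```

The key design point mirrored here is that averaging many short exposures cuts noise without the motion blur a single long exposure would introduce, while the per-pixel weighting throws away samples that don’t fit the scene.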