
Google Cloud CEO On Facial Recognition & Diversity

Google Cloud’s leader, Diane Greene, spoke at a recent company conference on the subject of facial recognition, remarking that the technology has not yet been trained on data diverse enough to avoid built-in biases. Essentially, Greene was saying that most facial recognition tools have not been exposed to a sufficiently diverse data set to be reasonably expected to recognize people of all races correctly. Google’s iteration of the technology helps recognize subjects in Google Photos, but is otherwise largely unavailable to the public, along with its underlying technical framework, training data sets and other building blocks. This means there is currently no way to quantify what data Google is feeding its system or how diverse that data is.

Greene’s comments come just after an Amazon facial recognition system was tested by the American Civil Liberties Union and incorrectly matched 28 US Congress members, disproportionately people of color, to mugshots of people who had been arrested. The ACLU was quick to point out that this result looked heavily biased, leading Amazon to defend its system by saying that the ACLU had not used the recommended confidence threshold for its testing. The disparity could be explained, at least in part, by the fact that development teams for these sorts of technologies are normally among the earliest testers, and thus supply some of the earliest data the systems are fed. Many studies have pointed out that a large portion of America’s tech workforce is white, which means that the earliest faces these facial recognition systems see, and the baseline they build future predictions on to at least some degree, belong overwhelmingly to white people.
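
To make the idea of such an audit concrete, here is a minimal, hypothetical sketch of how per-group false-match rates could be tallied. The group names and results below are invented for illustration only; they do not reflect the ACLU’s actual methodology, data, or findings, and a real audit would also have to account for details like the confidence threshold Amazon cited.

# Illustrative sketch: tallying false-match rates per demographic group
# from the results of a (hypothetical) face-matching audit.
from collections import defaultdict

def false_match_rate_by_group(results):
    """results: list of dicts with 'group' and 'false_match' (bool) keys."""
    counts = defaultdict(lambda: {"false_matches": 0, "total": 0})
    for r in results:
        counts[r["group"]]["total"] += 1
        counts[r["group"]]["false_matches"] += int(r["false_match"])
    return {
        group: c["false_matches"] / c["total"]
        for group, c in counts.items()
    }

if __name__ == "__main__":
    # Hypothetical audit results: each entry represents one probe photo.
    audit = [
        {"group": "group_a", "false_match": False},
        {"group": "group_a", "false_match": False},
        {"group": "group_a", "false_match": True},
        {"group": "group_b", "false_match": True},
        {"group": "group_b", "false_match": True},
        {"group": "group_b", "false_match": False},
    ]
    for group, rate in false_match_rate_by_group(audit).items():
        print(f"{group}: false-match rate {rate:.0%}")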

Even if the machines are fed a balanced data set later on, or even while being built, it can be hard to strike the right balance of diversity to get them to reliably recognize people of all races, shapes and colors. Google learned this the hard way back in 2015, when its image recognition AI incorrectly labeled a black couple in a photo as gorillas. Google apologized and moved on from the incident, but has still not opened its recognition framework to the public. As an important distinction, however, it has opened up its Cloud Vision API, which does not do facial recognition per se; it can detect faces and estimate attributes, but it does not identify individuals. According to Greene, the technology still has a long way to go before it can be trusted for most public-facing use cases. That speaks to a fundamental truth of AI technology: it will never be completely perfect.
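
For readers curious what the opened-up piece looks like in practice, the sketch below shows one way to call the Cloud Vision API with Google’s Python client library. It assumes a recent version of the google-cloud-vision package is installed and application-default credentials are configured, and the image path is a placeholder. Note that its face detection locates faces and estimates attributes such as joy; it does not say whose face it is, which is exactly the distinction drawn above.

# Minimal sketch of Cloud Vision API usage (assumes google-cloud-vision
# is installed and credentials are set up; "photo.jpg" is a placeholder).
from google.cloud import vision

def annotate(path):
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # General label detection: objects and scenes in the image.
    labels = client.label_detection(image=image).label_annotations
    for label in labels:
        print(f"label: {label.description} ({label.score:.2f})")

    # Face detection: bounding boxes and attribute likelihoods, no identities.
    faces = client.face_detection(image=image).face_annotations
    for face in faces:
        print(f"face detected, joy likelihood: {face.joy_likelihood.name}")

if __name__ == "__main__":
    annotate("photo.jpg")  # hypothetical local image path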