
Google Photos' Monkey Issue Shines Light On AI Problems

Google Photos mistakenly tagged a photo of a black software developer and some friends as “gorillas” back in 2015, and the fact that Google has still not truly solved the issue, choosing instead to simply block certain tags in the app, shines some light on a few interesting issues regarding the way that artificial intelligence sees the world and the caution that AI programmers have to exercise in building these systems. The slight triggered a review of the underlying problem on Google’s end, and when the team figured out that there would be no quick and easy fix, they opted to avoid similar mishaps in the immediate future with a band-aid: a few select terms, such as “monkey,” “gorilla,” and “chimpanzee,” have been blocked in Google Photos. Searching for them turns up nothing, and the app will not tag images with those words.
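To make the shape of that band-aid concrete, the sketch below shows how a tag blocklist might sit between a classifier’s output and a user-facing search index. This is purely illustrative; the BLOCKED_TAGS set, the score threshold, and the function names are assumptions, not Google’s actual implementation.

    # Hypothetical sketch of a tag blocklist applied after image classification.
    # None of these names come from Google Photos; they are illustrative assumptions.

    BLOCKED_TAGS = {"monkey", "gorilla", "chimpanzee"}

    def filter_tags(predicted_labels, min_score=0.5):
        """Drop blocked and low-confidence labels before they reach search/tagging."""
        visible = []
        for label, score in predicted_labels:
            if score < min_score:
                continue
            if label.lower() in BLOCKED_TAGS:
                continue  # band-aid: suppress the tag entirely rather than risk a mislabel
            visible.append((label, score))
        return visible

    # Example: classifier output is filtered before indexing.
    print(filter_tags([("gorilla", 0.92), ("outdoors", 0.81), ("primate", 0.40)]))
    # -> [('outdoors', 0.81)]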

WIRED dove deeper into the issue, and its report includes some basic trials to see how the problem has been handled. The blocked search terms turn up no results, and uploaded images of those subjects are consequently difficult to find. Other primates that look distinctly different from humans, such as orangutans and marmosets, are easy to find, so long as one does not search for them with the word “monkey.” Likewise, photos of black people have been made somewhat hard to surface in order to avoid mislabeling, though terms like “African” and “Afro” can turn such photos up, reportedly with spotty results. Interestingly, Google’s publicly available Cloud Vision API had no trouble identifying the animals behind the blocked terms, down to the very species of gorilla on show. Doubtless, the black community is not the first group to be collectively offended by an AI error, and it won’t be the last. Cases like this will keep popping up, and because of the gap between the impossibly vast range of real-world inputs and the training data sets these systems learn from, such problems will most likely have to be dealt with the way this one was: case by case, as they appear.
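For readers curious what that Cloud Vision comparison looks like in practice, here is a minimal sketch using the public google-cloud-vision Python client to request label annotations for a local image. The file name and the act of printing every label are placeholders for illustration; the labels returned depend on the model behind the API at the time of the request.

    # Sketch: label detection with the public Google Cloud Vision API.
    # Requires the google-cloud-vision package and application credentials.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("photo.jpg", "rb") as f:  # placeholder file name
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        # Each annotation carries a description (e.g. a species name) and a confidence score.
        print(f"{label.description}: {label.score:.2f}")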

The kicker here is obvious: Google’s AI engineering team is uncertain enough about how well it has fixed the computer vision issue behind the 2015 incident that it would rather not take chances. That choice is meant not to offend, but it is, in itself, a commentary on the current sociopolitical landscape surrounding the tech world, as well as on the ever-improving, always-imperfect state of AI technology. The problem is interesting not because of the social aspect, but because of what it says about AI systems in general. Indeed, accidentally offending a person or group of people with an automated social faux pas, however severe, is among the lowest of stakes when it comes to the sorts of AI systems being built and deployed these days.