Tech giant Google recently partnered with the Pentagon on an artificial intelligence and computer vision effort called Project Maven, and a number of employees have taken issue with the project, penning a letter asking CEO Sundar Pichai to pull out. The Pentagon and Google previously assured the public that the fruits of the joint project would not be used for offensive purposes, but the Google employees behind the letter are having none of it, opening it by saying, “We believe that Google should not be in the business of war.” On top of calling for the cancellation of Project Maven, the signatories want Google to declare, loud and clear, a company policy against ever building warfare or weapons technology. Thus far, the letter has the backing of around 3,100 Google employees, and that count is still growing.
The heart of Project Maven is an image analysis system being built to take images and video captured by drones and other means, then interpret that data not only by identifying objects but by tracking their movements. It is worth noting that while such a system could easily be paired with a facial recognition database to identify people out in public, that does not appear to be part of the plans for this project. The project’s results would be reported to the Department of Defense, a decisive red flag that drove Googlers first to voice their concerns and then to put them in writing in this letter. Despite assurances to the contrary, many Google employees are convinced that the technology will eventually be put to offensive use, such as driving machines of war, launching weapons, or carrying out remote strikes with militarized drones.
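To make that description concrete, here is a minimal, purely illustrative sketch of what a detect-and-track loop over drone footage can look like in Python with OpenCV. It is not Project Maven’s actual pipeline, which has not been made public; the detect_objects stub, the drone_footage.mp4 file name, and the choice of OpenCV’s CSRT tracker are all assumptions made for the sake of the example.

```python
# Illustrative detect-and-track sketch over video frames.
# Requires opencv-contrib-python for the CSRT tracker.
import cv2


def detect_objects(frame):
    """Hypothetical detector stub: returns a list of bounding boxes
    as (x, y, w, h) tuples. A real system would run a trained model here."""
    return []


def track_video(path):
    cap = cv2.VideoCapture(path)
    trackers = []

    ok, frame = cap.read()
    if ok:
        # Initialise one tracker per object detected in the first frame.
        for box in detect_objects(frame):
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, box)
            trackers.append(tracker)

    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        # Update each tracker with the new frame to follow the object's movement.
        for tracker in trackers:
            success, box = tracker.update(frame)
            if success:
                x, y, w, h = map(int, box)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cap.release()


if __name__ == "__main__":
    track_video("drone_footage.mp4")  # hypothetical input file
```

In a production system of the kind described, the stubbed detector would be replaced by a trained neural network and the loop would run continuously over incoming surveillance feeds; the sketch only shows the basic identify-then-track structure the article refers to.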
It is no secret that Google’s collective values, as a company, lean toward the political left, so it is little surprise that many employees abhor warfare and do not want their work to aid that particular cause. According to the letter, Project Maven stands to “irreparably damage” Google’s brand image and core values, which would make it harder to attract talent, among other potential setbacks. “We cannot outsource the moral responsibility of our technologies to third parties,” the letter states, pointing to the fact that Google’s core business has been built on user trust from the company’s outset. This controversy comes amid growing fears that rogue, biased, or misused AI could cause harm to humanity, and the letter addresses those fears directly before closing with a strong reiteration of the call to action that opens the document.