Eric Schmidt Predicts Consensus On Rules Regarding Military AI

Reflecting a view shared by a good chunk of the tech world right now, former Google CEO and current Alphabet Chairman Eric Schmidt thinks that Silicon Valley should help the US government with military projects in both offensive and defensive capacities, and he says the industry is likely to reach some sort of consensus on what constitutes acceptable use of AI on the battlefield in order to make that cooperation possible. Essentially, Schmidt is suggesting that tech firms lend their expertise to the military, but under a set of conditions governing how the resulting technologies can be used, most likely ruling out things like attacks on civilians, war crimes, and other egregious acts.

While Schmidt holds that using AI for the advancement of warfare could potentially be an ethical use case for the technology, many Googlers disagree. A number of employees recently penned a letter to Google CEO Sundar Pichai asking that the company pull out of Project Maven, an undertaking for the Department of Defense that involves giving drones the ability to process captured video and recognize faces and objects. In the letter, the Googlers state outright that Google should not be in the business of war, and that there is no real way to ensure the fruits of Project Maven are never put to offensive use.
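For readers wondering what frame-by-frame object recognition on video actually looks like, here is a minimal, purely illustrative sketch using an off-the-shelf pretrained detector. It is not Project Maven's actual code; the model choice, input file name, and confidence threshold are all assumptions made for the example.

    # A minimal, illustrative sketch of frame-by-frame object recognition on
    # video, of the general kind described above. This is NOT Project Maven's
    # code: the pretrained model, file name, and 0.8 threshold are assumptions.
    import cv2
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # generic COCO detector

    capture = cv2.VideoCapture("footage.mp4")  # hypothetical input file
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video
        # OpenCV reads frames as BGR; convert to RGB, then to a normalized tensor.
        tensor = to_tensor(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        with torch.no_grad():
            detections = model([tensor])[0]
        # Report only confident detections (threshold chosen arbitrarily).
        for score, label in zip(detections["scores"], detections["labels"]):
            if score.item() >= 0.8:
                print(f"frame {frame_index}: class {label.item()} "
                      f"(confidence {score.item():.2f})")
        frame_index += 1
    capture.release()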

Given the current worldwide sociopolitical climate, using AI on the battlefield is an extremely controversial issue. Many see AI as the future of warfare, meaning that those who refuse to deploy it will fall behind technologically and end up at a disadvantage. On the other hand, there are those who argue that AI should be kept off the battlefield for many reasons: AI systems follow their instructions without judgment, which means they could be directed to do all sorts of atrocious things, and learning systems are complex and unpredictable, which means they could go rogue at any time, with potentially devastating consequences.