Google Revises AI Ethical Principles Following Controversies

Google is refocusing attention on the principles surrounding its AI-related efforts following the recent closures of its controversial Projects Dragonfly and Maven. The search giant is centering that effort on two distinct categories: raising employee awareness and reviewing both products and deals to ensure they adhere to its AI standards. Central to both sides of that is ethics, beginning with new training programs based on a Santa Clara University Markkula Center for Applied Ethics project. The course is intended to ensure that both technical and non-technical Google employees are equipped to handle ‘multifaceted ethical issues’ as they arise in day-to-day work. That’s supplemented by a more direct new primer on fairness in the company’s free Machine Learning Crash Course, guiding developers across the board through the inherent biases associated with AI and how to address them. An AI Ethics Speaker Series has been launched as well, featuring experts external to Google discussing the implications of AI and ethics in various cultures and ‘professional disciplines’.

Rather than leaving employees to turn that knowledge into unbiased and ethically sound apps on their own, the company has now established a formal review structure backing up the above-mentioned ethical training and built on its AI Principles. Each project will undergo the review process, and no fewer than three core groups are meant to help lower the associated risks. At the top is a group of senior executives that will address cross-product or cross-technology projects, as well as any that grow beyond the complexity the remaining two groups can handle. Just below that is a group of ‘senior experts’ from various fields across the Alphabet ecosystem with the requisite technological, functional, and application knowledge. At its base, the structure is held up by a designated ‘innovation team’ made up of researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors, and legal experts who will be responsible for overseeing daily operations and performing initial assessments.

Background: As alluded to above, Google’s shift in ethical standards and its decision to bolster the AI Principles appears to follow, and is likely related to, at least a couple of fairly significant controversies faced by the company over the past year. The more recent of those was known internally by the codename Project Dragonfly. That was an extremely secretive effort that remained mostly obscure even to employees and would have resulted in the launch of Google’s Search services in China for the first time in years. The ethical blowback caused by the project, which fueled a further backlash from employees, chiefly stemmed from the region’s requirements for censorship of search terms and heavy government monitoring. Employees were also kept out of the loop, however, including those working in privacy and other policy areas. Google consistently claimed the project was nowhere near finished, more an exploration than a full-fledged new piece of software. The company never managed to convince its employees or the general public that that was the case, though, leading to a relatively firm string of questions from the US government.

Separate from Project Dragonfly, but similarly tied to Google’s recent history with ethical implications, is Project Maven. That was reportedly shut down in June after internal leaks showed that new technologies were being developed solely for the use of the US government. In short, those were AI-based inventions that would have, according to officials, been used to “root out” extremist activities and human rights abuses. Googlers called out the project and many resigned, or threatened to, because of Maven’s close ties to the military and the secrecy surrounding it. Concerns also surfaced relatively quickly that the results of the project would be linked to weapons technology, creating an ethical dilemma. It was around that same time that the company’s initial ethical framework for the AI Principles was introduced.

Google reports that the new standards were at least partially behind a decision by the company’s Cloud division not to sell general-purpose facial recognition solutions. The search giant recently revealed that it had concluded it would be unethical to sell the technology without comprehensive consideration of how it might be used. More importantly, the company publicly recognized that abuses of privacy and user tracking are likely to occur with any implementation of AI, regardless of the intentions behind the software. Google also took the opportunity presented by that announcement to reveal a grant to be paid out toward the creation of the Asia Pacific AI for Social Good Research Network, which seems inextricably linked to concerns about the Alphabet subsidiary’s recent activities in the Asia region.

Impact: The latest updates to Google’s AI Principles have not been around long enough to be far-reaching, but the company is off to a good start nonetheless. The search giant says that as many as 100 Googlers have, for example, already gone through its Santa Clara University co-created “Ethics In Technology Practice” course. Moving forward, that will be made accessible to every employee associated with the company. Its AI Ethics Speaker Series has already held a total of eight sessions featuring eleven different speakers, covering topics such as AI in the criminal justice system and the bias that can appear in natural language processing. The fairness module, meanwhile, has been used to train more than 21,000 Googlers and is currently available only in English, but it will be expanding “soon” to all eleven languages the Machine Learning Crash Course is presented in. Although none of the new measures is likely to address the entirety of the issues surrounding AI at Google, the effort does show some initiative following a substantial amount of broad-ranging pushback and should lead to positive changes.