In the face of impending AI legislation, Google is intent on having not only its own voice heard but also those of other AI policy stakeholders, the company outlined in a recent blog post. The risks associated with the technology, the company says, are simply too great for any one country or company to take on alone.
The search giant has released a formally laid out whitepaper outlining the key areas that need to be discussed, along with some advice against over-regulation.
Most general use cases are already covered by existing regulation, Google claims, offering medical devices as an example. A device should comply with all regulations pertaining to devices, including laws such as those encompassed under HIPAA, regardless of whether AI is involved. The same principle should apply to other industries as well.
AI concepts that need to be discussed, according to Google
Legislators and policymakers need to come together with industry leaders to discuss how the technology works and what tradeoffs, if any, can be made in specific situations, Google says. There is no one-size-fits-all solution, since the multi-purpose technology is making its way through the entire electronics industry and spawning new products and services along the way.
First, Google says that the rules need to be defined on a sector-by-sector basis and by the context of various applications for the technology. In any given scenario, accountability and explainability will need to be balanced against security and the protection of proprietary information. The user experience needs to be examined in conjunction with those attributes.
Standards of explainability should be set by the industry, including both minimum acceptable compliance standards and best practices, taking all of the likely tradeoffs into account.
Frameworks should also be in place to ensure that there is a balance between competing goals and definitions of ‘fairness’, with clarification of how factors in hypothetical situations should be appraised and prioritized.
A framework is also needed to ensure that liability is given due consideration in instances where AI implementations don't go as planned. As with many other areas of Google's proposal, existing rules and legislation may be sufficient to cover liability, but that's an area where plenty of evaluation is needed to be sure. Google wants to explore insurance alternatives and related regulations to fill any gaps in liability coverage where they do exist.
Areas that aren’t necessarily clearly defined yet, on the other hand, include rules pertaining to safety and an examination of how human interactions with AI should be regulated.
For safety, Google points to the need for diligent safety checks and documented standards that are contextually relevant to any industry in which AI is or will be used. That includes certifications and test specifications that still need to be outlined and set, but it most importantly applies to applications of the technology where maintaining safety is critical.
Tying back into safety, Google finishes outlining its points with a discussion of how human-AI collaboration should be regulated. That needs to start with outlining where AI should be allowed to fully automate a system or operate on its own outside of one, the search giant says. Legislators and policy stakeholders also need to settle on a range of approaches for human oversight and review of critical systems that are appropriate to a wide variety of use cases.
Hard-learned lessons
Google is no stranger to the debate surrounding the need to regulate AI. Last year, the company was forced to shut down its collaboration with the US government on an AI drone project dubbed Project Maven, following substantial backlash from both the public and its own employees.
That backlash stemmed from concerns about government surveillance, and it came in spite of the fact that the search giant had worked toward policies early in 2018 that would prohibit the use of AI in any form of weaponry.
Project Maven came up again later in the year when Google was called to appear before United States legislators, facing questions about why it had continued work on a Chinese search engine. That project would accommodate restrictions imposed by authorities in the region, effectively showing Google working with one government while refusing to work with the US military.
Amid those controversies and others, Google ultimately had to rework its own AI policies and issue public statements to show that it understood the risks, and that it would not abuse the technology or enable it to be used for unethical ends. As part of that change, Google implemented new AI ethics training for employees at every level of its business.
The company also noted at the time that it would not sell its technology without a full understanding of its use and agreements ensuring it would be used ethically. That hasn't convinced everybody, and the latest suggestions from the company might only make matters worse if it appears Google is trying to dominate the legislative process in any given country or dictate terms to its competitors.
Regardless of Google's mistakes, its history with AI and its struggles with ethical dilemmas may place it in a good position to help bring industry leaders together. That should help the industry guide policy decisions toward an ethical approach to AI without holding back the entire enterprise.