X

Google Comes Up With A List Of Safety Rules For A.I.

Artificial intelligence or better known to some simply as AI, is set to take on a bigger role in the world, making smart home appliances and to change the way we interact with our smartphones. AI might have the ability to make everyone’s lives easier but of course it isn’t entirely perfect and comes with it’s own issues. As AI is still in it’s infancy, a lot on it is yet to be explored and discovered and thus it has the potential to cause accidents that may emerge from poor design of real-world AI systems.

To counter that problem, a group of engineer’s from Google’s deep-learning research unit, Google Brain, have come up with a paper titled, “Concrete Problems in A.I. Safety”. The paper was written with the aim to tackle practical issues such as negative side effects and to ensure safe exploration in AI. To do so, the researchers came up with a list of five problems which Google thinks is important to focus on. The researchers used cleaning robots as examples in all five problems.

The first problem highlighted in the paper is avoiding negative side effects. How can we ensure that our cleaning robot will not cause any disturbances to it’s surroundings while pursuing its goals? For example, knocking over a vase because it can clean faster by doing so? Secondly, avoiding rewarding hacking. For example, if we reward the robot for achieving an environment free of messes, it might disable its vision to ensure it will not find any messes just so it will get rewarded. Thirdly, Scalable Oversight. How can we efficiently ensure that the cleaning robot adheres to the objective that are too expensive to be frequently covered during it’s training? For example, it should throw out things that are unlikely to belong to anyone, but put aside things that might belong to someone else. The fourth issue highlighted was safe exploration. How do we ensure that the cleaning robot doesn’t make exploratory moves with very negative side effects to itself? For instance, the robot should be able to explore safe mopping strategies and shouldn’t do something such as putting a wet mop in a power socket. Lastly, the paper pointed out robustness to distributional shift. How do we teach the cleaning robot so that it recognizes, and behaves robustly, when in an environment different from its training environment?

However, Google doesn’t fear AI and in fact, has huge plans for it but to go on ahead with their plans, they need to ensure that A.I. will be safe for everyone else. To do so, Google has even put in place safety measures for their own AI, DeepMind in the form of a kill switch.