Artificial Intelligence has come a remarkable distance, progressing from simple yes-or-no responses to answering complex questions and making decisions. That development has unfolded over a relatively short stretch of human history, and it has raised a range of questions that, so far, remain unanswered. With A.I. poised to replace smartphones and take over smart homes, concerns about A.I. taking human jobs, going rogue and sabotaging systems, or even harming humans have been commonly posed and debated in recent years. With A.I. systems growing more powerful and advanced seemingly every day, as shown by Google Now, Amazon Alexa and the various robots Japan seems to crank out as a hobby, answering those questions is becoming increasingly important.
Eric Schmidt, chairman of Alphabet, has a few ideas about the future of A.I. He has stated that we have nothing to fear from A.I. because it is, first and foremost, a tool created and controlled by humans. “We are building tools that humans control. AI will reflect the values of those who build it,” Schmidt says of the burgeoning technology. He points out that computers differ from us in important ways, chiefly in that they lack experience and emotions, which is why A.I. requires long periods of programming and data collection before it can be perfected. Without emotional bias, however, A.I. systems will be able to make every decision logically and objectively.
Schmidt has also proposed three ground rules for A.I. creators to follow. The first is that an A.I. creation should always serve the needs of the many, the greater good, rather than an individual or interested party. This would prevent wealthy individuals or special interests from developing or commissioning advanced A.I. whose logic and operations better their own position at the expense of others. Schmidt also calls for A.I. development to remain open and community-engaged, so that the community can improve the system for those who use it and those who benefit from its use; naturally, such A.I. programs should always be created and governed responsibly. Lastly, and most interestingly, Schmidt asks that A.I. systems undergo verification to ensure they function as intended. Given the nature of A.I., a computer program capable of learning and expanding its own knowledge and operations, it is not hard to imagine a program growing out of control and eventually deviating from its original purpose. Schmidt’s three rules cover a great deal of ground on the frontier of A.I.