Google’s DeepMind unit has created a new version of its famous AlphaGo artificial intelligence program, built around an algorithm that can be handed nothing but the rules of a game and then teach itself to play at a superhuman level without any human intervention. Previous versions of AlphaGo had to be bootstrapped with data from human expert games before they could begin showing meaningful improvement. This version does away with that requirement: according to the paper DeepMind published, the system taught itself chess and shogi to superhuman levels after being given only the rules of each game, and it managed to do so in roughly 24 hours of training.
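The recipe behind that result, as DeepMind describes it, is a loop: a single network proposes moves and evaluates positions, a tree search uses that network to play out candidate lines, and the network is then trained to imitate the search’s conclusions and to predict the eventual winner, starting from completely random play. The sketch below is a deliberately tiny illustration of that loop, not DeepMind’s code: it substitutes a toy game (Nim), a lookup table for the deep neural network, and simple rollouts for Monte Carlo tree search, with every constant chosen arbitrarily for the example.

```python
"""
Minimal sketch of a "rules in, strong play out" self-play loop in the spirit
of AlphaGo Zero / AlphaZero -- an illustration, not DeepMind's method.  It
uses a toy game (Nim: take 1 or 2 stones, whoever takes the last stone wins),
a lookup table in place of a deep neural network, and plain policy-guided
rollouts as a crude stand-in for Monte Carlo tree search.  The pile size,
simulation count, and learning rate below are arbitrary choices.
"""
import math
import random
from collections import defaultdict

PILE = 7        # stones at the start of the toy game
MOVES = (1, 2)  # a move removes 1 or 2 stones
LR = 0.2        # learning rate for the tabular "network"

# The "network": maps a state (stones left) to move priors and a value guess.
policy = defaultdict(lambda: {m: 1.0 / len(MOVES) for m in MOVES})
value = defaultdict(float)


def rollout(s):
    """Play the game out with the current policy.  Returns +1 if the player
    to move at state s wins, -1 otherwise.  (The real system replaces such
    rollouts with its learned value estimate; they are kept for simplicity.)"""
    legal = [m for m in MOVES if m <= s]
    m = random.choices(legal, [policy[s][x] for x in legal])[0]
    if m == s:                  # took the last stone: current player wins
        return 1
    return -rollout(s - m)      # otherwise the result flips perspective


def search(s, n_sims=30):
    """Crude stand-in for MCTS: score each legal move by rollouts, then
    return sharpened move probabilities (the "improved policy")."""
    q = {}
    for m in (m for m in MOVES if m <= s):
        outcomes = [1 if m == s else -rollout(s - m) for _ in range(n_sims)]
        q[m] = sum(outcomes) / n_sims
    exps = {m: math.exp(2.0 * v) for m, v in q.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}


def train_step(s, search_probs, outcome):
    """Nudge the tables toward the search result and the final game outcome,
    mirroring the policy and value training targets in tabular form."""
    legal = [m for m in MOVES if m <= s]
    for m in legal:
        policy[s][m] += LR * (search_probs[m] - policy[s][m])
    norm = sum(policy[s][m] for m in legal)
    for m in legal:
        policy[s][m] /= norm
    value[s] += LR * (outcome - value[s])


def self_play_game():
    """Play one game against itself, then learn from every position in it."""
    history, s = [], PILE
    while s > 0:
        probs = search(s)
        history.append((s, probs))
        s -= random.choices(list(probs), list(probs.values()))[0]
    z = 1  # the player who made the last recorded move took the final stone
    for state, probs in reversed(history):
        train_step(state, probs, z)
        z = -z  # outcomes alternate as we walk back through the game


if __name__ == "__main__":
    for _ in range(200):
        self_play_game()
    # The learned policy should come to favor leaving a multiple of 3 stones,
    # e.g. taking 1 from a pile of 7.
    for s in range(1, PILE + 1):
        print(s, {m: round(p, 2) for m, p in policy[s].items() if m <= s})
```

After a few hundred self-play games the table should settle on strong play for this toy game (always leave the opponent a multiple of 3 stones); the real system runs the same dynamic at vastly greater scale, with a deep network and full Monte Carlo tree search.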
AlphaGo Zero draws on techniques refined over decades of AI research, chiefly deep neural networks, reinforcement learning, and tree search. Its base program is packed less with knowledge about any particular game than with machinery for learning: a single neural network that evaluates positions and proposes moves, sharpened over millions of games of self-play. While it was not built as such, it is arguable that this version of AlphaGo edges toward artificial general intelligence, since the same algorithm can take on a number of different tasks and improve itself over time. In theory, given sufficient time, processing power, and compute nodes, AlphaGo Zero could keep improving indefinitely across every domain it is set to master.
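For readers who want the concrete objective, DeepMind’s AlphaGo Zero paper describes a single network f_θ(s) = (p, v) that, for a position s, outputs move probabilities p and a predicted outcome v, and trains it by gradient descent on a combined loss:

```latex
% Training loss from DeepMind's AlphaGo Zero paper:
%   z   - actual outcome of the self-play game (+1 win, -1 loss)
%   pi  - move probabilities produced by the tree search
%   c   - weight-decay (L2 regularization) constant
\ell(\theta) = (z - v)^2 \;-\; \boldsymbol{\pi}^{\top} \log \mathbf{p} \;+\; c\,\lVert \theta \rVert^2
```

The first term pulls the network’s evaluations toward actual game results, the second pulls its move probabilities toward what the search found, and the third is standard regularization; at no point do human games or handcrafted evaluation rules enter the process.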
The implications of this development are far-reaching. Using these principles, an AI program could, in principle, improve its own capabilities dramatically with nothing more than a goal, broad or narrow, and access to enough computing power and nodes to run a very large number of simulations. This development does not by itself open the door to AI programs that can learn convincing emotions or design ever-smarter successors in their own image, though tasks along those lines could in principle be attempted with the same approach if those in control of it decided to set it to them. Naturally, with all of the doomsday talk surrounding AI these days, much of it centering on calls for caution from Tesla CEO Elon Musk, there are safeguards in place to prevent the AI from doing anything that its creators and users don’t want it to do.