DeepMind's AlphaStar AI Again Beats Two Pro Players In Exhibition

DeepMind, the Google-owned company behind the AI that beat a world-famous Go player, recently came out victorious again when its AlphaStar AI went toe-to-toe with two human eSports pros in Blizzard’s StarCraft II strategy game. The AI stomped Team Liquid’s Grzegorz “MaNa” Komincz and Dario “TLO” Wünsch, winning five games to none against each player.

The exhibition matches made absolutely no concessions for the AI player, and were played one-on-one, back to back. AlphaStar did not have any special tweaks to its programming, nor was it fed any sort of instruction from its creators during or between the matches.

One of the more interesting details is that each player faced five different AI agents, each trained in a different way, which led to some unconventional strategies and a high degree of adaptability during the matches. In the lead-up to the face-off, DeepMind created five clones of AlphaStar and let them play one another in tournaments, learning, improving, and even developing individual styles as they did so, a setup sketched below.
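For a feel for how that kind of league works, here is a toy Python sketch of five agents improving through round-robin tournaments. Every name and number in it (the Agent class, the single “skill” value, the simulated matches) is a hypothetical stand-in, not DeepMind’s actual code, which trains full neural networks on real games.

```python
import itertools
import random

class Agent:
    """Toy stand-in for one AlphaStar clone; 'skill' replaces network weights."""
    def __init__(self, name):
        self.name = name
        self.skill = random.gauss(0.0, 0.1)
        self.wins = 0

    def learn_from(self, opponent, won):
        # Losing nudges the agent toward the winner's level; a real agent
        # would instead update its policy network from the game trajectory.
        if not won:
            self.skill += 0.05 * (opponent.skill - self.skill) + random.gauss(0, 0.02)

def play_match(a, b):
    # Placeholder for a full StarCraft II game: higher skill wins more often.
    p_a = 1.0 / (1.0 + 10 ** (b.skill - a.skill))
    return (a, b) if random.random() < p_a else (b, a)

league = [Agent(f"agent-{i}") for i in range(5)]

for _ in range(200):                                # tournament rounds
    for a, b in itertools.combinations(league, 2):  # round-robin pairings
        winner, loser = play_match(a, b)
        winner.wins += 1
        winner.learn_from(loser, won=True)
        loser.learn_from(winner, won=False)

for agent in sorted(league, key=lambda ag: -ag.wins):
    print(f"{agent.name}: skill={agent.skill:+.2f}, wins={agent.wins}")
```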

Background: There are many things that set AlphaStar apart from other game-playing AI programs. The first, and perhaps most important, is that AlphaStar was trained using only raw game data, first with supervised learning and then with reinforcement learning, in a manner quite similar to how a human might learn to play the game.
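To make that two-stage recipe concrete, here is a rough sketch in Python using PyTorch: a small policy network first imitates human actions via supervised learning, then adjusts itself from win/loss outcomes with a simple REINFORCE-style reinforcement learning update. The network size, the observation and action dimensions, and both training functions are illustrative assumptions; AlphaStar’s real architecture and losses are far more elaborate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_FEATURES, N_ACTIONS = 128, 64      # made-up observation/action sizes

policy = nn.Sequential(              # tiny stand-in for AlphaStar's network
    nn.Linear(N_FEATURES, 256),
    nn.ReLU(),
    nn.Linear(256, N_ACTIONS),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def supervised_step(obs, human_actions):
    """Stage 1: imitate (observation, action) pairs taken from human replays."""
    loss = F.cross_entropy(policy(obs), human_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def reinforce_step(obs, actions, game_outcome):
    """Stage 2: reinforce whole episodes, +1.0 for a win, -1.0 for a loss."""
    log_probs = F.log_softmax(policy(obs), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(game_outcome * chosen).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Random tensors standing in for replay data and self-play episodes:
obs = torch.randn(32, N_FEATURES)
acts = torch.randint(0, N_ACTIONS, (32,))
print("SL loss:", supervised_step(obs, acts))
print("RL loss:", reinforce_step(obs, acts, game_outcome=1.0))
```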

The reason this is such a big deal is that it required multiple AI breakthroughs to make it possible. StarCraft II is an incredibly complex, multi-layered game. Anything can, and often does, happen during a match, and there’s never any single best strategy or guaranteed path to victory.

You start by picking one of three races, then deploy worker units to begin constructing buildings, defensive installations, ships, and other structures. The endgame, of course, is to build an economy that can field an army capable of wiping out your opponent’s. Along the way, you have to manage long-term goals and large operations, as well as short-term goals and individual units.

It’s not hard to see why a game like this might stump an AI, but when DeepMind announced AlphaStar a while back, it did so with the intention of addressing exactly those issues. The chief way DeepMind addressed this was by having the AI learn the game’s conventions on such a deep level that it could navigate an action space the team pegs at roughly 10^26 possible moves at every step of the game, and could predict the consequences of early moves far down the line across practically infinite branching timelines.
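A quick back-of-the-envelope calculation shows why those branching timelines rule out brute-force lookahead. The numbers below are illustrative guesses, not measurements of AlphaStar:

```python
import math

candidate_moves = 10        # plausible options weighed at each decision point
decisions_per_minute = 60   # modest next to a pro's ~300 actions per minute
game_minutes = 10

total_decisions = decisions_per_minute * game_minutes
exponent = total_decisions * math.log10(candidate_moves)
print(f"~10^{exponent:.0f} distinct ways a ten-minute game could unfold")
# ~10^600 futures: exhaustive search is hopeless, so the policy has to
# generalize from training rather than explore every branch.
```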

Impact: The big takeaway here is that DeepMind has figured out a training method capable of loading an AI program with enough knowledge that even a game as complex as StarCraft II doesn’t stump it. Having a neural network backing everything helped, of course. Given the team’s report of roughly 10^26 legal moves available at any given moment, the fact that AlphaStar could navigate that space at all was more than enough for it to figure out what its opponents could and couldn’t do, and to find the least risky ways to counter their every move.
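To see how a figure like 10^26 can arise, note that each in-game order multiplies several independent choices: what kind of action to take, which units receive it, and where it is aimed. The numbers in this sketch are illustrative guesses, not official Blizzard or DeepMind figures:

```python
action_types = 300          # build, move, attack, ability casts, ...
unit_subsets = 2 ** 30      # which of ~30 controllable units get the order
map_targets = 200 * 176     # where on a coarse map grid the order is aimed

per_step = action_types * unit_subsets * map_targets
print(f"~{per_step:.1e} combinations from just three choice axes")
# Prints ~1.1e+16 -- and real games stack many more axes (queued commands,
# ability arguments, timing), which is how the count climbs toward 10^26.
```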

Training an AI to play a video game may seem frivolous on the surface, but in reality, this is a very big deal. The research breakthroughs made here could lead to AI that can reason over far broader and longer-range sets of possibilities, aiding everything from self-driving car operations to more humanlike AI opponents in video games.