With AlphaStar's ground-breaking performance, January kickstarted a year of AI-related research and activity.
The January edition of AI Monthly digest brings a marvel to behold: AlphaStar beating human champions at the real-time strategy game StarCraft II. To find out why that matters, read the full story.
AlphaStar beats top world StarCraft II pro players
AlphaStar, a DeepMind-created agent, beat two world-famous professional players at StarCraft II. The agent played the Protoss race against Protoss opponents; the game also features Zerg and Terran races, but AlphaStar was trained only for Protoss-versus-Protoss matches.
The machine defeated Dario “TLO” Wunsch, a Zerg specialist playing Protoss for the occasion, 5-0 in the first five-match round. It then made quick work of professional Protoss player Grzegorz “MaNa” Komincz, beating the champion 5-0.
A noticeable advantage AlphaStar had over both players was its access to the entire StarCraft II map at once. The map was still obscured by the fog of war, but the agent didn't have to mimic a human player's camera movements. To address this issue, DeepMind prepared one additional agent that plays through a camera interface, but it lost to MaNa 0-1.
To make the matches fair, the DeepMind team capped the agent's Actions Per Minute (APM) at a human level and ensured the machine had no advantage in reaction time. Nonetheless, at crucial moments AlphaStar clearly produced bursts of APM far beyond human abilities. DeepMind is aware of this and will probably address it in the future. For now, however, we will focus on what we have seen.
How the matches went
Unlike human players, AlphaStar employed some unorthodox yet not necessarily wrong strategies, the most conspicuous being its refusal to wall off the entrance to its base with buildings. What's more, the model used significantly more harvesting workers than pro players normally do.
Beyond its superiority in micromanagement (the art of controlling individual units and using their abilities on the battlefield), the agent didn't display any clearly non-human strategies or abilities. AlphaStar was at its finest, however, when it won a match by controlling a large group of Stalkers, a unit normally countered by Immortals in rock-paper-scissors fashion. As MaNa, the human player confronting the agent, noted, he had never faced a player with such abilities. The gameplay was clearly superhuman, especially considering that MaNa executed the standard counter-tactic, which failed only because of AlphaStar's superior micromanagement.
How the DeepMind team did it
The initial process of training the agent took ten days: three days of supervised learning built on replays of top StarCraft II players, after which the team switched to reinforcement learning (an approach similar to our team's cracking of Montezuma's Revenge) and created the AlphaStar league, in which multiple agents competed against each other. The league witnessed a cycle familiar from human play, with new strategies emerging and later being countered.
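The emerge-and-counter dynamic of league training can be illustrated with a toy sketch. This is purely hypothetical and far simpler than DeepMind's actual setup: the strategy names and the counter table are invented for the example, and each "generation" simply adds a best response to the league's current most common strategy.

```python
# Toy sketch of league-style training, purely illustrative and far
# simpler than DeepMind's actual system. Strategy names and the
# counter table below are hypothetical.

STRATEGIES = ["rush", "defend", "expand"]            # hypothetical tactics
BEATS = {"defend": "rush", "expand": "defend", "rush": "expand"}

def best_response(league):
    """Return the strategy that counters the league's most common one."""
    most_common = max(STRATEGIES, key=league.count)
    return next(s for s in STRATEGIES if BEATS[s] == most_common)

def train_league(generations=6):
    league = ["rush"]                                # seeded by imitating replays
    for _ in range(generations):
        league.append(best_response(league))         # counter the current meta
    return league

print(train_league())
```

Running the sketch shows the cycle the article describes: a seed strategy dominates, a counter to it spreads through the league, then a counter to the counter, and eventually the original strategy becomes viable again.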
After that, the team selected five agents for the match with TLO. To further polish their skills, the agents were trained for another week before the match with MaNa. As a Protoss specialist, MaNa posed a greater challenge than TLO, a Zerg-oriented player who learned Protoss tactics only to square off against AlphaStar.
Courtesy of Blizzard, the developer of StarCraft II, DeepMind received a significantly faster version of the game. This version enabled each agent in the AlphaStar league to experience up to 200 years of real-time gameplay in just two weeks.
Why it matters
The AI community has grown accustomed to witnessing agents crack Atari classics and popular board games like chess and Go. Both environments provide a diverse set of challenges, chess being a fully observable long-term strategy game and Atari delivering real-time experience with limited data.
StarCraft combines all of these challenges by forcing players to pursue a long-term strategy without knowledge of the opponent's strategy and movements until enemy units enter the line of sight of the player's own units (the rest of the battlefield is covered by the “fog of war”). Each encounter may reveal that a strategy needs to be fixed or adapted, as many units and strategies work in a rock-paper-scissors manner, locking players into a cycle of tactic and counter-tactic. Problem-solving in real time while sticking to a long-term strategy, constantly adapting to a changing environment and optimizing one's efforts are all skills that can later be extrapolated to more challenging real-world problems.
Thus, while playing computer games is fun, the research they enable is very serious. It also lays bare the contrast between human and machine abilities. The computer was able to beat a human player after about four hundred years of constant playing. The human expert, meanwhile, was twenty-five years old, had started playing StarCraft at the age of six and had to sleep or go to school while not playing StarCraft.
Nevertheless, it was impressive and inspiring.
Understanding the biological brain using a digital one
According to the Brain Injury Alliance of Wisconsin, approximately 10% of individuals are touched by brain injury, and 5.3 million Americans (a little more than 2% of the US population) live with the effects of one. Every 23 seconds, someone in the US suffers a brain injury.
Such injuries add up to $76.5 billion in annual costs once treatment, transportation and the range of indirect costs like lost productivity are considered.
While brain trauma is sometimes responsible for the loss of speech, strokes and motor neurone disease are also to blame. Although such patients lose the ability to communicate, they often remain fully conscious. Stephen Hawking is perhaps the most famous example: he used a speech generator controlled with the muscles in his cheek. Such generators can also be controlled with the eyes.
Applying neural networks to interpret signals within the brain has enabled scientists to reconstruct speech. Summarizing the efforts, Science magazine points out that the results are more than promising.
Alzheimer's disease is another challenge that may be tackled with neural networks. There are no medications that cure the disease, but treatment applied early enough can keep it manageable; the earlier the diagnosis, the more effective the treatment. The challenge lies in the diagnosis itself, which often comes too late for treatment to do much good.
By feeding neural networks with glucose PET scans, researchers from the University of California delivered a system that can spot the early signs of Alzheimer's disease up to six years earlier than doctors can.
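As a loose illustration of the diagnostic idea, and nothing like the researchers' actual model, the sketch below trains a single logistic "neuron" on synthetic scan summaries in which lower average glucose uptake stands in for early disease. Every number, name and threshold here is invented for the example.

```python
import math
import random

# Purely illustrative toy, NOT the researchers' model: a single
# logistic neuron trained on synthetic "PET scan" summaries, where
# lower average glucose uptake loosely signals early disease.

random.seed(0)

def make_scan(diseased):
    """One synthetic scan, summarized as mean regional glucose uptake."""
    base = 0.4 if diseased else 0.7                # invented values
    return base + random.uniform(-0.05, 0.05)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# tiny synthetic training set of (uptake, label) pairs
data = [(make_scan(d), d) for d in [1, 0] * 50]

w, b = 0.0, 0.0
for _ in range(2000):                              # plain stochastic gradient descent
    for x, y in data:
        p = sigmoid(w * x + b)
        w -= 0.5 * (p - y) * x                     # gradient of the log-loss
        b -= 0.5 * (p - y)

def predict(uptake):
    """True = flag the scan for clinical follow-up."""
    return sigmoid(w * uptake + b) > 0.5

print(predict(0.4), predict(0.7))
```

The real systems are deep convolutional networks trained on full 3D scans rather than a single summary number, but the principle is the same: learn a decision boundary from labeled examples instead of hand-coding one.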
Why it matters
The human brain is one of the most complex objects in the universe, so understanding how it works is obviously a great challenge. Applying neural networks to treat brain-related diseases comes with a touch of irony: we need an outer, artificial brain to outthink the way our own one works.
Democratizing AI: the Finnish way
Machine learning and artificial intelligence, in general, tend to be depicted as a black box, with no way to know “what the machine is thinking”. At the same time, AI is often shown as a miraculous problem-solver, pulling working solutions out of seemingly nothing, like a magician producing a rabbit from a hat. But this too is a misconception.
Like every tool before it, neural networks need to be understood if they are to yield the most valuable outcomes. That's one reason Finland aims to train its population in AI and machine learning techniques. Starting with 1% of its population (roughly 55,000 people), the country aims to boost its society and economy by becoming a leader in the practical application of AI.
Initially a grassroots movement, the initiative gained the support of the government and Finland’s largest employers.
Why it matters
The biggest barrier to using AI and machine learning-powered techniques is uncertainty and doubt. Given that people fear what they don't understand, spreading knowledge about machine learning will support adoption and reduce societal reluctance to embrace these tools. Moreover, understanding the mechanisms powering ML-based tools gives users a clearer picture of just what the tools are and are not capable of.
New state-of-the-art in robotic grasping
The issues Artificial Intelligence raises frequently ignite philosophical debate and provide interesting insight and inspiration. This recent paper on robot grasping is short on neither.
The idea behind using reinforcement learning to control robotic arms is simple: hard-coding all the possible situations the robot may encounter is virtually impossible, but building a policy to follow is much easier.
What's more, building a controller for a robotic arm requires cross-combining the mountains of data coming from its sensors. Every change, be it in lighting, color or the position of an object, can confuse the controller and cause a failure.
Thus, the research team built a neural network that transforms the input into a “canonical” version, stripped of insignificant details like shadows or surface patterns, so that only what matters for grasping remains. The results are impressive, ushering in a new state of the art in robotic grasping.
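To make the canonicalization idea concrete, here is a hand-written stand-in for the mapping that the paper's network learns. The channel names are assumptions made for the sake of the example; the point is simply that nuisance attributes such as texture and lighting are dropped, while task-relevant geometry is kept.

```python
# Hand-written stand-in for the learned canonicalization mapping.
# Channel names below are assumptions for illustration only; the real
# system learns this transformation with a neural network on images.

RELEVANT = {"depth", "object_id"}          # assumed task-relevant channels

def canonicalize(pixel):
    """Strip nuisance channels (color, texture, lighting) from one pixel."""
    return {k: v for k, v in pixel.items() if k in RELEVANT}

observed = {
    "depth": 1.3, "object_id": 7,          # geometry the gripper needs
    "color": (210, 180, 140),              # varies with object appearance
    "texture": "wood_grain",               # varies with surface pattern
    "lighting": 0.8,                       # varies with the scene
}

print(canonicalize(observed))
```

However the scene is lit or textured, the canonical version looks the same, which is exactly why the grasping policy downstream becomes robust to those variations.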
Why do the results matter?
There are two reasons these results are important. First, building controllers for robotic arms is now simpler. Robots that can move and grasp objects in non-supervised, non-hardcoded ways will be used in astonishing ways to improve human lives, for example as assistants for the disabled or as augmentation for the human workforce in manufacturing.
The second breakthrough is how the researchers achieved their improvements. Instead of building more powerful neural networks to process the input, they downgraded the data into a homogeneous, simplified “canonical” version of reality. It seems that when it comes to robotic perception, Immanuel Kant was right: there are “things that exist independently of the senses or perception”, but they are unknowable, at least for a robotic observer. Only by operating within a simplified reality can the robot perform the task.
Keep informed on state-of-the-art machine learning
With the rapidly changing ML landscape, it is easy to lose track of the latest developments. A lecture given by MIT researcher Lex Fridman is a good way to start. The video can be seen here:
Read previous editions of AI Monthly digest: