AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning

Subscribers: 284,000
Published on:
Video Link: https://www.youtube.com/watch?v=BTLCdge7uSQ
Game: StarCraft II
Category: Let's Play
Duration: 37:17
Views: 17,359
Likes: 599

DeepMind's new agent tackles yet another esport: StarCraft II. The agent uses deep reinforcement learning together with a new technique called League Training to catapult itself to Grandmaster-level skill at the game.

Abstract:
Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions, the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.
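
For a rough intuition of how such a league might be organized, here is a minimal, hypothetical Python sketch of prioritized fictitious self-play (PFSP) matchmaking among main agents and exploiter agents. All names (LeaguePlayer, pfsp_weights, sample_opponent) and the role rules are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch of league-style matchmaking (not the paper's code).
# Main agents play the whole league via prioritized fictitious self-play
# (PFSP), weighting opponents they struggle against more heavily;
# exploiters target only the current main agents.
import random
from dataclasses import dataclass, field

@dataclass
class LeaguePlayer:
    name: str
    role: str  # "main", "main_exploiter", or "league_exploiter"
    win_rate_vs: dict = field(default_factory=dict)  # opponent name -> estimated win rate

def pfsp_weights(player, opponents, p=2.0):
    # Harder opponents (lower win rate for `player`) get a larger sampling weight.
    return [(1.0 - player.win_rate_vs.get(o.name, 0.5)) ** p for o in opponents]

def sample_opponent(player, league):
    # Illustrative role rules: main exploiters attack the current main agents,
    # everyone else samples from the whole league.
    if player.role == "main_exploiter":
        candidates = [o for o in league if o.role == "main"]
    else:
        candidates = [o for o in league if o is not player]
    return random.choices(candidates, weights=pfsp_weights(player, candidates))[0]

# Each iteration only picks matchups; real training would update every
# player's network from the resulting games and periodically freeze
# snapshots of players back into the league as new opponents.
league = [
    LeaguePlayer("main_protoss", "main"),
    LeaguePlayer("main_zerg", "main"),
    LeaguePlayer("main_exploiter_1", "main_exploiter"),
    LeaguePlayer("league_exploiter_1", "league_exploiter"),
]
for player in league:
    opponent = sample_opponent(player, league)
    print(f"{player.name} vs {opponent.name}")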

Authors: Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, David Silver

https://www.deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher







Tags:
ml
ai
machine learning
reinforcement learning
deep rl
deepmind
google
starcraft
alphastar
alphago
alphazero
value function
policy
vtrace
upgo
terran
protoss
zerg
build order
strategy
pointer network
transformer
league training
league
battlenet
artificial intelligence
bot
rl
deep reinforcement learning
model-free
exploiters
self-play
fictitious self-play
rts



Other Statistics

StarCraft II Statistics For Yannic Kilcher

At this time, Yannic Kilcher has 17,359 views for StarCraft II spread across 1 video. Less than an hour's worth of StarCraft II video has been uploaded to his channel, accounting for less than 0.24% of the total video content that Yannic Kilcher has uploaded to YouTube.