A StarCraft AI Using a Deep Q-Network

Abstract: This paper presents a combat AI for StarCraft: Brood War. The action-value function of Q-learning is approximated with a Convolutional Neural Network (CNN), i.e., a Deep Q-Network (DQN), and the network is trained to control units in small-scale battles.

1. Introduction

StarCraft: Brood War *1 is a real-time strategy (RTS) game released by Blizzard Entertainment. Because RTS games pose difficult problems for AI, StarCraft AI competitions have been held since 2010 *2; in 2014, 18 AIs participated. Competition entries are built on BWAPI *3, an API for controlling the game programmatically.

In StarCraft, a player chooses one of three races, Terran, Zerg, or Protoss, and commands units and buildings. Every unit has hit points (HP) and is destroyed when its HP reaches 0.

In this work, we approximate the Q-function of Q-learning with a Convolutional Neural Network (CNN), i.e., a Deep Q-Network (DQN), and apply it to unit control in StarCraft combat.

The remainder of this paper is organized as follows. Section 2 describes StarCraft, Section 3 reviews related work, Section 4 presents the proposed method, Section 5 reports experiments, and Section 6 concludes.

*1 http://us.blizzard.com/en-us/games/sc/
*2 http://www.sscaitournament.com/
*3 http://bwapi.github.io/
2. StarCraft

In StarCraft, a player chooses one of three races, Terran, Zerg, or Protoss, each with its own units and buildings. The Terran worker unit is the SCV and the Zerg worker is the Drone. Zerg buildings must be placed on Creep, while Protoss buildings must be powered by a Pylon. Protoss units have a regenerating shield in addition to HP, and Zerg units slowly regenerate HP. A unit is destroyed when its HP reaches 0. Some units can protect or restore HP: the Terran Medic can Heal friendly units, and the Terran Science Vessel can cast Defensive Matrix, which absorbs up to 250 points of damage for one unit.

Figure 1: StarCraft.

3. Related Work

3.1 StarCraft AI Research

Research on AI for StarCraft is surveyed in [1], which covers the main approaches taken by competition AIs.
3.2 Unit Positioning with Potential Flow

Nguyen et al. [2] applied potential flow to unit positioning during combat. Unit positions are controlled by superposing complex potentials such as a uniform flow, a source/sink, and a vortex:

    w(z) = U e^{iα} z
    w(z) = Q log z   (Q > 0)
    w(z) = iγ log z

3.3 Reinforcement Learning for Kiting

Wender and Watson [3] applied reinforcement learning to small-scale combat, comparing one-step Q-learning, Watkins's Q(λ), one-step Sarsa, and Sarsa(λ). The reward is the damage dealt to the m enemy units minus the damage received by the agent:

    r_{t+1} = Σ_{i=1}^{m} (enemy_health_{i,t} − enemy_health_{i,t+1}) − (agent_health_t − agent_health_{t+1})    (1)

In an experiment where a Terran Vulture fought 6 enemy Marines, the agent learned a kiting (hit-and-run) behavior after about 1000 episodes and reached a win rate close to 100% against the built-in AI.

3.4 Monte-Carlo Planning with UCT

Wang et al. [4] applied UCT (UCB applied to Trees) to unit control. UCT descends the search tree by selecting the child node i that maximizes the UCB (Upper Confidence Bound) value

    UCB(i) = Q̄_i + C √(ln N / N_i)    (2)

where Q̄_i is the average evaluation of node i, N is the number of visits to the parent node, N_i is the number of visits to node i, and the constant C balances exploration and exploitation.
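The UCB selection rule of Eq. (2) can be sketched as follows. This is a minimal illustration, not code from [4]; the names `ucb`, `select_child`, the `stats` layout, and the constant `c` are all assumptions made for the example.

```python
import math

def ucb(mean_q, n_i, n_total, c=1.0):
    """UCB value of child i (Eq. (2)): average evaluation plus an
    exploration bonus that is large for rarely visited children."""
    if n_i == 0:
        return float("inf")  # visit untried children first
    return mean_q + c * math.sqrt(math.log(n_total) / n_i)

def select_child(stats, c=1.0):
    """Pick the child with the highest UCB value.
    `stats` maps child id -> (mean evaluation, visit count)."""
    n_total = sum(n for _, n in stats.values())
    return max(stats, key=lambda i: ucb(stats[i][0], stats[i][1], n_total, c))
```

With a large C, selection favors rarely tried children; with a small C, it favors children whose average evaluation is already high.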
In [4], the evaluation q_i^{(j)} of node i at simulation j is a weighted sum of features:

    q_i^{(j)} = ω_1 HP + ω_2 DM + ω_3 CP + ω_4 EG    (3)

where HP, DM, CP, and EG are state features and ω_n are their weights; Q̄_i in Eq. (2) is the average of the q_i values.

3.5 Deep Q-Network

The Deep Q-Network achieved strong results on Atari 2600 games [5]. DQN approximates the action-value function Q(s, a) of Q-learning with a CNN. To stabilize learning, it uses Experience Replay: transitions are stored in a Replay Memory and the CNN is trained on minibatches sampled from it. The input is the last 4 screen frames, downsampled to 110x84 grayscale images, and actions are selected epsilon-greedily. The algorithm is as follows:

( 1 ) Initialize the Replay Memory D to capacity N.
( 2 ) Initialize the Q-network with random weights.
( 3 ) For each episode:
  ( a ) Set s_1 = {x_1} and preprocess ϕ_1 = ϕ(s_1).
  ( b ) For t = 1 to T:
    ( i ) With probability ϵ, select a random action a_t.
    ( ii ) Otherwise, select a_t = argmax_a Q(ϕ(s_t), a; θ).
    ( iii ) Execute a_t and observe the reward r_t and the next frame x_{t+1}.
    ( iv ) Build s_{t+1} from s_t, a_t, x_{t+1} and preprocess ϕ_{t+1} = ϕ(s_{t+1}).
    ( v ) Store the transition (ϕ_t, a_t, r_t, ϕ_{t+1}) in D.
    ( vi ) Sample a random minibatch of transitions (ϕ_j, a_j, r_j, ϕ_{j+1}) from D.
    ( vii ) Set y_j = r_j if ϕ_{j+1} is terminal, and y_j = r_j + γ max_{a'} Q(ϕ_{j+1}, a'; θ) otherwise; perform a gradient descent step on (y_j − Q(ϕ_j, a_j; θ))^2.

With this method, DQN surpassed a human expert on games such as Breakout, Pong, and Enduro, and performed well on Space Invaders [5]. Distributed implementations of DQN have also been reported *4. However, DQN had not been applied to an RTS game such as StarCraft.

4. Proposed Method

4.1 Overview

We apply a DQN to unit control in StarCraft combat. The reward is defined from the damage dealt and the units' HP (Figure 2).

4.2 Network Structure

The network takes two inputs, terrain information and unit information, which are processed by a CNN; the structure is shown in Figure 3.

*4 http://research.preferred.jp/2015/06/distributed-deep-reinforcement-learning/
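The DQN training procedure of Section 3.5 can be sketched as follows. In the paper the state is a preprocessed game screen and Q is a CNN; here a toy 5-state chain environment and a tabular Q-function stand in for them, so that only the control flow of the algorithm (replay memory D, epsilon-greedy selection, and the minibatch target y_j) is shown. `ChainEnv`, `train`, and all parameter values are assumptions made for the example.

```python
import random
from collections import deque

class ChainEnv:
    """Toy environment: states 0..4; action 1 moves right, action 0 moves
    left. Reaching state 4 gives reward 1 and ends the episode."""
    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = max(0, min(4, self.s + (1 if a == 1 else -1)))
        done = self.s == 4
        return self.s, (1.0 if done else 0.0), done

def train(episodes=100, capacity=1000, batch=8, eps=0.2, gamma=0.9, alpha=0.5):
    env = ChainEnv()
    q = {}                           # (state, action) -> value, stand-in for the CNN
    memory = deque(maxlen=capacity)  # replay memory D with capacity N
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # steps (i)-(ii): epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q.get((s, x), 0.0))
            # steps (iii)-(v): act, observe, and store the transition in D
            s2, r, done = env.step(a)
            memory.append((s, a, r, s2, done))
            s = s2
            # steps (vi)-(vii): sample a minibatch and update toward y_j
            for sj, aj, rj, sj2, dj in random.sample(memory, min(batch, len(memory))):
                yj = rj if dj else rj + gamma * max(q.get((sj2, x), 0.0) for x in (0, 1))
                q[(sj, aj)] = q.get((sj, aj), 0.0) + alpha * (yj - q.get((sj, aj), 0.0))
    return q
```

Replaying stored transitions breaks the correlation between consecutive samples, which is the point of Experience Replay; in the tabular stand-in it also lets reward information propagate backward through the chain even while the agent is elsewhere.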
Figure 3: Structure of the DQN. Terrain information and unit information are fed to a CNN, which is trained with Q-learning and outputs an action.

The DQN acts as follows:
( 1 ) The terrain around the unit is discretized into a 32x32 grid, from which an 8x8 region centered on the unit is extracted as the terrain information.
( 2 ) The terrain information and the unit information, including HP, are processed by the CNN.
( 3 ) The network outputs a Q-value for each of the 9 actions, and the action with the highest value is selected.

4.3 Reward

Let cause_damage(i, t) be the damage unit i inflicted at time t and unit_health(i, t) be unit i's HP at time t. The per-unit reward is the damage dealt minus the HP lost:

    unit_reward(i, t) = cause_damage(i, t) − {unit_health(i, t) − unit_health(i, t + 1)}    (4)

The reward given to unit i also reflects its allies:

    reward(i, t) = (2/3) unit_reward(i, t) + (1/3) Σ_{j≠i} unit_reward(j, t)    (5)

Actions during training are selected epsilon-greedily, and the opponent is the built-in AI.

4.3.1 Movement

For movement, a field D_enemy(x, y) is computed over the map from the enemy unit positions (Figure 4), and a path is planned with A* search based on D_enemy(x, y).

4.4 Experimental Setup

In the experiment, 8 Marines controlled by the proposed method fight 8 enemy Marines (Figure 5).

5. Experiments

Table 1 lists the experimental environment: an Intel Core i7-6700K CPU, a Palit NE5XTIX015KB-PG600F GPU (GeForce GTX TITAN X, 12 GB), and Windows 10. The StarCraft side is implemented in C++ with BWAPI.
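The reward design of Eqs. (4) and (5) can be sketched as follows. This is an illustration, not the paper's implementation: the function and parameter names are assumptions, and it assumes the second term of Eq. (5) sums unit_reward over every other friendly unit j ≠ i.

```python
def unit_reward(cause_damage, hp_before, hp_after):
    """Eq. (4): damage the unit inflicted this step minus the HP it lost."""
    return cause_damage - (hp_before - hp_after)

def reward(i, unit_rewards):
    """Eq. (5): two thirds of unit i's own reward plus one third of the
    summed rewards of the other friendly units (all j != i)."""
    own = unit_rewards[i]
    others = sum(r for j, r in enumerate(unit_rewards) if j != i)
    return (2.0 / 3.0) * own + (1.0 / 3.0) * others
```

Mixing in the allies' rewards gives each unit an incentive to help the group rather than only to preserve its own HP.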
The learning side is implemented in Python with Chainer *5, and the game and the learner communicate via MessagePack-RPC *6.

Figures 6 through 9 show the progress of training and the resulting behavior. As training proceeded, the learned units defeated the built-in AI while keeping more of their remaining HP (Figure 10).

*5 http://chainer.org/
*6 https://github.com/msgpack-rpc
6. Conclusion

We applied a Deep Q-Network to unit control in StarCraft: Brood War combat, describing the network inputs, the reward design, and experiments against the built-in AI.

References
[1] Survey of StarCraft AI research (in Japanese), 10 (2015).
[2] Tung, N., Kien, N. and Ruck, T.: Potential flow for unit positioning during combat in StarCraft, IEEE 2nd Global Conference on Consumer Electronics (GCCE 2013), IEEE, pp. 10-11 (2013).
[3] Wender, S. and Watson, I.: Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft: Broodwar, IEEE Conference on Computational Intelligence and Games (CIG 2012), IEEE, pp. 402-408 (2012).
[4] Zhe, W., Kien, Q. N., Ruck, T. and Frank, R.: Monte-Carlo planning for unit control in StarCraft, The 1st IEEE Global Conference on Consumer Electronics 2012, pp. 263-264 (2012).
[5] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D. and Riedmiller, M.: Playing Atari with Deep Reinforcement Learning, NIPS Deep Learning Workshop (2013).