
Contents: Markov Decision Processes (MDP); Progressive Widening; Double Progressive Widening; Weighted UCT; Trap Problem; Treasure Hunt Problem

1  Markov Decision Processes

A Markov decision process (MDP) describes an agent interacting with an environment through a sequence of states s_0, s_1, s_2, ..., actions A_0, A_1, A_2, ..., and rewards R_0, R_1, R_2, ....

Figure 1.1: An MDP [1]. Starting from state s_0, the agent takes action A_0, receives reward R_0, and moves to state s_1, and so on. Writing x for a state, a for an action, and r for a reward, an MDP has a state set X, an action set A, and transition probabilities P. The objective is the cumulative discounted reward

    r_sum = Σ_{t=0}^{∞} γ^t r_t    (1.1)

where the discount factor γ satisfies 0 ≤ γ ≤ 1: with γ < 1 the sum converges, while γ = 1 is used for MDPs that terminate in finite time.
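As a small illustration (not part of the thesis itself), the sum of Eq. (1.1) can be computed directly for a finite reward sequence; the episode is assumed to terminate so the series is finite.

```python
# Discounted cumulative reward r_sum = sum_t gamma^t * r_t from Eq. (1.1),
# truncated to a finite episode (assumption: the reward sequence ends).
def discounted_return(rewards, gamma):
    total = 0.0
    for t, r in enumerate(rewards):
        total += (gamma ** t) * r
    return total
```

For example, rewards [1, 1, 1] with γ = 0.5 give 1 + 0.5 + 0.25 = 1.75.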

2  Related Work

Deep reinforcement learning has produced strong results on sequential decision problems: Go combined with tree search [2], and video games [3][4][5], where [3][4] learn directly from Atari 2600 RGB screen input and [5] also handles continuous control in 3D MuJoCo environments, trained end-to-end. The predictron learns a Markov reward process (MRP) model end-to-end [6]. General game playing studies programs that must handle previously unseen games [7].

2.1  A simulation (playout) policy decides the actions taken during random rollouts; Adaptive Playout [8] improves this policy online during the search itself.

Such playout improvements [8] complement Monte Carlo tree search [13] and UCT [14].

2.3  The multi-armed bandit problem [9] can be viewed as an MDP with a single state, and bandit algorithms are the selection rules at the heart of these search methods.

Well-known bandit algorithms include Thompson Sampling [10] and the UCB family (UCB1, UCB1-tuned, UCB2) [11]. Thompson Sampling [10] maintains a distribution D_a over the value of each arm a, draws a sample v_a from D_a, and selects

    a_TS = argmax_{a ∈ A} v_a,  v_a ~ D_a    (2.1)

where the rewards are assumed to lie in [0, 1] [10]. UCB1 [11] selects the arm with the highest optimistic value estimate. With N_a the number of times arm a has been pulled, R_a its cumulative reward, and N = Σ_a N_a the total number of pulls,

    v_a = R_a / N_a + C_UCB √( 2 ln N / N_a )    (2.2)

    a_UCB1 = argmax_{a ∈ A} v_a    (2.3)

where C_UCB is an exploration constant. UCB1 embodies the principle of optimism in the face of uncertainty (OFU) [12].

2.4  Monte Carlo tree search [13] repeats four steps per simulation: Selection, Expansion, Simulation, and Backpropagation (Figure 2.1).
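Before turning to tree search, the UCB1 rule of Eqs. (2.2)-(2.3) can be sketched as follows; the dictionaries N and R are illustrative data structures, not from the thesis.

```python
import math

# Hedged sketch of UCB1 arm selection (Eqs. 2.2-2.3): pick the arm
# maximizing empirical mean plus an exploration bonus. N[a] is the pull
# count of arm a, R[a] its cumulative reward, c_ucb the constant C_UCB.
def ucb1_select(N, R, c_ucb):
    n_total = sum(N.values())

    def score(a):
        if N[a] == 0:
            return float('inf')  # unpulled arms are tried first
        return R[a] / N[a] + c_ucb * math.sqrt(2 * math.log(n_total) / N[a])

    return max(N, key=score)
```

With a large enough bonus, a rarely pulled arm is preferred even when its empirical mean is lower, which is the OFU behavior described above.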

Figure 2.1: The four steps of Monte Carlo tree search [13].

In the Simulation step, UCT [14] applies UCB1 at every tree node. Algorithm 1 shows one UCT simulation, where transition(x, a) samples a successor state and reward for action a in state x and γ is the discount factor.

Algorithm 1  UCT
  N_{x,a}: visit count of action a in state x;  R_{x,a}: its cumulative reward;  N_x = Σ_{a ∈ A_x} N_{x,a}
  function simulationInUCT(x)
      if x is terminal then return its value
      a_sim ← argmax_{a ∈ A_x} ( R_{x,a} / N_{x,a} + C_UCB √( 2 ln N_x / N_{x,a} ) )
      x′, r ← transition(x, a_sim)
      r_sim ← r + γ · simulationInUCT(x′)
      N_x ← N_x + 1;  N_{x,a_sim} ← N_{x,a_sim} + 1;  R_{x,a_sim} ← R_{x,a_sim} + r_sim
      return r_sim
  After the search, the action with the largest count N_{x_0,a} at the root x_0 is played.
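A minimal executable sketch of Algorithm 1 follows. The `transition`, `actions`, and `is_terminal` callbacks are problem-specific assumptions, states must be hashable, terminal states are assumed to return 0, and the `max_depth` cap is an added safeguard not present in Algorithm 1.

```python
import math

# Sketch of one UCT simulation (Algorithm 1): recursive UCB1 selection,
# sampled transition, discounted backup. Statistics live in a dict:
# tree = {'N': {(x, a): count}, 'R': {(x, a): reward}, 'Nx': {x: count}}.
def uct_simulate(x, tree, transition, actions, is_terminal, gamma, c_ucb,
                 depth=0, max_depth=20):
    if is_terminal(x) or depth >= max_depth:
        return 0.0  # assumption: terminal value 0
    N, R, Nx = tree['N'], tree['R'], tree['Nx']
    n_x = Nx.get(x, 0)

    def score(a):
        n = N.get((x, a), 0)
        if n == 0:
            return float('inf')  # untried actions first
        return R[(x, a)] / n + c_ucb * math.sqrt(2 * math.log(n_x) / n)

    a = max(actions(x), key=score)
    x2, r = transition(x, a)
    r_sim = r + gamma * uct_simulate(x2, tree, transition, actions,
                                     is_terminal, gamma, c_ucb,
                                     depth + 1, max_depth)
    # backup, mirroring the update lines of Algorithm 1
    Nx[x] = n_x + 1
    N[(x, a)] = N.get((x, a), 0) + 1
    R[(x, a)] = R.get((x, a), 0.0) + r_sim
    return r_sim
```

Calling this repeatedly from the root grows the statistics that the final most-visited-action rule reads off.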

2.5  UCT assumes a finite action set. For continuous settings, bandit methods such as GP-UCB [15] and contextual Gaussian process bandits [16] have been studied. Weighted UCB (WUCB) [17] generalizes UCB1 with a kernel k((state, action), (state, action)) over state-action pairs; plain UCB1 is the special case where k((s_1, a_1), (s_2, a_2)) = 0 whenever a_1 ≠ a_2. Given the history P of observed triples (y, b, r), WUCB defines

    n(x, a) = Σ_{(y,b,r) ∈ P} k((x, a), (y, b))    (2.4)

    r(x, a) = Σ_{(y,b,r) ∈ P} k((x, a), (y, b)) · r    (2.5)

    n(x) = Σ_{a ∈ A} n(x, a)    (2.6)

    v_a = r(x, a) / n(x, a) + C_UCB √( 2 ln n(x) / n(x, a) )    (2.7)

    a_UCB1 = argmax_{a ∈ A} v_a    (2.8)

2.6  For MDPs with continuous state or action spaces, the rest of this chapter reviews Double Progressive Widening and Weighted UCT, which Chapter 4 compares with the proposed method.
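The kernel-smoothed statistics of Eqs. (2.4)-(2.7) can be sketched as below; the kernel `k` is supplied by the caller, and the indicator kernel used in the test reduces the scores to ordinary UCB1.

```python
import math

# Sketch of WUCB scoring (Eqs. 2.4-2.7): counts and rewards are smoothed
# over the whole history P of (state, action, reward) triples via a
# kernel k((x, a), (y, b)) chosen by the caller.
def wucb_scores(P, x, actions, k, c_ucb):
    n = {a: 0.0 for a in actions}  # Eq. (2.4)
    r = {a: 0.0 for a in actions}  # Eq. (2.5)
    for (y, b, rew) in P:
        for a in actions:
            w = k((x, a), (y, b))
            n[a] += w
            r[a] += w * rew
    n_x = sum(n.values())          # Eq. (2.6)
    scores = {}
    for a in actions:
        if n[a] == 0:
            scores[a] = float('inf')
        else:                      # Eq. (2.7)
            scores[a] = r[a] / n[a] + c_ucb * math.sqrt(2 * math.log(n_x) / n[a])
    return scores
```

Note the O(#A · #P) cost per selection, which is the source of WUCT's quadratic total cost discussed later.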

2.6.1  Progressive Widening

Progressive Widening (PW) handles large or continuous action sets by growing the candidate set A_x gradually instead of applying UCB1 to every action at once. Algorithm 2 shows UCT with PW (UCT+PW); newAction(x) samples a new candidate action for state x, and α is the widening parameter.

Algorithm 2  UCT + PW
  A_x: candidate action set of state x
  function simulationInUCT-PW(x)
      if x is terminal then return its value
      if N_x^α ≥ #A_x then A_x ← A_x ∪ { newAction(x) }
      a_sim ← argmax_{a ∈ A_x} ( R_{x,a} / N_{x,a} + C_UCB √( 2 ln Σ_{a ∈ A_x} N_{x,a} / N_{x,a} ) )
      x′, r ← transition(x, a_sim)
      r_sim ← r + γ · simulationInUCT-PW(x′)
      N_x ← N_x + 1;  N_{x,a_sim} ← N_{x,a_sim} + 1;  R_{x,a_sim} ← R_{x,a_sim} + r_sim
      return r_sim

With 0 < α < 1, the candidate set grows as N_x^α. PW alone does not handle continuous successor states: under stochastic transitions every sampled successor is distinct, so statistics never accumulate below an action. Double Progressive Widening (DPW) [18] applies the same widening idea to the sampled successor states as well, and UCT with DPW (UCT+DPW) was shown in [18] to outperform plain UCT on continuous problems.
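The widening test of Algorithm 2 can be sketched in isolation; `new_action` is a problem-specific sampler and is an assumption of this sketch.

```python
# Sketch of the Progressive Widening step (Algorithm 2): a new candidate
# action is added only when n_x^alpha has caught up with the current
# number of candidates, so the set grows like n_x^alpha.
def maybe_widen(A_x, n_x, alpha, new_action):
    if n_x ** alpha >= len(A_x):
        A_x.append(new_action())
    return A_x
```

With α = 0.5, for example, roughly √N_x candidates exist after N_x visits.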

Algorithm 3 shows DPW [18], with the action-widening parameter α of PW and an additional state-widening parameter β.

Algorithm 3  DPW
  M_{x,a}: set of sampled successors (x′, r) of action a in state x;  N_{x,a,(x′,r)}: visit count of successor (x′, r)
  function simulationInDPW(x)
      if x is terminal then return its value
      if N_x^α ≥ #A_x then A_x ← A_x ∪ { newAction(x) }
      a_sim ← argmax_{a ∈ A_x} ( R_{x,a} / N_{x,a} + C_UCB √( 2 ln Σ_{a ∈ A_x} N_{x,a} / N_{x,a} ) )
      if N_{x,a_sim}^β ≥ #M_{x,a_sim} then
          M_{x,a_sim} ← M_{x,a_sim} ∪ { transition(x, a_sim) }
      end if
      x′, r ← drawn from M_{x,a_sim} with probability N_{x,a_sim,(x′,r)} / Σ_{(x′,r)} N_{x,a_sim,(x′,r)}
      r_sim ← r + γ · simulationInDPW(x′)
      N_x ← N_x + 1;  N_{x,a_sim} ← N_{x,a_sim} + 1;  R_{x,a_sim} ← R_{x,a_sim} + r_sim;  N_{x,a_sim,(x′,r)} ← N_{x,a_sim,(x′,r)} + 1
      return r_sim

2.6.2  DPW thus keeps UCT's structure but widens on two levels, actions and successor states. WUCT, described next, instead replaces UCB1 with WUCB.
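The state-side widening of Algorithm 3 can be sketched as follows; the `children` bookkeeping and `sample_transition` callback are assumptions of this sketch, not the thesis' data structures.

```python
import random

# Sketch of DPW's successor handling (Algorithm 3): a fresh transition is
# sampled only while n_xa^beta keeps up with the number of distinct
# children; otherwise an existing child is re-drawn in proportion to its
# visit count. children maps (x2, r) -> visit count for one (x, a) pair.
def dpw_next_state(children, n_xa, beta, sample_transition, rng=random):
    if n_xa ** beta >= len(children):
        x2, r = sample_transition()
        children.setdefault((x2, r), 0)   # new child starts unvisited
        return x2, r
    total = sum(children.values())
    pick = rng.uniform(0, total)
    acc = 0.0
    for (x2, r), c in children.items():
        acc += c
        if pick <= acc:
            return x2, r
    return x2, r  # numerical fallback
```

Re-drawing in proportion to visit counts is what lets statistics accumulate below an action despite a continuous successor space.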

2.6.3  Weighted UCT

DPW discretizes the action space as it widens. Weighted UCT (WUCT) [17] instead replaces UCB1 in UCT with WUCB, using the kernel-smoothed estimates of Section 2.5 directly over continuous state-action pairs. Algorithm 4 shows one WUCT simulation; P is the global history of (state, action, reward) triples, shared across the tree in place of per-node tables.

Algorithm 4  WUCT
  P: history of (state, action, reward) triples
  function simulationInWUCT(x)
      if x is terminal then return its value
      n ← 0;  r ← 0
      for (y, a, r′) ∈ P do
          n[a] ← n[a] + k((x, a), (y, a))
          r[a] ← r[a] + k((x, a), (y, a)) · r′
      end for
      n_sum ← Σ_{a ∈ A} n[a]
      a_sim ← argmax_{a ∈ A} ( r[a] / n[a] + C_UCB √( 2 ln n_sum / n[a] ) )
      x′, r ← transition(x, a_sim)
      r_sim ← r + γ · simulationInWUCT(x′)
      P ← P ∪ { (x, a_sim, r_sim) }
      return r_sim

DPW [18] and WUCT [17] are the existing approaches for continuous MDPs against which the method proposed next is compared.

3  Proposed Method

3.1  The proposed method organizes continuous states with a state tree built on top of UCT. A state x ∈ [0, 1) is covered by a hierarchy of cells: depth 0 is the whole interval and each deeper level halves its cells, so depth d partitions the space into 2^d cells (Algorithm 5, Section 3.4). Each cell s stores, for every action a, a visit count N_{s,a} and a cumulative reward R_{s,a}, with N_s the visit count of the cell itself.

For each depth d, a descent threshold th(d) and a weight w(d) are defined. For a state x and action a, let S be the set of cells containing x (one per depth) and d(s) the depth of cell s. The weighted statistics aggregate the cell tables N_{s,a}, R_{s,a}:

    ñ(x, a) = Σ_{s ∈ S} w(d(s)) N_{s,a}    (3.1)

    r̃(x, a) = Σ_{s ∈ S} w(d(s)) R_{s,a}    (3.2)

and, over the action set A of x,

    ñ(x) = Σ_{a ∈ A} ñ(x, a)    (3.3)

Two nearby states x_0 and x_1 share the shallow cells of S, so ñ(x) = Σ_{s ∈ S} w(d(s)) N_s pools their information. Because the weights w(d) distort ñ(x, a) and r̃(x, a) relative to the true number of simulations through x, they cannot be fed to the UCB bonus directly; a rescaling function f(n) converts them into effective UCT counts:

    n(x) = f(ñ(x))    (3.4)

    n(x, a) = ñ(x, a) · n(x) / ñ(x)    (3.5)

    r(x, a) = r̃(x, a) · n(x) / ñ(x)    (3.6)

The action is then chosen by UCB1 [11]:

    a_try = argmax_{a ∈ A} ( r(x, a) / n(x, a) + C_UCB √( 2 ln n(x) / n(x, a) ) )    (3.7)

where C_UCB is the UCB1 exploration constant.
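A sketch of the rescaling of Eqs. (3.4)-(3.7) follows. The concave default f(n) = n^(2/3) mirrors the experimental setting of Chapter 4, but the exact exponent there is a reading of a garbled superscript and should be treated as an assumption; the dictionaries n_tilde, r_tilde stand for the already-aggregated Eqs. (3.1)-(3.2).

```python
import math

# Sketch of Eqs. (3.4)-(3.7): depth-weighted raw statistics are rescaled
# into effective counts n(x, a) and rewards r(x, a), then fed to UCB1.
# n_tilde[a], r_tilde[a]: weighted count and reward per action (Eqs. 3.1-3.2).
def select_action(n_tilde, r_tilde, c_ucb, f=lambda n: n ** (2.0 / 3.0)):
    n_tilde_x = sum(n_tilde.values())   # Eq. (3.3); must be > 0
    n_x = f(n_tilde_x)                  # Eq. (3.4)
    scale = n_x / n_tilde_x

    def score(a):
        n_a = n_tilde[a] * scale        # Eq. (3.5)
        r_a = r_tilde[a] * scale        # Eq. (3.6)
        return r_a / n_a + c_ucb * math.sqrt(2 * math.log(n_x) / n_a)

    return max(n_tilde, key=score)      # Eq. (3.7)
```

Note that the empirical mean r(x, a)/n(x, a) is unchanged by the rescaling; only the exploration bonus is affected, which is exactly the role of f.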

Algorithm 5  Search with the state tree
  N_{s,a}, R_{s,a}: visit count and cumulative reward of action a in cell s;  N_s: visit count of cell s
  nextState(state, action);  parameters w(depth), th(depth), f(frequency);  priors N_prior, R_prior
  function simulation(x)
      if x is terminal then return its value
      A ← actions of x
      ñ ← array of #A entries initialized to N_prior;  r̃ ← array of #A entries initialized to R_prior
      s ← root cell
      _, r_sim ← simulationOnStateTree(x, 0, s, A, ñ, r̃)
      return r_sim
  function simulationOnStateTree(x, d, s, A, ñ, r̃)
      W ← w(d)
      for i ← 0 to #A − 1 do
          a ← A[i];  ñ[i] ← ñ[i] + W · N_{s,a};  r̃[i] ← r̃[i] + W · R_{s,a}
      end for
      if N_s ≥ th(d) and s does not yet isolate x then
          s′ ← child of s containing x
          a_sim, r_sim ← simulationOnStateTree(x, d + 1, s′, A, ñ, r̃)
      else
          ñ_sum ← Σ_i ñ[i];  n_sum ← f(ñ_sum)
          n[i] ← ñ[i] · n_sum / ñ_sum;  r[i] ← r̃[i] · n_sum / ñ_sum  (for all i)
          a_sim ← A[ argmax_{i ∈ 0..#A−1} ( r[i] / n[i] + C_UCB √( 2 ln n_sum / n[i] ) ) ]
          x′, r ← transition(x, a_sim)
          r_sim ← r + γ · simulation(x′)
      end if
      N_{s,a_sim} ← N_{s,a_sim} + 1;  R_{s,a_sim} ← R_{s,a_sim} + r_sim;  N_s ← N_s + 1
      return a_sim, r_sim

Algorithm 6 is a second variant: it is identical to Algorithm 5 except that the descent test compares the per-action count N_{s,a} with th(d) instead of the cell count N_s, and N_s is not incremented during backup.

The overall search is shown in Algorithm 7.

Algorithm 7  Top-level loop
  rootAction(state): root-level selection rule (e.g., UCB1)
  while time remains do
      a_root ← rootAction(x_root)
      x′_root, r_root ← transition(x_root, a_root)
      r_sim ← r_root + γ · simulation(x′_root)
      update the root statistics of a_root with r_sim
  end while
  return the best root action

Computational cost. WUCB, and hence WUCT, scans the entire sample history: with n simulations and #A actions, one simulation costs O(#A · n), so a whole search costs O(n²) (up to a factor O(#A)). The cost of the proposed method depends on th(d):
1. If th(d) is chosen so that the tree never descends, the search degenerates to bandit selection over a single cell and costs only O(n) in total, but the state is then ignored, as in a single-state MDP.
2. If th(d) increases with the depth d, each level must be visited th(d) times before the next level opens, so the tree deepens only gradually.

In that case the number of tree levels grows only as O(log n) in the number of simulations n, whereas WUCT pays O(n) per simulation. Updating the rescaled statistics of Section 3.4 costs O(1) per action entry and O(#A) per level; with MDP search depth D and state-tree depth d, one simulation of the proposed method costs O(Dd), independent of n, in contrast to WUCT.

4  Experiments

The proposed method is compared with DPW and WUCT on two benchmark problems taken from [18][19], using the discount factor γ of those papers.

4.1  Trap Problem

The Trap Problem [18] is a one-dimensional task. The state starts at x_0 = 0, and at each step t the agent chooses an action a_t; the discrete baselines use eight actions a_i, 0 ≤ i < 8 [18]. A continuous action a lies in [0, 1], a noise term d is drawn uniformly from [0, R), and the state moves to x′ = x + a + d. The reward depends on the reached position x: the goal region pays off while the trap region l ≤ x < h yields 0. An episode runs from t = 0 to t = 2. Two noise levels are used, R = 0.01 and R = 0.1 [18].

Plain UCT struggles here [18]: the trap around x = 1.7 at t = 2 means that greedy advances at t = 0 and t = 1 risk landing in the trap, so the search must trade progress against safety. DPW and WUCT are configured as in their original papers: the PW baselines use the eight discrete actions, DPW uses C_UCB = 1, α = 0.3, β = 0.25, and WUCT uses C_UCB = 1. For WUCT (Algorithm 4), the kernel over past samples P_i = (x_i, a_i, r_i) uses an adaptive bandwidth:

    σ_{i,a} = 0.5 ( 1 + Σ_{j ∈ 0..i−1} k((x, a), (x_j, a_j)) )^{−1/2}    (4.1)

    k((x, a), (x_i, a_i)) = exp( −(x − x_i)² / (2 σ_{i,a}²) ) / √( 2π σ_{i,a}² )    (4.2)

The bandwidth starts at σ = 0.5 and shrinks as samples accumulate near a point. A naive kernel evaluation scans the whole history at cost O(#P); in the implementation the sums n[a] of Algorithm 4 are maintained incrementally.
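A sketch of Eqs. (4.1)-(4.2) follows; the Gaussian normalization is a reading recovered from the garbled source and should be treated as an assumption, and the kernel here acts on the state coordinate only.

```python
import math

# Sketch of the adaptive-bandwidth Gaussian kernel of Eqs. (4.1)-(4.2).
# history: earlier (x_j, a_j) pairs; the bandwidth shrinks as weighted
# mass accumulates near the query (Eq. 4.1).
def adaptive_sigma(history, x, a, k, base=0.5):
    s = sum(k((x, a), (xj, aj)) for (xj, aj) in history)
    return base * (1.0 + s) ** -0.5

def gaussian_kernel(sigma):
    # Normal density on the state coordinate (Eq. 4.2); the action part
    # of each pair is ignored in this one-dimensional sketch.
    def k(p, q):
        (x, _a), (xi, _ai) = p, q
        return math.exp(-(x - xi) ** 2 / (2 * sigma ** 2)) \
            / math.sqrt(2 * math.pi * sigma ** 2)
    return k
```

With an empty history the bandwidth is the base 0.5, and each coincident sample strictly shrinks it, matching the description above.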

The proposed method uses C_UCB = 1, th(d) = d, w(d) = (d + 1)², and f(ñ_sum) = ñ_sum^{2/3}. The state tree covers the reachable interval [−2R, 2 + 2R]. At the root (t = 0) every method selects among its candidates with plain UCB1 (C_UCB = 1). All runs used an Intel Core i7-4790K (4.00 GHz), with per-move time limits of 1, 10, 100, and 1000 ms for both noise levels R = 0.01 and R = 0.1.

Figures 4.1 and 4.2 show the results for R = 0.01 and R = 0.1, comparing the proposed method with WUCT and DPW at each time limit from 10 ms to 1000 ms.

4.2  Treasure Hunt Problem

The Treasure Hunt Problem [19] is a two-dimensional navigation task on the square [0, D]². The agent starts at (0, 0). A treasure region of size G in the far corner, D − G ≤ x ≤ D and D − G ≤ y ≤ D, yields reward R_treasure; a hole of size H around the center, (D − H)/2 ≤ x ≤ (D + H)/2 and (D − H)/2 ≤ y ≤ (D + H)/2, yields the penalty R_hole. An action is a direction θ with step length A, perturbed by noise ε (Figure 4.3); each step costs R_time, and an episode ends at t = T [19]. Positions are clipped to the square:

    x_{t+1} = max(0, min(x′, D))
    y_{t+1} = max(0, min(y′, D))

Experiments use D = 5 and D = 15 with G = 1, H = D/2, A = 1, ε = 1, R_treasure = 1, R_hole = −0.5, and horizon T as in [19]. PW and DPW use α = 0.2, with exploration constant C_UCB = 0.3 here against 1 on the Trap Problem; WUCT keeps C_UCB = 1 as on the Trap Problem. The Trap Problem kernel (4.2) is extended to two dimensions by replacing (x − x_i)² with (x − x_i)² + (y − y_i)².

The base bandwidth 0.5 of Eq. (4.1) is scaled by D to match the domain ([0, D], [0, D]). As on the Trap Problem, WUCT is substantially slower per simulation than the other methods, so results are compared both per time limit and per simulation count; DPW re-plans at every step from t = 1 onward. For D = 5 and D = 15, per-move time limits of 1, 10, 100, and 1000 ms were tested.

Figures 4.4-4.7 show the results for D = 5 and D = 15. The comparison between the proposed method, DPW, and WUCT is given at each time limit from 1 ms to 1000 ms for both board sizes, with the differences largest at the shortest limits.

Figure 4.1: Results on the Trap Problem.

Figure 4.2: Results on the Trap Problem.

Figure 4.3: The Treasure Hunt Problem [19].

Figure 4.4: Results on the Treasure Hunt Problem.

Figure 4.5: Results on the Treasure Hunt Problem.

Figure 4.6: Results on the Treasure Hunt Problem.
Figure 4.7: Results on the Treasure Hunt Problem.

5  Application to Digital Curling

Digital curling [21][22] provides a continuous MDP on which the proposed method is evaluated.

Figure 5.1: Digital curling. The simulator is built on the Box2D physics engine [23].

5.3  Prior curling programs include game-tree approaches [24][25] and expectimax search [26]. Yee et al. proposed kernel regression UCT (KR-UCT) [27], which, like this work, targets continuous actions with execution uncertainty end-to-end.

5.4  Curling as an MDP: a state x is the stone configuration, and an action a is the initial shot, given by the velocity (v_x, v_y) and rotation w.

A state records the (x, y) coordinates of up to 15 stones plus the shot number, and states are looked up with Zobrist hashing [29]. For cell subdivision, the x and y coordinates are discretized at depth d down to millimeter resolution over the roughly 15.8 m playing area.

The state-tree parameters at depth d are

    th(d) = 1.3^d    (5.1)

    w(d) = (d + 1)^4    (5.2)

so the descent threshold grows geometrically with depth and the weight w(d) emphasizes deep cells in ñ(x) and n(x). The rescaling function is

    f(ñ_sum) = ñ_sum^{0.4}    (5.3)

with the UCB1 constant C_UCB set as in the experiments below. The statistics N_{s,a}, R_{s,a}, N_s and the functions w(depth), th(depth), f(frequency) follow Algorithm 5; the discount is γ = 1 and transition(state, action) is the simulator. An action newly tried in a cell starts from the prior count N_new, and when a cell of width 2l is subdivided each child has width l.

Algorithm 8  Search for digital curling
  A_s: actions already tried in cell s;  R_s: cumulative reward of cell s;  priors N_new, N_policy
  function simulationInCurling(x)
      if x is terminal then return the score of x
      A_x ← candidate actions for x;  ñ ← 0;  r̃ ← 0;  s ← root cell
      _, r_sim ← simulationOnStateTreeInCurling(x, 0, s, A_x, ñ, r̃)
      return r_sim
  function simulationOnStateTreeInCurling(x, d, s, A_x, ñ, r̃)
      W ← w(d)
      for a ∈ A_x do
          if a ∈ A_s then ñ[a] ← ñ[a] + W · N_{s,a};  r̃[a] ← r̃[a] + W · R_{s,a}
          else ñ[a] ← ñ[a] + W · N_new;  r̃[a] ← r̃[a] + R_s / N_s
      end for
      if N_s ≥ th(d) and s does not yet isolate x then descend to the child s′ containing x (resetting N_{s′} ← 0 on creation)
      if the search stays at s then
          ñ_sum ← Σ_a ñ[a];  n_sum ← f(ñ_sum)
          n[a] ← ñ[a] · n_sum / ñ_sum;  r[a] ← r̃[a] · n_sum / ñ_sum
          n_sum ← n_sum + N_policy · #A_x
          a_sim ← argmax_{a ∈ A_x} ( r[a] / n[a] + C_UCB √( 2 ln n_sum / n[a] ) )
          r_sim ← simulationInCurling(transition(x, a_sim))
      else
          a_sim, r_sim ← simulationOnStateTreeInCurling(x, d + 1, s′, A_x, ñ, r̃)
      end if
      if a_sim ∉ A_s then N_{s,a_sim} ← N_new;  R_{s,a_sim} ← 0;  N_s ← N_s + N_new
      N_{s,a_sim} ← N_{s,a_sim} + 1;  R_{s,a_sim} ← R_{s,a_sim} + r_sim
      N_s ← N_s + 1;  R_s ← R_s + r_sim
      return a_sim, r_sim

Algorithm 8 differs from Algorithm 5 in three points: (1) an action not yet tried in cell s is given the prior count N_new and the cell-average reward R_s/N_s, weighted by W; (2) N_policy · #A_x is added to n_sum so that the candidate-generating policy retains influence on n(x, a) and r(x, a); (3) shot candidates of width l are generated and subdivided following [24].

Figure 5.2: Shot candidates, following [24]. Candidates are generated from the evaluation functions Q_p and Q_f of [24]; [24][25] search the game tree of digital curling, with [25] applying UCB.

Root-level search is shown in Algorithm 9: every root candidate is first expanded without noise via transitionWithoutNoise(state, action), then evaluated in round-robin order until the simulation budget C_limit is spent.

Algorithm 9  Root search
  allRootActions(state);  candidateRootActions(state);  budget C_limit
  c ← 0;  A_all ← allRootActions(x);  N_root ← zeros(#A_all);  R_root ← zeros(#A_all)
  while c < C_limit do
      seq ← permutation of 0 .. #A_all − 1
      for i ← 0 to #A_all − 1 do
          x′ ← transitionWithoutNoise(x, A_all[seq[i]])
          N_root[seq[i]] ← N_root[seq[i]] + 1
          R_root[seq[i]] ← R_root[seq[i]] + simulationInCurling(x′)
          c ← c + 1
      end for
  end while
  the final action is chosen from A_cand ← candidateRootActions(x) using the collected statistics with UCB1 [11] (cf. the variance-based bound of [28])

The program was evaluated on the Box2D-based digital curling simulator against the GAT (2016) competition setting.

Table: Results (2 ) (p = )
Table 5.3: Results (10 ) (p = )

6  Conclusion

This thesis proposed a state-tree search method for continuous MDPs and compared it with WUCT and DPW (Chapters 2 and 4) on the Trap Problem and the Treasure Hunt Problem. Future work includes a comparison with DPW-RAVE [30], which combines DPW with RAVE and modifies the Backpropagation step, and which outperformed DPW on the D = 5 Treasure Hunt Problem [30].

Further future work includes scaling the comparison to larger instances such as the Treasure Hunt Problem with D = 15.


References

[1] Poole, D., Mackworth, A.: Artificial Intelligence (Japanese edition)
[2] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., Hassabis, D.: Mastering the game of Go with deep neural networks and tree search, Nature, Vol. 529 (2016)
[3] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning, Nature, Vol. 518 (2015)
[4] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Harley, T., Lillicrap, T. P., Silver, D., Kavukcuoglu, K.: Asynchronous methods for deep reinforcement learning, International Conference on Machine Learning (2016)
[5] Wang, Z., Bapst, V., Heess, N., Mnih, V., Munos, R., Kavukcuoglu, K., de Freitas, N.: Sample efficient actor-critic with experience replay, ICLR 2017 (2017)
[6] Silver, D., van Hasselt, H., Hessel, M., Schaul, T., Guez, A., Harley, T., Dulac-Arnold, G., Reichert, D., Rabinowitz, N., Barreto, A., Degris, T.: The predictron: End-to-end learning and planning (2017)
[7] Genesereth, M., Thielscher, M.: General game playing, Synthesis Lectures on Artificial Intelligence and Machine Learning, 8(2) (2014)
[8] Graf, T., Platzer, M.: Adaptive Playouts in Monte Carlo Tree Search with Policy Gradient Reinforcement Learning, Advances in Computer Games, Lecture Notes in Computer Science, Springer, Cham (2015)
[9] Robbins, H.: Some aspects of the sequential design of experiments, Bull. Amer. Math. Soc. 58, No. 5 (1952)
[10] Thompson, W. R.: On the likelihood that one unknown probability exceeds another in view of the evidence of two samples, Biometrika, 25(3-4) (1933)

[11] Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time Analysis of the Multiarmed Bandit Problem, Machine Learning, Vol. 47 (2002)
[12] Lai, T. L., Robbins, H.: Asymptotically efficient adaptive allocation rules, Advances in Applied Mathematics, Vol. 6 (1985)
[13] Browne, C., Powley, E., Whitehouse, D., Lucas, S., Cowling, P. I., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., Colton, S.: A Survey of Monte Carlo Tree Search Methods, IEEE Transactions on Computational Intelligence and AI in Games, Vol. 4, No. 1 (2012)
[14] Kocsis, L., Szepesvari, C.: Bandit based Monte-Carlo Planning, European Conference on Machine Learning (ECML 2006) (2006)
[15] Srinivas, N., Krause, A., Kakade, S. M., Seeger, M.: Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design, Proceedings of the 27th International Conference on Machine Learning (ICML 2010) (2010)
[16] Krause, A., Ong, C. S.: Contextual Gaussian Process Bandit Optimization, Advances in Neural Information Processing Systems (2011)
[17] Weinstein, A.: Local Planning for Continuous Markov Decision Processes, Ph.D. thesis (2014)
[18] Couetoux, A., Hoock, J., Sokolovska, N., Teytaud, O., Bonnard, N.: Continuous upper confidence trees, International Conference on Learning and Intelligent Optimization (2011)
[19] Couetoux, A.: Monte Carlo Tree Search for Continuous and Stochastic Sequential Decision Making Problems, doctoral thesis, Paris University (2013)
[20] Chaslot, G., Fiter, C., Hoock, J. P., Rimmel, A., Teytaud, O.: Adding expert knowledge and exploration in Monte-Carlo Tree Search, Advances in Computer Games, LNCS, Vol. 6048 (2009)
[21] (in Japanese), IPSJ SIG Technical Report 2014-GI-31, No. 2, pp. 1-5 (2014)
[22] Ito, T., Kitasei, Y.: Proposal and Implementation of Digital Curling, 2015 IEEE Conference on Computational Intelligence and Games (2015)
[23] (in Japanese), IPSJ SIG Technical Report 2016-GI-36, No. 2, pp. 1-6 (2016)
[24] (in Japanese), Vol. 57, No. 11 (2016)

[25] Yamamoto, M., Kato, S., Iizuka, H.: Digital Curling Strategy on Game Tree Search, 2015 IEEE Conference on Computational Intelligence and Games (2015)
[26] (in Japanese) (2016)
[27] Yee, T., Lisy, V., Bowling, M.: Monte Carlo Tree Search in Continuous Action Spaces with Execution Uncertainty, International Joint Conference on Artificial Intelligence (2016)
[28] Audibert, J. Y., Munos, R., Szepesvari, C.: Exploration-exploitation tradeoff using variance estimates in multi-armed bandits, Theoretical Computer Science, 410(19) (2009)
[29] Zobrist, A.: A New Hashing Method with Applications for Game Playing, ICCA Journal, Vol. 13, No. 2 (1970)
[30] Couetoux, A., Milone, M., Brendel, M., Doghmen, H., Sebag, M., Teytaud, O.: Continuous rapid action value estimates, The 3rd Asian Conference on Machine Learning, Vol. 20 (2011)


More information

,.,., ( ).,., A, B. A, B,.,...,., Python Long Short Term Memory(LSTM), Unity., Asynchronous method, Deep Q-Network(DQN), LSTM, TORCS. Asynchronous met

,.,., ( ).,., A, B. A, B,.,...,., Python Long Short Term Memory(LSTM), Unity., Asynchronous method, Deep Q-Network(DQN), LSTM, TORCS. Asynchronous met 2016 Future University Hakodate 2016 System Information Science Practice Group Report AI Project Name AI love Deep Learning TORCS Deep Learning Group Name TORCS Deep Learning /Project No. 14-B /Project

More information

知能と情報, Vol.29, No.6, pp

知能と情報, Vol.29, No.6, pp 36 知能と情報知能と情報 ( 日本知能情報ファジィ学会誌 ( ))Vol.29, No.6, pp.226-230(2017) 会告 Zadeh( ザデー ) 先生を偲ぶ会 のご案内 Zadeh( ) とと と 日 2018 1 20 日 ( ) 15:00 17:30(14:30 18:00 ) 2F ( ) 530-8310 1-1-35 TEL: 06-6372-5101 https://www.hankyu-hotel.com/hotel/osakashh/index.html

More information

Fig. 2 28th Ryuou Tournament, Match 5, 59th move. The last move is Black s Rx5f. 1 Tic-Tac-Toe Fig. 1 AsearchtreeofTic-Tac-Toe. [2] [3], [4]

Fig. 2 28th Ryuou Tournament, Match 5, 59th move. The last move is Black s Rx5f. 1 Tic-Tac-Toe Fig. 1 AsearchtreeofTic-Tac-Toe. [2] [3], [4] 1,a) 2 3 2017 4 6, 2017 9 5 Predicting Moves in Comments for Shogi Commentary Generation Hirotaka Kameko 1,a) Shinsuke Mori 2 Yoshimasa Tsuruoka 3 Received: April 6, 2017, Accepted: September 5, 2017 Abstract:

More information

Computational Semantics 1 category specificity Warrington (1975); Warrington & Shallice (1979, 1984) 2 basic level superiority 3 super-ordinate catego

Computational Semantics 1 category specificity Warrington (1975); Warrington & Shallice (1979, 1984) 2 basic level superiority 3 super-ordinate catego Computational Semantics 1 category specificity Warrington (1975); Warrington & Shallice (1979, 1984) 2 basic level superiority 3 super-ordinate category preservation 1 / 13 analogy by vector space Figure

More information

& 3 3 ' ' (., (Pixel), (Light Intensity) (Random Variable). (Joint Probability). V., V = {,,, V }. i x i x = (x, x,, x V ) T. x i i (State Variable),

& 3 3 ' ' (., (Pixel), (Light Intensity) (Random Variable). (Joint Probability). V., V = {,,, V }. i x i x = (x, x,, x V ) T. x i i (State Variable), .... Deeping and Expansion of Large-Scale Random Fields and Probabilistic Image Processing Kazuyuki Tanaka The mathematical frameworks of probabilistic image processing are formulated by means of Markov

More information

Input image Initialize variables Loop for period of oscillation Update height map Make shade image Change property of image Output image Change time L

Input image Initialize variables Loop for period of oscillation Update height map Make shade image Change property of image Output image Change time L 1,a) 1,b) 1/f β Generation Method of Animation from Pictures with Natural Flicker Abstract: Some methods to create animation automatically from one picture have been proposed. There is a method that gives

More information

14 2 5

14 2 5 14 2 5 i ii Surface Reconstruction from Point Cloud of Human Body in Arbitrary Postures Isao MORO Abstract We propose a method for surface reconstruction from point cloud of human body in arbitrary postures.

More information

[2][3][4][5] 4 ( 1 ) ( 2 ) ( 3 ) ( 4 ) 2. Shiratori [2] Shiratori [3] [4] GP [5] [6] [7] [8][9] Kinect Choi [10] 3. 1 c 2016 Information Processing So

[2][3][4][5] 4 ( 1 ) ( 2 ) ( 3 ) ( 4 ) 2. Shiratori [2] Shiratori [3] [4] GP [5] [6] [7] [8][9] Kinect Choi [10] 3. 1 c 2016 Information Processing So 1,a) 2 2 1 2,b) 3,c) A choreographic authoring system reflecting a user s preference Ryo Kakitsuka 1,a) Kosetsu Tsukuda 2 Satoru Fukayama 2 Naoya Iwamoto 1 Masataka Goto 2,b) Shigeo Morishima 3,c) Abstract:

More information

,,.,.,,.,.,.,.,,.,..,,,, i

,,.,.,,.,.,.,.,,.,..,,,, i 22 A person recognition using color information 1110372 2011 2 13 ,,.,.,,.,.,.,.,,.,..,,,, i Abstract A person recognition using color information Tatsumo HOJI Recently, for the purpose of collection of

More information

JFE.dvi

JFE.dvi ,, Department of Civil Engineering, Chuo University Kasuga 1-13-27, Bunkyo-ku, Tokyo 112 8551, JAPAN E-mail : atsu1005@kc.chuo-u.ac.jp E-mail : kawa@civil.chuo-u.ac.jp SATO KOGYO CO., LTD. 12-20, Nihonbashi-Honcho

More information

Optical Flow t t + δt 1 Motion Field 3 3 1) 2) 3) Lucas-Kanade 4) 1 t (x, y) I(x, y, t)

Optical Flow t t + δt 1 Motion Field 3 3 1) 2) 3) Lucas-Kanade 4) 1 t (x, y) I(x, y, t) http://wwwieice-hbkborg/ 2 2 4 2 -- 2 4 2010 9 3 3 4-1 Lucas-Kanade 4-2 Mean Shift 3 4-3 2 c 2013 1/(18) http://wwwieice-hbkborg/ 2 2 4 2 -- 2 -- 4 4--1 2010 9 4--1--1 Optical Flow t t + δt 1 Motion Field

More information

Vol.55 No (Nov. 2014) Hex 1,a) 1,b) 1,c) 1,d) , Hex 2 Hex Hex Hex Hex Hex Hex Hex Development of Computer Hex Strategy

Vol.55 No (Nov. 2014) Hex 1,a) 1,b) 1,c) 1,d) , Hex 2 Hex Hex Hex Hex Hex Hex Hex Development of Computer Hex Strategy Hex 1,a) 1,b) 1,c) 1,d) 2014 2 21, 2014 9 12 Hex 2 Hex Hex Hex Hex Hex Hex Hex Development of Computer Hex Strategy Using Network Characteristics Kei Takada 1,a) Masaya Honjo 1,b) Hiroyuki Iizuka 1,c)

More information

21 Pitman-Yor Pitman- Yor [7] n -gram W w n-gram G Pitman-Yor P Y (d, θ, G 0 ) (1) G P Y (d, θ, G 0 ) (1) Pitman-Yor d, θ, G 0 d 0 d 1 θ Pitman-Yor G

21 Pitman-Yor Pitman- Yor [7] n -gram W w n-gram G Pitman-Yor P Y (d, θ, G 0 ) (1) G P Y (d, θ, G 0 ) (1) Pitman-Yor d, θ, G 0 d 0 d 1 θ Pitman-Yor G ol2013-nl-214 No6 1,a) 2,b) n-gram 1 M [1] (TG: Tree ubstitution Grammar) [2], [3] TG TG 1 2 a) ohno@ilabdoshishaacjp b) khatano@maildoshishaacjp [4], [5] [6] 2 Pitman-Yor 3 Pitman-Yor 1 21 Pitman-Yor

More information

AI 2016 3 AI AI AI AI AI AI COM Computer Player NPC Non-Player Character AI AI AI AI AI AI AI AI TCG AI i Infinite Mario Bros. AI AI ii 1 1 1.1 AI..................... 3 1.1.1 AI................. 3 1.1.2

More information

untitled

untitled c 645 2 1. GM 1959 Lindsey [1] 1960 Howard [2] Howard 1 25 (Markov Decision Process) 3 3 2 3 +1=25 9 Bellman [3] 1 Bellman 1 k 980 8576 27 1 015 0055 84 4 1977 D Esopo and Lefkowitz [4] 1 (SI) Cover and

More information

18 2 20 W/C W/C W/C 4-4-1 0.05 1.0 1000 1. 1 1.1 1 1.2 3 2. 4 2.1 4 (1) 4 (2) 4 2.2 5 (1) 5 (2) 5 2.3 7 3. 8 3.1 8 3.2 ( ) 11 3.3 11 (1) 12 (2) 12 4. 14 4.1 14 4.2 14 (1) 15 (2) 16 (3) 17 4.3 17 5. 19

More information

xx/xx Vol. Jxx A No. xx 1 Fig. 1 PAL(Panoramic Annular Lens) PAL(Panoramic Annular Lens) PAL (2) PAL PAL 2 PAL 3 2 PAL 1 PAL 3 PAL PAL 2. 1 PAL

xx/xx Vol. Jxx A No. xx 1 Fig. 1 PAL(Panoramic Annular Lens) PAL(Panoramic Annular Lens) PAL (2) PAL PAL 2 PAL 3 2 PAL 1 PAL 3 PAL PAL 2. 1 PAL PAL On the Precision of 3D Measurement by Stereo PAL Images Hiroyuki HASE,HirofumiKAWAI,FrankEKPAR, Masaaki YONEDA,andJien KATO PAL 3 PAL Panoramic Annular Lens 1985 Greguss PAL 1 PAL PAL 2 3 2 PAL DP

More information

The 15th Game Programming Workshop 2010 Magic Bitboard Magic Bitboard Bitboard Magic Bitboard Bitboard Magic Bitboard Magic Bitboard Magic Bitbo

The 15th Game Programming Workshop 2010 Magic Bitboard Magic Bitboard Bitboard Magic Bitboard Bitboard Magic Bitboard Magic Bitboard Magic Bitbo Magic Bitboard Magic Bitboard Bitboard Magic Bitboard Bitboard Magic Bitboard 64 81 Magic Bitboard Magic Bitboard Bonanza Proposal and Implementation of Magic Bitboards in Shogi Issei Yamamoto, Shogo Takeuchi,

More information

VRSJ-SIG-MR_okada_79dce8c8.pdf

VRSJ-SIG-MR_okada_79dce8c8.pdf THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS TECHNICAL REPORT OF IEICE. 630-0192 8916-5 E-mail: {kaduya-o,takafumi-t,goshiro,uranishi,miyazaki,kato}@is.naist.jp,.,,.,,,.,,., CG.,,,

More information

AI 1,a) 1,b) 1,c) , AI branching factor AI AI AI Proposal of Challenges and Approaches to Create Effective Artificial Players for Tur

AI 1,a) 1,b) 1,c) , AI branching factor AI AI AI Proposal of Challenges and Approaches to Create Effective Artificial Players for Tur JAIST Reposi https://dspace.j Title 戦術的ターン制ストラテジーゲームにおける AI 構成の ための諸課題とそのアプローチ Author(s) 佐藤, 直之 ; 藤木, 翼 ; 池田, 心 Citation 情報処理学会論文誌, 57(11): 2337-2353 Issue Date 2016-11-15 Type Journal Article Text version

More information

IPSJ SIG Technical Report 1, Instrument Separation in Reverberant Environments Using Crystal Microphone Arrays Nobutaka ITO, 1, 2 Yu KITANO, 1

IPSJ SIG Technical Report 1, Instrument Separation in Reverberant Environments Using Crystal Microphone Arrays Nobutaka ITO, 1, 2 Yu KITANO, 1 1, 2 1 1 1 Instrument Separation in Reverberant Environments Using Crystal Microphone Arrays Nobutaka ITO, 1, 2 Yu KITANO, 1 Nobutaka ONO 1 and Shigeki SAGAYAMA 1 This paper deals with instrument separation

More information

カルマンフィルターによるベータ推定( )

カルマンフィルターによるベータ推定( ) β TOPIX 1 22 β β smoothness priors (the Capital Asset Pricing Model, CAPM) CAPM 1 β β β β smoothness priors :,,. E-mail: koiti@ism.ac.jp., 104 1 TOPIX β Z i = β i Z m + α i (1) Z i Z m α i α i β i (the

More information

鉄鋼協会プレゼン

鉄鋼協会プレゼン NN :~:, 8 Nov., Adaptive H Control for Linear Slider with Friction Compensation positioning mechanism moving table stand manipulator Point to Point Control [G] Continuous Path Control ground Fig. Positoining

More information

A Study on Throw Simulation for Baseball Pitching Machine with Rollers and Its Optimization Shinobu SAKAI*5, Yuichiro KITAGAWA, Ryo KANAI and Juhachi

A Study on Throw Simulation for Baseball Pitching Machine with Rollers and Its Optimization Shinobu SAKAI*5, Yuichiro KITAGAWA, Ryo KANAI and Juhachi A Study on Throw Simulation for Baseball Pitching Machine with Rollers and Its Optimization Shinobu SAKAI*5, Yuichiro KITAGAWA, Ryo KANAI and Juhachi ODA Department of Human and Mechanical Systems Engineering,

More information

4 ( ) NATURE SCIENCE [Battiston 16] 2008 ( ) 5 JPX [ 13] [ 15a, 15b] [ 15,Mizuta 16c] [ 15a, 15b] δt (δt =1) (δt > 1) 4 [ 09, 12] 5 [LeBaron 06,Chen 1

4 ( ) NATURE SCIENCE [Battiston 16] 2008 ( ) 5 JPX [ 13] [ 15a, 15b] [ 15,Mizuta 16c] [ 15a, 15b] δt (δt =1) (δt > 1) 4 [ 09, 12] 5 [LeBaron 06,Chen 1 1 Takanobu Mizuta 2 Kiyoshi Izumi 1 SPARX Asset Management Co., Ltd. 2 School of Engineering, The University of Tokyo 1. 2000 2010 1 () ( ) [Farmer 12, Budish 15] [Budish 15] ( ) [Budish 15] : mizutata@gmail.com

More information

JVRSJ Vol.18 No.3 September, 2013 173 29 2 1 2 1 NPC 2004 1 RTS Real-time Simulation NPC NPC NPC AI NPC 4 AI 2 AI 2 3 4 図 1 ゲームとユーザエクスペリエンス reality a

JVRSJ Vol.18 No.3 September, 2013 173 29 2 1 2 1 NPC 2004 1 RTS Real-time Simulation NPC NPC NPC AI NPC 4 AI 2 AI 2 3 4 図 1 ゲームとユーザエクスペリエンス reality a 28 172 日 本 バーチャルリアリティ 学 会 誌 第 18 巻 3 号 2013 年 9 月 1 [1] 1 [2-5] NPC Non-Player-Character 80 90 NPC 2000 NPC 2 AI AI AI [6-8] RPG AI NPC AI AI RPG AI [5][9][10] 3 AI 2 AI 1 [11] 28 JVRSJ Vol.18 No.3 September,

More information

IS1-09 第 回画像センシングシンポジウム, 横浜,14 年 6 月 2 Hough Forest Hough Forest[6] Random Forest( [5]) Random Forest Hough Forest Hough Forest 2.1 Hough Forest 1 2.2

IS1-09 第 回画像センシングシンポジウム, 横浜,14 年 6 月 2 Hough Forest Hough Forest[6] Random Forest( [5]) Random Forest Hough Forest Hough Forest 2.1 Hough Forest 1 2.2 IS1-09 第 回画像センシングシンポジウム, 横浜,14 年 6 月 MI-Hough Forest () E-mail: ym@vision.cs.chubu.ac.jphf@cs.chubu.ac.jp Abstract Hough Forest Random Forest MI-Hough Forest Multiple Instance Learning Bag Hough Forest

More information

80 X 1, X 2,, X n ( λ ) λ P(X = x) = f (x; λ) = λx e λ, x = 0, 1, 2, x! l(λ) = n f (x i ; λ) = i=1 i=1 n λ x i e λ i=1 x i! = λ n i=1 x i e nλ n i=1 x

80 X 1, X 2,, X n ( λ ) λ P(X = x) = f (x; λ) = λx e λ, x = 0, 1, 2, x! l(λ) = n f (x i ; λ) = i=1 i=1 n λ x i e λ i=1 x i! = λ n i=1 x i e nλ n i=1 x 80 X 1, X 2,, X n ( λ ) λ P(X = x) = f (x; λ) = λx e λ, x = 0, 1, 2, x! l(λ) = n f (x i ; λ) = n λ x i e λ x i! = λ n x i e nλ n x i! n n log l(λ) = log(λ) x i nλ log( x i!) log l(λ) λ = 1 λ n x i n =

More information

世界コンピュータ将棋選手権 [30] CSA CSA 電王戦 [31] Computer Olympiad [32] ICGA コンピュータ将棋対局場 [33],floodgate [34] 24 floodgate floodgate

世界コンピュータ将棋選手権 [30] CSA CSA 電王戦 [31] Computer Olympiad [32] ICGA コンピュータ将棋対局場 [33],floodgate [34] 24 floodgate floodgate 254 30 2 2015 3 ゲームプログラミング ( 将棋を中心に ) 1 竹内聖悟 ( 科学技術振興機構 ERATO 湊離散構造処理系プロジェクト ) 1 1999 [1] 2 2012 松原仁 : ゲーム情報学 :1. ゲーム情報学の現在 ゲームの研究は日本で疎外されなくなったのか [2], 情報処理,Vol. 53, No. 2, pp. 102-106(2012) 小谷善行 : ゲーム情報学

More information

2 ( ) i

2 ( ) i 25 Study on Rating System in Multi-player Games with Imperfect Information 1165069 2014 2 28 2 ( ) i ii Abstract Study on Rating System in Multi-player Games with Imperfect Information Shigehiko MORITA

More information

1 n 1 1 2 2 3 3 3.1............................ 3 3.2............................. 6 3.2.1.............. 6 3.2.2................. 7 3.2.3........................... 10 4 11 4.1..........................

More information

1 IDC Wo rldwide Business Analytics Technology and Services 2013-2017 Forecast 2 24 http://www.soumu.go.jp/johotsusintokei/whitepaper/ja/h24/pdf/n2010000.pdf 3 Manyika, J., Chui, M., Brown, B., Bughin,

More information

258 5) GPS 1 GPS 6) GPS DP 7) 8) 10) GPS GPS 2 3 4 5 2. 2.1 3 1) GPS Global Positioning System

258 5) GPS 1 GPS 6) GPS DP 7) 8) 10) GPS GPS 2 3 4 5 2. 2.1 3 1) GPS Global Positioning System Vol. 52 No. 1 257 268 (Jan. 2011) 1 2, 1 1 measurement. In this paper, a dynamic road map making system is proposed. The proposition system uses probe-cars which has an in-vehicle camera and a GPS receiver.

More information

1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15. 1. 2. 3. 16 17 18 ( ) ( 19 ( ) CG PC 20 ) I want some rice. I want some lice. 21 22 23 24 2001 9 18 3 2000 4 21 3,. 13,. Science/Technology, Design, Experiments,

More information

2.2 6).,.,.,. Yang, 7).,,.,,. 2.3 SIFT SIFT (Scale-Invariant Feature Transform) 8).,. SIFT,,. SIFT, Mean-Shift 9)., SIFT,., SIFT,. 3.,.,,,,,.,,,., 1,

2.2 6).,.,.,. Yang, 7).,,.,,. 2.3 SIFT SIFT (Scale-Invariant Feature Transform) 8).,. SIFT,,. SIFT, Mean-Shift 9)., SIFT,., SIFT,. 3.,.,,,,,.,,,., 1, 1 1 2,,.,.,,, SIFT.,,. Pitching Motion Analysis Using Image Processing Shinya Kasahara, 1 Issei Fujishiro 1 and Yoshio Ohno 2 At present, analysis of pitching motion from baseball videos is timeconsuming

More information

., White-Box, White-Box. White-Box.,, White-Box., Maple [11], 2. 1, QE, QE, 1 Redlog [7], QEPCAD [9], SyNRAC [8] 3 QE., 2 Brown White-Box. 3 White-Box

., White-Box, White-Box. White-Box.,, White-Box., Maple [11], 2. 1, QE, QE, 1 Redlog [7], QEPCAD [9], SyNRAC [8] 3 QE., 2 Brown White-Box. 3 White-Box White-Box Takayuki Kunihiro Graduate School of Pure and Applied Sciences, University of Tsukuba Hidenao Iwane ( ) / Fujitsu Laboratories Ltd. / National Institute of Informatics. Yumi Wada Graduate School

More information

JAIST Reposi https://dspace.j Title ゲームの主目的達成を意図しない人間らしい行動の分 類と模倣 Author(s) 中川, 絢太 Citation Issue Date 2017-03 Type Thesis or Dissertation Text version author URL http://hdl.handle.net/10119/14157 Rights

More information

1. A0 A B A0 A : A1,...,A5 B : B1,...,B

1. A0 A B A0 A : A1,...,A5 B : B1,...,B 1. A0 A B A0 A : A1,...,A5 B : B1,...,B12 2. 3. 4. 5. A0 A B f : A B 4 (i) f (ii) f (iii) C 2 g, h: C A f g = f h g = h (iv) C 2 g, h: B C g f = h f g = h 4 (1) (i) (iii) (2) (iii) (i) (3) (ii) (iv) (4)

More information

( 9 1 ) 1 2 1.1................................... 2 1.2................................................. 3 1.3............................................... 4 1.4...........................................

More information

Gaze Head Eye (a) deg (b) 45 deg (c) 9 deg 1: - 1(b) - [5], [6] [7] Stahl [8], [9] Fang [1], [11] Itti [12] Itti [13] [7] Fang [1],

Gaze Head Eye (a) deg (b) 45 deg (c) 9 deg 1: - 1(b) - [5], [6] [7] Stahl [8], [9] Fang [1], [11] Itti [12] Itti [13] [7] Fang [1], 1 1 1 Structure from Motion - 1 Ville [1] NAC EMR-9 [2] 1 Osaka University [3], [4] 1 1(a) 1(c) 9 9 9 c 216 Information Processing Society of Japan 1 Gaze Head Eye (a) deg (b) 45 deg (c) 9 deg 1: - 1(b)

More information

Haiku Generation Based on Motif Images Using Deep Learning Koki Yoneda 1 Soichiro Yokoyama 2 Tomohisa Yamashita 2 Hidenori Kawamura Scho

Haiku Generation Based on Motif Images Using Deep Learning Koki Yoneda 1 Soichiro Yokoyama 2 Tomohisa Yamashita 2 Hidenori Kawamura Scho Haiku Generation Based on Motif Images Using Deep Learning 1 2 2 2 Koki Yoneda 1 Soichiro Yokoyama 2 Tomohisa Yamashita 2 Hidenori Kawamura 2 1 1 School of Engineering Hokkaido University 2 2 Graduate

More information

CVaR

CVaR CVaR 20 4 24 3 24 1 31 ,.,.,. Markowitz,., (Value-at-Risk, VaR) (Conditional Value-at-Risk, CVaR). VaR, CVaR VaR. CVaR, CVaR. CVaR,,.,.,,,.,,. 1 5 2 VaR CVaR 6 2.1................................................

More information

ii

ii I05-010 : 19 1 ii k + 1 2 DS 198 20 32 1 1 iii ii iv v vi 1 1 2 2 3 3 3.1.................................... 3 3.2............................. 4 3.3.............................. 6 3.4.......................................

More information

x T = (x 1,, x M ) x T x M K C 1,, C K 22 x w y 1: 2 2

x T = (x 1,, x M ) x T x M K C 1,, C K 22 x w y 1: 2 2 Takio Kurita Neurosceince Research Institute, National Institute of Advanced Indastrial Science and Technology takio-kurita@aistgojp (Support Vector Machine, SVM) 1 (Support Vector Machine, SVM) ( ) 2

More information

2013 M

2013 M 2013 M0110453 2013 : M0110453 20 1 1 1.1............................ 1 1.2.............................. 4 2 5 2.1................................. 6 2.2................................. 8 2.3.................................

More information

Vol. 43 No. 2 Feb. 2002,, MIDI A Probabilistic-model-based Quantization Method for Estimating the Position of Onset Time in a Score Masatoshi Hamanaka

Vol. 43 No. 2 Feb. 2002,, MIDI A Probabilistic-model-based Quantization Method for Estimating the Position of Onset Time in a Score Masatoshi Hamanaka Vol. 43 No. 2 Feb. 2002,, MIDI A Probabilistic-model-based Quantization Method for Estimating the Position of Onset Time in a Score Masatoshi Hamanaka, Masataka Goto,, Hideki Asoh and Nobuyuki Otsu, This

More information

2797 4 5 6 7 2. 2.1 COM COM 4) 5) COM COM 3 4) 5) 2 2.2 COM COM 6) 7) 10) COM Bonanza 6) Bonanza 6 10 20 Hearts COM 7) 10) 52 4 3 Hearts 3 2,000 4,000

2797 4 5 6 7 2. 2.1 COM COM 4) 5) COM COM 3 4) 5) 2 2.2 COM COM 6) 7) 10) COM Bonanza 6) Bonanza 6 10 20 Hearts COM 7) 10) 52 4 3 Hearts 3 2,000 4,000 Vol. 50 No. 12 2796 2806 (Dec. 2009) 1 1, 2 COM TCG COM TCG COM TCG Strategy-acquisition System for Video Trading Card Game Nobuto Fujii 1 and Haruhiro Katayose 1, 2 Behavior and strategy of computers

More information

1 Fig. 1 Extraction of motion,.,,, 4,,, 3., 1, 2. 2.,. CHLAC,. 2.1,. (256 ).,., CHLAC. CHLAC, HLAC. 2.3 (HLAC ) r,.,. HLAC. N. 2 HLAC Fig. 2

1 Fig. 1 Extraction of motion,.,,, 4,,, 3., 1, 2. 2.,. CHLAC,. 2.1,. (256 ).,., CHLAC. CHLAC, HLAC. 2.3 (HLAC ) r,.,. HLAC. N. 2 HLAC Fig. 2 CHLAC 1 2 3 3,. (CHLAC), 1).,.,, CHLAC,.,. Suspicious Behavior Detection based on CHLAC Method Hideaki Imanishi, 1 Toyohiro Hayashi, 2 Shuichi Enokida 3 and Toshiaki Ejima 3 We have proposed a method for

More information

Vol1-CVIM-172 No.7 21/5/ Shan 1) 2 2)3) Yuan 4) Ancuti 5) Agrawal 6) 2.4 Ben-Ezra 7)8) Raskar 9) Image domain Blur image l PSF b / = F(

Vol1-CVIM-172 No.7 21/5/ Shan 1) 2 2)3) Yuan 4) Ancuti 5) Agrawal 6) 2.4 Ben-Ezra 7)8) Raskar 9) Image domain Blur image l PSF b / = F( Vol1-CVIM-172 No.7 21/5/27 1 Proposal on Ringing Detector for Image Restoration Chika Inoshita, Yasuhiro Mukaigawa and Yasushi Yagi 1 A lot of methods have been proposed for restoring blurred images due

More information

,,, 2 ( ), $[2, 4]$, $[21, 25]$, $V$,, 31, 2, $V$, $V$ $V$, 2, (b) $-$,,, (1) : (2) : (3) : $r$ $R$ $r/r$, (4) : 3

,,, 2 ( ), $[2, 4]$, $[21, 25]$, $V$,, 31, 2, $V$, $V$ $V$, 2, (b) $-$,,, (1) : (2) : (3) : $r$ $R$ $r/r$, (4) : 3 1084 1999 124-134 124 3 1 (SUGIHARA Kokichi),,,,, 1, [5, 11, 12, 13], (2, 3 ), -,,,, 2 [5], 3,, 3, 2 2, -, 3,, 1,, 3 2,,, 3 $R$ ( ), $R$ $R$ $V$, $V$ $R$,,,, 3 2 125 1 3,,, 2 ( ), $[2, 4]$, $[21, 25]$,

More information

johnny-paper2nd.dvi

johnny-paper2nd.dvi 13 The Rational Trading by Using Economic Fundamentals AOSHIMA Kentaro 14 2 26 ( ) : : : The Rational Trading by Using Economic Fundamentals AOSHIMA Kentaro abstract: Recently Artificial Markets on which

More information

aca-mk23.dvi

aca-mk23.dvi E-Mail: matsu@nanzan-u.ac.jp [13] [13] 2 ( ) n-gram 1 100 ( ) (Google ) [13] (Breiman[3] ) [13] (Friedman[5, 6]) 2 2.1 [13] 10 20 200 11 10 110 6 10 60 [13] 1: (1892-1927) (1888-1948) (1867-1916) (1862-1922)

More information

1 Kinect for Windows M = [X Y Z] T M = [X Y Z ] T f (u,v) w 3.2 [11] [7] u = f X +u Z 0 δ u (X,Y,Z ) (5) v = f Y Z +v 0 δ v (X,Y,Z ) (6) w = Z +

1 Kinect for Windows M = [X Y Z] T M = [X Y Z ] T f (u,v) w 3.2 [11] [7] u = f X +u Z 0 δ u (X,Y,Z ) (5) v = f Y Z +v 0 δ v (X,Y,Z ) (6) w = Z + 3 3D 1,a) 1 1 Kinect (X, Y) 3D 3D 1. 2010 Microsoft Kinect for Windows SDK( (Kinect) SDK ) 3D [1], [2] [3] [4] [5] [10] 30fps [10] 3 Kinect 3 Kinect Kinect for Windows SDK 3 Microsoft 3 Kinect for Windows

More information

2. Eades 1) Kamada-Kawai 7) Fruchterman 2) 6) ACE 8) HDE 9) Kruskal MDS 13) 11) Kruskal AGI Active Graph Interface 3) Kruskal 5) Kruskal 4) 3. Kruskal

2. Eades 1) Kamada-Kawai 7) Fruchterman 2) 6) ACE 8) HDE 9) Kruskal MDS 13) 11) Kruskal AGI Active Graph Interface 3) Kruskal 5) Kruskal 4) 3. Kruskal 1 2 3 A projection-based method for interactive 3D visualization of complex graphs Masanori Takami, 1 Hiroshi Hosobe 2 and Ken Wakita 3 Proposed is a new interaction technique to manipulate graph layouts

More information