

Abbreviations

- AANN: Auto-Associative Neural Network
- 3(5)L-AANN: 3(5)-Layer AANN
- ASSOM: Adaptive Subspace SOM
- BMU: Best Matching Unit
- BMM: Best Matching Module
- BP: Back Propagation
- CFC: Conventional Feedback Controller
- EM: Expectation Maximization
- ESOM: Evolving Self-Organizing Map
- GTM: Generative Topographic Mapping
- HMM: Hidden Markov Model
- IDML: Inverse Dynamics Model Learning
- MAE: Mean Absolute Error
- MDS: Multi-Dimensional Scaling
- MLP: Multi-Layer Perceptron
- mnSOM: modular network SOM
- MMRL: Multiple Model Reinforcement Learning
- MOSAIC: MOdular Selection And Identification for Control
- MPFIM: Multiple Paired Forward-Inverse Models
- (G)NG: (Growing) Neural Gas
- NNC: Neural Network Controller
- PCA: Principal Component Analysis
- RBF: Radial Basis Function
- RNN: Recurrent Neural Network
- RP: Responsibility Predictor
- SOAC: Self-Organizing Adaptive Controller
- SOM: Self-Organizing Map
- VQ: Vector Quantization
- WTA: Winner-Takes-All

Notation

SOM
- x_i: i-th data vector
- w^k: k-th reference vector
- X: input data set
- A: lattice of the SOM
- ξ^k: coordinate vector in the map space
- k*: index of the BMU
- φ: neighborhood function (learning rate)
- φ_i^k: k-th learning rate for the i-th data
- ψ_i^k: normalized learning rate
- n: iteration number [step]
- η: learning coefficient
- σ(n): neighborhood radius at step n
- W: reference matrix (set of reference vectors)
- X: input matrix (set of input data)
- Ψ: matrix of learning rates

Modular network
- x_i: i-th input vector
- ŷ_i^k: output of k-th module for i-th input vector
- ŷ_i: total output of the network
- w^k: weight vector of k-th module
- E_i^k: error of k-th module for i-th input vector
- Ē_i: total error of the network
- p_i^k: probability that k-th module is selected for i-th input vector
- T: annealing temperature
- η: learning coefficient
- ψ: learning rate
- H: set of neighborhoods
- C_r: average distance between neighborhoods

mnSOM
- x_ij: j-th input vector belonging to i-th class
- y_ij: j-th output vector belonging to i-th class
- D_i: data subset belonging to i-th class
- f_i: function of i-th class
- L(f, g): distance between function f and function g
- p(·): probability density function (pdf)
- ỹ^k: output of k-th module
- E_i^k: ensemble average (error) of k-th module for i-th class
- k_i*: BMM for i-th class
- ψ_i^k: learning rate of k-th module for i-th class
- ξ^k: coordinate vector of k-th module

MPFIM
- x_t, ẋ_t, ẍ_t: actual position, velocity, and acceleration at time t
- x̂_t, ˆẋ_t, ˆẍ_t: desired position, velocity, and acceleration at time t
- u: total control command
- u^k: control command of k-th inverse model
- u_fb: feedback control command
- u_ff: feedforward control command
- σ: standard deviation of the Gaussian
- ᶠg(·): function of a forward model
- ⁱg(·): function of an inverse model
- η(·): function of an RP
- ᶠw: parameter of a forward model
- ⁱw: parameter of an inverse model
- δ: parameter of an RP
- h(·): function of the dynamics of a controlled object
- K_P, K_D, K_A: feedback gains (proportional, derivative, and acceleration)
- x̃: predicted state
- π: prior probability
- l: likelihood
- λ: posterior probability
- ε: learning coefficient

SOAC
- t: time [sec]
- n: iteration number [step]
- K: number of modules
- ξ: coordinate vector in the map space
- x: state vector
- x̂: desired state
- x̃: predicted state
- x_p: input vector to the predictors
- *: index of the BMM
- ᵖf(·): function of a predictor
- ᶜf(·): function of a controller
- ᵖw^k: weight vector of k-th predictor
- ᶜw^k: weight vector of k-th controller
- σ(t): neighborhood radius at time t
- σ_0: initial neighborhood radius
- σ_∞: final neighborhood radius
- τ: time constant
- ψ: learning rate
- φ: normalized learning rate (in the execution phase)
- ᵖη: learning coefficient of a predictor
- ᶜη: learning coefficient of an NNC
- ε: damping coefficient


1 Introduction

Modular approaches to learning and control include the mixture of experts of Jacobs and Jordan [20, 21], the multiple-model switching and tuning architecture of Narendra et al. [39, 40], the object-recognition work of Gomi et al. [16], and the multiple paired forward-inverse models (MPFIM) of Wolpert et al. [57] and Haruno et al. [18, 19]. Building on Kohonen's Self-Organizing Map (SOM) and its generalization to the modular network SOM (mnSOM), this thesis proposes the Self-Organizing Adaptive Controller (SOAC), which combines the module-selection idea of these architectures with the topology-preserving learning of the SOM.

The thesis is organized as follows. Chapter 2 reviews the SOM, the mnSOM, and MPFIM. Chapter 3 formulates SOAC. Chapter 4 evaluates SOAC on a family of linear objects and compares it with MPFIM. Chapter 5 applies SOAC to a nonlinear object, an inverted pendulum. Chapters 6 and 7 give discussion and conclusions.

2 The Self-Organizing Map and modular networks

2.1 The Self-Organizing Map

The Self-Organizing Map (SOM) descends from models of topographic map formation by Willshaw and von der Malsburg [56] and Amari [2, 3], and was formulated as a practical algorithm by Kohonen [27]. It has since been applied widely, for example to system identification and control [45, 38], robotics [31], image processing [9], blind source separation [43], and data visualization [6].

The SOM performs a nonlinear mapping from a high-dimensional data space to a low-dimensional (typically two-dimensional) map space, and in this sense is related to multidimensional scaling (MDS) [48]. Let the input data be d-dimensional vectors x_i = [x_i1, ..., x_id]^T, i = 1, ..., I, collected in the set X = {x_1, ..., x_I} ⊂ R^d. The map consists of K units arranged on a lattice A; the k-th unit has a reference vector w^k = [w_1^k, ..., w_d^k]^T in the data space and a fixed coordinate vector ξ^k in the map space. For each input, the unit whose reference vector matches it best is called the Best Matching Unit (BMU), selected by a Winner-Takes-All (WTA) rule.

For input x_i, the BMU may be defined by the maximum inner product,

k* = arg max_k x_i^T w^k,  (2.1)

or, as used throughout this thesis, by the minimum Euclidean distance,

k* = arg min_k ‖x_i − w^k‖.  (2.2)

Updating only the BMU would reduce the SOM to plain vector quantization (VQ); instead, in Kohonen's online rule every unit is moved toward the input in proportion to a neighborhood function centered on the BMU:

Δw^k = η φ^k(n)(x_i − w^k),  ∀k ∈ A,  (2.3)

where a Gaussian neighborhood function is used,

φ^k(n) = exp(−‖ξ^k − ξ^{k*}‖² / 2σ²(n)).  (2.4)
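The winner selection and the neighborhood-weighted update above can be sketched in a few lines. This is a minimal illustrative sketch, not code from the thesis; the function name `som_online_step` and the toy one-dimensional map are assumptions of the example.

```python
import math

def som_online_step(W, Xi, x, eta, sigma):
    """One online SOM update: find the BMU for input x by minimum distance,
    then move every reference vector toward x in proportion to a Gaussian
    neighborhood around the BMU in the map space."""
    # competitive process: BMU by minimum Euclidean distance
    k_star = min(range(len(W)),
                 key=lambda k: sum((a - b) ** 2 for a, b in zip(x, W[k])))
    # cooperative + adaptive processes
    for k in range(len(W)):
        d2 = sum((a - b) ** 2 for a, b in zip(Xi[k], Xi[k_star]))
        phi = math.exp(-d2 / (2.0 * sigma ** 2))
        W[k] = [w + eta * phi * (xv - w) for w, xv in zip(W[k], x)]
    return k_star

# toy 1-D map with 3 units in a 2-D input space
W = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]   # reference vectors
Xi = [[0.0], [1.0], [2.0]]                 # map-space coordinates of the units
bmu = som_online_step(W, Xi, [1.0, 1.0], eta=0.5, sigma=1.0)
```

Note that the BMU itself receives the full step (its neighborhood value is 1), while units farther away in the *map* space move less, regardless of their distance in the data space.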

(Figure: Gaussian neighborhoods around the BMU on a one-dimensional map for three neighborhood radii, e.g. σ = 4.)

Here η is the learning coefficient and σ(n) is the neighborhood radius, which shrinks as the iteration number n grows. Beside Kohonen's online rule (2.3), batch formulations of the SOM have been derived from first principles by Luttrell [29, 30], and Bishop et al. [5] proposed the Generative Topographic Mapping (GTM), a probabilistic counterpart trained by the Expectation-Maximization (EM) algorithm. In the batch SOM, the reference vectors are recomputed at every iteration by the weighted mean (2.5) below, so no learning coefficient η is needed.

(Figure: input vectors and reference vectors; the input space (d = 3) is mapped onto a two-dimensional map space, each unit carrying a coordinate vector.)

In the batch SOM, each reference vector is the weighted mean of all data,

w^k = Σ_{i=1}^I ψ_i^k x_i,  (2.5)

with normalized weights

ψ_i^k = φ_i^k / Σ_{i'=1}^I φ_{i'}^k.  (2.6)

In matrix form,

W = XΨ,  (2.7)

where

W = {w^1, ..., w^K},  (2.8)
X = {x_1, ..., x_I},  (2.9)

and Ψ is the I × K matrix of learning rates,

Ψ = [ ψ_1^1 ... ψ_1^K ; ... ; ψ_I^1 ... ψ_I^K ].  (2.10)

The two variants of the SOM algorithm are summarized as follows.

Online SOM (for each input vector x_i):
1. Competitive process: select the BMU,
   k* = arg min_k ‖x_i − w^k‖.  (2.11)
2. Cooperative process: compute the neighborhood values,
   φ^k(n) = exp(−‖ξ^k − ξ^{k*}‖² / 2σ²(n)).  (2.12)
3. Adaptive process: update all reference vectors,
   Δw^k = η φ^k(n)(x_i − w^k),  ∀k ∈ A.  (2.13)

Batch SOM (for the whole data set at each iteration):
1. Competitive process: select the BMU of every data vector,
   k_i* = arg min_k ‖x_i − w^k‖.  (2.14)
2. Cooperative process: compute the neighborhood values and normalize them over the data,
   φ_i^k = exp(−‖ξ^{k_i*} − ξ^k‖² / 2σ²),  (2.15)
   ψ_i^k = φ_i^k / Σ_{i'=1}^I φ_{i'}^k.  (2.16)
3. Adaptive process: recompute all reference vectors,
   w^k = Σ_{i=1}^I ψ_i^k x_i.  (2.17)
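One batch-SOM iteration can be sketched as follows. This is an illustrative sketch under assumed data shapes (lists of float vectors); the function name `batch_som_iteration` and the toy two-cluster data are mine, not from the thesis.

```python
import math

def batch_som_iteration(W, Xi, X, sigma):
    """One batch-SOM iteration: BMU per datum, normalized neighborhood
    weights psi, then each reference vector becomes a weighted mean."""
    I, K = len(X), len(W)
    # 1. competitive: BMU k_i* of every data vector
    bmus = [min(range(K), key=lambda k: sum((a - b) ** 2
                for a, b in zip(X[i], W[k]))) for i in range(I)]
    # 2. cooperative: Gaussian neighborhood around each datum's BMU,
    #    normalized over the data set
    phi = [[math.exp(-sum((a - b) ** 2 for a, b in zip(Xi[bmus[i]], Xi[k]))
                     / (2 * sigma ** 2)) for k in range(K)] for i in range(I)]
    psi = [[phi[i][k] / sum(phi[j][k] for j in range(I)) for k in range(K)]
           for i in range(I)]
    # 3. adaptive: reference vectors as psi-weighted means of the data
    for k in range(K):
        W[k] = [sum(psi[i][k] * X[i][d] for i in range(I))
                for d in range(len(X[0]))]
    return bmus

# two well-separated clusters, two units far apart on the map
X = [[0.0], [0.1], [1.0], [1.1]]
W = [[0.05], [1.05]]
Xi = [[0.0], [10.0]]
bmus = batch_som_iteration(W, Xi, X, sigma=1.0)
```

With the map coordinates far apart relative to σ, the cross-neighborhood weights vanish and each unit converges to the mean of its own cluster, which is why no learning coefficient is needed.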

As an illustration, a SOM was trained on the animal data set of [55]: sixteen animals (dog, tiger, lion, horse, zebra, wolf, cow, cat, fox, hen, dove, eagle, owl, hawk, duck, goose) described by binary attributes such as size (small / medium / big), nocturnal, herbivorous, has (2 legs / 4 legs / hair / hooves / mane / feathers / stripes), and likes to (hunt / run / fly / swim). Each animal is one input vector, so I = 16. After training, animals with similar attribute vectors (e.g. owl and hawk) are assigned to nearby units, so the map visualizes the similarity structure of the data.

(Figure: the trained map, with each unit labeled by the animal it matches best.)

Cluster boundaries can be visualized with a U-matrix-style measure: for unit k,

Cr^k = (1/|H|) Σ_{h∈H} ‖w^k − w^h‖,  (2.18)

the average distance between the reference vector of unit k and those of its neighborhood set H. Large values of Cr^k mark boundaries between clusters, e.g. between the mammals (dog, wolf, ...) and the birds (goose, ...).

(Figure: reference-vector placement by (a) a SOM and (b) an unordered codebook.)

The SOM differs from plain k-means clustering in that the neighborhood coupling forces topologically adjacent units to represent similar data, yielding an ordered map rather than an unordered codebook.

2.2 Modular networks

Jacobs and Jordan proposed the mixture of experts [20], in which several expert networks are combined through a gating network.

The mixture of experts of Jacobs et al. [20] and its hierarchical extension by Jordan and Jacobs [21] combine several expert MLPs through a gating network, and many related modular architectures followed [8, 17, 28, 38, 39, 40, 44, 45, 47, 49, 50, 57]. Gomi and Kawato [17] applied the mixture of experts to recognition of manipulated objects; Narendra et al. [39, 40] proposed adaptive control using multiple models with switching and tuning; Wolpert and Kawato [57] proposed the Multiple Paired Forward-Inverse Models (MPFIM), which, unlike Narendra's hard switching, blends module outputs softly. MPFIM is reviewed in Section 2.4.

A building block used below is the three-layer auto-associative neural network (3L-AANN): an MLP trained to reproduce its input at its output, with N_D input and output units and N_H hidden units. With N_H < N_D the hidden layer forms a bottleneck, so the 3L-AANN (an auto-encoder) performs a dimensionality reduction equivalent to principal component analysis (PCA). Consider I data vectors {x_i} (x_i ∈ R^{N_D}) and a modular network of K such 3L-AANN modules.

(Figure: a modular network; each submodel produces an output, an error, and a selection probability over the data space, which are combined into the total output and error.)

Let y_i^k denote the output of the k-th 3L-AANN for input x_i. Its reconstruction error is

E_i^k = ‖x_i − y_i^k‖²,  (2.19)

and the probability that the k-th module is selected for x_i is given by the soft-max

p_i^k = exp[−E_i^k / T] / Σ_{k'} exp[−E_i^{k'} / T],  (2.20)

where T is the annealing temperature. The network output is either the weighted sum of the K module outputs,

ŷ_i = Σ_{k=1}^K p_i^k y_i^k,  (2.21)

or the output of the most probable module,

ŷ_i = y_i^{k*},  (2.22)
k* = arg max_k p_i^k.  (2.23)

The total error is

Ē_i = Σ_{k=1}^K p_i^k E_i^k.  (2.24)

Each 3L-AANN is trained by back propagation (BP) on the gradient of the total error with respect to its weight vector w^k:

Δw^k = −η ∂Ē_i / ∂w^k  (2.25)
     = −η { p_i^k ∂E_i^k/∂w^k + Σ_{k'=1}^K E_i^{k'} ∂p_i^{k'}/∂w^k }  (2.26)
     = −η ψ_i^k ∂E_i^k/∂w^k,  (2.27)

i.e. ordinary BP scaled by the learning rate of the k-th module,

ψ_i^k = p_i^k { 1 + (1/T)(Ē_i − E_i^k) }.  (2.28)

Modules whose error is below the average thus learn faster, sharpening the specialization as T is annealed.
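The soft-max selection probabilities and the error-weighted total error can be sketched directly. This is an illustrative sketch; the helper name `gate` and the toy error values are mine.

```python
import math

def gate(errors, T):
    """Soft-max selection probabilities from module errors: a smaller
    error gives a higher probability; the temperature T controls how
    sharply the best module dominates."""
    z = [math.exp(-e / T) for e in errors]
    s = sum(z)
    return [v / s for v in z]

errors = [0.1, 0.1, 2.0]          # toy per-module errors for one input
p = gate(errors, T=1.0)
E_bar = sum(pk * e for pk, e in zip(p, errors))   # total error
```

As T shrinks toward 0 the probabilities approach a winner-takes-all selection of the minimum-error module; as T grows they approach the uniform distribution.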

2.3 From the SOM to the modular network SOM (mnSOM)

2.3.1 Overview

Kohonen's SOM can itself be seen as a modular network in which each unit is a trivial module storing a constant vector. The modular network SOM (mnSOM) [52, 53, 54, 42] generalizes this by replacing every reference vector with a trainable function module such as a multi-layer perceptron (MLP), a radial basis function (RBF) network, or a recurrent neural network (RNN); the same idea can also be combined with Neural Gas (NG) [32] instead of a SOM lattice. With MLP modules the architecture is called MLP-mnSOM [52, 53]. Related architectures include Kohonen's Adaptive Subspace SOM (ASSOM) [26], its nonlinear extension NL-ASSOM [51] built from five-layer AANN modules (5L-AANN), and RNN-mnSOM for dynamical systems [14, 41, 42]. In all cases each module learns a function, and the SOM-style neighborhood learning orders the modules in the map space.

2.3.2 Task of the mnSOM

(Figure: the MLP-mnSOM; K MLP modules share a common input and output space.)

(Figure: I unknown systems observed through I data sets; the mnSOM with K modules learns from the samples.)

Assume I unknown systems are observed through I data sets: for i = 1, ..., I,

y_i = f_i(x),  (2.29)
D_i = {(x_ij, y_ij)},  j = 1, ..., J,  (2.30)
x_ij = [x_ij1, ..., x_ij d_i]^T,  (2.31)
y_ij = [y_ij1, ..., y_ij d_o]^T,  (2.32)

where f_i is the function of the i-th system, D_i its data subset, and d_i and d_o the input and output dimensions. The mnSOM is asked simultaneously (i) to learn the functions f_i and (ii) to arrange them on the map so that similar functions occupy nearby modules. The module matching a class best is its Best Matching Module (BMM), and modules between class BMMs come to represent interpolated functions.

The distance between two functions is measured in the L² sense with respect to the input density p(x):

L²(f, g) = ∫ ‖f(x) − g(x)‖² p(x) dx.  (2.33)

The mnSOM algorithm [53] iterates four processes, generalizing the batch SOM.

Evaluative process. Each module is evaluated on each class; approximating (2.33) on the data,

E_i^k = (1/J) Σ_{j=1}^J ‖y_ij − ỹ_ij^k‖²,  (2.34)

where ỹ_ij^k is the output of the k-th module for the j-th input of the i-th class.

Competitive process. The BMM of each class is

k_i* = arg min_k E_i^k.  (2.35)

Cooperative process. The learning rates are normalized neighborhood values around the BMMs,

ψ_i^k = exp[−‖ξ^{k_i*} − ξ^k‖² / 2σ²(n)] / Σ_{i'=1}^I exp[−‖ξ^{k_{i'}*} − ξ^k‖² / 2σ²(n)],  (2.36)

where the neighborhood radius σ(n) shrinks with the iteration number n,

σ(n) = σ_∞ + (σ_0 − σ_∞) exp(−n/τ),  (2.37)

from σ_0 at n = 0 toward σ_∞ with time constant τ.

Adaptive process. Every module is updated by back propagation (BP) on the weighted error,

Δw^k = −η Σ_{i=1}^I ψ_i^k ∂E_i^k/∂w^k,  (2.38)

where w^k is the weight vector of the k-th module; this is gradient descent on

E^k = Σ_{i=1}^I ψ_i^k E_i^k.  (2.39)

Since (2.39) is built from (2.34), which approximates (2.33), the mnSOM orders its modules by function distance.
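The competitive and cooperative processes, together with the annealed radius, can be sketched as follows. This is an illustrative sketch under assumed shapes (E as a class-by-module error table); the helper names are mine.

```python
import math

def neighborhood_radius(n, sigma0, sigma_inf, tau):
    """Annealed neighborhood radius: sigma0 at n = 0, decaying toward
    sigma_inf with time constant tau."""
    return sigma_inf + (sigma0 - sigma_inf) * math.exp(-n / tau)

def mnsom_weights(E, Xi, sigma):
    """Competitive + cooperative processes of the mnSOM.
    E[i][k] is the mean error of module k on class i; Xi[k] is the
    map coordinate of module k. Returns the BMM per class and the
    learning-rate table psi, normalized over classes per module."""
    I, K = len(E), len(E[0])
    bmms = [min(range(K), key=lambda k: E[i][k]) for i in range(I)]
    phi = [[math.exp(-sum((a - b) ** 2 for a, b in zip(Xi[bmms[i]], Xi[k]))
                     / (2 * sigma ** 2)) for k in range(K)] for i in range(I)]
    psi = [[phi[i][k] / sum(phi[j][k] for j in range(I)) for k in range(K)]
           for i in range(I)]
    return bmms, psi

bmms, psi = mnsom_weights([[0.1, 1.0], [1.0, 0.1]], Xi=[[0.0], [1.0]], sigma=0.5)
```

Each module's update is then the psi-weighted gradient over all classes, so a module between two BMMs is pulled toward both and ends up representing an intermediate function.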

2.3.3 An example: a family of cubic functions

The MLP-mnSOM was tested on a family of cubic functions: for each class i,

y_i = f_i(x),  (2.41)
f_i(x) = a x³ + b x² + c x,  (2.42)

with class-specific coefficients a, b, c defining six classes. Inputs were sampled from x ∈ {−1.0, −0.99, −0.98, ..., 0.99, 1.0}, and noisy outputs y_i were generated by (2.42).

(Table: parameters of the MLP, 1 input unit, 5 hidden units, 1 output unit, learning coefficient η = 0.05; and of the mnSOM, map size 100 (10 × 10), initial and final neighborhood radii σ_0 and σ_∞ = 1.0, time constant τ = 50, 300 iterations.)

After training, each intermediate module k represents an interpolated function

g^k(x) = Σ_{i=1}^I ψ_i^k f_i(x).  (2.40)

(Figure: (a) the map of learned curves with the BMM of each class marked; (b) the root-mean-square error of each module.)

The interpolation accuracy of module k was measured against its theoretical target ŷ^k by

(1/J) Σ_{j=1}^J ‖ŷ_j^k − ỹ_j^k‖².  (2.43)

The mnSOM both learned the given classes and arranged them so that intermediate modules interpolate smoothly between them.

2.4 MPFIM

2.4.1 Forward and inverse models in motor control

(Figure: forward and inverse models around a controlled object; the inverse model maps a desired trajectory to a control command, the forward model maps the current state and command to a predicted trajectory.)

In computational motor control [24], a forward model of a controlled object M predicts the state x(t) caused by a command u(t), x = F(u), while an inverse model produces the command from a state, u = F⁻¹(x). If the inverse model is exact, then for a desired trajectory x̂,

x = F(u) = F(F⁻¹(x̂)) = x̂,

i.e. the object tracks the desired trajectory in pure feedforward. Early learning-control schemes include Albus's CMAC [1], Kuperstein [28], Miller et al. [33], and the memory-based control of Atkeson et al. [4]; Jordan and Rumelhart [22] addressed how an inverse model can be trained through a forward model (the distal-teacher approach). Kawato et al. [23, 25] proposed feedback-error learning, in which a conventional feedback controller (CFC) both stabilizes the object and supplies the error signal for training the feedforward inverse model.

(Figure: the feedback-error-learning scheme; the CFC and the learned inverse model jointly drive the controlled object.)

The CFC produces a feedback command from the trajectory error,

u_fb = K_P(x̂ − x) + K_D(ˆẋ − ẋ) + K_A(ˆẍ − ẍ),  (2.44)

where x, ẋ, ẍ are the actual and x̂, ˆẋ, ˆẍ the desired position, velocity, and acceleration, and K_P, K_D, K_A are the proportional, derivative, and acceleration feedback gains. The inverse model with parameter ⁱw produces the feedforward command

u_ff = ⁱg(ⁱw, x̂, ˆẋ, ˆẍ),  (2.45)

where ⁱg may be an MLP or an RBF network. The total command applied to the object dynamics h is

h(x, ẋ, ẍ) = u_fb + u_ff,  (2.46)

and the inverse model is trained with the feedback command as the error signal:

dⁱw/dt = ε (∂u_ff/∂ⁱw)^T u_fb.  (2.47)

Compare this with a Widrow-Hoff rule trained on an explicit target command û:

dⁱw/dt = ε (∂u_ff/∂ⁱw)^T (û − u_ff).  (2.48)

Feedback-error learning (2.47) replaces the unknown error (û − u_ff) in (2.48) by u_fb, which the CFC supplies automatically. Stability of the scheme was analyzed by Miyamura and Kimura [37]; see also Gomi and Kawato [16], Miyamoto et al. [36], and [17].

2.4.2 MPFIM

The Multiple Paired Forward-Inverse Models (MPFIM), also called MOSAIC (MOdular Selection And Identification for Control), was proposed by Wolpert and Kawato [57] and developed by Haruno et al. [18, 19]. Unlike the mixture-of-experts controller of Gomi and Kawato [17], whose gating is trained by soft-max on command errors, MPFIM selects modules by the prediction errors of paired forward models.
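The feedback-error-learning rule can be sketched with a toy linear feedforward controller. This is an illustrative sketch, not the thesis implementation: the linear form u_ff = w · x̂, the PD-only gains, and all numeric values are assumptions of the example.

```python
def feedback_error_learning_step(w, state, desired, gains, eps):
    """One step of feedback-error learning with a toy linear feedforward
    controller u_ff = w . xhat. The feedback command u_fb (a PD-style law)
    doubles as the error signal that trains the feedforward weights:
    dw = eps * (du_ff/dw) * u_fb."""
    x, dx = state
    xh, dxh = desired
    Kp, Kd = gains
    u_fb = Kp * (xh - x) + Kd * (dxh - dx)     # feedback command
    u_ff = w[0] * xh + w[1] * dxh              # toy feedforward command
    grad = (xh, dxh)                           # du_ff/dw for the linear model
    w = [wi + eps * gi * u_fb for wi, gi in zip(w, grad)]
    return w, u_fb + u_ff                      # total command

w = [0.0, 0.0]
w, u = feedback_error_learning_step(w, (0.0, 0.0), (1.0, 0.0), (2.0, 0.5), eps=0.1)
```

As the feedforward weights converge, u_fb shrinks toward zero, so the weight update stops exactly when the feedforward controller alone tracks the desired trajectory.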

(Figure: the MPFIM architecture; K modules, each pairing a forward model, a responsibility predictor (RP), and an inverse model. Responsibilities weight the inverse-model commands, which are summed with the feedback command of a CFC and applied to the controlled object.)

MPFIM was extended to multiple-model reinforcement learning (MMRL) by Doya, Samejima et al. [8, 47], and to a Hidden Markov Model (HMM) treatment of module selection by Haruno et al. [18].

2.4.3 The MPFIM algorithm

Each of the K modules contains a forward model, an inverse model, and a responsibility predictor (RP). The k-th forward model predicts the next state from the current state and an efference copy of the command,

x̃_{t+1}^k = ᶠg(ᶠw_t^k, x_t, u_t),  (2.49)

where ᶠw_t^k is its parameter. The likelihood of module k given the observed state is Gaussian in its prediction error,

l_t^k = P(x_t | ᶠw_t^k, u_t, k) = (1/√(2π)σ) exp(−‖x_t − x̃_t^k‖² / 2σ²),  (2.50)

with σ the standard deviation. The soft-max-normalized likelihood

l_t^k / Σ_{k'=1}^K l_t^{k'}  (2.51)

selects modules only after the movement has started; to allow selection beforehand, the RP predicts a prior π_t^k from a contextual signal y_t,

π_t^k = η(δ_t^k, y_t),  (2.52)

where δ_t^k is the RP parameter. The responsibility of module k is the posterior

λ_t^k = π_t^k l_t^k / Σ_{k'=1}^K π_t^{k'} l_t^{k'},  (2.53)

which gates both control and learning of all three components. The k-th inverse model produces its command from the desired state,

u_t^k = ⁱg(ⁱw_t^k, x̂_t).  (2.54)

The total command is the responsibility-weighted sum over the K modules,

u_t = Σ_{k=1}^K λ_t^k u_t^k = Σ_{k=1}^K λ_t^k ⁱg(ⁱw_t^k, x̂_t),  (2.55)

and the models are trained by responsibility-gated feedback-error learning,

Δⁱw_t^k = ε λ_t^k (du_t^k/dⁱw_t^k)^T (û_t − u_t) ≈ ε (du_t^k/dⁱw_t^k)^T λ_t^k u_fb,  (2.56)
Δᶠw_t^k = ε λ_t^k (dᶠg_t^k/dᶠw_t^k)^T (x_t − x̃_t^k).  (2.57)

In summary, each MPFIM step is:

1. Compute the priors,
   π_t^k = η(δ_t^k, y_t).  (2.58)
2. Compute the likelihoods,
   l_t^k = (1/√(2π)σ) exp(−‖x_t − x̃_t^k‖² / 2σ²).  (2.59)
3. Compute the responsibilities,
   λ_t^k = π_t^k l_t^k / Σ_{k'=1}^K π_t^{k'} l_t^{k'}.  (2.60)
4. Compute the total command,
   u_t = Σ_{k=1}^K λ_t^k u_t^k.  (2.61)
5. Update the models,
   Δⁱw_t^k = ε (du_t^k/dⁱw_t^k)^T λ_t^k u_fb,  (2.62)
   Δᶠw_t^k = ε λ_t^k (dᶠg_t^k/dᶠw_t^k)^T (x_t − x̃_t^k).  (2.63)
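The responsibility computation and command mixing can be sketched for a scalar state. This is an illustrative sketch; the function name `responsibilities` and all numeric values are assumptions of the example.

```python
import math

def responsibilities(x, predictions, priors, sigma):
    """MPFIM responsibility signals: Gaussian likelihood of each forward
    model's prediction of the observed state x, combined with the RP
    priors and normalized by soft-max."""
    l = [math.exp(-(x - xk) ** 2 / (2 * sigma ** 2))
         / (math.sqrt(2 * math.pi) * sigma) for xk in predictions]
    num = [p * lk for p, lk in zip(priors, l)]
    s = sum(num)
    return [v / s for v in num]

# module 0 predicted the observed state well, module 1 did not
lam = responsibilities(1.0, predictions=[1.0, 0.0], priors=[0.5, 0.5], sigma=0.3)
# command mixing: responsibility-weighted sum of the inverse-model commands
u = sum(lk * uk for lk, uk in zip(lam, [2.0, -2.0]))
```

The same λ values also gate the learning rules, so each forward-inverse pair specializes on the regime it already predicts best.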


3 SOAC: formulation

3.1 Background

The mixture of experts of Jacobs and Jordan [20, 21] blends expert outputs through a gating network trained jointly; Narendra et al. [39, 40] switch hard between separately identified models; Wolpert and Kawato's MPFIM [57] selects softly by the prediction errors of forward models, and Kawato et al. [23, 24, 25] supplied the feedback-error-learning framework it builds on. None of these architectures, however, imposes any ordering on the module set. SOAC combines their module-selection idea with the topology-preserving learning of the SOM and mnSOM of Tokunaga, Furukawa et al. [51, 52, 53, 54] and the online adaptation study of Nishida et al. [41, 42].

(Figure 3.1: concept of SOAC; an ordered array of modules, with the BMM switching as the controlled object changes from object A to object B.)

SOAC is designed (1) to learn a set of controlled objects, (2) to arrange the learned modules on a map so that intermediate modules interpolate unlearned objects, and (3) to switch or blend controllers quickly when the object changes. Section 3.2 describes the architecture, Section 3.3 the learning phase, and Section 3.4 the execution phase.

3.2 Architecture of SOAC

SOAC is an array of K module pairs arranged on a SOM lattice; each module pairs a predictor (a forward model) with a controller. The module whose predictor matches the current object best is called the Best Matching Module (BMM). The k-th predictor maps the current state x(t) and control signal u(t) to the predicted state one time step Δt ahead,

x̃^k(t + Δt) = ᵖf^k(x(t), u(t)),  (3.1)

and the k-th controller maps the current state x(t) and desired state x̂(t) to its control command,

u^k(t) = ᶜf^k(x(t), x̂(t)).  (3.2)

The set of predictors forms a predictor-map and the set of controllers a controller-map over the same lattice.

(Figure: block diagram of SOAC; predictors run in parallel, a winner-takes-all over their prediction errors selects the BMM, and the controller outputs are blended into the control signal of the controlled object.)

3.3 Learning phase

The predictors are trained offline in the same manner as the mnSOM [52]. Given sample data {x_i(t), u_i(t)} (i = 1, ..., I) from I objects, each predictor (e.g. a linear model or a multi-layer perceptron, MLP) with weight vector ᵖw^k is evaluated on each class by the mean squared prediction error over the sample period T,

ᵖE_i^k = (1/T) ∫_0^T ‖x_i(t) − x̃_i^k(t)‖² dt,  (3.3)

where x̃_i^k(t) is the prediction of module k for the i-th object and ᵖE_i^k is the error of the k-th predictor for the i-th class.

The Best Matching Module (BMM) of class i is the module with minimum prediction error,

k_i* = arg min_k ᵖE_i^k,  (3.4)

and the learning rates ψ_i^k are the normalized neighborhood values around the BMMs,

ψ_i^k = exp[−‖ξ^k − ξ^{k_i*}‖² / 2σ²] / Σ_{i'=1}^I exp[−‖ξ^k − ξ^{k_{i'}*}‖² / 2σ²],  (3.5)

where ξ^k and ξ^{k_i*} are the map coordinates of the k-th module and of the BMM of the i-th class. The neighborhood radius shrinks with the iteration number n,

σ(n) = σ_∞ + (σ_0 − σ_∞) exp(−n/τ),  (3.6)

with initial radius σ_0, final radius σ_∞, and time constant τ. Each predictor is updated by gradient descent on its weighted error,

Δᵖw^k = −ᵖη Σ_{i=1}^I ψ_i^k ∂ᵖE_i^k/∂ᵖw^k.  (3.7)

The controllers are trained online in the execution phase described in Section 3.4.

3.4 Execution phase

(Figure: execution-phase block diagram; the predictor array, the CFC, and the NNC array around the controlled object, following Kawato's feedback-error learning [23, 25].)

In the execution phase SOAC controls an unknown, possibly changing, object. A conventional feedback controller (CFC) runs alongside the array of neural network controllers (NNC). Each module's prediction error is low-pass filtered,

ᵖe^k(t) = (1 − ε) ᵖe^k(t − Δt) + ε ‖x(t) − x̃^k(t)‖,  (3.8)

where 0 < ε ≤ 1 is the damping coefficient: a small ε makes module selection smooth but slow, a large ε fast but sensitive to noise. The BMM at time t is

k*(t) = arg min_k ᵖe^k(t),  (3.9)

and the controller outputs are blended by a normalized neighborhood around it,

φ^k = exp[−‖ξ^k − ξ^{k*}‖² / 2σ²] / Σ_{k'=1}^K exp[−‖ξ^{k'} − ξ^{k*}‖² / 2σ²].  (3.10)

The total command adds the CFC command u_cfc(t):

u(t) = Σ_{k=1}^K φ^k u^k(t) + u_cfc(t),  (3.11)
u^k(t) = ᶜf^k(x̂(t)),  (3.12)
u_cfc(t) = W_cfc (x̂(t) − x(t)),  (3.13)

where W_cfc is the CFC gain matrix. The NNCs are trained online with the CFC command as the error signal, gated by φ^k:

Δᶜw^k = ᶜη φ^k (∂ᶜf^k/∂ᶜw^k)^T u_cfc.  (3.14)
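One execution-phase step can be sketched as follows. This is an illustrative sketch under assumed shapes (scalar commands, list-based states); the function name `soac_execution_step` and all toy values are mine, not from the thesis.

```python
import math

def soac_execution_step(e, pred_err, Xi, u_modules, u_cfc, eps, sigma):
    """Execution phase of SOAC: low-pass filter each module's prediction
    error, pick the BMM, weight the module controllers by a normalized
    neighborhood around it, and add the CFC command."""
    K = len(e)
    # smoothed prediction errors with damping coefficient eps
    e = [(1 - eps) * ek + eps * pk for ek, pk in zip(e, pred_err)]
    k_star = min(range(K), key=lambda k: e[k])          # BMM
    phi = [math.exp(-sum((a - b) ** 2 for a, b in zip(Xi[k], Xi[k_star]))
                    / (2 * sigma ** 2)) for k in range(K)]
    s = sum(phi)
    phi = [p / s for p in phi]                          # normalized weights
    u = sum(pk * uk for pk, uk in zip(phi, u_modules)) + u_cfc
    return e, k_star, u

# two modules; module 0 predicts the current object well
e, k_star, u = soac_execution_step(e=[1.0, 1.0], pred_err=[0.0, 2.0],
                                   Xi=[[0.0], [5.0]], u_modules=[1.0, -1.0],
                                   u_cfc=0.1, eps=0.5, sigma=1.0)
```

Because the smoothed error, not the instantaneous one, drives selection, a single noisy sample cannot flip the BMM; the trade-off is the switching delay controlled by ε.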

3.5 Parameter map

If the physical parameter vector p_i of each training object is known, each module can be assigned an interpolated parameter using the learning rates ψ_i^k,

p̃^k = Σ_{i=1}^I ψ_i^k p_i,  (3.15)

the set {p̃^k} forming a parameter-map over the lattice.

3.5.1 Initial selection of the BMM

When the parameter p of a new object is given in advance, the initial BMM can be chosen on the parameter map,

k*(0) = arg min_k ‖p − p̃^k‖,  (3.16)

and thereafter updated by the prediction-error rule (3.9).

3.5.2 Parameter estimation

Conversely, the parameter of an unknown object can be estimated as the parameter p̃^{k*} attached to the converged BMM; this is demonstrated in Chapter 5.


4 SOAC applied to linear systems

To evaluate SOAC and compare it with Wolpert's Multiple Paired Forward-Inverse Models (MPFIM) [57], simulations were run on a family of linear objects.

4.1 Controlled objects and training data

Each object is a mass-spring-damper system,

M_i ẍ + B_i ẋ + K_i x = u,  (4.1)

where x [m] is the position, u [N] the input force, and M_i [kg], B_i [kg/s], K_i [kg/s²] the mass, damping, and spring coefficients of the i-th object. The mass is fixed at M = 1.0 [kg]; the nine training objects p_1, ..., p_9 and the six test objects p_A, ..., p_F differ in (B, K).

(Figure: the training parameters p_1 through p_9 (used object parameters) and the test parameters p_A through p_F (unused object parameters) in the (B, K) plane.)

Training data {x, ẋ, u} were generated by fourth-order Runge-Kutta integration of (4.1), with inputs drawn from the five levels {−0.1, −0.05, 0, 0.05, 0.1}.

(Table: predictor settings; number of classes 9, map size 9 × 9 (two-dimensional), initial neighborhood radius σ_0, final neighborhood radius σ_∞ = 1.5, time constant τ = 100, iteration number N = 1000, learning coefficient η.)

Each predictor is linear: with input vector x_p = [x, ẋ, u]^T and output y = ẍ, the k-th predictor is

y^k = x_p^T ᵖw^k,  ᵖw^k = [ᵖw_1^k, ᵖw_2^k, ᵖw_3^k]^T.  (4.2)
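Generating training data for an object of the form (4.1) takes one Runge-Kutta integrator. This is an illustrative sketch (the step size, the undamped test parameters, and the function names are assumptions of the example, not values from the thesis):

```python
def msd_accel(x, v, u, M, B, K):
    """Mass-spring-damper acceleration from M*xdd + B*xd + K*x = u."""
    return (u - B * v - K * x) / M

def rk4_step(x, v, u, h, M, B, K):
    """One fourth-order Runge-Kutta step of the object dynamics,
    holding the input u constant over the step."""
    def f(s):
        return (s[1], msd_accel(s[0], s[1], u, M, B, K))
    k1 = f((x, v))
    k2 = f((x + 0.5 * h * k1[0], v + 0.5 * h * k1[1]))
    k3 = f((x + 0.5 * h * k2[0], v + 0.5 * h * k2[1]))
    k4 = f((x + h * k3[0], v + h * k3[1]))
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            v + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

# sanity check: with M = K = 1, B = 0, u = 0, x(t) = cos(t),
# so after t = 6.28 (about one period) the state returns near (1, 0)
x, v = 1.0, 0.0
for _ in range(628):
    x, v = rk4_step(x, v, 0.0, 0.01, 1.0, 0.0, 1.0)
```

Each (x, ẋ, u) sample together with the resulting ẍ gives one training pair for the linear predictors of (4.2).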

4.2 Learning-phase results

(Figure 4.1: self-organization of the predictor map; (a) the BMM assignment of each class p_1 through p_9 and (b) the module parameters plotted in the (B, K) plane, shown at several iteration numbers n from 0 onward.)

(Figure 4.2: the converged map at n = 1000; (a) the BMM of each class in the module array and (b) the module parameters in the (B, K) plane, covering the region spanned by the training objects.)

4.3 Execution-phase setup

(Figure 4.3: the desired position, velocity, and acceleration trajectories versus time.)

(Table 4.3: control parameters; damping coefficient ε = 1.0, neighborhood radius σ = 1.5, learning coefficient η, learning time 9000 [sec].)

The desired trajectory was generated from an Ornstein-Uhlenbeck process. Following Kawato's feedback-error learning, each NNC is linear in the desired state: with x̂ = [x̂, ˆẋ, ˆẍ]^T and ᶜw^k = [ᶜw_1^k, ᶜw_2^k, ᶜw_3^k]^T,

u_nnc^k = x̂^T ᶜw^k.  (4.3)

The CFC is a PDA (proportional-derivative-acceleration) controller, W_cfc = [k_x, k_ẋ, k_ẍ]. During learning the object was switched among p_1, ..., p_9 for a total of 9000 [sec].

(Figure 4.4: self-organization of the controller map during online learning, shown in the (B, K) plane at several times from t = 0 to t = 9000 [sec].)

4.4 Execution-phase results

(Figure 4.5: tracking of the six untrained objects p_A through p_F by (a) the CFC alone and (b) SOAC; desired versus actual trajectories and the tracking errors versus time.)

(Table 4.4: the predictor and NNC parameters (ᵖB^k, ᵖK^k, ᵖM^k and ᶜB^k, ᶜK^k, ᶜM^k) identified by the BMM of each class p_1 through p_9; the NNC parameters are recovered from the neighborhood-weighted controller weights Σ_k φ^k ᶜw^k.)

(Figure 4.6: the BMM (module number) selected over time as the object switches among p_A through p_F.)

After learning, SOAC switched to the appropriate BMM within a few seconds of each object change, including for the untrained objects, and reduced the tracking error markedly compared with the CFC alone.

4.5 Comparison with MPFIM

SOAC was compared with MPFIM [57] under matched conditions. MPFIM selects modules by a soft-max over prediction-error likelihoods and responsibility predictors, whereas SOAC uses its SOM-based neighborhood mechanism.

(Table 4.5: common settings; 9 classes each, damping coefficient ε = 1.0, learning coefficient ᵖη of the predictors, learning coefficient ᶜη of the NNCs.)

(Figure 4.7: modules identified by (a) SOAC and (b) MPFIM, plotted in the (B, K) plane for the forward models (predictors) and the inverse models (NNCs), with the used object parameters marked; SOAC's modules are ordered topologically, MPFIM's are not.)

(Figure 4.8: tracking by (a) SOAC and (b) MPFIM; desired versus actual trajectories and the tracking errors versus time.)

Tracking accuracy was measured by the mean absolute error (MAE). With K = 9 modules, SOAC and MPFIM reach comparable accuracy on the trained objects.

(Figure 4.9: MAE over repeated runs as a function of the number of modules, for SOAC and MPFIM.)

(Figure 4.10: the training classes and the untrained class region used to test generalization in the (B, K) plane.)

Averaged over 100 runs and several map sizes, SOAC attained a lower MAE than MPFIM for untrained objects, reflecting the interpolation ability of the topologically ordered map.

4.6 Three-level comparison with MPFIM

Interpolation between modules by SOM-type weighted sums is justified when the module family is linear in its parameters, i.e. when

h(x; θ_1 + θ_2) = h(x; θ_1) + h(x; θ_2);  (4.4)

this holds for the linear predictors and controllers used here. To quantify identification quality, define the model error of module k for object i as the distance between the identified parameters õ^k and the true ones o_i = [B_i, K_i, M_i]:

m E_i^k := ‖o_i − õ^k‖.  (4.5)

(Figure: three-level comparison of SOAC and MPFIM; the mean model error (Σ_{i=1}^3 m E_i)/3 over three training classes and the model error m E_A for an untrained object A, at the learning level, the interpolation level, and the incremental-learning level.)

For the interpolation level, MPFIM blends its first and second winners by soft-max; an interpolated object estimate can be formed from their smoothed prediction errors,

e_1 = (1 − ε) e_1 + ε ‖x − x̃_1‖,  (4.6)
e_2 = (1 − ε) e_2 + ε ‖x − x̃_2‖,  (4.7)
o_ip = (e_2 o_1 + e_1 o_2) / (e_1 + e_2),  (4.8)

where x̃_1, x̃_2 and o_1, o_2 are the predictions and parameters of the first and second winners. For the incremental-learning level, the predictor is further adapted online,

Δᵖw = ᵖη (∂x̃/∂ᵖw)^T (x − x̃).  (4.9)
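The two-winner interpolation of (4.8) can be sketched in one line. This is an illustrative sketch; the function name and the toy parameter vectors are mine.

```python
def interpolate_object(e1, e2, o1, o2):
    """Interpolated object parameter from the first and second winners:
    each winner is weighted by the *opposite* module's smoothed prediction
    error, so the better predictor dominates the estimate."""
    return [(e2 * a + e1 * b) / (e1 + e2) for a, b in zip(o1, o2)]

# winner 1 (error 1.0) dominates over winner 2 (error 3.0)
o = interpolate_object(1.0, 3.0, [0.0, 0.0], [4.0, 8.0])
```

When e_1 = e_2 the estimate is the midpoint; as e_1 → 0 the estimate converges to o_1, consistent with the responsibility-style weighting.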

(Table: the predictor parameters ᵖB, ᵖK, ᵖM and NNC parameters ᶜB, ᶜK, ᶜM identified by each module.)

The effect of the execution-phase neighborhood radius was also examined for σ ∈ {1.0, 1.5, 2.0}: a moderate radius balances cooperation between neighboring NNCs against blurring of the individual controllers, and with too small a radius the map ordering degrades toward independent modules.

At the learning level, both SOAC and MPFIM identify the trained classes accurately. At the interpolation level, SOAC's map provides ordered intermediate modules, whereas the soft-max interpolation between MPFIM's first and second winners [46] is not topologically organized and its accuracy depends on which modules happen to win. At the incremental-learning level, SOAC adapts its modules online as studied for the mnSOM in [54], and over the 9000 [sec] runs converged to lower model errors than MPFIM [18, 19]; the SOM-style neighborhood constrains where new objects are absorbed into the map.

(Figure 4.11: three-level comparison of SOAC and MPFIM in the (B, K) plane; (a)(b) the learning phase (first level), (c)(d) interpolation by soft-max between the first and second winners (second level), and (e)(f) the incremental-learning phase (third level), with model errors for the training classes and the untrained class A shown for SOAC (left) and MPFIM (right).)

(Figure 4.12: the predictor and NNC parameters in the (B, K) plane.)

(Figure 4.13: the predictor and NNC parameters in the (B, K) plane.)

(Figure 4.14: maps learned with neighborhood radii σ = {1.0, 1.5, 2.0}; (a) predictors, (b) NNCs, (c) predictors and NNCs together.)

(Figure 4.15: time course of module formation in the (B, K) plane; (a)(b) SOAC's forward and inverse models and (c)(d) MPFIM's forward and inverse models.)


5 SOAC applied to a nonlinear system: the inverted pendulum

5.1 Controlled object

SOAC was next applied to stabilizing an inverted pendulum on a cart, whose dynamics are

(M + m) ẍ + m l cosθ θ̈ − m l θ̇² sinθ + f ẋ = a u,  (5.1)
m l cosθ ẍ + (I + m l²) θ̈ − m g l sinθ + C θ̇ = 0,  (5.2)

where x [m] is the cart position, θ [rad] the pendulum angle from the upright, u the input, M [kg] the cart mass, m [kg] the pendulum mass, l [m] the length to the mass center, f [kg/s] the cart friction coefficient, C [kgm²/s] the pendulum friction coefficient, g [m/s²] the gravity acceleration, a [N/V] the input gain, and I = ml²/3 [kgm²] the pendulum moment of inertia. The state vector is x = [x, θ, ẋ, θ̇]^T.

(Figure 5.1: the cart-pendulum system.)

(Table 5.1: fixed parameters; cart mass M = 5.0 [kg], pendulum friction coefficient C [kgm²/s], cart friction coefficient f [kg/s], gravity acceleration g = 9.8 [m/s²], gain a = 5 [N/V].)

The variable parameter is p_i = [l_i [m], m_i [kg]], the length to the mass center and the pendulum mass. The learning phase uses nine training objects p_1, ..., p_9 on a 3 × 3 grid in (l, m), with l from 0.6 to 1.8 [m] and m up to 1.8 [kg]; the execution phase uses nine further objects p_A, ..., p_I at intermediate parameter values.

(Table 5.2: the training parameters p_1 through p_9 and execution-phase parameters p_A through p_I.)

(Table 5.3: predictor settings; number of classes 9, map size 9 × 9 (two-dimensional), initial neighborhood radius σ_0, final neighborhood radius σ_∞ = 1.8, time constant τ = 100, iteration number N = 1000.)

Training data were generated by fourth-order Runge-Kutta integration of (5.1)-(5.2), with three input levels {−0.1, 0, 0.1}; each predictor maps x(t) and u(t) to x(t + Δt).
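Simulating the pendulum requires solving (5.1)-(5.2) for the two accelerations, which form a 2 × 2 linear system in (ẍ, θ̈). The sketch below is illustrative: the function name is mine, the gain default a = 5 follows the extracted Table 5.1 (the value may be garbled there), and the friction values in the call are assumptions.

```python
import math

def pendulum_accels(state, u, M, m, l, f, C, g=9.8, a=5.0):
    """Cart and pendulum accelerations from Eqs. (5.1)-(5.2),
    with theta measured from the upright position."""
    x, th, dx, dth = state
    I = m * l ** 2 / 3.0                      # pendulum moment of inertia
    # linear system  A @ [xdd, thdd] = b
    a11, a12 = M + m, m * l * math.cos(th)
    a21, a22 = m * l * math.cos(th), I + m * l ** 2
    b1 = a * u + m * l * dth ** 2 * math.sin(th) - f * dx
    b2 = m * g * l * math.sin(th) - C * dth
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# upright equilibrium with zero input: nothing accelerates
xdd, thdd = pendulum_accels((0.0, 0.0, 0.0, 0.0), 0.0,
                            M=5.0, m=1.0, l=1.0, f=10.0, C=1e-3)
```

The same function can be fed to the Runge-Kutta integrator of Chapter 4 to generate the predictor training data.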

5.2 Learning results and controller design

(Figure 5.2: the BMM assignment of the training data on the 9 × 9 map after learning.)

The execution phase must keep the pendulum upright (θ = 0) while the cart tracks a desired position. The CFC is a state-feedback controller with gain vector W_cfc = [k_x, k_θ, k_ẋ, k_θ̇].

(Figure 5.3: stabilization by the CFC alone; cart position and pendulum angle versus time, with the desired and actual responses.)

For reference, each module's controller can be designed as a linear-quadratic regulator: minimizing the cost

∫_0^∞ (x^T Q x + u^T R u) dt,  (5.3)

the state-feedback gain of module k is

W_cfc^k = R^{-1} (B^k)^T P^k,  (5.4)

where P^k is the (4 × 4) solution of the Riccati equation

(A^k)^T P^k + P^k A^k + Q − P^k B^k R^{-1} (B^k)^T P^k = 0,  (5.5)

with A^k, B^k the linearized state-space matrices of the k-th module, Q a 4 × 4 weighting matrix, and R a scalar weight. The BMM's gains are used as the operative feedback controller.

5.3 Execution results

(Figure 5.4: the BMM (module number) selected over time; the selection follows the object changes.)

Each module k carries the interpolated parameter p̃^k = Σ_i ψ_i^k p_i of (3.15), forming a parameter map.

(Figure 5.5: the parameter map in the (l, m) plane; the BMMs of the training objects sit at their true parameters and intermediate modules interpolate between them.)

When the object is switched between two training objects with opposite parameters, p_3 = [l, m] = [1.8, 0.2] and p_7 = [0.6, 1.8], the BMM settles on the correct module within about 0.9 [sec] of each switch.

(Figure: the BMM trajectory on the map during the switches.)

(Figure: BMM selection over time, distinguishing feedback selection (by prediction error) from feedforward selection (by the parameter map).)

With a small damping coefficient ε, the BMM selection behaves approximately as a step switch at a settling time t_s after an object change at t_c = 0: modeling the probability that the feedforward-selected module k*_ff or the feedback-selected module k*_fb is in charge with the unit step U(·),

P(k*_ff) = 1 − U(t_c − t_s) = { 1 (t_c < t_s); 0 (t_c ≥ t_s) },  (5.6)
P(k*_fb) = U(t_c − t_s) = { 0 (t_c < t_s); 1 (t_c ≥ t_s) },  (5.7)

where the settling time t_s shrinks as ε grows.

(Figure 5.8: cart position and pendulum angle during control, with the BMM selection (feedback versus feedforward) shown below.)

(Figure 5.9: the error around an object change, with settling time t_s = 5.9 [sec] under pure feedback selection; feedforward selection on the parameter map removes most of this delay.)

5.4 Parameter estimation

Because each module carries an interpolated parameter p̃^k, the parameter of an unknown object can be read off from the converged BMM.

(Figure 5.10: estimated versus true parameters in the (l, m) plane.)

For an untrained object with p = [1.60, 1.50], the converged BMM carried the estimate p̃ = [1.65, 1.54], close to the true value.

5.5 Discussion

The results show that the SOM-based module arrangement of SOAC carries over from the linear objects of Chapter 4 to a nonlinear plant: the mnSOM-style learning phase orders the predictors, the execution phase switches the BMM within about a second of an object change, and the parameter map supports both initial module selection (feedforward selection) and parameter estimation. The feedback-error-learning NNCs relate to the inverse dynamics model learning (IDML) of Gomi and Kawato [16]; the inverted-pendulum application of SOAC was reported in [34, 35].


6 Discussion

6.1 Position of SOAC

SOAC (1) learns a family of controlled objects with a map of paired predictors and controllers and (2) exploits the map topology for fast switching, interpolation of unlearned objects, and parameter estimation. Compared with the mixture of experts of Jacobs and Jordan, the multiple-model control of Narendra, and the MPFIM of Wolpert and Kawato, SOAC's distinctive feature is the self-organized, topological ordering of the module set, inherited from the mnSOM.

6.2 Comparison with MPFIM

For trained objects, SOAC and MPFIM achieve comparable accuracy. For untrained objects, SOAC's ordered map supports interpolation between modules, giving lower errors (Chapter 4); MPFIM's modules, being unordered, can only be blended by soft-max between whichever modules happen to win.

6.3 Limitations of SOAC

SOAC inherits from the mnSOM [52, 53] the assumption that the training classes span the relevant region of object space: objects well outside that region are extrapolated poorly, and interpolation is exact only for module families that are linear in their parameters. Training the controllers also presupposes a CFC that can stabilize the object during learning, as in the distal-teacher setting of Jordan and Rumelhart [22].

6.4 Future work

The fixed two-dimensional SOM lattice is not essential. Neural Gas [32, 45], the Evolving SOM (ESOM) [7], Growing Neural Gas (GNG) [10], and the SOM-of-SOMs and homotopy extensions of Furukawa [12, 13] suggest map structures that could adapt their topology and size to the family of objects, and Tokunaga's mnSOM framework [51] already admits MLP, RBF, and RNN modules. A second direction is online, incremental formation of the module set, as studied for the mnSOM in [54], so that a controller for an object between known objects A and B can be grown on demand. A third is combining SOAC with reinforcement learning along the lines of the multiple-model reinforcement learning (MMRL) of Doya, Samejima et al. [8, 47], so that controllers can be acquired without a prescribed CFC.

7 Conclusion

This thesis proposed the Self-Organizing Adaptive Controller (SOAC), which couples an array of paired predictors and controllers with SOM/mnSOM-style neighborhood learning. Simulations on linear mass-spring-damper objects and on an inverted pendulum showed that (1) the modules self-organize into a topologically ordered map of the object family, (2) the BMM switches quickly and correctly when the object changes, (3) untrained objects are handled by interpolation on the map, with accuracy comparable to or better than MPFIM, and (4) the associated parameter map supports both initial module selection and parameter estimation. Future work includes adaptive map topologies, online module growth, and integration with reinforcement learning.

Acknowledgments

The author thanks everyone who supported this work, including Prof. Sandor M. Veres for discussions on SOAC, and acknowledges support from the COE program.

References

[1] J.S. Albus, A new approach to manipulator control: the cerebellar model articulation controller (CMAC), Transactions of the ASME, Journal of Dynamic Systems, Measurement, and Control, vol.97, 1975.
[2] S. Amari, Topographic organization of nerve fields, Bulletin of Mathematical Biology, vol.42, no.3, 1980.
[3] S. Amari, Field theory of self-organizing neural nets, IEEE Trans. Systems, Man and Cybernetics, vol.13, no.5, 1983.
[4] C.G. Atkeson and D.J. Reinkensmeyer, Using associative content-addressable memories to control robots, Proc. IEEE Conference on Decision and Control, Austin, Texas, Dec. 1988.
[5] C.M. Bishop, M. Svensén, and C.K.I. Williams, GTM: the generative topographic mapping, Neural Computation, vol.10, no.1, 1998.
[6] G. Deboeck and T. Kohonen (eds.), 1999 (in Japanese).
[7] D. Deng and N. Kasabov, On-line pattern analysis by evolving self-organizing maps, Neurocomputing, vol.51, pp.87-103, 2003.
[8] K. Doya, K. Samejima, K. Katagiri, and M. Kawato, Multiple model-based reinforcement learning, Neural Computation, vol.14, no.6, 2002.
[9] M. Egmont-Petersen, D. de Ridder, and H. Handels, Image processing with neural networks: a review, Pattern Recognition, vol.35, 2002.
[10] B. Fritzke, A growing neural gas network learns topologies, Advances in Neural Information Processing Systems, vol.7, 1995.
[11] (in Japanese), vol.9.
[12] T. Furukawa, SOM of SOMs: self-organizing map which maps a group of self-organizing maps, Lecture Notes in Computer Science, vol.3696, 2005.
[13] T. Furukawa, SOM of SOMs: an extension of SOM from map to homotopy, Lecture Notes in Computer Science (ICONIP 2006), 2006.
[14] T. Furukawa, K. Tokunaga, S. Kaneko, K. Kimotsuki, and S. Yasui, Generalized self-organizing maps (mnSOM) for dealing with dynamical systems, Proc. International Symposium on Nonlinear Theory and its Applications, Fukuoka, Japan, 2004.
[15] T. Furukawa, Self-organizing homotopy network, Proc. Workshop on Self-Organizing Maps (WSOM 2007), Germany, 2007.
[16] H. Gomi and M. Kawato, Neural network control for a closed-loop system using feedback-error-learning, Neural Networks, vol.6, no.7, 1993.
[17] H. Gomi and M. Kawato, Recognition of manipulated objects by motor learning with modular architecture networks, Neural Networks, vol.6, no.4, 1993.
[18] M. Haruno, D.M. Wolpert, and M. Kawato, MOSAIC model for motor learning and control, Neural Computation, vol.13, 2001.
[19] M. Haruno, D.M. Wolpert, and M. Kawato, Multiple paired forward-inverse models for human motor learning and control, Advances in Neural Information Processing Systems, vol.11, 1999.
[20] R.A. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton, Adaptive mixtures of local experts, Neural Computation, vol.3, pp.79-87, 1991.
[21] M.I. Jordan and R.A. Jacobs, Hierarchical mixtures of experts and the EM algorithm, Neural Computation, vol.6, 1994.
[22] M.I. Jordan and D.E. Rumelhart, Forward models: supervised learning with a distal teacher, Cognitive Science, vol.16, 1992.
[23] M. Kawato, Feedback-error-learning neural network for supervised motor learning, in R. Eckmiller (ed.), Advanced Neural Computers, Elsevier, North-Holland, 1990.
[24] (in Japanese), 1996.
[25] M. Kawato, K. Furukawa, and R. Suzuki, A hierarchical neural network model for the control and learning of voluntary movements, Biological Cybernetics, vol.56, 1987.
[26] T. Kohonen, S. Kaski, and H. Lappalainen, Self-organized formation of various invariant-feature filters in the adaptive-subspace SOM, Neural Computation, vol.9, no.6, 1997.
[27] T. Kohonen, Self-Organizing Maps, Springer-Verlag, 2001.
[28] M. Kuperstein, Neural model of adaptive hand-eye coordination for single postures, Science, vol.239, 1988.
[29] S.P. Luttrell, Self-organization: a derivation from first principles of a class of learning algorithms, Proc. IEEE Int. Joint Conf. on Neural Networks (IJCNN 89), Part I, IEEE Press, 1989.
[30] S.P. Luttrell, Derivation of a class of training algorithms, IEEE Trans. Neural Networks, vol.1, no.2, 1990.
[31] T.M. Martinetz, H.J. Ritter, and K.J. Schulten, Three-dimensional neural net for learning visuomotor coordination of a robot arm, IEEE Trans. Neural Networks, vol.1, no.1, 1990.
[32] T.M. Martinetz, S.G. Berkovich, and K.J. Schulten, "Neural-gas" network for vector quantization and its application to time-series prediction, IEEE Trans. Neural Networks, vol.4, no.4, 1993.
[33] T.W. Miller, F.H. Glanz, and L.G. Kraft, Application of a general learning algorithm to the control of robotic manipulators, International Journal of Robotics Research, vol.6, no.2, pp.84-98, 1987.
[34] T. Minatohara and T. Furukawa, Self-organizing adaptive controllers: application to the inverted pendulum, Proc. Workshop on Self-Organizing Maps (WSOM 2005), pp.41-48, France, 2005.
[35] modular network SOM, vol.105, pp.49-54, 2005 (in Japanese).
[36] H. Miyamoto, M. Kawato, T. Setoyama, and R. Suzuki, Feedback-error-learning neural network for trajectory control of a robotic manipulator, Neural Networks, vol.1, 1988.
[37] A. Miyamura and H. Kimura, Stability of feedback error learning scheme, Systems & Control Letters, vol.45, 2002.
[38] M.A. Motter and J.C. Principe, Predictive multiple model switching control with the self-organizing map, International Journal of Robust and Nonlinear Control, vol.12, 2002.
[39] K.S. Narendra, J. Balakrishnan, and M.K. Ciliz, Adaptation and learning using multiple models, switching, and tuning, IEEE Control Systems Magazine, vol.15, no.3, pp.37-51, 1995.
[40] K.S. Narendra and J. Balakrishnan, Adaptive control using multiple models, IEEE Trans. Automatic Control, vol.42, no.2, 1997.
[41] S. Nishida, K. Ishii, and T. Furukawa, An online adaptation control system using mnSOM, Lecture Notes in Computer Science (ICONIP 2006), 2006.
[42] (in Japanese), 2006.
[43] P. Pajunen, A. Hyvärinen, and J. Karhunen, Nonlinear blind source separation by self-organizing maps, Proc. International Conference on Neural Information Processing (ICONIP 96), 1996.
[44] K. Pawelzik, J. Kohlmorgen, and K.-R. Müller, Annealed competition of experts for a segmentation and classification of switching dynamics, Neural Computation, vol.8, no.2, 1996.
[45] J.C. Principe, L. Wang, and M.A. Motter, Local dynamic modeling with self-organizing maps and applications to nonlinear system identification and control, Proc. IEEE, vol.86, no.11, 1998.
[46] K. Rose, E. Gurewitz, and G.C. Fox, Statistical mechanics and phase transitions in clustering, Physical Review Letters, vol.65, no.8, 1990.
[47] (in Japanese), IEICE Trans., vol.J84-D-II, no.9, 2001.
[48] J.W. Sammon, A nonlinear mapping for data structure analysis, IEEE Trans. Computers, vol.C-18, no.5, 1969.
[49] (in Japanese), IEICE Trans., vol.J79-D-II, no.7, 1996.
[50] S. Suzuki and H. Ando, A modular network scheme for unsupervised 3D object recognition, Neurocomputing, vol.31, 2000.
[51] K. Tokunaga and T. Furukawa, Nonlinear ASSOM constituted of autoassociative neural modules, Proc. Workshop on Self-Organizing Maps, 2005.
[52] K. Tokunaga, T. Furukawa, and S. Yasui, Modular network SOM: self-organizing maps in function space, Neural Information Processing: Letters and Reviews, vol.9, 2005.
[53] (in Japanese): modular network SOM.
[54] (in Japanese), vol.35, pp.75-80, 2006.
[55] (in Japanese), 1999.
[56] D.J. Willshaw and C. von der Malsburg, How patterned neural connections can be set up by self-organization, Proc. Roy. Soc. Lond. B, vol.194, 1976.
[57] D.M. Wolpert and M. Kawato, Multiple paired forward and inverse models for motor control, Neural Networks, vol.11, 1998.

I. (heading in Japanese lost in extraction)
1. (Japanese-language journal article; details lost in extraction), accepted.

II. (heading in Japanese lost in extraction)
1. T. Minatohara and T. Furukawa, Self-Organizing Adaptive Controllers: Application to the Inverted Pendulum, Proc. Workshop on Self-Organizing Maps (WSOM 2005), pp.41–48, France, 2005.
2. T. Minatohara and T. Furukawa, A proposal of self-organizing adaptive controller (SOAC), Proc. International Conference on Brain-inspired Information Technology, Japan.
3. T. Minatohara and T. Furukawa, An adaptive controller based on modular network SOM, Proc. Postech-Kyutech Joint Workshop on Neuroinformatics, Korea, 2005.

III. (heading in Japanese lost in extraction)
1. modular network SOM (Japanese-language article; authors and title lost in extraction), vol.105, no.30, pp.49–54, 2005.

Introduction of Self-Organizing Map * 1 Ver. 1.00.00 (2017 6 3 ) *1 E-mail: furukawa@brain.kyutech.ac.jp i 1 1 1.1................................ 2 1.2...................................... 4 1.3.......................

More information

18 2 20 W/C W/C W/C 4-4-1 0.05 1.0 1000 1. 1 1.1 1 1.2 3 2. 4 2.1 4 (1) 4 (2) 4 2.2 5 (1) 5 (2) 5 2.3 7 3. 8 3.1 8 3.2 ( ) 11 3.3 11 (1) 12 (2) 12 4. 14 4.1 14 4.2 14 (1) 15 (2) 16 (3) 17 4.3 17 5. 19

More information

Convolutional Neural Network A Graduation Thesis of College of Engineering, Chubu University Investigation of feature extraction by Convolution

Convolutional Neural Network A Graduation Thesis of College of Engineering, Chubu University Investigation of feature extraction by Convolution Convolutional Neural Network 2014 3 A Graduation Thesis of College of Engineering, Chubu University Investigation of feature extraction by Convolutional Neural Network Fukui Hiroshi 1940 1980 [1] 90 3

More information

TC1-31st Fuzzy System Symposium (Chofu, September -, 15) cremental Neural Networ (SOINN) [5] Enhanced SOINN (ESOINN) [] ESOINN GNG Deng Evolving Self-

TC1-31st Fuzzy System Symposium (Chofu, September -, 15) cremental Neural Networ (SOINN) [5] Enhanced SOINN (ESOINN) [] ESOINN GNG Deng Evolving Self- TC1-31st Fuzzy System Symposium (Chofu, September -, 15) Proposing a Growing Self-Organizing Map Based on a Learning Theory of a Gaussian Mixture Model Kazuhiro Tounaga National Fisheries University Abstract:

More information

& 3 3 ' ' (., (Pixel), (Light Intensity) (Random Variable). (Joint Probability). V., V = {,,, V }. i x i x = (x, x,, x V ) T. x i i (State Variable),

& 3 3 ' ' (., (Pixel), (Light Intensity) (Random Variable). (Joint Probability). V., V = {,,, V }. i x i x = (x, x,, x V ) T. x i i (State Variable), .... Deeping and Expansion of Large-Scale Random Fields and Probabilistic Image Processing Kazuyuki Tanaka The mathematical frameworks of probabilistic image processing are formulated by means of Markov

More information

鉄鋼協会プレゼン

鉄鋼協会プレゼン NN :~:, 8 Nov., Adaptive H Control for Linear Slider with Friction Compensation positioning mechanism moving table stand manipulator Point to Point Control [G] Continuous Path Control ground Fig. Positoining

More information

2008 : 80725872 1 2 2 3 2.1.......................................... 3 2.2....................................... 3 2.3......................................... 4 2.4 ()..................................

More information

SICE東北支部研究集会資料(2012年)

SICE東北支部研究集会資料(2012年) 77 (..3) 77- Simulation of Disturbance Compensation Control of Dual Manipulator for an Inverted Pendulum Robot Using The Extended State Observer Luis Canete Kenta Nagano, Takuma Sato, Luis Canete,Takayuki

More information

Grund.dvi

Grund.dvi 24 24 23 411M133 i 1 1 1.1........................................ 1 2 4 2.1...................................... 4 2.2.................................. 6 2.2.1........................... 6 2.2.2 viterbi...........................

More information

130 Oct Radial Basis Function RBF Efficient Market Hypothesis Fama ) 4) 1 Fig. 1 Utility function. 2 Fig. 2 Value function. (1) (2)

130 Oct Radial Basis Function RBF Efficient Market Hypothesis Fama ) 4) 1 Fig. 1 Utility function. 2 Fig. 2 Value function. (1) (2) Vol. 47 No. SIG 14(TOM 15) Oct. 2006 RBF 2 Effect of Stock Investor Agent According to Framing Effect to Stock Exchange in Artificial Stock Market Zhai Fei, Shen Kan, Yusuke Namikawa and Eisuke Kita Several

More information

第 55 回自動制御連合講演会 2012 年 11 月 17 日,18 日京都大学 1K403 ( ) Interpolation for the Gas Source Detection using the Parameter Estimation in a Sensor Network S. T

第 55 回自動制御連合講演会 2012 年 11 月 17 日,18 日京都大学 1K403 ( ) Interpolation for the Gas Source Detection using the Parameter Estimation in a Sensor Network S. T 第 55 回自動制御連合講演会 212 年 11 月 日, 日京都大学 1K43 () Interpolation for the Gas Source Detection using the Parameter Estimation in a Sensor Network S. Tokumoto, T. Namerikawa (Keio Univ. ) Abstract The purpose of

More information

untitled

untitled K-Means 1 5 2 K-Means 7 2.1 K-Means.............................. 7 2.2 K-Means.......................... 8 2.3................... 9 3 K-Means 11 3.1.................................. 11 3.2..................................

More information

1 Fig. 1 Extraction of motion,.,,, 4,,, 3., 1, 2. 2.,. CHLAC,. 2.1,. (256 ).,., CHLAC. CHLAC, HLAC. 2.3 (HLAC ) r,.,. HLAC. N. 2 HLAC Fig. 2

1 Fig. 1 Extraction of motion,.,,, 4,,, 3., 1, 2. 2.,. CHLAC,. 2.1,. (256 ).,., CHLAC. CHLAC, HLAC. 2.3 (HLAC ) r,.,. HLAC. N. 2 HLAC Fig. 2 CHLAC 1 2 3 3,. (CHLAC), 1).,.,, CHLAC,.,. Suspicious Behavior Detection based on CHLAC Method Hideaki Imanishi, 1 Toyohiro Hayashi, 2 Shuichi Enokida 3 and Toshiaki Ejima 3 We have proposed a method for

More information

情報化社会に関する全国調査中間報告書

情報化社会に関する全国調査中間報告書 9 1 1990 1998 25.2% 2000 38.6% 2001 50.1% 2002 3 57.2% 2001 12 60.5% 2002 3 49.5% 2001 12 44.0% 2002 1 1992 0 2 1993 1 2 1994 84 37 1995 467 283 1996 1411 1080 1997 1621 1057 1998 1700 1098 1999 3036 1666

More information

(MIRU2008) HOG Histograms of Oriented Gradients (HOG)

(MIRU2008) HOG Histograms of Oriented Gradients (HOG) (MIRU2008) 2008 7 HOG - - E-mail: katsu0920@me.cs.scitec.kobe-u.ac.jp, {takigu,ariki}@kobe-u.ac.jp Histograms of Oriented Gradients (HOG) HOG Shape Contexts HOG 5.5 Histograms of Oriented Gradients D Human

More information

Real AdaBoost HOG 2009 3 A Graduation Thesis of College of Engineering, Chubu University Efficient Reducing Method of HOG Features for Human Detection based on Real AdaBoost Chika Matsushima ITS Graphics

More information

IPSJ SIG Technical Report Vol.2010-CVIM-170 No /1/ Visual Recognition of Wire Harnesses for Automated Wiring Masaki Yoneda, 1 Ta

IPSJ SIG Technical Report Vol.2010-CVIM-170 No /1/ Visual Recognition of Wire Harnesses for Automated Wiring Masaki Yoneda, 1 Ta 1 1 1 1 2 1. Visual Recognition of Wire Harnesses for Automated Wiring Masaki Yoneda, 1 Takayuki Okatani 1 and Koichiro Deguchi 1 This paper presents a method for recognizing the pose of a wire harness

More information

4. C i k = 2 k-means C 1 i, C 2 i 5. C i x i p [ f(θ i ; x) = (2π) p 2 Vi 1 2 exp (x µ ] i) t V 1 i (x µ i ) 2 BIC BIC = 2 log L( ˆθ i ; x i C i ) + q

4. C i k = 2 k-means C 1 i, C 2 i 5. C i x i p [ f(θ i ; x) = (2π) p 2 Vi 1 2 exp (x µ ] i) t V 1 i (x µ i ) 2 BIC BIC = 2 log L( ˆθ i ; x i C i ) + q x-means 1 2 2 x-means, x-means k-means Bayesian Information Criterion BIC Watershed x-means Moving Object Extraction Using the Number of Clusters Determined by X-means Clustering Naoki Kubo, 1 Kousuke

More information

IPSJ SIG Technical Report Vol.2015-MUS-107 No /5/23 HARK-Binaural Raspberry Pi 2 1,a) ( ) HARK 2 HARK-Binaural A/D Raspberry Pi 2 1.

IPSJ SIG Technical Report Vol.2015-MUS-107 No /5/23 HARK-Binaural Raspberry Pi 2 1,a) ( ) HARK 2 HARK-Binaural A/D Raspberry Pi 2 1. HARK-Binaural Raspberry Pi 2 1,a) 1 1 1 2 3 () HARK 2 HARK-Binaural A/D Raspberry Pi 2 1. [1,2] [2 5] () HARK (Honda Research Institute Japan audition for robots with Kyoto University) *1 GUI ( 1) Python

More information

xx/xx Vol. Jxx A No. xx 1 Fig. 1 PAL(Panoramic Annular Lens) PAL(Panoramic Annular Lens) PAL (2) PAL PAL 2 PAL 3 2 PAL 1 PAL 3 PAL PAL 2. 1 PAL

xx/xx Vol. Jxx A No. xx 1 Fig. 1 PAL(Panoramic Annular Lens) PAL(Panoramic Annular Lens) PAL (2) PAL PAL 2 PAL 3 2 PAL 1 PAL 3 PAL PAL 2. 1 PAL PAL On the Precision of 3D Measurement by Stereo PAL Images Hiroyuki HASE,HirofumiKAWAI,FrankEKPAR, Masaaki YONEDA,andJien KATO PAL 3 PAL Panoramic Annular Lens 1985 Greguss PAL 1 PAL PAL 2 3 2 PAL DP

More information

21 David Marr Marr Marr Marr 3 1. 1

21 David Marr Marr Marr Marr 3 1. 1 21 David Marr Marr Marr Marr 3 1. 1 2 2. 2.1. 3.1.1. 3 (1) (2) () (4) (5) 3.1.2. 3.1.4. 1970 1984 Doya K. What are the computations of the cerebellum, the basal ganglia, and the cerebral cortex. Neural

More information

[1] SBS [2] SBS Random Forests[3] Random Forests ii

[1] SBS [2] SBS Random Forests[3] Random Forests ii Random Forests 2013 3 A Graduation Thesis of College of Engineering, Chubu University Proposal of an efficient feature selection using the contribution rate of Random Forests Katsuya Shimazaki [1] SBS

More information

mt_4.dvi

mt_4.dvi ( ) 2006 1 PI 1 1 1.1................................. 1 1.2................................... 1 2 2 2.1...................................... 2 2.1.1.......................... 2 2.1.2..............................

More information

2007/8 Vol. J90 D No. 8 Stauffer [7] 2 2 I 1 I 2 2 (I 1(x),I 2(x)) 2 [13] I 2 = CI 1 (C >0) (I 1,I 2) (I 1,I 2) Field Monitoring Server

2007/8 Vol. J90 D No. 8 Stauffer [7] 2 2 I 1 I 2 2 (I 1(x),I 2(x)) 2 [13] I 2 = CI 1 (C >0) (I 1,I 2) (I 1,I 2) Field Monitoring Server a) Change Detection Using Joint Intensity Histogram Yasuyo KITA a) 2 (0 255) (I 1 (x),i 2 (x)) I 2 = CI 1 (C>0) (I 1,I 2 ) (I 1,I 2 ) 2 1. [1] 2 [2] [3] [5] [6] [8] Intelligent Systems Research Institute,

More information

1, 2, 2, 2, 2 Recovery Motion Learning for Single-Armed Mobile Robot in Drive System s Fault Tauku ITO 1, Hitoshi KONO 2, Yusuke TAMURA 2, Atsushi YAM

1, 2, 2, 2, 2 Recovery Motion Learning for Single-Armed Mobile Robot in Drive System s Fault Tauku ITO 1, Hitoshi KONO 2, Yusuke TAMURA 2, Atsushi YAM 1, 2, 2, 2, 2 Recovery Motion Learning for Single-Armed Mobile Robot in Drive System s Fault Tauku ITO 1, Hitoshi KONO 2, Yusuke TAMURA 2, Atsushi YAMASHITA 2 and Hajime ASAMA 2 1 Department of Precision

More information

Computer Security Symposium October ,a) 1,b) Microsoft Kinect Kinect, Takafumi Mori 1,a) Hiroaki Kikuchi 1,b) [1] 1 Meiji U

Computer Security Symposium October ,a) 1,b) Microsoft Kinect Kinect, Takafumi Mori 1,a) Hiroaki Kikuchi 1,b) [1] 1 Meiji U Computer Security Symposium 017 3-5 October 017 1,a) 1,b) Microsoft Kinect Kinect, Takafumi Mori 1,a) Hiroaki Kikuchi 1,b) 1. 017 5 [1] 1 Meiji University Graduate School of Advanced Mathematical Science

More information

三石貴志.indd

三石貴志.indd 流通科学大学論集 - 経済 情報 政策編 - 第 21 巻第 1 号,23-33(2012) SIRMs SIRMs Fuzzy fuzzyapproximate approximatereasoning reasoningusing using Lukasiewicz Łukasiewicz logical Logical operations Operations Takashi Mitsuishi

More information

it-ken_open.key

it-ken_open.key 深層学習技術の進展 ImageNet Classification 画像認識 音声認識 自然言語処理 機械翻訳 深層学習技術は これらの分野において 特に圧倒的な強みを見せている Figure (Left) Eight ILSVRC-2010 test Deep images and the cited4: from: ``ImageNet Classification with Networks et

More information

SICE東北支部研究集会資料(2012年)

SICE東北支部研究集会資料(2012年) 77 (..3) 77- A study on disturbance compensation control of a wheeled inverted pendulum robot during arm manipulation using Extended State Observer Luis Canete Takuma Sato, Kenta Nagano,Luis Canete,Takayuki

More information

5 Armitage x 1,, x n y i = 10x i + 3 y i = log x i {x i } {y i } 1.2 n i i x ij i j y ij, z ij i j 2 1 y = a x + b ( cm) x ij (i j )

5 Armitage x 1,, x n y i = 10x i + 3 y i = log x i {x i } {y i } 1.2 n i i x ij i j y ij, z ij i j 2 1 y = a x + b ( cm) x ij (i j ) 5 Armitage. x,, x n y i = 0x i + 3 y i = log x i x i y i.2 n i i x ij i j y ij, z ij i j 2 y = a x + b 2 2. ( cm) x ij (i j ) (i) x, x 2 σ 2 x,, σ 2 x,2 σ x,, σ x,2 t t x * (ii) (i) m y ij = x ij /00 y

More information

Studies of Foot Form for Footwear Design (Part 9) : Characteristics of the Foot Form of Young and Elder Women Based on their Sizes of Ball Joint Girth

Studies of Foot Form for Footwear Design (Part 9) : Characteristics of the Foot Form of Young and Elder Women Based on their Sizes of Ball Joint Girth Studies of Foot Form for Footwear Design (Part 9) : Characteristics of the Foot Form of Young and Elder Women Based on their Sizes of Ball Joint Girth and Foot Breadth Akiko Yamamoto Fukuoka Women's University,

More information

28 Horizontal angle correction using straight line detection in an equirectangular image

28 Horizontal angle correction using straight line detection in an equirectangular image 28 Horizontal angle correction using straight line detection in an equirectangular image 1170283 2017 3 1 2 i Abstract Horizontal angle correction using straight line detection in an equirectangular image

More information

MmUm+FopX m Mm+Mop F-Mm(Fop-Mopum)M m+mop MSuS+FX S M S+MOb Fs-Ms(Mobus-Fex)M s+mob Fig. 1 Particle model of single degree of freedom master/ slave sy

MmUm+FopX m Mm+Mop F-Mm(Fop-Mopum)M m+mop MSuS+FX S M S+MOb Fs-Ms(Mobus-Fex)M s+mob Fig. 1 Particle model of single degree of freedom master/ slave sy Analysis and Improvement of Digital Control Stability for Master-Slave Manipulator System Koichi YOSHIDA* and Tetsuro YABUTA* Some bilateral controls of master-slave system have been designed, which can

More information

letter by letter reading read R, E, A, D 1

letter by letter reading read R, E, A, D 1 3 2009 10 14 1 1.1 1 1.2 1 letter by letter reading read R, E, A, D 1 1.3 1.4 Exner s writing center hypergraphia, micrographia hypergraphia micrographia 2 3 phonological dyslexia surface dyslexia deep

More information

2797 4 5 6 7 2. 2.1 COM COM 4) 5) COM COM 3 4) 5) 2 2.2 COM COM 6) 7) 10) COM Bonanza 6) Bonanza 6 10 20 Hearts COM 7) 10) 52 4 3 Hearts 3 2,000 4,000

2797 4 5 6 7 2. 2.1 COM COM 4) 5) COM COM 3 4) 5) 2 2.2 COM COM 6) 7) 10) COM Bonanza 6) Bonanza 6 10 20 Hearts COM 7) 10) 52 4 3 Hearts 3 2,000 4,000 Vol. 50 No. 12 2796 2806 (Dec. 2009) 1 1, 2 COM TCG COM TCG COM TCG Strategy-acquisition System for Video Trading Card Game Nobuto Fujii 1 and Haruhiro Katayose 1, 2 Behavior and strategy of computers

More information

A Navigation Algorithm for Avoidance of Moving and Stationary Obstacles for Mobile Robot Masaaki TOMITA*3 and Motoji YAMAMOTO Department of Production

A Navigation Algorithm for Avoidance of Moving and Stationary Obstacles for Mobile Robot Masaaki TOMITA*3 and Motoji YAMAMOTO Department of Production A Navigation Algorithm for Avoidance of Moving and Stationary Obstacles for Mobile Robot Masaaki TOMITA*3 and Motoji YAMAMOTO Department of Production System Engineering, Kyushu Polytecnic College, 1665-1

More information

untitled

untitled IT E- IT http://www.ipa.go.jp/security/ CERT/CC http://www.cert.org/stats/#alerts IPA IPA 2004 52,151 IT 2003 12 Yahoo 451 40 2002 4 18 IT 1/14 2.1 DoS(Denial of Access) IDS(Intrusion Detection System)

More information

2003/3 Vol. J86 D II No.3 2.3. 4. 5. 6. 2. 1 1 Fig. 1 An exterior view of eye scanner. CCD [7] 640 480 1 CCD PC USB PC 2 334 PC USB RS-232C PC 3 2.1 2

2003/3 Vol. J86 D II No.3 2.3. 4. 5. 6. 2. 1 1 Fig. 1 An exterior view of eye scanner. CCD [7] 640 480 1 CCD PC USB PC 2 334 PC USB RS-232C PC 3 2.1 2 Curved Document Imaging with Eye Scanner Toshiyuki AMANO, Tsutomu ABE, Osamu NISHIKAWA, Tetsuo IYODA, and Yukio SATO 1. Shape From Shading SFS [1] [2] 3 2 Department of Electrical and Computer Engineering,

More information

IS1-09 第 回画像センシングシンポジウム, 横浜,14 年 6 月 2 Hough Forest Hough Forest[6] Random Forest( [5]) Random Forest Hough Forest Hough Forest 2.1 Hough Forest 1 2.2

IS1-09 第 回画像センシングシンポジウム, 横浜,14 年 6 月 2 Hough Forest Hough Forest[6] Random Forest( [5]) Random Forest Hough Forest Hough Forest 2.1 Hough Forest 1 2.2 IS1-09 第 回画像センシングシンポジウム, 横浜,14 年 6 月 MI-Hough Forest () E-mail: ym@vision.cs.chubu.ac.jphf@cs.chubu.ac.jp Abstract Hough Forest Random Forest MI-Hough Forest Multiple Instance Learning Bag Hough Forest

More information

5D1 SY0004/14/ SICE 1, 2 Dynamically Consistent Motion Design of Humanoid Robots even at the Limit of Kinematics Kenya TANAKA 1 and Tomo

5D1 SY0004/14/ SICE 1, 2 Dynamically Consistent Motion Design of Humanoid Robots even at the Limit of Kinematics Kenya TANAKA 1 and Tomo 5D1 SY4/14/-485 214 SICE 1, 2 Dynamically Consistent Motion Design of Humanoid Robots even at the Limit of Kinematics Kenya TANAKA 1 and Tomomichi SUGIHARA 2 1 School of Engineering, Osaka University 2-1

More information

Microsoft Word - toyoshima-deim2011.doc

Microsoft Word - toyoshima-deim2011.doc DEIM Forum 2011 E9-4 252-0882 5322 252-0882 5322 E-mail: t09651yt, sashiori, kiyoki @sfc.keio.ac.jp CBIR A Meaning Recognition System for Sign-Logo by Color-Shape-Based Similarity Computations for Images

More information

SEJulyMs更新V7

SEJulyMs更新V7 1 2 ( ) Quantitative Characteristics of Software Process (Is There any Myth, Mystery or Anomaly? No Silver Bullet?) Zenya Koono and Hui Chen A process creates a product. This paper reviews various samples

More information

Study on Throw Accuracy for Baseball Pitching Machine with Roller (Study of Seam of Ball and Roller) Shinobu SAKAI*5, Juhachi ODA, Kengo KAWATA and Yu

Study on Throw Accuracy for Baseball Pitching Machine with Roller (Study of Seam of Ball and Roller) Shinobu SAKAI*5, Juhachi ODA, Kengo KAWATA and Yu Study on Throw Accuracy for Baseball Pitching Machine with Roller (Study of Seam of Ball and Roller) Shinobu SAKAI*5, Juhachi ODA, Kengo KAWATA and Yuichiro KITAGAWA Department of Human and Mechanical

More information

,4) 1 P% P%P=2.5 5%!%! (1) = (2) l l Figure 1 A compilation flow of the proposing sampling based architecture simulation

,4) 1 P% P%P=2.5 5%!%! (1) = (2) l l Figure 1 A compilation flow of the proposing sampling based architecture simulation 1 1 1 1 SPEC CPU 2000 EQUAKE 1.6 50 500 A Parallelizing Compiler Cooperative Multicore Architecture Simulator with Changeover Mechanism of Simulation Modes GAKUHO TAGUCHI 1 YOUICHI ABE 1 KEIJI KIMURA 1

More information

2). 3) 4) 1.2 NICTNICT DCRA Dihedral Corner Reflector micro-arraysdcra DCRA DCRA DCRA 3D DCRA PC USB PC PC ON / OFF Velleman K8055 K8055 K8055

2). 3) 4) 1.2 NICTNICT DCRA Dihedral Corner Reflector micro-arraysdcra DCRA DCRA DCRA 3D DCRA PC USB PC PC ON / OFF Velleman K8055 K8055 K8055 1 1 1 2 DCRA 1. 1.1 1) 1 Tactile Interface with Air Jets for Floating Images Aya Higuchi, 1 Nomin, 1 Sandor Markon 1 and Satoshi Maekawa 2 The new optical device DCRA can display floating images in free

More information

kiyo5_1-masuzawa.indd

kiyo5_1-masuzawa.indd .pp. A Study on Wind Forecast using Self-Organizing Map FUJIMATSU Seiichiro, SUMI Yasuaki, UETA Takuya, KOBAYASHI Asuka, TSUKUTANI Takao, FUKUI Yutaka SOM SOM Elman SOM SOM Elman SOM Abstract : Now a small

More information

1 Table 1: Identification by color of voxel Voxel Mode of expression Nothing Other 1 Orange 2 Blue 3 Yellow 4 SSL Humanoid SSL-Vision 3 3 [, 21] 8 325

1 Table 1: Identification by color of voxel Voxel Mode of expression Nothing Other 1 Orange 2 Blue 3 Yellow 4 SSL Humanoid SSL-Vision 3 3 [, 21] 8 325 社団法人人工知能学会 Japanese Society for Artificial Intelligence 人工知能学会研究会資料 JSAI Technical Report SIG-Challenge-B3 (5/5) RoboCup SSL Humanoid A Proposal and its Application of Color Voxel Server for RoboCup SSL

More information

x T = (x 1,, x M ) x T x M K C 1,, C K 22 x w y 1: 2 2

x T = (x 1,, x M ) x T x M K C 1,, C K 22 x w y 1: 2 2 Takio Kurita Neurosceince Research Institute, National Institute of Advanced Indastrial Science and Technology takio-kurita@aistgojp (Support Vector Machine, SVM) 1 (Support Vector Machine, SVM) ( ) 2

More information

4d_06.dvi

4d_06.dvi Learning and Recognition of Time-Series Data Based on Self-Organizing Incremental Neural Network Shogo OKADA and Osamu HASEGAWA Self-Organizing Incremental Neural Network (SOINN) DP [12] DP SOINN HMM (Hidden

More information

IPSJ SIG Technical Report Vol.2017-MUS-116 No /8/24 MachineDancing: 1,a) 1,b) 3 MachineDancing MachineDancing MachineDancing 1 MachineDan

IPSJ SIG Technical Report Vol.2017-MUS-116 No /8/24 MachineDancing: 1,a) 1,b) 3 MachineDancing MachineDancing MachineDancing 1 MachineDan MachineDancing: 1,a) 1,b) 3 MachineDancing 2 1. 3 MachineDancing MachineDancing 1 MachineDancing MachineDancing [1] 1 305 0058 1-1-1 a) s.fukayama@aist.go.jp b) m.goto@aist.go.jp 1 MachineDancing 3 CG

More information

Accuracy Improvement by Compound Discriminant Functions for Resembling Character Recognition Takashi NAKAJIMA, Tetsushi WAKABAYASHI, Fumitaka KIMURA,

Accuracy Improvement by Compound Discriminant Functions for Resembling Character Recognition Takashi NAKAJIMA, Tetsushi WAKABAYASHI, Fumitaka KIMURA, Journal Article / 学 術 雑 誌 論 文 混 合 識 別 関 数 による 類 似 文 字 認 識 の 高 精 度 化 Accuracy improvement by compoun for resembling character recogn 中 嶋, 孝 ; 若 林, 哲 史 ; 木 村, 文 隆 ; 三 宅, 康 二 Nakajima, Takashi; Wakabayashi,

More information

Gaze Head Eye (a) deg (b) 45 deg (c) 9 deg 1: - 1(b) - [5], [6] [7] Stahl [8], [9] Fang [1], [11] Itti [12] Itti [13] [7] Fang [1],

Gaze Head Eye (a) deg (b) 45 deg (c) 9 deg 1: - 1(b) - [5], [6] [7] Stahl [8], [9] Fang [1], [11] Itti [12] Itti [13] [7] Fang [1], 1 1 1 Structure from Motion - 1 Ville [1] NAC EMR-9 [2] 1 Osaka University [3], [4] 1 1(a) 1(c) 9 9 9 c 216 Information Processing Society of Japan 1 Gaze Head Eye (a) deg (b) 45 deg (c) 9 deg 1: - 1(b)

More information

ばらつき抑制のための確率最適制御

ばらつき抑制のための確率最適制御 ( ) http://wwwhayanuemnagoya-uacjp/ fujimoto/ 2011 3 9 11 ( ) 2011/03/09-11 1 / 46 Outline 1 2 3 4 5 ( ) 2011/03/09-11 2 / 46 Outline 1 2 3 4 5 ( ) 2011/03/09-11 3 / 46 (1/2) r + Controller - u Plant y

More information

Microsoft PowerPoint - SSII_harada pptx

Microsoft PowerPoint - SSII_harada pptx The state of the world The gathered data The processed data w d r I( W; D) I( W; R) The data processing theorem states that data processing can only destroy information. David J.C. MacKay. Information

More information

No. 3 Oct The person to the left of the stool carried the traffic-cone towards the trash-can. α α β α α β α α β α Track2 Track3 Track1 Track0 1

No. 3 Oct The person to the left of the stool carried the traffic-cone towards the trash-can. α α β α α β α α β α Track2 Track3 Track1 Track0 1 ACL2013 TACL 1 ACL2013 Grounded Language Learning from Video Described with Sentences (Yu and Siskind 2013) TACL Transactions of the Association for Computational Linguistics What Makes Writing Great?

More information

TCP/IP IEEE Bluetooth LAN TCP TCP BEC FEC M T M R M T 2. 2 [5] AODV [4]DSR [3] 1 MS 100m 5 /100m 2 MD 2 c 2009 Information Processing Society of

TCP/IP IEEE Bluetooth LAN TCP TCP BEC FEC M T M R M T 2. 2 [5] AODV [4]DSR [3] 1 MS 100m 5 /100m 2 MD 2 c 2009 Information Processing Society of IEEE802.11 [1]Bluetooth [2] 1 1 (1) [6] Ack (Ack) BEC FEC (BEC) BEC FEC 100 20 BEC FEC 6.19% 14.1% High Throughput and Highly Reliable Transmission in MANET Masaaki Kosugi 1 and Hiroaki Higaki 1 1. LAN

More information

258 5) GPS 1 GPS 6) GPS DP 7) 8) 10) GPS GPS 2 3 4 5 2. 2.1 3 1) GPS Global Positioning System

258 5) GPS 1 GPS 6) GPS DP 7) 8) 10) GPS GPS 2 3 4 5 2. 2.1 3 1) GPS Global Positioning System Vol. 52 No. 1 257 268 (Jan. 2011) 1 2, 1 1 measurement. In this paper, a dynamic road map making system is proposed. The proposition system uses probe-cars which has an in-vehicle camera and a GPS receiver.

More information

(3.6 ) (4.6 ) 2. [3], [6], [12] [7] [2], [5], [11] [14] [9] [8] [10] (1) Voodoo 3 : 3 Voodoo[1] 3 ( 3D ) (2) : Voodoo 3D (3) : 3D (Welc

(3.6 ) (4.6 ) 2. [3], [6], [12] [7] [2], [5], [11] [14] [9] [8] [10] (1) Voodoo 3 : 3 Voodoo[1] 3 ( 3D ) (2) : Voodoo 3D (3) : 3D (Welc 1,a) 1,b) Obstacle Detection from Monocular On-Vehicle Camera in units of Delaunay Triangles Abstract: An algorithm to detect obstacles by using a monocular on-vehicle video camera is developed. Since

More information

1 Kinect for Windows M = [X Y Z] T M = [X Y Z ] T f (u,v) w 3.2 [11] [7] u = f X +u Z 0 δ u (X,Y,Z ) (5) v = f Y Z +v 0 δ v (X,Y,Z ) (6) w = Z +

1 Kinect for Windows M = [X Y Z] T M = [X Y Z ] T f (u,v) w 3.2 [11] [7] u = f X +u Z 0 δ u (X,Y,Z ) (5) v = f Y Z +v 0 δ v (X,Y,Z ) (6) w = Z + 3 3D 1,a) 1 1 Kinect (X, Y) 3D 3D 1. 2010 Microsoft Kinect for Windows SDK( (Kinect) SDK ) 3D [1], [2] [3] [4] [5] [10] 30fps [10] 3 Kinect 3 Kinect Kinect for Windows SDK 3 Microsoft 3 Kinect for Windows

More information

JFE.dvi

JFE.dvi ,, Department of Civil Engineering, Chuo University Kasuga 1-13-27, Bunkyo-ku, Tokyo 112 8551, JAPAN E-mail : atsu1005@kc.chuo-u.ac.jp E-mail : kawa@civil.chuo-u.ac.jp SATO KOGYO CO., LTD. 12-20, Nihonbashi-Honcho

More information

1.0, λ. Holt-Winters t + h,ỹ t ỹ t+h t = ỹ t + hf t.,,.,,,., Hassan [5],,,.,,,,,,Hassan EM,, [6] [8].,,,,Stenger [9]. Baum-Welch, Baum-Welch (Incremen

1.0, λ. Holt-Winters t + h,ỹ t ỹ t+h t = ỹ t + hf t.,,.,,,., Hassan [5],,,.,,,,,,Hassan EM,, [6] [8].,,,,Stenger [9]. Baum-Welch, Baum-Welch (Incremen DEIM Forum 2009 E8-4 HMM 184 8584 3-7-2 E-mail: kei.wakabayashi.bq@gs-eng.hosei.ac.jp, miurat@k.hosei.ac.jp, (HMM)., EM HMM, Baum-Welch,,,, Forecasting Time-Series on Data Stream using Incremental Hidden

More information

1(a) (b),(c) - [5], [6] Itti [12] [13] gaze eyeball head 2: [time] [7] Stahl [8], [9] Fang [1], [11] 3 -

1(a) (b),(c) - [5], [6] Itti [12] [13] gaze eyeball head 2: [time] [7] Stahl [8], [9] Fang [1], [11] 3 - Vol216-CVIM-22 No18 216/5/12 1 1 1 Structure from Motion - 1 8% Tobii Pro TX3 NAC EMR ACTUS Eye Tribe Tobii Pro Glass NAC EMR-9 Pupil Headset Ville [1] EMR-9 [2] 1 Osaka University Gaze Head Eye (a) deg

More information

& Vol.5 No (Oct. 2015) TV 1,2,a) , Augmented TV TV AR Augmented Reality 3DCG TV Estimation of TV Screen Position and Ro

& Vol.5 No (Oct. 2015) TV 1,2,a) , Augmented TV TV AR Augmented Reality 3DCG TV Estimation of TV Screen Position and Ro TV 1,2,a) 1 2 2015 1 26, 2015 5 21 Augmented TV TV AR Augmented Reality 3DCG TV Estimation of TV Screen Position and Rotation Using Mobile Device Hiroyuki Kawakita 1,2,a) Toshio Nakagawa 1 Makoto Sato

More information

udc-2.dvi

udc-2.dvi 13 0.5 2 0.5 2 1 15 2001 16 2009 12 18 14 No.39, 2010 8 2009b 2009a Web Web Q&A 2006 2007a20082009 2007b200720082009 20072008 2009 2009 15 1 2 2 2.1 18 21 1 4 2 3 1(a) 1(b) 1(c) 1(d) 1) 18 16 17 21 10

More information

A Feasibility Study of Direct-Mapping-Type Parallel Processing Method to Solve Linear Equations in Load Flow Calculations Hiroaki Inayoshi, Non-member

A Feasibility Study of Direct-Mapping-Type Parallel Processing Method to Solve Linear Equations in Load Flow Calculations Hiroaki Inayoshi, Non-member A Feasibility Study of Direct-Mapping-Type Parallel Processing Method to Solve Linear Equations in Load Flow Calculations Hiroaki Inayoshi, Non-member (University of Tsukuba), Yasuharu Ohsawa, Member (Kobe

More information

基礎数学I

基礎数学I I & II ii ii........... 22................. 25 12............... 28.................. 28.................... 31............. 32.................. 34 3 1 9.................... 1....................... 1............

More information

IPSJ SIG Technical Report Vol.2009-CVIM-167 No /6/10 Real AdaBoost HOG 1 1 1, 2 1 Real AdaBoost HOG HOG Real AdaBoost HOG A Method for Reducing

IPSJ SIG Technical Report Vol.2009-CVIM-167 No /6/10 Real AdaBoost HOG 1 1 1, 2 1 Real AdaBoost HOG HOG Real AdaBoost HOG A Method for Reducing Real AdaBoost HOG 1 1 1, 2 1 Real AdaBoost HOG HOG Real AdaBoost HOG A Method for Reducing number of HOG Features based on Real AdaBoost Chika Matsushima, 1 Yuji Yamauchi, 1 Takayoshi Yamashita 1, 2 and

More information

a) Extraction of Similarities and Differences in Human Behavior Using Singular Value Decomposition Kenichi MISHIMA, Sayaka KANATA, Hiroaki NAKANISHI a

a) Extraction of Similarities and Differences in Human Behavior Using Singular Value Decomposition Kenichi MISHIMA, Sayaka KANATA, Hiroaki NAKANISHI a a) Extraction of Similarities and Differences in Human Behavior Using Singular Value Decomposition Kenichi MISHIMA, Sayaka KANATA, Hiroaki NAKANISHI a), Tetsuo SAWARAGI, and Yukio HORIGUCHI 1. Johansson

More information

1 4 1 ( ) ( ) ( ) ( ) () 1 4 2

1 4 1 ( ) ( ) ( ) ( ) () 1 4 2 7 1995, 2017 7 21 1 2 2 3 3 4 4 6 (1).................................... 6 (2)..................................... 6 (3) t................. 9 5 11 (1)......................................... 11 (2)

More information

(a) (b) (c) Canny (d) 1 ( x α, y α ) 3 (x α, y α ) (a) A 2 + B 2 + C 2 + D 2 + E 2 + F 2 = 1 (3) u ξ α u (A, B, C, D, E, F ) (4) ξ α (x 2 α, 2x α y α,

(a) (b) (c) Canny (d) 1 ( x α, y α ) 3 (x α, y α ) (a) A 2 + B 2 + C 2 + D 2 + E 2 + F 2 = 1 (3) u ξ α u (A, B, C, D, E, F ) (4) ξ α (x 2 α, 2x α y α, [II] Optimization Computation for 3-D Understanding of Images [II]: Ellipse Fitting 1. (1) 2. (2) (edge detection) (edge) (zero-crossing) Canny (Canny operator) (3) 1(a) [I] [II] [III] [IV ] E-mail sugaya@iim.ics.tut.ac.jp

More information

1 IDC Wo rldwide Business Analytics Technology and Services 2013-2017 Forecast 2 24 http://www.soumu.go.jp/johotsusintokei/whitepaper/ja/h24/pdf/n2010000.pdf 3 Manyika, J., Chui, M., Brown, B., Bughin,

More information

yoo_graduation_thesis.dvi

yoo_graduation_thesis.dvi 200 3 A Graduation Thesis of College of Engineering, Chubu University Keypoint Matching of Range Data from Features of Shape and Appearance Yohsuke Murai 1 1 2 2.5D 3 2.1 : : : : : : : : : : : : : : :

More information

3 2 2 (1) (2) (3) (4) 4 4 AdaBoost 2. [11] Onishi&Yoda [8] Iwashita&Stoica [5] 4 [3] 3. 3 (1) (2) (3)

3 2 2 (1) (2) (3) (4) 4 4 AdaBoost 2. [11] Onishi&Yoda [8] Iwashita&Stoica [5] 4 [3] 3. 3 (1) (2) (3) (MIRU2012) 2012 8 820-8502 680-4 E-mail: {d kouno,shimada,endo}@pluto.ai.kyutech.ac.jp (1) (2) (3) (4) 4 AdaBoost 1. Kanade [6] CLAFIC [12] EigenFace [10] 1 1 2 1 [7] 3 2 2 (1) (2) (3) (4) 4 4 AdaBoost

More information

(check matrices and minimum distances) H : a check matrix of C the minimum distance d = (the minimum # of column vectors of H which are linearly depen

(check matrices and minimum distances) H : a check matrix of C the minimum distance d = (the minimum # of column vectors of H which are linearly depen Hamming (Hamming codes) c 1 # of the lines in F q c through the origin n = qc 1 q 1 Choose a direction vector h i for each line. No two vectors are colinear. A linearly dependent system of h i s consists

More information

untitled

untitled - - GRIPS 1 traceroute IP Autonomous System Level http://opte.org/ GRIPS 2 Network Science http://opte.org http://research.lumeta.com/ches/map http://www.caida.org/home http://www.imdb.com http://citeseer.ist.psu.edu
