|
|
|
- えみ かなり
- 9 years ago
- Views:
Transcription
1 0 3
2
3 i SOM SOM mnsom MPFIM SOAC SOAC
4 ii SOAC SOAC SOAC SOAC
5 iii AANN Auto-Associative Neural Network 3 (5)L-AANN 3 (5) Layer-AANN ASSOM Adaptive Subspace SOM BMU Best Matching Unit BMM Best Matching Module BP Back Propagation CFC Conventional Feedback Controller EM Expectation Maximization ESOM Evolving Self-Organizing Map GTM Generative Topographic Mapping HMM Hidden Markov Model IDML Inverse Dynamics Model Learning MAE Mean Absolute Error MDS Multi Dimensional Scaling MLP Multi-Layer Perceptron mnsom modular network SOM MMRL Multiple Model Reinforcement Learning MOSAIC MOdular Selection And Identification for Control MPFIM Multiple Paired Forward-Inverse Models (G) NG (Growing) Neural Gas NNC Neural Network Controller PCA Principal Component Analysis RBF Radial Basis Function RNN Recurrent Neural Network RP Responsibility Predictor SOAC Self-Organizing Adaptive Controller SOM Self-Organizing Map VQ Vector Quantization WTA Winner-Takes-All
6 iv SOM x i w k X A ξ k φ φ k i ψi k n η σ(n) W X Ψ Modular network x i ŷi k ŷ i w k Ei k Ē i p k i T η ψ H C r mnsom x ij y ij D i f i L (f, g) p( ) ỹ k Ei k ki ψi k ξ k i-th data vector k-th reference vector input data set lattice of SOM coordinate vector in the map space index of the BMU neighborhood function (learning rate) k-th learning rate for i-th data normalized learning rate iteration number [step] learning coefficient neighborhood radius at step n reference matrix (set of reference vectors) input matrix (set of input data) matrix of learning rates i-th vector output of k-th module for i-th input vector total output of the network weight vector of k-th module error of k-th module for i-th input vector total error of the network probability selected k-th module for i-th input vector annealing temperature learning coefficient learning rate set of neighborhoods average distance between neighborhoods j-th input vector belonging to i-th class j-th output vector belonging to i-th class data subset belonging to i-th class function of i-th class distance between function f and function g probability density function (pdf) output of k-th module ensemble average of k-th module for i-th class BMM for i-th class learning rate of k-th module for i-th class coordinate vector of k-th module
7 v MPFIM x t, ẋ t, ẍ t ˆx t, ˆẋ t, ˆẍ t u u k u fb u ff σ f g( ) i g( ) η( ) f w i w δ h( ) K P, K D, K A x l π λ ε SOAC t n K ξ x ˆx x x p actual position, the velocity, and the acceleration at time t desired position, the velocity, and the acceleration at time t total control command control command of k-th inverse model feedback control command feedforward control command standard deviation of Gaussian function of a forward model function of an inverse model function of an RP parameter of a forward model parameter of an inverse model parameter of an RP function of dynamics of a controlled object feedback gains of Proportion, Differential, and Acceleration predicted state prior probability likelihood posterior probability learning coefficient time [sec] iteration number [step] number of modules coordinate vector in the map space state vector desired state predicted state input vector to predictors * index of the BMM p f( ) function of a predictor c f( ) function of a controller p w k c w k σ(t) σ 0 σ τ ψ φ p η c η ε weight vector of k-th predictor weight vector of k-th controller neighborhood radius at time t initial neighborhood radius final neighborhood radius time constant learning rate normalized learning rate (in the execution phase) learning coefficient of a predictor learning coefficient of an NNC damping coefficient
8
9 .
10 Jacobs Jordan [0, ] mixture of experts Narendra [39, 40] Gomi [6] Wolpert [57] Haruno [8, 9] Jacobs mixture of experts Narendra Narendra Kohonen Self-Organizing Map : SOM SOM modular network SOM : mnsom mnsom mnsom Self-Organizing Adaptive Controller : SOAC
11 . 3. SOAC... A B A B. mnsom Jordan Narendra Wolpert
12 4.3.3 SOM.4 MPFIM SOAC 3 SOAC 3. SOAC 3. SOAC 3.3 SOAC SOAC 4 SOAC SOAC SOAC 5 SOAC 3 6 SOAC 7
13 5. SOM Self-Organizing Map : SOM Willshaw von der Malsburg [56] Amari[] Willshaw von der Malsburg [3] Kohonen [7] Kohonen Kohonen [45, 38] [3] [9] [43] [6] SOM Kohonen Kohonen SOM 3 SOM
14 6 SOM unit. SOM Multi Dimensional Scaling : MDS [48] MDS SOM. SOM k-th w k ξ k ξ SOM, 3 SOM d x i = [x i,..., x id ] T I X = {x,..., x i,..., x I } X d X R d A w k = [w, k..., wd k]t K SOM X SOM () () w k () x i Best Matching Unit : BMU Winner-Takes-All : WTA BMU
15 . SOM 7 k BMU k = arg max x T i w k (.) k BMU k = arg min x i w k (.) k Kohonen w k SOM Vector Quantization : VQ * () SOM BMU BMU BMU w k = ηφ k (n)(x i w k ), k A (.3) φ k φ k ξ (n) = exp ( k ) ξk σ(n) (.4) * SOM w k
16 8 (a) (b) (c) BMU BMU BMU unit. (a) σ = 4 (b) σ = (c) σ = η σ(n) σ(n) n. σ η () () SOM.3 SOM (.3) Kohonen Luttrell[9, 30] Bishop[5] Expectation Maximization : EM Generative Topographic Mapping (GTM) (.5) (.5) η
17 . SOM 9 input vector reference vector coordinate vector of each unit input space (d=3) map space (two dimension).3 SOM SOM w k = ψ k i = I ψi k x i (.5) i= φ k i I i = φk i (.6) W = XΨ (.7) W = {w,..., w k } (.8) X = {x,..., x I } (.9) ψ ψ K Ψ =..... (.0) ψi ψi K
18 0... BMU k = arg min x i w k (.) k. 3. φ k ξ (n) = exp ( k ) ξk σ(n) (.) w k = ηφ k (n)(x i w k ), k A (.3). BMUs ki = arg min x i w k (.4) k. * φ k i = exp ( ξk i ξ k σ ) (.5) ψ k i = φ k i I i = φk i (.6) 3. w k = I ψi k x i (.7) i=.. SOM SOM * (.5) ξ k i ξ k i
19 . SOM. d ov d uc g e t h z o h ag w l o o awk f d olf c ion c se l ox og at ow e iger orse ebra h e n e k w l is small medium big nocturnal herbivorous has likes to legs 4 legs hair hooves mane feathers stripes hunt run fly swim dog tiger lion horse zebra wolf cow cat fox hen dove eagle owl hawk duck goose.4 SOM SOM SOM. [55] *3 *3. eagle 0
20 input vector reference vector input vector reference vector x x x (a) x (b).5 SOM (a) (b) 6 6 I = 6 d = 6 6 SOM.4 SOM 00 Cr k = H w k w h (.8) h H h k-th H h Cr k owl hawk [55]
21 . SOM 3 SOM U-matrix dog wolf dog goose SOM d = SOM 4.5 SOM SOM SOM 4 4 SOM SOM K = (a) SOM SOM SOM SOM SOM SOM
22 4 SOM.5(b) SOM K = 5 σ =.8 4 SOM k-means SOM SOM SOM SOM SOM. SOM SOM Jacobs Jordan [0] Jacobs mixture of experts
23 . 5 Jordan[] mixture of experts Jacobs mixturre of experts MLP [8,, 7, 8, 9, 45, 39, 40, 44, 47, 49, 50, 57] [] SOM Gomi [7] mixture of experts Narendra [39, 40] Narendra Wolpert Kawato[57] Multiple Paired Forward-Inverse Models : MPFIM Narendra Narendra MP- FIM MPFIM Layer Auto-Associative Neural Network : 3L-AANN 3L-AANN MLP N D N H
24 6 Output : Error : Error of each submodel Probability of each submodel Output of each submodel Data space Input :.6 N H < N D MLP 3L-AANN 3L-AANN 3L-AANN 3L-AANN Auto-encoder 3L-AANN Principal Component Analysis : PCA PCA I {x i }(x i R N D ) K x i
25 . 7 yi k 3L-AANN x i Ei k Ei k = x i yi k (.9) x i k-th p k i p k i = exp[ Ek i /T ] k exp[ E k i /T ] (.0) T T K ŷ i = p k i yi k (.) k= ŷ i = ŷ k i (.) k = arg max p k i (.3) k Ē i = K p k i Ei k (.4) k= 3L-AANN BP k-th
26 8 w k w k = η Ēi w k (.5) { } = η p k Ei k K i w k + Ei k p k i w k (.6) = ηψ k i k = E k i w k (.7) BP ψ k i ψk i k-th ψ k i ψ k i = p k i { + } T (Ēi Ei k ) i-th (.8) T SOM.3 SOM mnsom.3. Kohonen SOM 3 [7] SOM SOM SOM
27 .3 SOM mnsom 9 SOM [5, 53, 54, 4] SOM modular network SOM : mnsom Kohonen SOM mnsom SOM mnsom SOM SOM SOM Multi-Layer Perceptron : MLP Radial Basis Function : RBF Recurrent Neural Network : RNN SOM Neural Gas : NG [3] MLP MLP mnsom MLPmnSOM [5, 53] mnsom SOM SOM MLP-mnSOM MLP AANN Kohonen [6] SOM Adaptive Subspace SOM : ASSOM 5 AANN 5L-AANN SOM Non-Linear ASSOM : NL-ASSOM [5] ASSOM SOM SOM ASSOM ASSOM 3L-AANN-mnSOM 3 mnsom 5 5L-AANN-mnSOM ASSOM NL-ASSOM
28 0 input output.7 MLP-mnSOM RNN-mnSOM MLP-mnSOM [4, 53, 4, 4] MLP-mnSOM RNN-mnSOM mnsom SOM NG SOM SOM SOM NG SOM [] [3] [5] mnsom mnsom MLP mnsom MLP-mnSOM mnsom.3. mnsom mnsom.7 MLP mnsom MLP MLP-mnSOM
29 .3 SOM mnsom I systems I datasets : unknown : observed K modules system sampling system i system I MLP-mnSOM.8 MLP-mnSOM I J y i = f i (x) (.9) D i = {(x ij, y ij )} (.30) x ij = [x ij,..., x ijdi ] T (.3) y ij = [y ij,..., y ijdo ] T. (.3) f i i-th D i-th x ij y ij i-th j-th d i d o f i I J mnsom 3. f i. 3.. mnsom I Best Matching Module : BMM. BMM
30 SOM 3. mnsom.8 SOM MLP-mnSOM L (f, g) = f(x) g(x) p(x)dx (.33) p( ) mnsom mnsom [53] (.33) f g.3.3 mnsom MLP-mnSOM mnsom 4 BMM BMM MLP Evaluative process (.33) f (.33) E k i = J J y ij ỹij k (.34) ỹij k i-th j-th k-th j=
31 .3 SOM mnsom 3 Competitive process BMM ki = arg min Ei k (.35) k Cooperative process ψ k i = exp [ ξi k ξ k /σ(n) ] I i = exp [ (.36) ξi k ξ k /σ(n) ] σ(n) n I σ σ(n) = σ + (σ 0 σ ) exp ( n ) τ (.37) σ 0 n = 0 σ n = τ σ Adaptive Process Back Propagation : BP w k = η I i= ψ k i E k i w k (.38) w k k-th k-th E k E k = I ψi k Ei k (.39) i= (.39) (.34) (.33) mnsom
32 4 class a b c f f 3 f 5 class class x x class 3 class x x class 5 class x x f f 4 f 6 3. MLP-mnSOM Parameters of MLP Number of input unit Number of hidden units 5 Number of output unit Learning coefficient η 0.05 Parameters of mnsom Map size 00 (0 0) Initial value of neighborhood radius σ Final value of neighborhood radius σ.0 Time constant τ 50 Iteration number 300 k-th g k g k (x) = I ψi k f i (x) (.40) i= 4
33 .3 SOM mnsom 5 i BMM for i-th class 4 module number (a) 9... (b) 00.0 MLP-mnSOM 3 (a) (b) 8 module number... 0 Root mean square error x mnsom mnsom
34 6.3.4 mnsom mnsom 3 3 a, b, c 6 I = y i = f i (x) (.4) f i (x) = ax 3 + bx + cx (.4) mnsom x {.0, 0.99, 0.98,..., 0.99,.0} y i (.4) 0 J = 0 MLP-mnSOM.0..0 (a) mnsom (b) (.40) BMM BMM BMM mnsom ỹ k ŷ k. 6-th mnsom J J ŷ k ỹ k (.43) j=
35 .4 MPFIM 7 current state forward model predicted trajectory desired trajectory inverse model control command controlled object actual trajectory..4 MPFIM [4] Kawato [3, 5] M x x u(t) x(t) x = F (u) u = F (x) ˆx x = F (u) = F (F (ˆx)) = ˆx.
36 8 Albus[] Kuperstein[8] Miller[33] Atkeson[4] [4] Jordan Rumelhart[] Kawato[3, 5] Conventional Feedback Controller : CFC CFC MPFIM.4. Kawato[3, 5]
37 .4 MPFIM 9 inverse model desired trajectory CFC controlled object actual trajectory.3.3 CFC ˆx CFC CFC u fb = K p (ˆx x) + K D (ˆẋ ẋ) + K A (ˆẍ ẍ) (.44) x, ẋ, ẍ K P, K D, K A ˆx ˆẋ ˆẍ i w u ff = i g( i w, ˆx, ˆẋ, ˆẍ) (.45) i g MLP RBF CFC u fb u ff h(x, ẋ, ẍ) = u fb + u ff (.46) h dw/dt = ε( u ff / i w) T u fb (.47)
38 30 Widrow-Hoff u fb û (û u ff ) T (û u ff ) i w dw/dt = ε( u ff / i w) T (û u ff ) (.48) (.47) (.48) CFC u fb (û u ff ) CFC Kawato [3] Miyamura[37] Gomi [6] [36] [7].4. MPFIM Multiple Paired Forward-Inverse Models : MPFIM MOSAIC MOdular Selection And Identification for Control Wolpert Kawato [57] Wolpert Kawato MPFIM Gomi [7] Gomi mixture of experts Wolpert Kawato MPFIM soft-max Gomi Haruno [8, 9] Gomi MPFIM
39 .4 MPFIM 3 contextual signal desired state efference copy of motor command K forward likelyhood model state prediction model forward prediction error likelyhood model model forward likelyhood model responsibility prior model x predictor responsibility prior x predictor responsibility prior module feedforward predictor inverse control command module feedforward model inverse control command module feedforward model inverse control command model x x x posterior x normalization + + control command controlled object next state + - CFC feedback control command.4 MPFIM MPFIM Doya Samejima MPFIM Multiple Model Reinforcement Learning : MMRL [8, 47] Wolpert Kawato MPFIM MPFIM Hidden Markov Model : HMM Haruno [8].4.3 MPFIM MPFIM.4 MPFIM K 3 Responsibility Predictor : RP x k t+ = f g( f w k t, x t, u t ) (.49)
40 3 w k t f g x k t x t l k t l k t = P (x t f w k t, u t, k) = πσ e x t x k t /σ (.50) σ soft-max l k t K k = lk t (.5) 0 MPFIM contextual signal RP y t RP πt k = η(δt k, y t ) (.5) δ k t η λ k t = πt k lt k K k = πk t lt k (.53) λ k t 3 3 RP MPFIM 3 ˆx k-th u k t = i g( i wt k, ˆx t ) (.54)
41 .4 MPFIM 33 K K u t = λ k t u k t = λ k t ig( i wt k, ˆx t ) (.55) k= k= i wt k = ελ k d i g k t d i wt k (û t u t ) ε duk t dvt k λ k t u fb (.56) f wt k = ελ k d f g k t d f wt k (x t x k t ) (.57).4.4 MPFIM MPFIM. π k t π k t = η(δ k t, y t ) (.58). l k t l k t = πσ e x t x k t /σ (.59) 3. λ k t λ k t = πt k lt k K k = πk t lt k (.60) 4. u t K u t = λ k t u k t (.6) k= 5. i wt k = ε duk t dvt k λ k t u fb (.6) f wt k = ελ k d f g k t dwt k (x t x k t ) (.63)
42
43 35 3 SOAC 3. Jacobs Jordan [0] mixture of experts Jordan Jacobs mixture of experts [] mixture of experts Narendra [39, 40]
44 36 3 SOAC Wolpert [57] Narendra Wolpert soft-max Narendra Gomi [7] mixture of experts Kawato [3, 4, 5] Wolpert Multiple Paired Forward-Inverse Models : MPFIM Wolpert Wolpert SOAC SOAC Tokunaga [5, 53, 54] SOM mnsom mnsom mnsom mnsom Nishida [4, 4]
45 3. 37 SOAC modules BMM for object B switch BMM for object A object A object B 3. SOAC BMM BMM BMM SOAC () () (3) () mnsom () SOAC mnsom (3) SOAC mnsom 3. SOAC 3.3 SOAC 3.4
46 38 3 SOAC 3. SOAC SOAC SOM SOAC SOM Best Matching Module : BMM BMM BMM SOAC predictor controller 3. k-th controlled object current state x(t) control signal u(t) t predicted state x k (t + t) * x k (t + t) = p f k (x(t), u(t)) (3.) k-th x(t) desired state ˆx(t) u k (t) u k (t) = c f k (x(t), ˆx(t)) (3.) SOAC * * SOAC * predictor-map controller-map
47 Predictor D x x^ current state x desired state u Controller Predictor Controller predicted state D x ~ k time delay u k Winner Takes All Controlled Object Predictor Controller D u control signal I I I {x i (t), u i (t)}(i =,..., I) mnsom [, 5] mnsom MLP Multi-Layer Perceptron p w k I p E k i = T T 0 x i (t) x k i (t) dt (3.3) x k i (t) p E k i i-th k-th T
48 40 3 SOAC BMM Best Matching Module : BMM BMM i = arg min p Ei k (3.4) k ψ k i ψk i ψ k i = exp[ ξ k ξ i /σ ] I i = exp[ ξk ξ i /σ ] (3.5) ξ k, ξ i k-th i-th BMM σ σ n ( σ(n) = σ + (σ 0 σ ) exp n ) τ (3.6) σ 0 σ τ ψ k i p w k = p η p η I i= ψ k i p E k i p w k (3.7) 4
49 3.4 4 u(t) control signal x(t) current state x(t) ^ desired state - + predictor CFC NNC D time delay ~ x k (t) predicted state u k (t) control signal SOAC Kawato feedback-error-learning [3, 5] Conventional Feedback Controller : CFC SOAC 3.3 CFC Neural Network Controller : NNC NNC CFC x(t) x k (t) p e k (t) = ( ε) p e k (t t) + ε x(t) x k (t) (3.8) p e k (t) 0 < ε ε
50 4 3 SOAC ε ε ε BMM BMM p e k (t) * (t) = arg min p e k (t) (3.9) k BMM φ k = exp[ ξ k ξ /σ ] K k = exp[ ξk ξ /σ ] (3.0) σ BMM BMM NNC CFC cfc u(t) *3 K u(t) = φ k u k (t) + cfc u(t) (3.) k= u k (t) = c f k (ˆx(t)) (3.) cfc u(t) = cfc W (ˆx(t) x(t)). (3.3) cfc W NNC c w k cfc u φ k c w k = c η φ k c f k c w k cfc u (3.4) 3.5 SOAC BMM *3 (3.) BMM
51 p i-th p i p k ψ k i p i p k = I ψi k p i (3.5) i= p parameter-map 3.5. BMM (3.4) BMM BMM BMM (0) = arg min p p k (3.6) k (3.9) BMM BMM 3.5. BMM BMM BMM p 5
52
53 45 4 SOAC SOAC SOAC SOAC Wolpert Multiple Paired Forward-Inverse Models : MPFIM 4. I
54 46 4 SOAC 4. B [kg/s] K [kg/s ] M [kg] p p p p p p p p p p A p B p C p D p E p F B M =.0 [kg] K [kg/s ] M used object parameter unused object parameter B [kg/s], x(t) K u(t) 4. M i ẍ + B i ẋ + K i x = u (4.) x [m] u [N] M i, B i, K i i-th [kg] [kg/s] [kg/s ] 4. p p 9 p A p F p 5 p A 4 runge-kutta h= x ẋ u 3 { 0., 0.05, 0, 0.05, 0.} 5
55 Number of classes 9 Map size 9 9 ( dimension) Initial neighborhood radius σ Final neighborhood radius σ.5 Time constant τ 00 Iteration number N 000 Learning coefficient η =9 5 3 = x p = [x, ẋ, u] T y = ẍ T k-th p w k = [ p w, k p w, k p w3] k T y k = x T p p w k (4.) BMM 4.4 n=0 00 BMM n=000
56 48 4 SOAC p p 3 p p p 3 p p p 3 p p p 3 p 5 p 7 p p 6 iteration number n=0 p 8 p 5 (a) p 9 p 4 n=00 p 4 p 7 p 5 p 6 p 6 p 8 p 9 n=300 p 4 p 7 p 5 p 6 p 8 p 9 n=500 p 4 p 7 p 8 p 9 K [kg/s] K [kg/s] K [kg/s] K [kg/s] B [kg/s ] B [kg/s ] B [kg/s ] B [kg/s ] (b) 4. (a) (b) (a) p i (i =,..., 9) p i i-th BMM
57 module 3 p iteration number n=000 p 4 p 7 0 used object parameter p 7 p 8 p p p 5 p 8 K [kg/s] p 4 p 5 p 6 4 p 3 p 6 (a)... module p 9 3 p p p B [kg/s ] (b) 4. (a) (b) desired position [m] desired velocity [m/s] desired acceleration [m/s ] time [sec] time [sec] time [sec] 4.3
58 50 4 SOAC 4.3 Damping coefficient ε.0 Neighborhood radius σ.5 Learning coefficient η 0.0 Learning time t 9000 [sec] Ornstein-Uhlenbeck [msec] 30 [sec] SOAC Kawato NNC NNC ˆx ˆẋ ˆẍ nnc u k 3 nnc u k = ˆx T c w k. (4.3) ˆx = [ˆx, ˆẋ, ˆẍ] T c w k = [ c w k, c w k, c w k 3] T PDA cfc W = [k x, kẋ, kẍ] = [5, 0, 0.5] p p [sec] = 9000 [sec] 4.3 [msec]
59 4.4 5 K [kg/s] K [kg/s] K [kg/s] B [kg/s ] t = 00 [sec] B [kg/s ] t = 500 [sec] K [kg/s] time t = 0 [sec] K [kg/s] K [kg/s] K [kg/s] B [kg/s ] t = 3000 [sec] B [kg/s ] t = 5000 [sec] B [kg/s ] B [kg/s ] used object parameter t = 9000 [sec] p 7 p 8 p 9 0 p 4 p 5 p p p B [kg/s ] p 6 t = 000 [sec] 4.4
60 5 4 SOAC position [m] trajectory error of CFC [m] time [sec] (a) CFC p A p B p C p D p E p F p A p B p C p D p E p F desired trajectory actual trajectory p A p B p C p D p E p F time [sec] position [m] trajectory error of SOAC [m] time [sec] (b) SOAC desired trajectory actual trajectory p A p B p C p D p E p F time [sec] 4.5 (a) CFC (b) SOAC 4.4 BMM (for i-th class) p B k p K k p M k c B k c K k c M k (p ) (p ) (p 3 ) (p 4 ) (p 5 ) (p 6 ) (p 7 ) (p 8 ) (p 9 ) NNC φ k K k= φk ( c w k )
61 K [kg/s ] object parameter B [kg/s] module BMM (module number) BMM BMM p A p F 6 p A p 5 5 [sec] BMM ε= (a) CFC (b) SOAC 30 [mm] SOAC BMM SOAC SOAC BMM BMM BMM 4.6 BMM
62 54 4 SOAC 4.5 MPFIM SOAC MPFIM SOAC Number of classes 9 9 Damping coefficient ε.0.0 Learning coefficient of predictor p η Learning coefficient of NNC c η p A p B BMM BMM BMM SOAC BMM MPFIM Wolpert [57] Multiple Paired Forward- Inverse Models SOAC MPFIM soft-max MPFIM 4.5. MPFIM SOAC MPFIM RP SOAC
63 forward model (predictor) inverse model (NNC) used object parameter 0 module 9 module 8 module 7 0 module 3 module 7 module K [kg/s ] 6 module 6 module 5 module 4 K [kg/s ] 6 module 5 module module module 3 module module module module 6 module B [kg/s] B [kg/s] module 9 module 8 module 7 0 module 3 module 7 module K [kg/s ] 6 module 6 module 5 module 4 K [kg/s ] 6 module 5 module module module 3 module module module module 6 module B [kg/s] B [kg/s] 8 0 (a) SOAC (b) MPFIM 4.7 SOAC MPFIM (a) SOAC (b) MPFIM SOAC 4.5 MPFIM SOAC σ 9000 [sec]
64 56 4 SOAC (a) SOAC (b) MPFIM 0.5 desired trajectory actual trajectory 0.5 desired trajectory actual trajectory position [m] 0 position [m] time [sec] time [sec] error error trajectory error [m] trajectory error [m] time [sec] time [sec] 4.8 (a) SOAC (b) MPFIM SOAC MPFIM Mean Absolute Error : MAE MAE K = 9 SOAC MPFIM MPFIM SOAC MPFIM
65 times average of RMSEs SOAC MPFIM Number of modules 4.9 SOAC MPFIM 9, 5, 49, 8 MAE [sec] MPFIM SOAC 4.8 MAE SOAC MPFIM MPFIM SOAC 00 MAE SOAC MPFIM SOAC SOAC SOM
66 58 4 SOAC 0 training class untrained class class 3 8 K [kg/s ] 6 4 class class A class B [kg/s] 4.0 MPFIM 9,5,49,8 00 MAE 4.9 SOAC MAE MPFIM MAE MPFIM 4.6 SOAC SOM h(x; θ + θ ) h(x; θ ) + h(x; θ ). (4.4)
67 P 3 m i= Ei /3 m E A SOAC P 3 m i= Ei /3 + m EA First level Second level Third level MPFIM P 3 m i= Ei /3 m EA P 3 m i= Ei /3 + m EA First level Second level Third level h x θ SOAC k-th õ k o i B, K, M 3 o i = [B i, K i, M i ] Model Error m E k i := o i õ k (4.5) SOAC 4.0 3,, 3 A M M = B K (B ) + (K 0) = 8, B, K 0 SOAC MPFIM SOAC 5 MPFIM 3 SOAC MPFIM 4.5.4
68 60 4 SOAC MPFIM 4. SOAC MPFIM 3 SOAC (a) MPFIM (b) (c), (d) 3 (e), (f) 4.6 SOAC -st 3-rd 5-th 3 MPFIM -st -nd 3 3-rd SOAC SOM MPFIM 4.6 A SOAC 4-th MPFIM 3 3-rd SOAC MPFIM soft-max st winner nd winner e = ( ε)e + ε x x (4.6) e = ( ε)e + ε x x (4.7) o ip = e o + e o e + e (4.8) ( ), ( ) MPFIM 3 A /
69 p w = p η p w x x (4.9) (4.9) 4.(e), (f) (e) SOAC 4-th A SOM -st 3-rd 5-th MPFIM (f) SOAC SOAC MPFIM SOAC
70 6 4 SOAC p B, p K, p M c B, c K, c M 4. SOM SOAC NNC SOAC BMM mnsom
71 σ σ.0,.5, σ σ SOM σ =.0 σ σ σ.0 mnsom CFC BMM BMM BMM MPFIM SOAC MPFIM SOAC SOM MPFIM [46] SOAC MPFIM
72 64 4 SOAC MPFIM SOAC SOAC mnsom mnsom [54] MPFIM SOAC 4.5 MPFIM SOAC MPFIM SOAC MPFIM SOAC MPFIM 4.5(c) MPFIM SOAC
73 (d) 9000 [sec] [8, 9] MPFIM SOM SOAC MPFIM
74 66 4 SOAC SOAC MPFIM 0 5 class 3 0 class 3 First level : learning phase K [kg/s ] class 4 model error class A 3 class K [kg/s ] class class A 3 model error class B [kg/s] (a) B [kg/s] (b) 0 interpolated object by soft-max 5 class 3 0 interpolated object by soft-max nd winner class 3 Second level : interpolation K [kg/s ] class st winner 4 model error nd winner class A 3 class K [kg/s ] class st winner 3 class model error class A B [kg/s] (c) B [kg/s] (d) Third level : incremental learning phase K [kg/s ] class 3 class A 4 class 3 class B [kg/s] (e) K [kg/s ] model error class 3 3 class A model errors class class B [kg/s] (f) 4. SOAC MPFIM 3 (a) SOAC (b) MPFIM (c) SOAC (d) MPFIM (e) SOAC (f) MPFIM
75 predictor NNC 0 8 K [kg/s ] B [kg/s] 4. predictor NNC 0 8 K [kg/s ] B [kg/s] 4.3
76 68 4 SOAC predicto r NNC K [kg/s ] class class class class class class K [kg/s ] K [kg/s ] B [kg/s] B [kg/s] B [kg/s].5 K [kg/s ] class class K [kg/s ] class class K [kg/s ] class class B [kg/s] B [kg/s] B [kg/s].5 K [kg/s ] class class K [kg/s ] class class K [kg/s ] class class B [kg/s] B [kg/s] B [kg/s] (a) predictor (b) NNC (c) predictor and NNC 4.4 (a) (b) (c) σ = {.0,.5,.0} (a) (b)nnc (c) NNC
77 forward model (SOAC) inverse model (SOAC) K [kg/s ] B [kg/s] 4 0 (a) 00 K [kg/s ] iteration number [time] B [kg/s] 4 0 (b) time [sec] forward model (MPFIM) inverse model (MPFIM) K [kg/s ] B [kg/s] 4 0 (c) 000 time [sec] 000 K [kg/s ] B [kg/s] 4 0 (d) time [sec] SOAC MPFIM (a) SOAC (b) MPFIM (c) SOAC (d) MPFIM
78
79 7 5 SOAC 5. SOAC SOAC SOAC 5. (M + m)ẍ + ml cos θ θ ml θ sin θ + fẋ = a u (5.) ml cos θ ẍ + (I + ml ) θ mlg sin θ + C θ = 0 (5.) x [m] ẋ [m/s] ẍ [m/s ] θ [rad] θ [rad/s] θ [rad/s ] x = [x, θ, ẋ, θ] T u M [kg] m [kg] l [m] f [kg/s] C [kgm /s] g [m/s ] a [N/V] I
80 7 5 SOAC l m C x u M f m [kg] l [m].5.8 learning phase execution phase p = [0.6, 0.] p A = [.0,.00] p = [., 0.] p B = [.5, 0.9] p 3 = [.8, 0.] p C = [.4,.8] variable parameter p 4 = [0.6,.0] p D = [0.93, 0.94] p i p i = [l i [m], m i [kg]] p 5 = [.,.0] p E = [0.93,.3] l i : length to the mass center p 6 = [.8,.0] p F = [.6,.49] m i : pendulum mass p 7 = [0.6,.8] p G = [.7,.4] p 8 = [.,.8] p H = [.75, 0.64] p 9 = [.8,.8] p I = [.0, 0.0] M cart mass 5.0 [kg] C friction coefficient of pendulum [kgm /s] f friction coefficient of cart 0.0 [kg/s] g gravity acceleration 9.8 [m/s ] a gain 5 [N/V] 5. I = ml /3 [kgm ] 5. 4 runge-kutta h= x(t) u(t) x(t + t) 5 4 x
81 Number of classes 9 Map size 9 9 ( dimension) Initial neighborhood radius σ Final neighborhood radius σ.8 Time constant τ 00 Iteration number N 000 { 0., 0, 0.} 3 t [sec] =9 3 5 = θ = 0 x(t) u(t) t [sec] x(t + t) 5 4 t = 0.0 [sec] 5. p η SOAC SOAC CFC CFC
82 74 5 SOAC 9 73 [rad] [sec] 8 BMM for training data 5. CFC CFC cfc W = [k x, k θ, kẋ, k θ] = [ 0.5, 5.64, 0.67,.03]
83 cart position [m] pendulum angle [rad] actual response desired response time [sec] [sec] 0 (x T Qx + u T Ru)dt (5.3) cfc W k = R B k P k (5.4) P k (4 4) Riccati (A k ) T P k + P k A k + Q P k B k R (B k ) T P k = 0 (5.5) A k B k k-th Q R 4 4 R = BMM
84 76 5 SOAC SOAC module BMM BMM SOAC BMM 5.5 ψi k p i p i SOM p i p k 5.5 BMM
85 m [kg] l [m] BMM BMM BMM p = [l, m ] = [.8, 0.] p = [l, m ] = [0.6,.8] p 0 [sec] p BMM BMM BMM 5.7 BMM BMM 0.9 [sec] BMM
86 78 5 SOAC SOAC module BMM ε = 0.00 ε BMM 0.9 [sec] BMM BMM BMM BMM BMM BMM BMM { P (kff (tc < t ) = U(t c t s ) = s ) 0 (t c t s ) { 0 P (kfb) (tc < t = U(t c t s ) = s ) (t c t s ) (5.6) (5.7)
87 time [sec] feedback selection feedforward selection time [sec] feedback selection feedforward selection BMM BMM BMM P (kff ) k ff P (kfb ) kfb t c t s t s [sec] t s ε t s ε 5.9 p = [l, m ] = [.8, 0.] p = [l, m ] = [0.6,.8] p 0.8[kg]
88 80 5 SOAC pendulum angle [rad] cart position [m] time [sec] 0 feedback selection feedforward selection feedback selection feedforward selection time [sec] 5.8 BMM BMM visual effects error time [sec] 5.9 BMM BMM (5.7) t s = 5.9 [sec] BMM BMM 0 [sec] BMM [sec]
89 estimated parameter.8 estimated parameter true parameter m [kg] m [kg] l [m] l [m] 5.0 BMM 5.6 p = [.60,.50] SOAC BMM BMM p p = [.65,.54] p p p =
90 8 5 SOAC SOAC SOAC SOM SOM mnsom SOAC 0 BMM
91 SOAC BMM SOAC NNC GOMI [6] IDML Inverse Dynamics Model Learning IDML SOAC IDML [34, 35]
92
93 SOAC () () Jacobs Jordan mixture of experts Narendra Wolpert MPFIM () MPFIM () mixture of experts SOAC Narendra Wolpert SOAC Narendra Wolpert MPFIM SOAC mnsom SOAC MPFIM
94 SOAC SOAC SOAC MPFIM SOAC MPFIM SOAC SOAC SOAC
95 6.3 SOAC 87 SOAC 6.3 SOAC mnsom mnsom SOAC SOAC SOAC SOAC mnsom SOM mnsom [5, 53]
96 Jordan Rumelhart [] SOAC mnsom mnsom MPFIM SOAC mnsom [54] SOAC SOAC 6.4 SOAC SOAC mnsom mnsom Tokunaga[5] mnsom MLP RBF RNN SOM NG
97 6.4 SOAC 89 SOM NG [3, 45] SOAC SOM SOM MPFIM MPFIM mnsom SOM NG SOM Evolving SOM : ESOM [7] Furukawa[, 3] SOM NG SOAC SOM SOM SOM SOM SOM SOAC SOM SOM NG SOM NG GNG Growing Neural Gas [0] ESOM SOAC 6.4. mnsom SOAC [54]
98 90 6 SOAC SOAC A B SOAC SOAC A B SOAC MMRL Multiple Model Reinforcement Learning MMRL MPFIM MPFIM [47] Doya [8] MMRL SOAC MPFIM SOAC SOAC SOAC SOM SOAC
99 9 7 SOAC SOAC SOM mnsom SOAC SOAC MPFIM SOAC SOAC SOAC.. SOAC
100 SOAC MPFIM MPFIM SOAC MPFIM 7. MPFIM SOAC MPFIM SOAC SOM MPFIM MPFIM MPFIM SOAC SOAC mnsom MPFIM SOAC SOAC MPFIM
101 SOAC MPFIM COE mnsom
102 94 7 Sandor M. Veres SOAC Veres COE J9 No
103 95 [] J.S. Albus, A new approach to manipulator control : The cerebellar model articulation controller (CMAC), Transactions of the AMSE. Journal of Dynamic Systems, Measurement, and Control, vol.97, 0 7, 975. [] S. Amari, Topographic organization of nerve fields, Bulletin of Mathematical Biology, vol.4, no.3, pp , 980. [3] S. Amari, Field theory of self-organizing neural nets, IEEE Trans. Systems, Man and Cybernetics, vol.3, no.5, pp , 983. [4] C.G. Atkeson, and D.J. Reinkensmeyer, Using associative content-addressable memories to control robots, Proc. IEEE Conference on Decision and Control, pp , Austin, Texas, Dece., 988. [5] C.M. Bishop, M. Svensén, and C.K.I. Williams, GTM: The generative topographic mapping, Neural Computation, vol.0, no., pp.5 34, 998. [6] G. Deboeck T. Kohonen, 999. [7] D. Deng, and N. Kasabov, On-line pattern analysis by evolving self-organizing maps, Neurocomputing, vol.5, pp.87 03, 003. [8] K. Doya, K. Samejima, K. Katagiri, and M. Kawato, Multiple model-based reinforcement learning, Neural Computation, vol.4, no.6, pp , 00. [9] M. Egmont-Petersen, D. de Ridder, and H. Handels, Image processing with neural networks a review, Pattern Recognition, vol.35, pp.79 30, 00.
104 96 [0] B. Fritzke, A growing neural gas network learns topologies, Advances in Neural Information Processing Systems, vol.7, pp.65 63, 995. [], vol.9 no. pp [] T. Furukawa, SOM of SOMs: Self-organizing map which maps a group of selforganizing maps, Lecture Notes in computer Science, vol.3696, pp , 005. [3] T. Furukawa, SOM of SOMs : An Extension of SOM from Map to Homotopy, Lecture Notes in Computer Science (Edited book of 3th International Conference of Neural Information Processing (ICONIP006)), vol.43, pp , 006. [4] T. Furukawa, K. Tokunaga, S. Kaneko, K. Kimotsuki, and S. Yasui, Generalized self-organizing maps (mnsom) for dealing with dynamical systems, Proc. International Symposium on Nonlinear Theory and its Applications, pp.3 34, Fukuoka, Japan, Nove. Dece [5] T. Furukawa, Self-Organizing Homotopy Network, Proc. Workshop on Self- Organizing Maps (WSOM 007), Germany, 007. [6] H. Gomi, and M. Kawato, Neural network control for a closed-loop system using feedback-error-learning, Neural Networks, vol.6, no.7, pp , 993. [7] H. Gomi, and M. Kawato, Recognition of Manipulated Objects by Motor Learning With Modular Architecture Networks, Neural Networks, vol.6, no.4, pp ,993. [8] M. Haruno, D.M. Wolpert, and M. Kawato, MOSAIC model for motor learning and control, Neural Computation, vol.3, pp.0 0, 00. [9] M. Haruno, D.M. Wolpert, and M. Kawato, Multiple Paired Forward-Inverse Models for Human Motor Learning and Control, Advances in neural information processing systems, vol., pp.3 37, 999.
105 97 [0] R.A. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton, Adaptive mixtures of local experts, Neural Computation, vol.3, pp.79 87, 99. [] M.I. Jordan, and R.A. Jacobs, Hierarchical mixture of experts and the EM algorithm, Neural Computation, vol.6, pp.8 4, 994. [] M.I. Jordan, and D.E. Rumelhart, Forward models: Supervised learning with a distal teacher, Cognitive Science, vol.6, pp , 99. [3] M. Kawato, Feedback-error-learning neural network for supervised motor learning, Advanced Neural Computers, In Eckmiller R (Ed.), Elsevier, North- Holland, pp , 990. [4], 996 [5] M. Kawato, K. Furukawa, and R. Suzuki, A hierarchical neural network model for the control and learning of voluntary movements, Biological Cybernetics, vol.56, pp. 7, 987. [6] T. Kohonen, S. Kaski, and H. Lappalainen, Self-organized formation of various invariant-feature filters in the adaptive-subspace SOM, Neural Computation, vol.9, no.6, pp.3 344, 997. [7] T. Kohonen, self-organizing maps, Springer-Verlag, 00. [8] M. Kuperstein, Neural model of adaptive hand-eye coordination for single posture, Science, vol.39, 308 3, 988. [9] S.P. Luttrell, Self-organization: A derivation from first principles of a class of learning algorithms, Proc. IEEE Int. Joint Conf. on Neural Networks (IJCNN89), Part I, pp , IEEE Press, 989. [30] S.P. Luttrell, Derivation of a class of training algorithms, IEEE Trans. Neural Networks, vol., no., pp.9 3, 990. [3] T.M. Martinetz, H.J. Ritter, and K.J. Schulten, Three-dimensional neural net for learning visuomotor coordination of a robot arm, IEEE Trans. Neural Net-
106 98 works, vol., no., pp.3 36, 990. [3] T.M. Martinetz, S.G. Berkovich, and K.J. Schulten, Neural-Gas Network for Vector Quantization and its Application to Time-Series Prediction, IEEE Trans. Neural Networks, vol.4, no.4, pp , 993. [33] T.W. Miller, F.H. Glanz, and L.G. Kraft, Application of a general learning algorithm to the control of robotic manipulators, International Journal of Robotics Research, vol.6, no., pp.84 98, 987. [34] T. Minatohara, and T. Furukawa, Self-Organizing Adaptive Controllers: Application to the Inverted Pendulum, Proc. Workshop on Self-Organizing Maps, pp.4 48, France, 005. [35] modular network SOM, vol.05, no.30, pp.49 54, 005. [36] H. Miyamoto, M. Kawato, T. Setoyama, and R. Suzuki, Feedback-error-learning neural network for trajectory control of a robotic manipulator, Neural Networks, vol., pp.5 65, 988. [37] A. Miyamura, and H. Kimura, Stability of feedback error learning scheme, Systems & Control Letters, vol.45, pp , 00. [38] M.A. Motter, and J.C. Principe, Predictive multiple model switching control with the self-organizing map, International Journal of Robust and Nonlinear Control, vol., no., pp.09 05, 00. [39] K.S. Narendra, J. Balakrishnan, and M.K. Ciliz, Adaptation and learning using multiple models, switching, and tuning, IEEE Control Systems Magazine, vol.5, no.3, pp.37 5, 995. [40] K.S. Narendra, and J. Balakrishnan, Adaptive control using multiple models, IEEE Trans. Automatic Control, vol.4, no., pp.7 87, 997. [4] S. Nishida, K. Ishii, and T. Furukawa, An Online Adaptation Control System Using mnsom, Lecture Notes in Computer Science (Edited book of 3th Inter-
107 99 national Conference of Neural Information Processing (ICONIP006)), vol.43, pp , 006. [4],,, - : -, 3, pp.05 3, 006. [43] P. Pajunen, A. Hyvärinen, and J. Karhunen, Nonlinear blind source separation by self-organizing maps, Proc. International Conference on Neural Information Processing (ICONIP 96), vol. pp.07 0, 996. [44] K. Pawelzik, J. Kohlmorgen, and K.-R. Müller, Annealed competition of experts for a segmentation and classification of switching dynamics, Neural Computation, vol.8, no., pp , 996. [45] J.C. Principe, L. Wang, and M.A. Motter, Local Dynamic Modeling with Self- Organizing Maps and Applications to Nonlinear System Identification and Control, Proc. IEEE, vol.86, no., pp.40 58, 998. [46] K. Rose, E. Gurewitz, and G.C. Fox, Statistical mechanics and phase transitions in clustering, Physical Review Letters, vol.65, no.8, pp , 990. [47] vol.j84-d-ii, no.9, pp.09 06, 00. [48] J.W. Sammon, A Nonlinear Mapping for Data Structure Analysis, IEEE Trans. Computers, vol. 8, no.5, pp , 969. [49] 3 - -,, vol.j79-d-ii, no.7, pp.9 300, 996. [50] S. Suzuki, and H. Ando, A modular network scheme for unsupervised 3D object recognition, Neurocomputing, vol.3, pp.5 8, 000. [5] K. Tokunaga, and T. Furukawa, Nonlinear ASSOM constituted of autoassociative neural modules, Proc. Workshop on Self-Organizing Maps, pp ,
108 [5] K. Tokunaga, T. Furukawa, and S. Yasui, Modular Network SOM: Self- Organizing Maps in Function Space, Neural Information Processing Letters and Reviews, vol.9, pp.5, 005. [53] SOM, vol. no. pp [54],, vol.35, pp.75 80, 006. [55], 999. [56] D.J. Willshaw, and C. von der Malsburg, How patterned neural connections can be set up by self-organization, Proc. Roy. Soc. Lond. B, vol.94, pp , 976. [57] D.M. Wolpert, and M. Kawato, Multiple paired forward and inverse models for motor control, Neural Networks, vol., pp.37 39, 998.
109 0 I accepted. II.. T. Minatohara, T. Furukawa, Self-Organizing Adaptive Controllers: Application to the Inverted Pendulum, Proc. Workshop on Self-Organizing Maps, pp.4 48, France, T. Minatohara, T. Furukawa, A proposal of self-organizing adaptive controller (SOAC), Proc. International Conference on Brain-inspired Information Technology, Japan, T. Minatohara, T. Furukawa, An adaptive controller based on modular network SOM, Proc. Postech-Kyutech Joint Workshop on Neuroinformatics, Korea, 005. III.. modular network SOM, vol.05, no.30, pp.49 54, 005.
Convolutional Neural Network A Graduation Thesis of College of Engineering, Chubu University Investigation of feature extraction by Convolution
Convolutional Neural Network 2014 3 A Graduation Thesis of College of Engineering, Chubu University Investigation of feature extraction by Convolutional Neural Network Fukui Hiroshi 1940 1980 [1] 90 3
TC1-31st Fuzzy System Symposium (Chofu, September -, 15) cremental Neural Networ (SOINN) [5] Enhanced SOINN (ESOINN) [] ESOINN GNG Deng Evolving Self-
TC1-31st Fuzzy System Symposium (Chofu, September -, 15) Proposing a Growing Self-Organizing Map Based on a Learning Theory of a Gaussian Mixture Model Kazuhiro Tounaga National Fisheries University Abstract:
& 3 3 ' ' (., (Pixel), (Light Intensity) (Random Variable). (Joint Probability). V., V = {,,, V }. i x i x = (x, x,, x V ) T. x i i (State Variable),
.... Deeping and Expansion of Large-Scale Random Fields and Probabilistic Image Processing Kazuyuki Tanaka The mathematical frameworks of probabilistic image processing are formulated by means of Markov
SICE東北支部研究集会資料(2012年)
77 (..3) 77- Simulation of Disturbance Compensation Control of Dual Manipulator for an Inverted Pendulum Robot Using The Extended State Observer Luis Canete Kenta Nagano, Takuma Sato, Luis Canete,Takayuki
untitled
K-Means 1 5 2 K-Means 7 2.1 K-Means.............................. 7 2.2 K-Means.......................... 8 2.3................... 9 3 K-Means 11 3.1.................................. 11 3.2..................................
1 Fig. 1 Extraction of motion,.,,, 4,,, 3., 1, 2. 2.,. CHLAC,. 2.1,. (256 ).,., CHLAC. CHLAC, HLAC. 2.3 (HLAC ) r,.,. HLAC. N. 2 HLAC Fig. 2
CHLAC 1 2 3 3,. (CHLAC), 1).,.,, CHLAC,.,. Suspicious Behavior Detection based on CHLAC Method Hideaki Imanishi, 1 Toyohiro Hayashi, 2 Shuichi Enokida 3 and Toshiaki Ejima 3 We have proposed a method for
(MIRU2008) HOG Histograms of Oriented Gradients (HOG)
(MIRU2008) 2008 7 HOG - - E-mail: [email protected], {takigu,ariki}@kobe-u.ac.jp Histograms of Oriented Gradients (HOG) HOG Shape Contexts HOG 5.5 Histograms of Oriented Gradients D Human
IPSJ SIG Technical Report Vol.2015-MUS-107 No /5/23 HARK-Binaural Raspberry Pi 2 1,a) ( ) HARK 2 HARK-Binaural A/D Raspberry Pi 2 1.
HARK-Binaural Raspberry Pi 2 1,a) 1 1 1 2 3 () HARK 2 HARK-Binaural A/D Raspberry Pi 2 1. [1,2] [2 5] () HARK (Honda Research Institute Japan audition for robots with Kyoto University) *1 GUI ( 1) Python
xx/xx Vol. Jxx A No. xx 1 Fig. 1 PAL(Panoramic Annular Lens) PAL(Panoramic Annular Lens) PAL (2) PAL PAL 2 PAL 3 2 PAL 1 PAL 3 PAL PAL 2. 1 PAL
PAL On the Precision of 3D Measurement by Stereo PAL Images Hiroyuki HASE,HirofumiKAWAI,FrankEKPAR, Masaaki YONEDA,andJien KATO PAL 3 PAL Panoramic Annular Lens 1985 Greguss PAL 1 PAL PAL 2 3 2 PAL DP
[1] SBS [2] SBS Random Forests[3] Random Forests ii
Random Forests 2013 3 A Graduation Thesis of College of Engineering, Chubu University Proposal of an efficient feature selection using the contribution rate of Random Forests Katsuya Shimazaki [1] SBS
mt_4.dvi
( ) 2006 1 PI 1 1 1.1................................. 1 1.2................................... 1 2 2 2.1...................................... 2 2.1.1.......................... 2 2.1.2..............................
2007/8 Vol. J90 D No. 8 Stauffer [7] 2 2 I 1 I 2 2 (I 1(x),I 2(x)) 2 [13] I 2 = CI 1 (C >0) (I 1,I 2) (I 1,I 2) Field Monitoring Server
a) Change Detection Using Joint Intensity Histogram Yasuyo KITA a) 2 (0 255) (I 1 (x),i 2 (x)) I 2 = CI 1 (C>0) (I 1,I 2 ) (I 1,I 2 ) 2 1. [1] 2 [2] [3] [5] [6] [8] Intelligent Systems Research Institute,
Studies of Foot Form for Footwear Design (Part 9) : Characteristics of the Foot Form of Young and Elder Women Based on their Sizes of Ball Joint Girth
Studies of Foot Form for Footwear Design (Part 9) : Characteristics of the Foot Form of Young and Elder Women Based on their Sizes of Ball Joint Girth and Foot Breadth Akiko Yamamoto Fukuoka Women's University,
28 Horizontal angle correction using straight line detection in an equirectangular image
28 Horizontal angle correction using straight line detection in an equirectangular image 1170283 2017 3 1 2 i Abstract Horizontal angle correction using straight line detection in an equirectangular image
MmUm+FopX m Mm+Mop F-Mm(Fop-Mopum)M m+mop MSuS+FX S M S+MOb Fs-Ms(Mobus-Fex)M s+mob Fig. 1 Particle model of single degree of freedom master/ slave sy
Analysis and Improvement of Digital Control Stability for Master-Slave Manipulator System Koichi YOSHIDA* and Tetsuro YABUTA* Some bilateral controls of master-slave system have been designed, which can
Study on Throw Accuracy for Baseball Pitching Machine with Roller (Study of Seam of Ball and Roller) Shinobu SAKAI*5, Juhachi ODA, Kengo KAWATA and Yu
Study on Throw Accuracy for Baseball Pitching Machine with Roller (Study of Seam of Ball and Roller) Shinobu SAKAI*5, Juhachi ODA, Kengo KAWATA and Yuichiro KITAGAWA Department of Human and Mechanical
2). 3) 4) 1.2 NICTNICT DCRA Dihedral Corner Reflector micro-arraysdcra DCRA DCRA DCRA 3D DCRA PC USB PC PC ON / OFF Velleman K8055 K8055 K8055
1 1 1 2 DCRA 1. 1.1 1) 1 Tactile Interface with Air Jets for Floating Images Aya Higuchi, 1 Nomin, 1 Sandor Markon 1 and Satoshi Maekawa 2 The new optical device DCRA can display floating images in free
Gaze Head Eye (a) deg (b) 45 deg (c) 9 deg 1: - 1(b) - [5], [6] [7] Stahl [8], [9] Fang [1], [11] Itti [12] Itti [13] [7] Fang [1],
1 1 1 Structure from Motion - 1 Ville [1] NAC EMR-9 [2] 1 Osaka University [3], [4] 1 1(a) 1(c) 9 9 9 c 216 Information Processing Society of Japan 1 Gaze Head Eye (a) deg (b) 45 deg (c) 9 deg 1: - 1(b)
ばらつき抑制のための確率最適制御
( ) http://wwwhayanuemnagoya-uacjp/ fujimoto/ 2011 3 9 11 ( ) 2011/03/09-11 1 / 46 Outline 1 2 3 4 5 ( ) 2011/03/09-11 2 / 46 Outline 1 2 3 4 5 ( ) 2011/03/09-11 3 / 46 (1/2) r + Controller - u Plant y
Microsoft PowerPoint - SSII_harada pptx
The state of the world The gathered data The processed data w d r I( W; D) I( W; R) The data processing theorem states that data processing can only destroy information. David J.C. MacKay. Information
258 5) GPS 1 GPS 6) GPS DP 7) 8) 10) GPS GPS 2 3 4 5 2. 2.1 3 1) GPS Global Positioning System
Vol. 52 No. 1 257 268 (Jan. 2011) 1 2, 1 1 measurement. In this paper, a dynamic road map making system is proposed. The proposition system uses probe-cars which has an in-vehicle camera and a GPS receiver.
(3.6 ) (4.6 ) 2. [3], [6], [12] [7] [2], [5], [11] [14] [9] [8] [10] (1) Voodoo 3 : 3 Voodoo[1] 3 ( 3D ) (2) : Voodoo 3D (3) : 3D (Welc
1,a) 1,b) Obstacle Detection from Monocular On-Vehicle Camera in units of Delaunay Triangles Abstract: An algorithm to detect obstacles by using a monocular on-vehicle video camera is developed. Since
1 Kinect for Windows M = [X Y Z] T M = [X Y Z ] T f (u,v) w 3.2 [11] [7] u = f X +u Z 0 δ u (X,Y,Z ) (5) v = f Y Z +v 0 δ v (X,Y,Z ) (6) w = Z +
3 3D 1,a) 1 1 Kinect (X, Y) 3D 3D 1. 2010 Microsoft Kinect for Windows SDK( (Kinect) SDK ) 3D [1], [2] [3] [4] [5] [10] 30fps [10] 3 Kinect 3 Kinect Kinect for Windows SDK 3 Microsoft 3 Kinect for Windows
JFE.dvi
,, Department of Civil Engineering, Chuo University Kasuga 1-13-27, Bunkyo-ku, Tokyo 112 8551, JAPAN E-mail : [email protected] E-mail : [email protected] SATO KOGYO CO., LTD. 12-20, Nihonbashi-Honcho
1(a) (b),(c) - [5], [6] Itti [12] [13] gaze eyeball head 2: [time] [7] Stahl [8], [9] Fang [1], [11] 3 -
Vol216-CVIM-22 No18 216/5/12 1 1 1 Structure from Motion - 1 8% Tobii Pro TX3 NAC EMR ACTUS Eye Tribe Tobii Pro Glass NAC EMR-9 Pupil Headset Ville [1] EMR-9 [2] 1 Osaka University Gaze Head Eye (a) deg
& Vol.5 No (Oct. 2015) TV 1,2,a) , Augmented TV TV AR Augmented Reality 3DCG TV Estimation of TV Screen Position and Ro
TV 1,2,a) 1 2 2015 1 26, 2015 5 21 Augmented TV TV AR Augmented Reality 3DCG TV Estimation of TV Screen Position and Rotation Using Mobile Device Hiroyuki Kawakita 1,2,a) Toshio Nakagawa 1 Makoto Sato
A Feasibility Study of Direct-Mapping-Type Parallel Processing Method to Solve Linear Equations in Load Flow Calculations Hiroaki Inayoshi, Non-member
A Feasibility Study of Direct-Mapping-Type Parallel Processing Method to Solve Linear Equations in Load Flow Calculations Hiroaki Inayoshi, Non-member (University of Tsukuba), Yasuharu Ohsawa, Member (Kobe
(a) (b) (c) Canny (d) 1 ( x α, y α ) 3 (x α, y α ) (a) A 2 + B 2 + C 2 + D 2 + E 2 + F 2 = 1 (3) u ξ α u (A, B, C, D, E, F ) (4) ξ α (x 2 α, 2x α y α,
[II] Optimization Computation for 3-D Understanding of Images [II]: Ellipse Fitting 1. (1) 2. (2) (edge detection) (edge) (zero-crossing) Canny (Canny operator) (3) 1(a) [I] [II] [III] [IV ] E-mail [email protected]
1 IDC Wo rldwide Business Analytics Technology and Services 2013-2017 Forecast 2 24 http://www.soumu.go.jp/johotsusintokei/whitepaper/ja/h24/pdf/n2010000.pdf 3 Manyika, J., Chui, M., Brown, B., Bughin,
3 2 2 (1) (2) (3) (4) 4 4 AdaBoost 2. [11] Onishi&Yoda [8] Iwashita&Stoica [5] 4 [3] 3. 3 (1) (2) (3)
(MIRU2012) 2012 8 820-8502 680-4 E-mail: {d kouno,shimada,endo}@pluto.ai.kyutech.ac.jp (1) (2) (3) (4) 4 AdaBoost 1. Kanade [6] CLAFIC [12] EigenFace [10] 1 1 2 1 [7] 3 2 2 (1) (2) (3) (4) 4 4 AdaBoost
Fig. 3 Flow diagram of image processing. Black rectangle in the photo indicates the processing area (128 x 32 pixels).
Fig. 1 The scheme of glottal area as a function of time Fig. 3 Flow diagram of image processing. Black rectangle in the photo indicates the processing area (128 x 32 pixels). Fig, 4 Parametric representation
THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS TECHNICAL REPORT OF IEICE.
THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS TECHNICAL REPORT OF IEICE. E-mail: {ytamura,takai,tkato,tm}@vision.kuee.kyoto-u.ac.jp Abstract Current Wave Pattern Analysis for Anomaly
数学の基礎訓練I
I 9 6 13 1 1 1.1............... 1 1................ 1 1.3.................... 1.4............... 1.4.1.............. 1.4................. 3 1.4.3........... 3 1.4.4.. 3 1.5.......... 3 1.5.1..............
THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS TECHNICAL REPORT OF IEICE {s-kasihr, wakamiya,
THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS TECHNICAL REPORT OF IEICE. 565-0871 1 5 E-mail: {s-kasihr, wakamiya, murata}@ist.osaka-u.ac.jp PC 70% Design, implementation, and evaluation
IPSJ SIG Technical Report Vol.2012-MUS-96 No /8/10 MIDI Modeling Performance Indeterminacies for Polyphonic Midi Score Following and
MIDI 1 2 3 2 1 Modeling Performance Indeterminacies for Polyphonic Midi Score Following and Its Application to Automatic Accompaniment Nakamura Eita 1 Yamamoto Ryuichi 2 Saito Yasuyuki 3 Sako Shinji 2
, (GPS: Global Positioning Systemg),.,, (LBS: Local Based Services).. GPS,.,. RFID LAN,.,.,.,,,.,..,.,.,,, i
25 Estimation scheme of indoor positioning using difference of times which chirp signals arrive 114348 214 3 6 , (GPS: Global Positioning Systemg),.,, (LBS: Local Based Services).. GPS,.,. RFID LAN,.,.,.,,,.,..,.,.,,,
75 unit: mm Fig. Structure of model three-phase stacked transformer cores (a) Alternate-lap joint (b) Step-lap joint 3 4)
3 * 35 (3), 7 Analysis of Local Magnetic Properties and Acoustic Noise in Three-Phase Stacked Transformer Core Model Masayoshi Ishida Kenichi Sadahiro Seiji Okabe 3.7 T 5 Hz..4 3 Synopsis: Methods of local
1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15. 1. 2. 3. 16 17 18 ( ) ( 19 ( ) CG PC 20 ) I want some rice. I want some lice. 21 22 23 24 2001 9 18 3 2000 4 21 3,. 13,. Science/Technology, Design, Experiments,
