1-1 A Brief History of Neural Network Research

Mathematical modeling of neural systems began with the McCulloch-Pitts formal neuron, a binary threshold element abstracting the all-or-none firing of nerve cells; building on it in the late 1950s, Rosenblatt proposed the perceptron, a trainable pattern-classification machine.

Neural network models divide broadly into dynamical models of mutually connected circuits and learning models 2),3). In a typical dynamical model, a network of n units has internal potentials u = (u_1, …, u_n), outputs z = (z_1, …, z_n), and a connection weight matrix W = (w_ij), with dynamics

τ du(t)/dt = −u + W f(u) + s   (1.1)

where f is the output function applied componentwise and s is an external input. Equation (1.1) and its variants cover much of the classical theory of mutually connected circuits: the Hopfield model of associative memory and its statistical-mechanical analysis by Amit, the Amari-Hopfield line of work on recurrent dynamics, and the Wilson-Cowan and Amari equations for interacting neural populations and fields.
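To make (1.1) concrete, the sketch below integrates the dynamics with the Euler method; the network size, the random weight matrix, the constant input s, and the choice f = tanh are illustrative assumptions of ours, not choices made in the text.

```python
import numpy as np

# Minimal Euler integration of tau * du/dt = -u + W f(u) + s  (1.1).
# Network size, weights, and input are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n, tau, dt = 10, 1.0, 0.01
W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # random coupling matrix
s = rng.normal(size=n)                               # constant external input
f = np.tanh                                          # sigmoidal output function

u = np.zeros(n)
for _ in range(5000):                # integrate toward a (possible) equilibrium
    u += dt / tau * (-u + W @ f(u) + s)
print("final potentials:", np.round(u, 3))
```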

Work on dynamics continued with Hopfield's energy-function formulation and with chaotic neural dynamics studied by Aihara and by Tsuda 4). On the learning side, Rosenblatt's perceptron was followed in 1967 by Amari's stochastic-gradient learning of multilayer networks, a method later rediscovered and popularized by Rumelhart et al. as error backpropagation 2); the statistical behavior of multilayer learning, including the effect of singularities of the parameter space, was analyzed by Amari and Fukumizu.

Learning theory subsequently broadened: Bayesian methods entered neural network research, the geometry of singular models was analyzed by Fukumizu and by Watanabe 5), and ensemble methods such as boosting were developed 6). Self-organization of feature maps was proposed by von der Malsburg and by Willshaw, analyzed mathematically by Amari, and turned into a practical algorithm by Kohonen's self-organizing map 7); see also 8). Reinforcement learning, in which behavior is learned from rewards rather than explicit teacher signals, was developed by Barto and Sutton.

References:
1) D.E. Rumelhart and J.L. McClelland, Parallel Distributed Processing, vol.I, II, MIT Press, 1986.
2) (reference in Japanese; not recovered)
3) P. Dayan and L.F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 2001.
4) (reference in Japanese; not recovered)
5) (reference in Japanese; not recovered)
6) (reference in Japanese; not recovered)
7) T. Kohonen, Self-Organizing Maps, Springer, 1995.
8) A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing, Wiley, 2002.

1-2 Neural Network Models

The basic element receives an input vector x = (x_1, …, x_N)^T ∈ R^N and emits a scalar output y ∈ R:

y = f(u),  u = Σ_{i=1}^N w_i x_i = w·x   (1.2)

Here w ∈ R^N is the weight vector, u is the internal potential, and f is the output function, typically an S-shaped saturating function such as tanh. A threshold θ is included by writing u = w·x + θ; augmenting the vectors as w̃ = (w^T, θ)^T and x̃ = (x^T, 1)^T gives u = w̃·x̃, which has the form (1.2) again, so θ is suppressed below. A continuous-time version of the single unit is

y = f(u),  du/dt = −u + w·x   (1.3)

Connecting N such units to one another gives a network model.

In continuous time the network obeys

x = f(u),  du/dt = −u + W x   (1.4)

with x = (x_1, …, x_N)^T, u = (u_1, …, u_N)^T, f applied componentwise, and W = (w_ij) the matrix whose (i, j) element w_ij is the connection weight from unit j to unit i (the diagonal of W is taken to be 0). In discrete time t = 0, 1, …,

x_{t+1} = f(u_t),  u_t = W x_t   (1.5)

A stochastic binary unit emits y = ±1 with

P(y = 1) = f(u),  P(y = −1) = 1 − f(u);  u = w·x   (1.6)

and a network of N such units is updated one unit at a time according to the conditional probability P(x_i | x) given by (1.6) with the other units' states fixed. When the connections are symmetric, W^T = W, the deterministic dynamics (1.4), (1.5) admit an energy (Lyapunov) function and converge to equilibria 1). For the stochastic network, take f(u) = (1 + tanh βu)/2 in (1.6) and define for the N units the energy H(x) = −x^T W x / 2; the stationary distribution is then

P(x) ∝ exp[−βH(x)]   (1.7)

a Gibbs distribution; this stochastic network is the Boltzmann machine 2). Such networks serve as associative memories: to store p patterns {ξ^1, ξ^2, …, ξ^p; ξ^i ∈ R^N}, set the weights by the correlation (Hebbian) rule 3),4),5),6)

W = (1/p) Σ_{i=1}^p ξ^i (ξ^i)^T   (1.8)

For random patterns whose components are ±1 with probability 1/2 each, recall succeeds only up to a critical number of patterns p: Amit et al. 7) showed by statistical-mechanical analysis that for large N the capacity is about p ≈ 0.14N. Learning of the Boltzmann machine 2) fits the model (1.7) to an empirical distribution q(x) by minimizing the Kullback-Leibler divergence D(q‖p) = Σ_x q(x) log[q(x)/p(x)] with respect to W; gradient descent gives

dW/dt = ⟨xx^T⟩_q − ⟨xx^T⟩_p   (1.9)

where ⟨·⟩_q and ⟨·⟩_p denote expectations under the data and under the model. With hidden units z, write x̃ = (x^T, z^T)^T and

p(x) = Σ_z p(x̃),  p(x̃) ∝ e^{−βH(x̃)},  H(x̃) = −(1/2) x̃^T W x̃   (1.10)

with the conditional distribution of the hidden units given by p(z|x) = p(x̃)/p(x).
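Before continuing with hidden-unit learning, here is a minimal sketch of the associative memory: it stores random ±1 patterns with the correlation rule (1.8) and recalls one from a corrupted cue via the deterministic dynamics (1.5). The sizes, the use of sign(·) for f, and the zeroed diagonal are our assumptions.

```python
import numpy as np

# Store p random +/-1 patterns with W = (1/p) sum_i xi^i (xi^i)^T  (1.8)
# and recall pattern 0 from a noisy cue via x_{t+1} = sign(W x_t)  (1.5).
rng = np.random.default_rng(1)
N, p = 200, 10                       # p well below the ~0.14*N capacity
xi = rng.choice([-1.0, 1.0], size=(p, N))
W = (xi.T @ xi) / p
np.fill_diagonal(W, 0.0)             # no self-coupling

x = xi[0].copy()
flip = rng.choice(N, size=30, replace=False)
x[flip] *= -1                        # corrupt 30 of 200 bits
for _ in range(20):                  # synchronous updates
    x = np.sign(W @ x)
print("overlap with stored pattern:", (x @ xi[0]) / N)  # ~1.0 on success
```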

11 dw dt = x x T q(x)p(z x) x x T p( x) (1 11) 8) 9) 1) J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA, vol.81, pp , May ) D.H. Ackley, G.E. Hinton, and T.J. Sejnowski, A learning algorithm for Boltzmann machines, Cognitive Science, vol.9, pp , Jan.-March ) K. Nakano, Associatron a model of associative memory, IEEE Trans. Systems, Man, and Cybernetics, vol.smc-12, pp , July ) T. Kohonen, Correlation matrix memories, IEEE Trans. Computers, vol.c-21, pp , April ) J.A. Anderson, A simple neural network generating an interactive memory, Mathematical Biosciences, vol.14, pp , Aug ) J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA, vol.79, pp , April ) D.J. Amit, H. Gutfreund, and H. Sompolinsky, Storing infinite numbers of patterns in a spin-glass model of neural networks, Physical Review Letters, vol.55, pp , Sep ) D. Saad and M. Opper (eds.), Advanced Mean Field Methods: Theory and Practice, MIT Press, Cambridge, ) G.E. Hinton and R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science, vol.313, pp , July c /(50)

1-3 Statistical-Mechanical Methods

Consider N binary variables S_i = ±1, i = 1, 2, …, N, with the distribution

P(S|w, θ) = (1/Z(w, θ)) exp[ Σ_{i>j} w_ij S_i S_j + Σ_i θ_i S_i ]   (1.12)

Z(w, θ) = Σ_S exp[ Σ_{i>j} w_ij S_i S_j + Σ_i θ_i S_i ]

where Z(w, θ) is the partition function. Quantities of interest are expectations ⟨f(S)⟩ = Σ_S f(S) P(S|w, θ); the sum runs over 2^N configurations and is intractable for large N, so approximations are needed. For f(S) = S_i, the exact Callen identity follows from (1.12):

⟨S_i⟩ = ⟨ tanh( Σ_{j≠i} w_ij S_j + θ_i ) ⟩   (1.13)

where w_ij = w_ji. The argument Σ_{j≠i} w_ij S_j + θ_i is the local field acting on S_i. Moving the expectation inside the tanh(·) gives the closed mean-field equations

⟨S_i⟩ ≈ tanh( Σ_{j≠i} w_ij ⟨S_j⟩ + θ_i )   (1.14)

in which Σ_{j≠i} w_ij⟨S_j⟩ + θ_i is the mean field; (1.14) couples the ⟨S_i⟩ (i = 1, 2, …, N) to one another and is solved by fixed-point iteration.
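A minimal sketch of solving (1.14) by fixed-point iteration, on a small random model of our own choosing (the damping factor is likewise an assumption, added to aid convergence):

```python
import numpy as np

# Fixed-point iteration of the mean-field equations (1.14)
# for a small random Ising model; w and theta are arbitrary test values.
rng = np.random.default_rng(2)
N = 20
w = rng.normal(0.0, 0.3 / np.sqrt(N), size=(N, N))
w = (w + w.T) / 2                    # symmetric couplings, w_ij = w_ji
np.fill_diagonal(w, 0.0)
theta = rng.normal(0.0, 0.1, size=N)

m = np.zeros(N)                      # m_i approximates <S_i>
for _ in range(500):
    m_new = np.tanh(w @ m + theta)   # <S_i> = tanh(sum_j w_ij <S_j> + theta_i)
    if np.max(np.abs(m_new - m)) < 1e-10:
        break
    m = 0.5 * m + 0.5 * m_new        # damped update
print("mean-field magnetizations:", np.round(m[:5], 4))
```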

The scheme (1.14) is the mean field approximation; systematic refinements such as the cluster variation method also exist 1). A variational formulation makes the approximation transparent. For the model (1.12), define for an arbitrary trial distribution Q(S) the free energy functional

F(Q) = −Σ_S Q(S) ( Σ_{i>j} w_ij S_i S_j + Σ_i θ_i S_i ) + Σ_S Q(S) ln Q(S)   (1.15)

F(Q) attains its minimum, −ln Z(w, θ), exactly at Q(S) = P(S|w, θ). Restricting the trial distribution to the factorized family Q(S) = 2^{−N} Π_{i=1}^N (1 + ⟨S_i⟩S_i) and minimizing (1.15) over the ⟨S_i⟩ reproduces the mean-field equations (1.14). Keeping pairwise structure gives the Bethe approximation: the state is summarized by pair marginals b_ij(S_i, S_j) and single-site marginals b_i(S_i) — the beliefs — and (1.15) is approximated by the Bethe free energy

F_Bethe({b_ij, b_i}) = Σ_{(ij)} Σ_{S_i,S_j} b_ij(S_i, S_j) ln [ b_ij(S_i, S_j) / exp(w_ij S_i S_j + θ_i S_i + θ_j S_j) ]
  + Σ_i (1 − c_i) Σ_{S_i} b_i(S_i) ln [ b_i(S_i) / exp(θ_i S_i) ]   (1.16)

where the first sum runs over connected pairs (ij) with w_ij = w_ji, and c_i is the number of neighbors of node i. F_Bethe is minimized with respect to b_ij(S_i, S_j) and b_i(S_i) subject to normalization and to consistency (reducibility) conditions between the pair and single-site beliefs:

Σ_{S_j} b_ij(S_i, S_j) = b_i(S_i)   (1.17)

Imposing (1.17) on (1.16) with Lagrange multipliers λ_{i→j}(S_i) gives the Lagrangian

F_Bethe({b_ij, b_i}) + Σ_i Σ_{j∈N(i)} Σ_{S_i} λ_{i→j}(S_i) ( Σ_{S_j} b_ij(S_i, S_j) − b_i(S_i) )   (1.18)

where N(i) denotes the set of neighbors of node i. [Fig. 1: (a) a node i with c_i = 4 neighbors h, j, k, l; (b) the message m_{i→j} computed from the incoming messages m_{h→i}, m_{k→i}, m_{l→i} as in (1.20).]

Setting the variations of (1.18) with respect to the beliefs to zero yields stationarity conditions relating the multipliers, of the form

e^{(c_i − 1)^{−1} Σ_{k∈N(i)} λ_{i→k}(S_i) + λ_{i→j}(S_i)} ∝ Σ_{S_j} e^{w_ij S_i S_j} e^{θ_j S_j} e^{λ_{j→i}(S_j)}   (1.19)

Since S_i = ±1, each multiplier can be encoded by a single number: writing e^{θ_i S_i − λ_{i→j}(S_i)} ∝ (1 + m_{i→j} S_i)/2 defines the message m_{i→j} carried from node i to node j in place of λ_{i→j}(S_i).

Rewriting (1.19) in terms of the messages gives the closed update rule

m_{i→j} = tanh( θ_i + Σ_{k∈N(i)\j} tanh^{−1}( tanh(w_ik) m_{k→i} ) )   (1.20)

illustrated in Fig. 1(b); N(i)\j is the set of neighbors of i with j removed. At a fixed point of (1.20) the single-site belief yields the estimate

⟨S_i⟩ = Σ_{S_i} S_i b_i(S_i) = tanh( θ_i + Σ_{j∈N(i)} tanh^{−1}( tanh(w_ij) m_{j→i} ) )   (1.21)

Iterating (1.20) to convergence and then evaluating (1.21) is the probability propagation or belief propagation algorithm 2),3); it is exact when the graph of nonzero couplings w_ij is a tree and is widely used as an approximation on graphs with cycles.
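Before turning to random couplings, a minimal sketch of belief propagation: it iterates the message update (1.20) on a small, fully connected test model (our choice of graph and parameters) and reads out ⟨S_i⟩ from (1.21).

```python
import numpy as np

# Loopy belief propagation on an Ising model: iterate (1.20), read out (1.21).
rng = np.random.default_rng(3)
N = 10
w = rng.normal(0.0, 0.2, size=(N, N)); w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)             # fully connected test graph
theta = rng.normal(0.0, 0.1, size=N)

m = np.zeros((N, N))                 # m[i, j] is the message m_{i->j}
for _ in range(200):
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            ks = [k for k in range(N) if k not in (i, j)]
            m[i, j] = np.tanh(theta[i]
                + sum(np.arctanh(np.tanh(w[i, k]) * m[k, i]) for k in ks))

S = np.array([np.tanh(theta[i]
      + sum(np.arctanh(np.tanh(w[i, j]) * m[j, i]) for j in range(N) if j != i))
      for i in range(N)])            # belief readout (1.21)
print("BP estimates of <S_i>:", np.round(S[:5], 4))
```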

The methods above compute expectations for one fixed set of couplings w. The replica method instead analyzes ensembles in which w itself is random; it has been applied, for example, to the K-SAT problem, where it locates the SAT/UNSAT transition 4). Let the parameters of (1.12) be drawn independently, P(w) = Π_{i>j} P(w_ij) and P(θ) = Π_i P(θ_i). The quantity of interest is the quenched average of the log partition function per variable,

(1/N)[ln Z(w, θ)] = (1/N) ∫ Π_{i>j} dw_ij P(w_ij) Π_i dθ_i P(θ_i) ln Z(w, θ)   (1.22)

The difficulty of (1.22) is that the logarithm ln Z(w, θ) must be averaged over w. For integer n = 1, 2, …, however, the replicated partition function

Z^n(w, θ) = Σ_{S^1, S^2, …, S^n} exp[ Σ_{a=1}^n ( Σ_{i>j} w_ij S_i^a S_j^a + Σ_i θ_i S_i^a ) ]   (1.23)

can be averaged over w in closed form. The replica method computes [Z^n(w, θ)] for integer n = 1, 2, …, continues the result to real n, and uses the identity

(1/N)[ln Z(w, θ)] = lim_{n→0} (1/(Nn)) ln [Z^n(w, θ)]   (1.24)

After (1.23) is averaged over w, the n replicas S^1, S^2, …, S^n, which were independent before averaging, become coupled; analyzing this effective interaction among replicas is the core of the calculation 5).

References:
1) R. Kikuchi, "A theory of cooperative phenomena," Physical Review, vol.81, no.6, March 1951.
2) Y. Kabashima and D. Saad, "Belief propagation vs. TAP for decoding corrupted messages," Europhysics Letters, vol.44, no.5, Dec. 1998.
3) J.S. Yedidia, W.T. Freeman, and Y. Weiss, "Constructing free energy approximations and generalized belief propagation algorithms," IEEE Trans. Information Theory, vol.51, no.7, July 2005.
4) (reference in Japanese, p.206; not recovered)
5) M. Talagrand, Spin Glasses: A Challenge for Mathematicians. Cavity and Mean Field Models, Springer-Verlag, Berlin, 2003.

1-4 Perceptrons

The perceptron was proposed by Frank Rosenblatt in the late 1950s 1). An elementary perceptron consists of three layers: an S (sensory) layer receiving the stimulus, an A (association) layer of fixed nonlinear elements, and an R (response) layer with one trainable output unit. Inserting trainable intermediate layers (hidden layers) yields the multi-layer perceptron (MLP). The trainable part by itself, mapping input x to output y, is the simple perceptron: with x = (x_1, …, x_n),

y = f( Σ_i w_i x_i − θ )

where the w_i are connection weights, θ is the threshold value, and f is the output function. Taking f linear (or tanh) gives a linear perceptron with a continuous output; taking f to be the step function gives a binary classifier whose output is 1 or 0 according to whether Σ_i w_i x_i exceeds θ, so the decision boundary is the hyperplane Σ_i w_i x_i = θ.

A classification problem is linearly separable when a hyperplane of this form separates the two classes. Given N training examples (x^(i), t^(i)) (i = 1, …, N) with binary targets t^(i) ∈ {1, 0}, the perceptron learning rule determines w by repeating:

1) present x^(i) and compute the output y^(i);
2) update w ← w − η (y^(i) − t^(i)) x^(i)

where η > 0 is a small learning constant. The weights change only when the output is wrong, y^(i) − t^(i) ≠ 0, so this is called error-correcting learning; when the data are linearly separable it converges after a finite number of corrections. Minsky and Papert analyzed the simple perceptron with 1/0 output and showed that it can realize only linearly separable functions 2).
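A minimal sketch of error-correcting learning, with the threshold absorbed into the weight vector as in Section 1-2; the toy data, η, and the epoch limit are our assumptions.

```python
import numpy as np

# Error-correcting perceptron learning on a linearly separable toy set.
rng = np.random.default_rng(4)
N, n = 100, 2
X = rng.normal(size=(N, n))
t = (X @ np.array([1.5, -1.0]) > 0.3).astype(float)  # hidden true separator

Xa = np.hstack([X, np.ones((N, 1))])  # absorb threshold: x~ = (x, 1)
w = np.zeros(n + 1)
eta = 0.1
for epoch in range(200):
    errors = 0
    for i in range(N):
        y = float(Xa[i] @ w > 0)      # step-function output
        if y != t[i]:
            w -= eta * (y - t[i]) * Xa[i]   # w <- w - eta (y - t) x
            errors += 1
    if errors == 0:                   # all examples correct: converged
        break
print("learned weights:", np.round(w, 3), " errors in last epoch:", errors)
```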

Learning of multilayer networks was put into practice by Rumelhart et al. as error back-propagation learning 3): for a network computing f(x, w) with differentiable units, define a risk (error) function R(w) and descend its gradient,

w ← w − η ∇_w R(w)

computed layer by layer with the chain rule 4). Information geometry refines this via the natural gradient method 5). For the statistical model p(y|x; w) with input distribution q(x), define the Fisher information matrix

G(w) = ∫∫ (∇_w ln p)(∇_w ln p)^T p(y|x; w) q(x) dy dx

and replace the update by

w ← w − η G^{−1}(w) ∇_w l(x, y, w),  l(x, y, w) = −ln p(y|x; w) − ln q(x)

The natural gradient corrects the distortion of the parameter space measured by G and avoids the slowdown that plain gradient descent suffers near singular regions; since G is unknown and costly to invert, the adaptive natural gradient method 6) estimates G^{−1}(w) online.

References:
1) F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Spartan Books, Washington, 1962.
2) M. Minsky and S. Papert, Perceptrons, Expanded Edition, MIT Press, Cambridge, 1988 (Japanese translation details not recovered).
3) D.E. Rumelhart, J.L. McClelland, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, MIT Press, Cambridge, 1986 (Japanese translation details not recovered).

4) C.M. Bishop, Pattern Recognition and Machine Learning, Springer-Verlag, New York, 2006 (Japanese translation details not recovered).
5) S. Amari, "Natural gradient works efficiently in learning," Neural Computation, vol.10, no.2, Feb. 1998.
6) S. Amari, H. Park, and K. Fukumizu, "Adaptive method of realizing natural gradient learning for multilayer perceptrons," Neural Computation, vol.12, no.6, June 2000.

1-5 Supervised Learning and Model Selection

Supervised learning 1),2) estimates the relation between an input x and an output y from n training examples {(x_i, y_i)}_{i=1}^n. What matters is generalization ability: prediction accuracy on inputs outside the training set. The examples {(x_i, y_i)}_{i=1}^n are modeled as draws from a joint distribution p(x, y), independent and identically distributed (i.i.d.). When y is continuous and the target is the conditional mean E_{p(y|x)}[y], the task is called regression; when y is a class label it is classification.

The simplest approach is least-squares fitting of a linear model: with fixed basis functions {φ_j(x)}_{j=1}^t,

f_linear(x) = Σ_{j=1}^t θ_j φ_j(x)

and the parameters {θ_j}_{j=1}^t are determined by

min_{θ_j} Σ_{i=1}^n (y_i − f_linear(x_i))²

This coincides with maximum likelihood estimation under the Gaussian noise model

q_Gauss(y|x) = (1/√(2πσ²)) exp( −(y − f_linear(x))² / (2σ²) )

A flexible family of basis functions is given by radial basis functions (RBF); the RBF model is fitted by

min_{θ_j, μ_j, Σ_j} Σ_{i=1}^n (y_i − f_RBF(x_i))²,  f_RBF(x) = Σ_{j=1}^t θ_j exp( −(1/2)(x − μ_j)^T Σ_j^{−1} (x − μ_j) )

Because the RBF model depends nonlinearly on the centers {μ_j}_{j=1}^t and widths {Σ_j}_{j=1}^t as well as linearly on the coefficients {θ_j}_{j=1}^t, this optimization is harder. Fixing one basis function at each training input gives the kernel model,

min_{θ_j} Σ_{i=1}^n (y_i − f_kernel(x_i))²,  f_kernel(x) = Σ_{j=1}^n θ_j exp( −(x − x_j)^T (x − x_j) / (2σ²) )

which is again linear in its parameters {θ_j}_{j=1}^n. For classification with y = ±1, logistic regression models the label probability p(y|x) as

q_logistic(y|x) = 1 / (1 + exp(−y f_linear(x)))

with f_linear(x) as above (f_kernel(x) may be used instead), and fits {θ_j}_{j=1}^t by maximum likelihood, i.e.

min_{θ_j} Σ_{i=1}^n log(1 + exp(−y_i f_linear(x_i)))
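Since the kernel model is linear in {θ_j}, its least-squares fit reduces to solving the linear system Kθ = y with the Gram matrix K = (k(x_i, x_j)). A minimal sketch (the data, σ, and the small ridge term added for numerical stability are our choices):

```python
import numpy as np

# Least-squares fit of f_kernel(x) = sum_j theta_j k(x, x_j), Gaussian kernel.
rng = np.random.default_rng(5)
n = 50
x = np.sort(rng.uniform(-3, 3, size=n))
y = np.sin(x) + 0.1 * rng.normal(size=n)

sigma = 0.5
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))  # Gram matrix
theta = np.linalg.solve(K + 1e-6 * np.eye(n), y)  # minimizes sum (y_i - f(x_i))^2

x_test = 1.0
f_test = np.exp(-(x_test - x) ** 2 / (2 * sigma ** 2)) @ theta
print("f(1.0) =", round(f_test, 3), " sin(1.0) =", round(np.sin(1.0), 3))
```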

Maximum likelihood fitting of a conditional model q(y|x) can be viewed as approximating p(x, y) by q(y|x)p(x) in the KL divergence

KL[p(x, y) ‖ q(y|x)p(x)] = ∫ p(x, y) log [ p(x, y) / (q(y|x)p(x)) ] dx dy

which is minimized when q(y|x) equals the true conditional p(y|x). Model selection chooses among fitted models by estimating this divergence with an information criterion; the Akaike information criterion (AIC) 3) is

AIC = −2 Σ_{i=1}^n log q(y_i|x_i) + 2t

where t is the number of parameters of the model. Selecting the model with the smallest AIC approximately minimizes the KL divergence to the truth. AIC presupposes a regular statistical model; models with hidden structure such as RBF networks are singular, and their model selection requires the corrected analysis of 4).

References:
1) (reference in Japanese; not recovered)
2) (reference in Japanese; not recovered)
3) H. Akaike, "A new look at the statistical model identification," IEEE Trans. Automatic Control, vol.AC-19, no.6, Dec. 1974.
4) S. Watanabe, "Algebraic analysis for nonidentifiable learning machines," Neural Computation, vol.13, no.4, April 2001.

1-6 Statistical Learning Theory and Support Vector Machines

PAC (probably approximately correct) learning 1) formalizes learnability: a class is learnable if, from N examples, the learner outputs with probability at least 1 − δ a hypothesis whose error is at most ε, with N bounded in terms of ε and δ. The required sample size is governed by the capacity of the hypothesis class, measured by the VC (Vapnik-Chervonenkis) dimension. The support vector machine (SVM) 2) is a two-class linear classifier motivated by this theory: for input x ∈ R^m, weight vector w ∈ R^m, and bias b, the output is

y = sgn[ w^T x + b ]   (1.25)

where y ∈ {+1, −1} and sgn is the sign function. Given N examples (x_n, y_n), n = 1, …, N, the margin of the hyperplane is

min_n y_n(w^T x_n + b) / ‖w‖   (1.26)

and the SVM selects the w maximizing (1.26). Because (1.26) is unchanged when w and b are rescaled together, normalize so that the nearest examples satisfy y_n(w^T x_n + b) = 1, hence

y_n(w^T x_n + b) ≥ 1   (1.27)

for all n. Under (1.27) the margin (1.26) equals 1/‖w‖, so maximizing it is equivalent to minimizing ‖w‖²/2, and the SVM becomes a quadratic program:

min_{w,b} (1/2)‖w‖²  s.t. y_n(w^T x_n + b) ≥ 1   (1.28)

To solve (1.28), introduce Lagrange multipliers α_n ≥ 0, α = (α_1, …, α_N), and the Lagrangian

L(w, b, α) = (1/2)‖w‖² − Σ_{n=1}^N α_n [ y_n(w^T x_n + b) − 1 ]   (1.29)

Setting the derivatives of L(w, b, α) with respect to w and b to zero gives

w = Σ_{n=1}^N α_n y_n x_n,  0 = Σ_{n=1}^N α_n y_n   (1.30)

so the optimal SVM weight vector is a linear combination of the training inputs x_n. Eliminating w and b from L(w, b, α) via (1.30) turns (1.28) into the dual problem

max_{α_n ≥ 0} Σ_{n=1}^N α_n − (1/2) Σ_{n=1}^N Σ_{n'=1}^N α_n α_{n'} y_n y_{n'} x_n^T x_{n'}  s.t. Σ_{n=1}^N α_n y_n = 0   (1.31)

a quadratic program in α alone. With the solution α̂_n, n = 1, …, N, the SVM discriminant function is

y = Σ_{n=1}^N α̂_n y_n x_n^T x + b   (1.32)

where b is determined from any example with α̂_n > 0. Only the examples with α̂_n > 0 contribute to (1.32); these are the support vectors, and typically they are far fewer than N. PAC-type margin bounds relate the margin and the number of support vectors to the generalization error of the SVM.
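For small N the dual (1.31) can be handed to a general-purpose constrained optimizer. The sketch below uses scipy's SLSQP solver on a toy separable set — the solver choice and the data are our assumptions, not the text's — and recovers w and b via (1.30).

```python
import numpy as np
from scipy.optimize import minimize

# Solve the hard-margin dual (1.31) numerically, then recover w, b by (1.30).
rng = np.random.default_rng(6)
N = 20
X = np.vstack([rng.normal(-2, 0.5, (N // 2, 2)), rng.normal(2, 0.5, (N // 2, 2))])
y = np.hstack([-np.ones(N // 2), np.ones(N // 2)])

Q = (y[:, None] * y[None, :]) * (X @ X.T)          # Q_nn' = y_n y_n' x_n.x_n'
res = minimize(lambda a: 0.5 * a @ Q @ a - a.sum(),  # negated dual objective
               x0=np.zeros(N),
               jac=lambda a: Q @ a - np.ones(N),
               bounds=[(0, None)] * N,               # alpha_n >= 0
               constraints={'type': 'eq', 'fun': lambda a: a @ y},
               method='SLSQP')
alpha = res.x
w = (alpha * y) @ X                                  # w = sum alpha_n y_n x_n
sv = np.argmax(alpha)                                # index of a support vector
b = y[sv] - w @ X[sv]                                # from y_sv (w.x_sv + b) = 1
print("support vectors:", np.sum(alpha > 1e-6), " w =", np.round(w, 3))
```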

Mapping the input x by a nonlinear feature map f(·), f = f(x), and running the linear SVM on the features yields a nonlinear classifier. Writing f_n = f(x_n), the dual becomes

max_{α_n ≥ 0} Σ_{n=1}^N α_n − (1/2) Σ_{n=1}^N Σ_{n'=1}^N α_n α_{n'} y_n y_{n'} f_n^T f_{n'}  s.t. Σ_{n=1}^N α_n y_n = 0   (1.33)

and the discriminant function

y = Σ_{n=1}^N α̂_n y_n f_n^T f + b   (1.34)

Both (1.33) and (1.34) involve the features only through inner products, so it suffices to specify the kernel K(x, x') = f^T(x) f(x') instead of f(·) itself (the kernel trick). A function K(·,·) is a valid kernel iff it is positive semidefinite: for any x_n, x_{n'} ∈ R^m and c_n, c_{n'} ∈ R,

Σ_{n,n'} K(x_n, x_{n'}) c_n c_{n'} ≥ 0   (1.35)

so a kernel can be designed directly on x without exhibiting f(·). When the data are not separable even in feature space, the hard-margin SVM (1.28) has no solution; the soft-margin C-SVM relaxes the constraints.

Introduce slack variables ξ_n ≥ 0 that measure by how much each constraint is violated 2):

min_{w,b,ξ} (1/2)‖w‖² + C Σ_{n=1}^N ξ_n  s.t. y_n(w^T x_n + b) ≥ 1 − ξ_n, ξ_n ≥ 0   (1.36)

where C > 0 trades margin against violations. The dual is

max_{0 ≤ α_n ≤ C} Σ_{n=1}^N α_n − (1/2) Σ_{n=1}^N Σ_{n'=1}^N α_n α_{n'} y_n y_{n'} x_n^T x_{n'}  s.t. Σ_{n=1}^N α_n y_n = 0   (1.37)

identical to (1.31) except for the upper bound α_n ≤ C. This is the C-SVM. The ν-SVM is an equivalent reparametrization in which the margin β appears explicitly instead of the penalty constant C of the C-SVM:

min_{w,b,ξ} (1/2)‖w‖² + C Σ_{n=1}^N ξ_n − β  s.t. y_n(w^T x_n + b) ≥ β − ξ_n, ξ_n ≥ 0   (1.38)

with dual 3)

max_{0 ≤ α_n ≤ C} −(1/2) Σ_{n=1}^N Σ_{n'=1}^N α_n α_{n'} y_n y_{n'} x_n^T x_{n'}  s.t. Σ_{n=1}^N α_n y_n = 0, Σ_{n=1}^N α_n = 1   (1.39)

Compared with the C-SVM dual (1.37), the linear term disappears and the constraint Σ_n α_n = 1 appears. The geometric relation between the C-SVM and ν-SVM duals is analyzed in 4), and asymptotic statistical properties of soft-margin SVMs in 5).

SVMs extend to regression (support vector regression, SVR) via the ε-insensitive loss E_ε(·), which is zero when its argument lies within ±ε and grows linearly outside 2). [Figure: (a) the ε-insensitive loss E_ε; (b) the resulting regression tube.] SVR solves

min_{w,b} (1/2)‖w‖² + C Σ_{n=1}^N E_ε(w^T x_n + b − y_n)   (1.40)

whose dual, with multipliers α_n, α'_n for the two sides of the tube, is

max_{0 ≤ α_n, α'_n ≤ C} −(1/2) Σ_{n,n'} (α_n − α'_n)(α_{n'} − α'_{n'}) x_n^T x_{n'} − ε Σ_n (α_n + α'_n) + Σ_n (α_n − α'_n) y_n   (1.41)

For problems with more than two classes, two-class SVMs are combined: one-versus-rest trains one SVM separating each class from all others, while one-versus-one trains an SVM for every pair of classes and decides by voting. The one-class SVM 6), a relative of the ν-SVM, uses a single class of unlabeled data to estimate the support of its distribution, e.g. for outlier detection.

References:
1) L.G. Valiant, "A theory of the learnable," Commun. ACM, vol.27, Nov. 1984.

2) V.N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
3) B. Schölkopf, et al., "New support vector algorithms," Neural Computation, vol.12, no.5, May 2000.
4) K.P. Bennett and E.J. Bredensteiner, "Duality and geometry in SVM classifiers," Proc. 17th International Conference on Machine Learning, pp.57-64, 2000.
5) K. Ikeda and T. Aoishi, "An asymptotic statistical analysis of support vector machines with soft margins," Neural Networks, vol.18, April 2005.
6) B. Schölkopf, et al., "Estimating the support of a high-dimensional distribution," Neural Computation, vol.13, no.7, July 2001.

1-7 Information Geometry

Information geometry 1),2) views a family of probability distributions as a geometric object 3),4),5). The Gaussian distributions with mean m and variance σ², for example, form the two-dimensional manifold {(m, σ) ∈ R²; σ > 0} with coordinates (m, σ). In general, a statistical model {p(x|w); w}, with w ranging over a d-dimensional parameter set W, is regarded as a d-dimensional manifold with coordinate system w. At each point p the tangent space T_p collects the directions of infinitesimal change of the distribution. The Fisher information matrix g_ij defines an inner product Σ_{i,j} g_ij v_i v_j for tangent vectors v = (v_i) ∈ T_p; since g_ij is positive semidefinite, the model becomes a Riemannian manifold. The metric compares vectors within one tangent space; to relate the tangent spaces T_p and T_q at two different points p and q, one further needs an affine connection that transports vectors between T_p and T_q.

Information geometry equips a statistical manifold with a one-parameter family of affine connections, the α-connections. The cases α = 1 and α = −1 are distinguished: α = 1 gives the exponential (e-) connection and α = −1 the mixture (m-) connection, and the two are mutually dual with respect to the Fisher metric. Exponential families are flat under the e-connection and mixture families under the m-connection; on such dually flat spaces a generalized Pythagorean theorem and projection theorems hold, giving procedures such as maximum likelihood estimation a geometric meaning.

These geometric ideas bear on model selection. Criteria such as AIC and BIC choose between models by penalizing the fit with the model complexity, and their derivations assume a regular model, one whose parameters are identifiable and whose Fisher metric is nondegenerate. Hierarchical models such as neural networks violate this assumption: they contain singularities at which the Fisher information degenerates, so the standard penalties expressed through the dimension d are no longer justified there.

In regular models the AIC penalty is proportional to the dimension d, and the BIC penalty to d times the log sample size; in singular models the effective penalty generally differs, and its correct form is obtained by the algebraic-geometric methods of 4),5).

References:
1) (reference in Japanese; not recovered)
2) S. Amari and H. Nagaoka, Methods of Information Geometry, Oxford University Press, Oxford, 2000.
3) (reference in Japanese; not recovered)
4) M. Drton, B. Sturmfels, and S. Sullivant, Lectures on Algebraic Statistics, Birkhäuser, Basel, 2009.
5) S. Watanabe, Algebraic Geometry and Statistical Learning Theory, Cambridge University Press, Cambridge, 2009.

1-8 Self-Organizing Maps

The self-organizing map (SOM) was proposed by Kohonen 1). A SOM places high-dimensional data onto a low-dimensional (usually two-dimensional) lattice so that mutually similar data are assigned to nearby lattice points; it is used for visualization, clustering, and exploratory data analysis. A classic illustration maps animals described by attribute vectors — hawk, owl, duck, hen, dove, eagle, dog, fox, lion, cow, horse, zebra, cat, wolf, tiger, … — onto a two-dimensional map, where birds, predators, and grazing animals occupy contiguous regions. [Fig. 1.4: a SOM of 16 animal data; labels such as owl, dog, cow mark map regions, and cluster boundaries are visualized with the U-matrix.]

A SOM consists of M units placed on a lattice (e.g. 16 units). Unit k carries a reference vector w_k in the data space and a fixed position y_k on the lattice. Given data {x_1, …, x_N}, learning adjusts the reference vectors {w_1, …, w_M} so that each data point x_i is represented by the unit whose w_k is closest, while units adjacent on the lattice come to represent similar data. [Fig. 1.5: the data space with reference vectors w_k and the lattice with positions y_k.] The SOM is trained by repeating the following steps.

Step 0: initialize the reference vectors; set t = 0.
Step 1 (winner): for each data point x_i find the best matching unit,

k*(i) = arg min_k ‖w_k − x_i‖

Step 2 (neighborhood weights): for each unit k and data point i compute

α_{k,i} = h(‖y_k − y_{k*(i)}‖; σ(t)) / Σ_{i'=1}^N h(‖y_k − y_{k*(i')}‖; σ(t))   (1.42)

where h(d; σ) is a neighborhood function decreasing in the lattice distance d with width σ(t); σ(t) is shrunk as the iteration count t grows.

Step 3 (update): move the reference vectors toward weighted means of the data,

w_k(t) = (1 − ε) w_k(t−1) + ε Σ_{i=1}^N α_{k,i} x_i   (1.43)

where ε (0 < ε ≤ 1) is a learning rate; ε = 1 gives the batch SOM update. Set t := t + 1 and return to Step 1.

In online variants of the SOM, ε is also decreased with t. Shrinking σ(t) anneals the map: early iterations with a wide neighborhood order the map globally, later iterations fine-tune each unit, and the normalization in (1.42) makes each w_k a weighted average of the data that it and its lattice neighbors represent.
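A minimal sketch of the batch SOM (ε = 1), with a Gaussian neighborhood for h(d; σ) and an exponential schedule for σ(t); the map size, the data, and the schedule are our choices.

```python
import numpy as np

# Batch SOM: winners, Gaussian neighborhood weights (1.42), update (1.43).
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 3))                 # N=200 data points in R^3
side = 8                                      # 8x8 map, M = 64 units
ys = np.array([[a, b] for a in range(side) for b in range(side)], float)
W = rng.normal(size=(side * side, 3))         # reference vectors w_k

for t in range(30):
    sigma = 3.0 * (0.5 / 3.0) ** (t / 29)     # shrink neighborhood width
    k_star = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
    d2 = ((ys[:, None, :] - ys[k_star][None, :, :]) ** 2).sum(-1)  # (M, N)
    h = np.exp(-d2 / (2 * sigma ** 2))        # h(|y_k - y_k*(i)|; sigma)
    alpha = h / h.sum(axis=1, keepdims=True)  # normalize over data (1.42)
    W = alpha @ X                             # batch update, epsilon = 1 (1.43)
print("first reference vector:", np.round(W[0], 3))
```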

The original SOM has no global objective function, which complicates its analysis 2); probabilistic reformulations include the generative topographic mapping (GTM) 3) and kernel-based maximum entropy learning (kMER) 2). Many extensions of the SOM exist: the adaptive subspace SOM (ASSOM) replaces each reference vector with a local subspace, and the self-organizing operator map attaches an operator to each unit. Dropping the lattice reduces the SOM to vector quantization such as k-means, while the neural gas 4) learns the neighborhood relation from the data instead of fixing a lattice. Public-domain implementations such as SOM_PAK helped make the SOM one of the most widely applied neural algorithms 1).

References:
1) T. Kohonen, Self-Organizing Maps (Japanese edition; publication details not recovered).
2) (reference in Japanese; not recovered)
3) C.M. Bishop, M. Svensén, and C.K.I. Williams, "GTM: The generative topographic mapping," Neural Computation, vol.10, no.1, Jan. 1998.
4) T.M. Martinetz, S.G. Berkovich, and K.J. Schulten, "'Neural-gas' network for vector quantization and its application to time-series prediction," IEEE Trans. Neural Networks, vol.4, no.4, July 1993.

1-9 Matrix Factorization Methods

Collect N data vectors x_1, …, x_N ∈ R^n as the rows of the N × n matrix X, whose i-th row is x_i^T. Matrix factorization approximates X by the product of an N × m matrix U and an m × n matrix V:

X ≈ U V   (1.44)

With m = n, X can be reproduced exactly, so the interesting case is m < n: then UV is a low-rank approximation, and each row of X is approximated as x_i^T ≈ u_i^T V, i.e. the data point x_i is summarized by the m-dimensional coefficient vector u_i on the common basis formed by the rows of V. The factorization is not unique: for any invertible m × m matrix W, UV = (UW)(W^{−1}V), so U' = UW and V' = W^{−1}V give the same approximation, and extra criteria are needed to fix U and V. The basic tool is the singular value decomposition (SVD), which factors the N × n matrix X using an N × n matrix U, an n × n diagonal matrix Λ, and an n × n matrix V as

X = U Λ V^T,  U^T U = I,  V^T V = I,  Λ = diag(λ_1, …, λ_n),  λ_1 ≥ ⋯ ≥ λ_n ≥ 0   (1.45)

The λ_i are the singular values of X. Retaining only the m largest singular values — with Ũ, Ṽ the corresponding m columns of U, V and Λ̃ = diag(λ_1, …, λ_m) — gives

X̃ = (Ũ Λ̃) Ṽ^T   (1.46)

a factorization of the form (1.44) with the N × m factor Ũ Λ̃ and the m × n factor Ṽ^T. Among all matrices of rank m, X̃ = (X̃_ij) minimizes the squared error Σ_ij (X̃_ij − X_ij)² to X = (X_ij). Principal component analysis (PCA) is this best rank-m approximation applied to centered data: first subtract the mean x̄ = (Σ_{i=1}^N x_i)/N from each x_i. The component scores Ũ Λ̃ are obtained by projection,

Ũ Λ̃ = X Ṽ   (1.47)

so a data point x is represented in m dimensions by Ṽ^T x: the columns of the n × m matrix Ṽ span the principal subspace, and Ṽ^T x gives the principal components of x.
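A minimal sketch of PCA through the SVD (1.45)-(1.47): center, truncate to m singular values, and project; the toy data are our own.

```python
import numpy as np

# PCA via the SVD: X = U Lam V^T (1.45), scores U~ Lam~ = X V~ (1.47).
rng = np.random.default_rng(8)
N, n, m = 500, 5, 2
latent = rng.normal(size=(N, m))
X = latent @ rng.normal(size=(m, n)) + 0.05 * rng.normal(size=(N, n))

X = X - X.mean(axis=0)                    # subtract the mean vector
U, lam, Vt = np.linalg.svd(X, full_matrices=False)
V_tilde = Vt[:m].T                        # first m right singular vectors
scores = X @ V_tilde                      # m-dimensional representation

X_tilde = scores @ V_tilde.T              # best rank-m approximation (1.46)
print("singular values:", np.round(lam, 2))
print("rank-m squared error:", round(np.sum((X_tilde - X) ** 2), 4))
```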

The principal subspace can also be learned online from streaming samples x 1):

W_{t+1} = W_t + γ x x^T W_t,  Ṽ_t = W_t (W_t^T W_t)^{−1/2}   (1.48)

where W_t is an n × m matrix, W_0 is initialized randomly, and t = 0, 1, 2, … counts samples. The first update is stochastic gradient ascent on E[tr(W^T x x^T W)] with the expectation E[·] replaced by the current sample term x x^T W_t, and the second orthonormalizes the columns to produce the estimate Ṽ_t 2).

To choose the number of components m, a common rule keeps the smallest m whose cumulative contribution ratio Σ_{i=1}^m λ_i² / Σ_{j=1}^n λ_j² exceeds a preset level; information criteria such as AIC and MDL have also been applied, with m playing the role of the model order and n fixed.

PCA also arises from a latent variable model: with an m-dimensional latent variable u and an observation noise vector n,

x = Ṽ u + n   (1.49)

Estimating the latent representation u and the basis Ṽ by maximum likelihood under Gaussian assumptions can be done by Newton or EM iterations 4); allowing each coordinate of the noise its own variance turns the model into factor analysis 5).

In the Gaussian latent model, Ṽ is determined only up to rotation: in (1.46) the score matrix Ũ Λ̃ plays the role of the latent coordinates u, and for any orthogonal m × m matrix W,

x = (Ṽ W^T)(W u) + n   (1.50)

defines the same distribution as (1.49), so second-order statistics cannot resolve the rotation W. Independent component analysis (ICA) removes this indeterminacy by assuming that the m latent components u_1, u_2, …, u_m are statistically independent,

p(u_1, u_2, …, u_m) = p(u_1) p(u_2) ⋯ p(u_m)

and non-Gaussian. ICA estimates the separating matrix W so that the components of u = Wx become independent, minimizing the Kullback-Leibler divergence between the joint density p(u_1, u_2, …, u_m) and the product p(u_1)p(u_2)⋯p(u_m). The resulting learning rule is

W_{t+1} = W_t + γ ( I − E[v(u) u^T] ) W_t,  v(u) = ( −∂ log p(u_i)/∂u_i )_{i=1,…,m}   (1.51)

where the expectation E[·] is replaced in practice by a sample average. Because the update multiplies W_t from the left, it respects the multiplicative (Lie-group) structure of the space of invertible matrices and is the natural gradient on that space 6).
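A minimal sketch of the natural-gradient update (1.51) on a two-source toy problem; replacing E[·] by minibatch averages and taking v(u) = tanh(u) — a common choice for super-Gaussian sources, matching the Laplace sources below — are our assumptions.

```python
import numpy as np

# Natural-gradient ICA (1.51) with v(u) = tanh(u) and minibatch averages.
rng = np.random.default_rng(9)
T = 20000
S = rng.laplace(size=(T, 2))            # independent non-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # unknown mixing matrix
X = S @ A.T                             # observed mixtures

W = np.eye(2)
gamma, batch = 0.01, 100
for epoch in range(5):                  # a few passes over the data
    for t in range(0, T, batch):
        U = X[t:t + batch] @ W.T        # current unmixed signals u = W x
        EvuT = (np.tanh(U).T @ U) / batch       # estimate of E[v(u) u^T]
        W += gamma * (np.eye(2) - EvuT) @ W     # update (1.51)
print("W A (approximately diagonal up to scale/permutation):\n",
      np.round(W @ A, 3))
```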

When the source densities p(u_i) are unknown, fixed nonlinearities chosen according to the sign of the fourth-order cumulant (kurtosis) κ_4 = E[u_i⁴] − 3E[u_i²]² are used for v in (1.51).

Nonnegative matrix factorization (NMF) 7) instead constrains both factors of X ≈ UV in (1.44) to be entrywise nonnegative, which suits data such as counts, spectra, and images and tends to produce parts-based representations; simple multiplicative update algorithms compute the factors 7).

Another extension is kernel PCA: map x nonlinearly to a feature vector s(x) in a high-dimensional space and perform PCA there 8). All required computations involve only inner products k(x, x') = s(x)·s(x'), so given data x_1, …, x_N it suffices to work with the Gram matrix K = (k(x_i, x_j))_{i,j=1,…,N} rather than with s(x) itself.

References:
1) E. Oja, "Principal components, minor components, and linear neural networks," Neural Networks, vol.5, Nov.-Dec. 1992.
2) (reference in Japanese, vol.43, no.11; not recovered)

3) (reference in Japanese, vol.49, no.1; not recovered)
4) S. Akaho, "The e-PCA and m-PCA: dimension reduction of parameters by information geometry," Proc. IEEE International Joint Conference on Neural Networks, vol.1, July 2004.
5) C. Bishop, Pattern Recognition and Machine Learning, Springer-Verlag, New York, 2006 (Japanese translation, 2007).
6) Y. Nishimori and S. Akaho, "Learning algorithms utilizing quasi-geodesic flows on the Stiefel manifold," Neurocomputing, vol.67, Aug. 2005.
7) D.D. Lee and H.S. Seung, "Algorithms for non-negative matrix factorization," Advances in Neural Information Processing Systems, vol.13, MIT Press, Cambridge, 2001.
8) (reference in Japanese; not recovered)
9) (reference in Japanese; not recovered)

1-10 Ensemble Learning

Combining many simple learners — an ensemble — can yield higher accuracy than any single learner while limiting over-training (over-fitting). A related divide-and-conquer idea appears in clustering, which partitions data into groups: hard clustering assigns each point to exactly one cluster, soft clustering assigns graded memberships. Representative algorithms include the k-means clustering method, learning vector quantization (LVQ), and the self-organizing map (SOM). The mixture of experts (MoE) described below applies the same soft-partitioning idea to supervised learning.

Tree-structured predictors such as CART (classification and regression trees) are common base learners, and bagging and boosting are standard ways of combining them 1),2). This section describes the mixture of experts and boosting.

Mixture of Experts. The MoE 3), with its hierarchical extension 4), divides a supervised problem among several expert models and is trained with the EM algorithm. Given data D = {(x_i, y_i); i = 1, …, n} relating input x to output y, expert k models the conditional distribution P(y|x; θ_k) with parameter θ_k — for example a GLM (generalized linear model) — and a gating network with parameter ξ outputs the probability P(k|x; ξ) of selecting expert k for input x. The MoE models y given x by

P(y|x; {θ_k}, ξ) = Σ_k P(k|x; ξ) P(y|x; θ_k)

The parameters {θ_k}, ξ are fitted to D by maximum likelihood with EM: at iteration t, compute the posterior responsibility of expert k for each example,

P(k|x, y) = P(k|x; ξ^(t)) P(y|x; θ_k^(t)) / Σ_k P(k|x; ξ^(t)) P(y|x; θ_k^(t))

then update the experts,

θ_k^(t+1) = arg max_{θ_k} Σ_{(x,y)∈D} P(k|x, y) log P(y|x; θ_k)

and the gating network,

ξ^(t+1) = arg max_ξ Σ_{(x,y)∈D} Σ_k P(k|x, y) log P(k|x; ξ)

both weighted by the responsibilities P(k|x, y) computed from the iteration-(t) parameters. The hierarchical MoE arranges the gating as a tree of nested gates 4).

Boosting 5) converts weak learners into a strong one. AdaBoost 6) is the standard algorithm for two-class problems: inputs x ∈ X carry labels y ∈ {+1, −1}, the data are D = {(x_i, y_i); i = 1, …, n}, and each weak hypothesis h(x) from a class F outputs ±1. A weight D_i is maintained for each example (x_i, y_i). AdaBoost initializes D_i^(1) = 1/n and repeats for t = 1, …, T:

1. Choose from F the hypothesis h^(t) minimizing the weighted error
   ε^(t)(h) = Σ_{i: h(x_i) ≠ y_i} D_i^(t),  and set ε^(t) = ε^(t)(h^(t)).
2. Set the combination coefficient
   α^(t) = (1/2) ln( (1 − ε^(t)) / ε^(t) )
3. Reweight the examples,
   D_i^(t+1) = D_i^(t) exp( −α^(t) y_i h^(t)(x_i) ) / Z
   with Z chosen so that Σ_i D_i^(t+1) = 1.

The final classifier is the weighted vote

H(x) = sign( Σ_{t=1}^T α^(t) h^(t)(x) )

Step 3 increases the weights of examples the current hypothesis misclassifies (h(x_i) ≠ y_i) and decreases those it classifies correctly (h(x_i) = y_i), so each round concentrates on the examples still in error.
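A minimal sketch of AdaBoost using decision stumps (thresholds on a single feature) as the class F; the one-dimensional toy data and the stump grid are our choices.

```python
import numpy as np

# AdaBoost with decision stumps on a 1-D toy problem.
rng = np.random.default_rng(10)
n = 200
x = rng.uniform(-1, 1, size=n)
y = np.where(np.abs(x) < 0.5, 1.0, -1.0)   # not learnable by one stump

D = np.full(n, 1.0 / n)                    # D_i^(1) = 1/n
stumps, alphas = [], []
for t in range(50):
    best = None
    for thr in np.linspace(-1, 1, 41):     # search the stump family F
        for s in (1.0, -1.0):
            pred = s * np.sign(x - thr + 1e-12)
            eps = D[pred != y].sum()       # weighted error eps^(t)(h)
            if best is None or eps < best[0]:
                best = (eps, thr, s, pred)
    eps, thr, s, pred = best
    alpha = 0.5 * np.log((1 - eps) / eps)  # alpha^(t)
    D = D * np.exp(-alpha * y * pred)
    D /= D.sum()                           # renormalize so sum_i D_i = 1
    stumps.append((thr, s)); alphas.append(alpha)

F = sum(a * s * np.sign(x - thr + 1e-12) for a, (thr, s) in zip(alphas, stumps))
print("training error of H(x):", np.mean(np.sign(F) != y))
```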

A weak learner need only beat random guessing slightly; a popular choice is the decision stump, which thresholds a single feature. Boosting then drives the training error of the combined classifier down rapidly — in this sense it "boosts" arbitrarily weak learners into a strong one.

References:
1) (reference in Japanese; not recovered)
2) C.M. Bishop, Pattern Recognition and Machine Learning, Springer, Berlin, 2006.
3) R.A. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton, "Adaptive mixtures of local experts," Neural Computation, vol.3, no.1, Spring 1991.
4) M.I. Jordan and R.A. Jacobs, "Hierarchical mixtures of experts and the EM algorithm," Neural Computation, vol.6, no.2, March 1994.
5) R.E. Schapire, "The strength of weak learnability," Machine Learning, vol.5, June 1990.
6) Y. Freund and R.E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," J. Comput. Syst. Sci., vol.55, no.1, Aug. 1997.

1-11 Bayesian Learning and Nonparametric Bayesian Models

Bayesian learning places a prior distribution p(θ) on the parameter θ; given data D = {x_1, …, x_N}, the posterior is

p(θ|D) ∝ p(D|θ) p(θ)   (1.52)

where p(D|θ) is the likelihood of D. Prediction of a new point x averages the model over the posterior,

p(x|D) = ∫ p(x|θ) p(θ|D) dθ   (1.53)

For a mixture of K components p(x|θ^(1)), …, p(x|θ^(K)), place a prior p(θ^(j)|α), j = 1, …, K, on each component parameter and a hyperprior p(α) on α; with θ = (θ^(1), …, θ^(K)) and D_j the data assigned to component j,

p(θ, α|D) ∝ Π_{j=1}^K p(D_j|θ^(j)) p(θ^(j)|α) p(α)   (1.54)

A difficulty is that the number of components K must be chosen in advance. The Dirichlet process (DP) 1) avoids this by defining a prior directly on probability distributions — a distribution over distributions. For every partition of the sample space into K sets A_1, …, A_K, consider the probabilities P(A_1), …, P(A_K); a random distribution G is a DP if for every K and every partition the vector G = (P(A_1), …, P(A_K)) obeys

G ~ Dirichlet(G; γG_0(A_1), …, γG_0(A_K))   (1.55)

written G ~ DP(γ, G_0), where G_0 is the base distribution and γ > 0 the concentration parameter. That is, for any K-partition the probabilities (with P(A_1) + ⋯ + P(A_K) = 1) are Dirichlet-distributed with parameters {γG_0(A_k)}_{k=1}^K.

Drawing component parameters θ from a DP-distributed G gives the Dirichlet process mixture (DPM), a mixture model whose number of components K is not fixed in advance. Marginalizing out G, the parameters θ_i attached to successive data points x_i, i = 1, 2, …, can be generated sequentially 2):

1. θ_1 ~ G_0(θ)
2. θ_2 ~ (1/(γ+1)) δ_{θ_1}(θ) + (γ/(γ+1)) G_0(θ)
   ⋮
i. θ_i ~ (1/(γ+i−1)) δ_{θ_1}(θ) + ⋯ + (1/(γ+i−1)) δ_{θ_{i−1}}(θ) + (γ/(γ+i−1)) G_0(θ)

Here x ~ p(x) means that x is drawn from p(x), and δ_x(y) equals 1 when x = y and 0 otherwise (a point mass at x). Thus θ_i repeats each earlier value θ_j (j < i) with probability 1/(γ+i−1) and is a fresh draw from G_0 with probability γ/(γ+i−1). Because values repeat, the parameters θ_1, …, θ_i take only K ≤ i distinct values θ^(1), …, θ^(K); larger γ makes a new value θ^(K+1) more likely, and under a DP the number of distinct values among the first i draws grows like log i. Distinct values define clusters: θ_i = θ_j means that points i and j share a component, and (for continuous G_0) θ^(k) ≠ θ^(l) for k ≠ l. Introduce the indicator z_i of the component of x_i, with z_i ∈ {1, …, K} and θ_i = θ^(z_i); the DP induces on the indicators the sequential distribution

P(z_i = k | z_1, …, z_{i−1}) = m_k/(γ+i−1) if m_k > 0;  γ/(γ+i−1) if m_k = 0   (1.56)

where m_k is the number of earlier points assigned to component k, and picking a k with m_k = 0 creates a new component, K → K+1. Equation (1.56) is pictured as customers i entering a restaurant and choosing table k with probability proportional to its occupancy m_k, or a new table with weight γ; hence the name Chinese restaurant process (CRP) 2).
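A minimal sketch of drawing assignments z_1, …, z_N from (1.56); γ and N are arbitrary illustrative values.

```python
import numpy as np

# Sample table assignments from the Chinese restaurant process (1.56).
rng = np.random.default_rng(11)
gamma, N = 2.0, 1000
counts = []                                  # m_k for the existing tables
z = []
for i in range(1, N + 1):
    probs = np.array(counts + [gamma]) / (gamma + i - 1)  # (1.56)
    k = rng.choice(len(probs), p=probs)
    if k == len(counts):                     # m_k = 0: open a new table
        counts.append(1)
    else:
        counts[k] += 1
    z.append(k)
print("number of tables K:", len(counts))    # grows roughly like gamma*log N
print("occupancies of first tables:", counts[:5])
```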

The CRP has the property of exchangeability: although defined sequentially, the joint probability

P(z_1, …, z_N) = P(z_1) P(z_2|z_1) ⋯ P(z_N|z_1, z_2, …, z_{N−1})   (1.57)

is invariant under permutations of the z_i. Exchangeability makes Gibbs sampling of the DPM posterior P(z_1, …, z_N|D) convenient: each z_i can be resampled as if observation i arrived last, combining (1.56) with the likelihood. Gibbs sampling over Z = (z_1, …, z_N) given D is the standard inference procedure for DPM models 2). Besides the DP, the Gaussian process (GP) is the other widely used nonparametric Bayesian model 3): where the DP frees a mixture from a fixed number of components K, the GP puts a prior directly on functions, with K in effect infinite 3).

References:
1) T.S. Ferguson, "A Bayesian analysis of some nonparametric problems," The Annals of Statistics, vol.1, no.2, March 1973.
2) (reference in Japanese, vol.17, no.3, Sep.; not recovered)
3) C.E. Rasmussen and C.K.I. Williams, Gaussian Processes for Machine Learning, MIT Press, Cambridge, 2006.

18 2 20 W/C W/C W/C 4-4-1 0.05 1.0 1000 1. 1 1.1 1 1.2 3 2. 4 2.1 4 (1) 4 (2) 4 2.2 5 (1) 5 (2) 5 2.3 7 3. 8 3.1 8 3.2 ( ) 11 3.3 11 (1) 12 (2) 12 4. 14 4.1 14 4.2 14 (1) 15 (2) 16 (3) 17 4.3 17 5. 19

More information

x T = (x 1,, x M ) x T x M K C 1,, C K 22 x w y 1: 2 2

x T = (x 1,, x M ) x T x M K C 1,, C K 22 x w y 1: 2 2 Takio Kurita Neurosceince Research Institute, National Institute of Advanced Indastrial Science and Technology takio-kurita@aistgojp (Support Vector Machine, SVM) 1 (Support Vector Machine, SVM) ( ) 2

More information

http://www.ieice-hbkb.org/ 2 2 1 2 -- 2 -- 1 1--1 2011 2 26 26 1 1 AD 1 1 1-2 1-3 c 2013 2/(19)

http://www.ieice-hbkb.org/ 2 2 1 2 -- 2 -- 1 1--1 2011 2 26 26 1 1 AD 1 1 1-2 1-3 c 2013 2/(19) 2 -- 2 1 2011 2 1-1 1-2 SVM Boosting 1-3 1-4 c 2013 1/(19) http://www.ieice-hbkb.org/ 2 2 1 2 -- 2 -- 1 1--1 2011 2 26 26 1 1 AD 1 1 1-2 1-3 c 2013 2/(19) 1-4 c 2013 3/(19) 2 -- 2 -- 1 1--2 2011 2 Principal

More information

Introduction of Self-Organizing Map * 1 Ver. 1.00.00 (2017 6 3 ) *1 E-mail: furukawa@brain.kyutech.ac.jp i 1 1 1.1................................ 2 1.2...................................... 4 1.3.......................

More information

& 3 3 ' ' (., (Pixel), (Light Intensity) (Random Variable). (Joint Probability). V., V = {,,, V }. i x i x = (x, x,, x V ) T. x i i (State Variable),

& 3 3 ' ' (., (Pixel), (Light Intensity) (Random Variable). (Joint Probability). V., V = {,,, V }. i x i x = (x, x,, x V ) T. x i i (State Variable), .... Deeping and Expansion of Large-Scale Random Fields and Probabilistic Image Processing Kazuyuki Tanaka The mathematical frameworks of probabilistic image processing are formulated by means of Markov

More information

untitled

untitled K-Means 1 5 2 K-Means 7 2.1 K-Means.............................. 7 2.2 K-Means.......................... 8 2.3................... 9 3 K-Means 11 3.1.................................. 11 3.2..................................

More information

1 IDC Wo rldwide Business Analytics Technology and Services 2013-2017 Forecast 2 24 http://www.soumu.go.jp/johotsusintokei/whitepaper/ja/h24/pdf/n2010000.pdf 3 Manyika, J., Chui, M., Brown, B., Bughin,

More information

aca-mk23.dvi

aca-mk23.dvi E-Mail: matsu@nanzan-u.ac.jp [13] [13] 2 ( ) n-gram 1 100 ( ) (Google ) [13] (Breiman[3] ) [13] (Friedman[5, 6]) 2 2.1 [13] 10 20 200 11 10 110 6 10 60 [13] 1: (1892-1927) (1888-1948) (1867-1916) (1862-1922)

More information

ばらつき抑制のための確率最適制御

ばらつき抑制のための確率最適制御 ( ) http://wwwhayanuemnagoya-uacjp/ fujimoto/ 2011 3 9 11 ( ) 2011/03/09-11 1 / 46 Outline 1 2 3 4 5 ( ) 2011/03/09-11 2 / 46 Outline 1 2 3 4 5 ( ) 2011/03/09-11 3 / 46 (1/2) r + Controller - u Plant y

More information

Convolutional Neural Network A Graduation Thesis of College of Engineering, Chubu University Investigation of feature extraction by Convolution

Convolutional Neural Network A Graduation Thesis of College of Engineering, Chubu University Investigation of feature extraction by Convolution Convolutional Neural Network 2014 3 A Graduation Thesis of College of Engineering, Chubu University Investigation of feature extraction by Convolutional Neural Network Fukui Hiroshi 1940 1980 [1] 90 3

More information

untitled

untitled 2007 55 2 255 268 c 2007 2007 1 24 2007 10 30 k 10 200 11 110 6 60 3 1. 1 19 Mendenhall 1887 Dickens, 1812 1870 Thackeray, 1811 1863 Mill, 1806 1873 1960 610 0394 1 3 256 55 2 2007 Sebastiani 2002 k k

More information

IS1-09 第 回画像センシングシンポジウム, 横浜,14 年 6 月 2 Hough Forest Hough Forest[6] Random Forest( [5]) Random Forest Hough Forest Hough Forest 2.1 Hough Forest 1 2.2

IS1-09 第 回画像センシングシンポジウム, 横浜,14 年 6 月 2 Hough Forest Hough Forest[6] Random Forest( [5]) Random Forest Hough Forest Hough Forest 2.1 Hough Forest 1 2.2 IS1-09 第 回画像センシングシンポジウム, 横浜,14 年 6 月 MI-Hough Forest () E-mail: ym@vision.cs.chubu.ac.jphf@cs.chubu.ac.jp Abstract Hough Forest Random Forest MI-Hough Forest Multiple Instance Learning Bag Hough Forest

More information

Duality in Bayesian prediction and its implication

Duality in Bayesian prediction and its implication $\theta$ 1860 2013 104-119 104 Duality in Bayesian prediction and its implication Toshio Ohnishi and Takemi Yanagimotob) a) Faculty of Economics, Kyushu University b) Department of Industrial and Systems

More information

main.dvi

main.dvi CDMA 1 CDMA ( ) CDMA CDMA CDMA 1 ( ) Hopfield [1] Hopfield 1 E-mail: okada@brain.riken.go.jp 1 1: 1 [] Hopfield Sourlas Hopfield [3] Sourlas 1? CDMA.1 DS/BPSK CDMA (Direct Sequence; DS) (Binary Phase-Shift-Keying;

More information

2008 : 80725872 1 2 2 3 2.1.......................................... 3 2.2....................................... 3 2.3......................................... 4 2.4 ()..................................

More information

2007/8 Vol. J90 D No. 8 Stauffer [7] 2 2 I 1 I 2 2 (I 1(x),I 2(x)) 2 [13] I 2 = CI 1 (C >0) (I 1,I 2) (I 1,I 2) Field Monitoring Server

2007/8 Vol. J90 D No. 8 Stauffer [7] 2 2 I 1 I 2 2 (I 1(x),I 2(x)) 2 [13] I 2 = CI 1 (C >0) (I 1,I 2) (I 1,I 2) Field Monitoring Server a) Change Detection Using Joint Intensity Histogram Yasuyo KITA a) 2 (0 255) (I 1 (x),i 2 (x)) I 2 = CI 1 (C>0) (I 1,I 2 ) (I 1,I 2 ) 2 1. [1] 2 [2] [3] [5] [6] [8] Intelligent Systems Research Institute,

More information

0.,,., m Euclid m m. 2.., M., M R 2 ψ. ψ,, R 2 M.,, (x 1 (),, x m ()) R m. 2 M, R f. M (x 1,, x m ), f (x 1,, x m ) f(x 1,, x m ). f ( ). x i : M R.,,

0.,,., m Euclid m m. 2.., M., M R 2 ψ. ψ,, R 2 M.,, (x 1 (),, x m ()) R m. 2 M, R f. M (x 1,, x m ), f (x 1,, x m ) f(x 1,, x m ). f ( ). x i : M R.,, 2012 10 13 1,,,.,,.,.,,. 2?.,,. 1,, 1. (θ, φ), θ, φ (0, π),, (0, 2π). 1 0.,,., m Euclid m m. 2.., M., M R 2 ψ. ψ,, R 2 M.,, (x 1 (),, x m ()) R m. 2 M, R f. M (x 1,, x m ), f (x 1,, x m ) f(x 1,, x m ).

More information

21 Pitman-Yor Pitman- Yor [7] n -gram W w n-gram G Pitman-Yor P Y (d, θ, G 0 ) (1) G P Y (d, θ, G 0 ) (1) Pitman-Yor d, θ, G 0 d 0 d 1 θ Pitman-Yor G

21 Pitman-Yor Pitman- Yor [7] n -gram W w n-gram G Pitman-Yor P Y (d, θ, G 0 ) (1) G P Y (d, θ, G 0 ) (1) Pitman-Yor d, θ, G 0 d 0 d 1 θ Pitman-Yor G ol2013-nl-214 No6 1,a) 2,b) n-gram 1 M [1] (TG: Tree ubstitution Grammar) [2], [3] TG TG 1 2 a) ohno@ilabdoshishaacjp b) khatano@maildoshishaacjp [4], [5] [6] 2 Pitman-Yor 3 Pitman-Yor 1 21 Pitman-Yor

More information

Dirichlet process mixture Dirichlet process mixture 2 /40 MIRU2008 :

Dirichlet process mixture Dirichlet process mixture 2 /40 MIRU2008 : Dirichlet Process : joint work with: Max Welling (UC Irvine), Yee Whye Teh (UCL, Gatsby) http://kenichi.kurihara.googlepages.com/miru_workshop.pdf 1 /40 MIRU2008 : Dirichlet process mixture Dirichlet process

More information

On the Limited Sample Effect of the Optimum Classifier by Bayesian Approach he Case of Independent Sample Size for Each Class Xuexian HA, etsushi WAKA

On the Limited Sample Effect of the Optimum Classifier by Bayesian Approach he Case of Independent Sample Size for Each Class Xuexian HA, etsushi WAKA Journal Article / 学術雑誌論文 ベイズアプローチによる最適識別系の有限 標本効果に関する考察 : 学習標本の大きさ がクラス間で異なる場合 (< 論文小特集 > パ ターン認識のための学習 : 基礎と応用 On the limited sample effect of bayesian approach : the case of each class 韓, 雪仙 ; 若林, 哲史

More information

Accuracy Improvement by Compound Discriminant Functions for Resembling Character Recognition Takashi NAKAJIMA, Tetsushi WAKABAYASHI, Fumitaka KIMURA,

Accuracy Improvement by Compound Discriminant Functions for Resembling Character Recognition Takashi NAKAJIMA, Tetsushi WAKABAYASHI, Fumitaka KIMURA, Journal Article / 学 術 雑 誌 論 文 混 合 識 別 関 数 による 類 似 文 字 認 識 の 高 精 度 化 Accuracy improvement by compoun for resembling character recogn 中 嶋, 孝 ; 若 林, 哲 史 ; 木 村, 文 隆 ; 三 宅, 康 二 Nakajima, Takashi; Wakabayashi,

More information

untitled

untitled 1 n m (ICA = independent component analysis) BSS (= blind source separation) : s(t) =(s 1 (t),...,s n (t)) R n : x(t) =(x 1 (t),...,x n (t)) R m 1 i s i (t) a ji R j 2 (A =(a ji )) x(t) =As(t) (1) n =

More information

{x 1 -x 4, x 2 -x 5, x 3 -x 6 }={X, Y, Z} {X, Y, Z} EEC EIC Freeman (4) ANN Artificial Neural Network ANN Freeman mesoscopicscale 2.2 {X, Y, Z} X a (t

{x 1 -x 4, x 2 -x 5, x 3 -x 6 }={X, Y, Z} {X, Y, Z} EEC EIC Freeman (4) ANN Artificial Neural Network ANN Freeman mesoscopicscale 2.2 {X, Y, Z} X a (t ( ) No. 4-69 71 5 (5-5) *1 A Coupled Nonlinear Oscillator Model for Emergent Systems (2nd Report, Spatiotemporal Coupled Lorenz Model-based Subsystem) Tetsuji EMURA *2 *2 College of Human Sciences, Kinjo

More information

…p…^†[…fiflF”¯ Pattern Recognition

…p…^†[…fiflF”¯   Pattern Recognition Pattern Recognition Shin ichi Satoh National Institute of Informatics June 11, 2019 (Support Vector Machines) (Support Vector Machines: SVM) SVM Vladimir N. Vapnik and Alexey Ya. Chervonenkis 1963 SVM

More information

:EM,,. 4 EM. EM Finch, (AIC)., ( ), ( ), Web,,.,., [1].,. 2010,,,, 5 [2]., 16,000.,..,,. (,, )..,,. (socio-dynamics) [3, 4]. Weidlich Haag.

:EM,,. 4 EM. EM Finch, (AIC)., ( ), ( ), Web,,.,., [1].,. 2010,,,, 5 [2]., 16,000.,..,,. (,, )..,,. (socio-dynamics) [3, 4]. Weidlich Haag. :EM,,. 4 EM. EM Finch, (AIC)., ( ), ( ),. 1. 1990. Web,,.,., [1].,. 2010,,,, 5 [2]., 16,000.,..,,. (,, )..,,. (socio-dynamics) [3, 4]. Weidlich Haag. [5]. 606-8501,, TEL:075-753-5515, FAX:075-753-4919,

More information

it-ken_open.key

it-ken_open.key 深層学習技術の進展 ImageNet Classification 画像認識 音声認識 自然言語処理 機械翻訳 深層学習技術は これらの分野において 特に圧倒的な強みを見せている Figure (Left) Eight ILSVRC-2010 test Deep images and the cited4: from: ``ImageNet Classification with Networks et

More information

Grund.dvi

Grund.dvi 24 24 23 411M133 i 1 1 1.1........................................ 1 2 4 2.1...................................... 4 2.2.................................. 6 2.2.1........................... 6 2.2.2 viterbi...........................

More information

ii

ii I05-010 : 19 1 ii k + 1 2 DS 198 20 32 1 1 iii ii iv v vi 1 1 2 2 3 3 3.1.................................... 3 3.2............................. 4 3.3.............................. 6 3.4.......................................

More information

Computational Semantics 1 category specificity Warrington (1975); Warrington & Shallice (1979, 1984) 2 basic level superiority 3 super-ordinate catego

Computational Semantics 1 category specificity Warrington (1975); Warrington & Shallice (1979, 1984) 2 basic level superiority 3 super-ordinate catego Computational Semantics 1 category specificity Warrington (1975); Warrington & Shallice (1979, 1984) 2 basic level superiority 3 super-ordinate category preservation 1 / 13 analogy by vector space Figure

More information

Part. 4. () 4.. () 4.. 3 5. 5 5.. 5 5.. 6 5.3. 7 Part 3. 8 6. 8 6.. 8 6.. 8 7. 8 7.. 8 7.. 3 8. 3 9., 34 9.. 34 9.. 37 9.3. 39. 4.. 4.. 43. 46.. 46..

Part. 4. () 4.. () 4.. 3 5. 5 5.. 5 5.. 6 5.3. 7 Part 3. 8 6. 8 6.. 8 6.. 8 7. 8 7.. 8 7.. 3 8. 3 9., 34 9.. 34 9.. 37 9.3. 39. 4.. 4.. 43. 46.. 46.. Cotets 6 6 : 6 6 6 6 6 6 7 7 7 Part. 8. 8.. 8.. 9..... 3. 3 3.. 3 3.. 7 3.3. 8 Part. 4. () 4.. () 4.. 3 5. 5 5.. 5 5.. 6 5.3. 7 Part 3. 8 6. 8 6.. 8 6.. 8 7. 8 7.. 8 7.. 3 8. 3 9., 34 9.. 34 9.. 37 9.3.

More information

fiš„v3.dvi

fiš„v3.dvi (2001) 49 1 23 42 2000 10 16 2001 4 23 NTT * 1. 1.1 1998 * 104 0033 1 21 2 7F 24 49 1 2001 1999 70 91 MIT M. Turk Recognition Using Eigenface (Turk and Pentland (1991)). 1998 IC 1 CPU (Jain and Waller

More information

25 11M15133 0.40 0.44 n O(n 2 ) O(n) 0.33 0.52 O(n) 0.36 0.52 O(n) 2 0.48 0.52

25 11M15133 0.40 0.44 n O(n 2 ) O(n) 0.33 0.52 O(n) 0.36 0.52 O(n) 2 0.48 0.52 26 1 11M15133 25 11M15133 0.40 0.44 n O(n 2 ) O(n) 0.33 0.52 O(n) 0.36 0.52 O(n) 2 0.48 0.52 1 2 2 4 2.1.............................. 4 2.2.................................. 5 2.2.1...........................

More information

? (EM),, EM? (, 2004/ 2002) von Mises-Fisher ( 2004) HMM (MacKay 1997) LDA (Blei et al. 2001) PCFG ( 2004)... Variational Bayesian methods for Natural

? (EM),, EM? (, 2004/ 2002) von Mises-Fisher ( 2004) HMM (MacKay 1997) LDA (Blei et al. 2001) PCFG ( 2004)... Variational Bayesian methods for Natural SLC Internal tutorial Daichi Mochihashi daichi.mochihashi@atr.jp ATR SLC 2005.6.21 (Tue) 13:15 15:00@Meeting Room 1 Variational Bayesian methods for Natural Language Processing p.1/30 ? (EM),, EM? (, 2004/

More information

X G P G (X) G BG [X, BG] S 2 2 2 S 2 2 S 2 = { (x 1, x 2, x 3 ) R 3 x 2 1 + x 2 2 + x 2 3 = 1 } R 3 S 2 S 2 v x S 2 x x v(x) T x S 2 T x S 2 S 2 x T x S 2 = { ξ R 3 x ξ } R 3 T x S 2 S 2 x x T x S 2

More information

Real AdaBoost HOG 2009 3 A Graduation Thesis of College of Engineering, Chubu University Efficient Reducing Method of HOG Features for Human Detection based on Real AdaBoost Chika Matsushima ITS Graphics

More information

1 1 2 3 2.1.................. 3 2.2.......... 6 3 7 3.1......................... 7 3.1.1 ALAGIN................ 7 3.1.2 (SVM).........................

1 1 2 3 2.1.................. 3 2.2.......... 6 3 7 3.1......................... 7 3.1.1 ALAGIN................ 7 3.1.2 (SVM)......................... [5] Yahoo! Yahoo! (SVM) 3 F 7 7 (SVM) 3 F 6 0 1 1 2 3 2.1.................. 3 2.2.......... 6 3 7 3.1......................... 7 3.1.1 ALAGIN................ 7 3.1.2 (SVM)........................... 8

More information

(a) (b) (c) Canny (d) 1 ( x α, y α ) 3 (x α, y α ) (a) A 2 + B 2 + C 2 + D 2 + E 2 + F 2 = 1 (3) u ξ α u (A, B, C, D, E, F ) (4) ξ α (x 2 α, 2x α y α,

(a) (b) (c) Canny (d) 1 ( x α, y α ) 3 (x α, y α ) (a) A 2 + B 2 + C 2 + D 2 + E 2 + F 2 = 1 (3) u ξ α u (A, B, C, D, E, F ) (4) ξ α (x 2 α, 2x α y α, [II] Optimization Computation for 3-D Understanding of Images [II]: Ellipse Fitting 1. (1) 2. (2) (edge detection) (edge) (zero-crossing) Canny (Canny operator) (3) 1(a) [I] [II] [III] [IV ] E-mail sugaya@iim.ics.tut.ac.jp

More information

(MIRU2008) HOG Histograms of Oriented Gradients (HOG)

(MIRU2008) HOG Histograms of Oriented Gradients (HOG) (MIRU2008) 2008 7 HOG - - E-mail: katsu0920@me.cs.scitec.kobe-u.ac.jp, {takigu,ariki}@kobe-u.ac.jp Histograms of Oriented Gradients (HOG) HOG Shape Contexts HOG 5.5 Histograms of Oriented Gradients D Human

More information

例題ではじめる部分空間法 - パターン認識へのいざない -

例題ではじめる部分空間法  - パターン認識へのいざない - - - ( ) 69 2012 5 22 (1) ( ) MATLAB/Octave 3 download http://www.tuat.ac.jp/ s-hotta/rsj2012 (2) ( ) [1] 対応付け 0 1 2 3 4 未知パターン ( クラスが未知 ) 利用 5 6 7 8 クラス ( 概念 ) 9 訓練パターン ( クラスが既知 ) (3) [1] 識別演算部 未知パターン

More information

4. C i k = 2 k-means C 1 i, C 2 i 5. C i x i p [ f(θ i ; x) = (2π) p 2 Vi 1 2 exp (x µ ] i) t V 1 i (x µ i ) 2 BIC BIC = 2 log L( ˆθ i ; x i C i ) + q

4. C i k = 2 k-means C 1 i, C 2 i 5. C i x i p [ f(θ i ; x) = (2π) p 2 Vi 1 2 exp (x µ ] i) t V 1 i (x µ i ) 2 BIC BIC = 2 log L( ˆθ i ; x i C i ) + q x-means 1 2 2 x-means, x-means k-means Bayesian Information Criterion BIC Watershed x-means Moving Object Extraction Using the Number of Clusters Determined by X-means Clustering Naoki Kubo, 1 Kousuke

More information

Run-Based Trieから構成される 決定木の枝刈り法

Run-Based Trieから構成される  決定木の枝刈り法 Run-Based Trie 2 2 25 6 Run-Based Trie Simple Search Run-Based Trie Network A Network B Packet Router Packet Filtering Policy Rule Network A, K Network B Network C, D Action Permit Deny Permit Network

More information

gengo.dvi

gengo.dvi 4 97.52% tri-gram 92.76% 98.49% : Japanese word segmentation by Adaboost using the decision list as the weak learner Hiroyuki Shinnou In this paper, we propose the new method of Japanese word segmentation

More information

No. 3 Oct The person to the left of the stool carried the traffic-cone towards the trash-can. α α β α α β α α β α Track2 Track3 Track1 Track0 1

No. 3 Oct The person to the left of the stool carried the traffic-cone towards the trash-can. α α β α α β α α β α Track2 Track3 Track1 Track0 1 ACL2013 TACL 1 ACL2013 Grounded Language Learning from Video Described with Sentences (Yu and Siskind 2013) TACL Transactions of the Association for Computational Linguistics What Makes Writing Great?

More information

untitled

untitled N N X=[ ] R IJK R X R ABC A=[a ] R B=[b ] R C=[c ] R ABC X =[ ] R = a b c X X X X X D( ) D(X X )= log + D( ) a a b b c c b c b c a c a c a b a b R X X A a t =a b c a = t a R i i = a =. a I R = a = b =

More information

untitled

untitled 2 : n =1, 2,, 10000 0.5125 0.51 0.5075 0.505 0.5025 0.5 0.4975 0.495 0 2000 4000 6000 8000 10000 2 weak law of large numbers 1. X 1,X 2,,X n 2. µ = E(X i ),i=1, 2,,n 3. σi 2 = V (X i ) σ 2,i=1, 2,,n ɛ>0

More information

1) (AI) 5G AI AI Google IT Deep Learning RWC (RWCP) RWC Web RWCP [1] 2. RWC ETL-Mark I, II (1952, 1955) (ETL) (ETL-Mark

1) (AI) 5G AI AI Google IT Deep Learning RWC (RWCP) RWC Web RWCP [1] 2. RWC ETL-Mark I, II (1952, 1955) (ETL) (ETL-Mark RWC 10 RWC 1992 4 2001 13 RWC 21 RWC 1. RWC (Real World Computing ) (1982 1992 ) 3 1992 2001 10 21 RWCP 5 1992 1996 5 1997 2001 RWC (RWI) (PDC) 2 RWC (5G) 1 1) (AI) 5G AI AI Google IT Deep Learning RWC

More information

TC1-31st Fuzzy System Symposium (Chofu, September -, 15) cremental Neural Networ (SOINN) [5] Enhanced SOINN (ESOINN) [] ESOINN GNG Deng Evolving Self-

TC1-31st Fuzzy System Symposium (Chofu, September -, 15) cremental Neural Networ (SOINN) [5] Enhanced SOINN (ESOINN) [] ESOINN GNG Deng Evolving Self- TC1-31st Fuzzy System Symposium (Chofu, September -, 15) Proposing a Growing Self-Organizing Map Based on a Learning Theory of a Gaussian Mixture Model Kazuhiro Tounaga National Fisheries University Abstract:

More information

letter by letter reading read R, E, A, D 1

letter by letter reading read R, E, A, D 1 3 2009 10 14 1 1.1 1 1.2 1 letter by letter reading read R, E, A, D 1 1.3 1.4 Exner s writing center hypergraphia, micrographia hypergraphia micrographia 2 3 phonological dyslexia surface dyslexia deep

More information

カルマンフィルターによるベータ推定( )

カルマンフィルターによるベータ推定( ) β TOPIX 1 22 β β smoothness priors (the Capital Asset Pricing Model, CAPM) CAPM 1 β β β β smoothness priors :,,. E-mail: koiti@ism.ac.jp., 104 1 TOPIX β Z i = β i Z m + α i (1) Z i Z m α i α i β i (the

More information

3 2 2 (1) (2) (3) (4) 4 4 AdaBoost 2. [11] Onishi&Yoda [8] Iwashita&Stoica [5] 4 [3] 3. 3 (1) (2) (3)

3 2 2 (1) (2) (3) (4) 4 4 AdaBoost 2. [11] Onishi&Yoda [8] Iwashita&Stoica [5] 4 [3] 3. 3 (1) (2) (3) (MIRU2012) 2012 8 820-8502 680-4 E-mail: {d kouno,shimada,endo}@pluto.ai.kyutech.ac.jp (1) (2) (3) (4) 4 AdaBoost 1. Kanade [6] CLAFIC [12] EigenFace [10] 1 1 2 1 [7] 3 2 2 (1) (2) (3) (4) 4 4 AdaBoost

More information

(1970) 17) V. Kucera: A Contribution to Matrix Ouadratic Equations, IEEE Trans. on Automatic Control, AC- 17-3, 344/347 (1972) 18) V. Kucera: On Nonnegative Definite Solutions to Matrix Ouadratic Equations,

More information

Part () () Γ Part ,

Part () () Γ Part , Contents a 6 6 6 6 6 6 6 7 7. 8.. 8.. 8.3. 8 Part. 9. 9.. 9.. 3. 3.. 3.. 3 4. 5 4.. 5 4.. 9 4.3. 3 Part. 6 5. () 6 5.. () 7 5.. 9 5.3. Γ 3 6. 3 6.. 3 6.. 3 6.3. 33 Part 3. 34 7. 34 7.. 34 7.. 34 8. 35

More information

untitled

untitled c 645 2 1. GM 1959 Lindsey [1] 1960 Howard [2] Howard 1 25 (Markov Decision Process) 3 3 2 3 +1=25 9 Bellman [3] 1 Bellman 1 k 980 8576 27 1 015 0055 84 4 1977 D Esopo and Lefkowitz [4] 1 (SI) Cover and

More information

03.Œk’ì

03.Œk’ì HRS KG NG-HRS NG-KG AIC Fama 1965 Mandelbrot Blattberg Gonedes t t Kariya, et. al. Nagahara ARCH EngleGARCH Bollerslev EGARCH Nelson GARCH Heynen, et. al. r n r n =σ n w n logσ n =α +βlogσ n 1 + v n w

More information

知的学習認識システム特論9.key

知的学習認識システム特論9.key shouno@uec.ac.jp 1 http://www.slideshare.net/pfi/deep-learning-22350063 1960 1970 1980 1990 2000 2010 Perceptron (Rosenblatt 57) Linear Separable (Minski & Papert 68) SVM (Vapnik 95) Neocognitron

More information

main.dvi

main.dvi 1 10,.,,.,,,.,,, 2. 1,, [1].,,,.,,.,,,.. 100,,., [2]. [3,4,5]. [6,7,8,9,10,11]. [12, 13, 14]. 1 E-mail: kau@statp.is.tohoku.ac.jp CDMA [15, 16].. 1970, 1980 90, 1990 30,,. [17, 18]. [19, 20, 21]. [17,

More information

IPSJ SIG Technical Report Vol.2009-CVIM-167 No /6/10 Real AdaBoost HOG 1 1 1, 2 1 Real AdaBoost HOG HOG Real AdaBoost HOG A Method for Reducing

IPSJ SIG Technical Report Vol.2009-CVIM-167 No /6/10 Real AdaBoost HOG 1 1 1, 2 1 Real AdaBoost HOG HOG Real AdaBoost HOG A Method for Reducing Real AdaBoost HOG 1 1 1, 2 1 Real AdaBoost HOG HOG Real AdaBoost HOG A Method for Reducing number of HOG Features based on Real AdaBoost Chika Matsushima, 1 Yuji Yamauchi, 1 Takayoshi Yamashita 1, 2 and

More information

untitled

untitled . x2.0 0.5 0 0.5.0 x 2 t= 0: : x α ij β j O x2 u I = α x j ij i i= 0 y j = + exp( u ) j v J = β y j= 0 j j o = + exp( v ) 0 0 e x p e x p J j I j ij i i o x β α = = = + +.. 2 3 8 x 75 58 28 36 x2 3 3 4

More information

IPSJ SIG Technical Report 1, Instrument Separation in Reverberant Environments Using Crystal Microphone Arrays Nobutaka ITO, 1, 2 Yu KITANO, 1

IPSJ SIG Technical Report 1, Instrument Separation in Reverberant Environments Using Crystal Microphone Arrays Nobutaka ITO, 1, 2 Yu KITANO, 1 1, 2 1 1 1 Instrument Separation in Reverberant Environments Using Crystal Microphone Arrays Nobutaka ITO, 1, 2 Yu KITANO, 1 Nobutaka ONO 1 and Shigeki SAGAYAMA 1 This paper deals with instrument separation

More information

1 4 1 ( ) ( ) ( ) ( ) () 1 4 2

1 4 1 ( ) ( ) ( ) ( ) () 1 4 2 7 1995, 2017 7 21 1 2 2 3 3 4 4 6 (1).................................... 6 (2)..................................... 6 (3) t................. 9 5 11 (1)......................................... 11 (2)

More information

,,, 2 ( ), $[2, 4]$, $[21, 25]$, $V$,, 31, 2, $V$, $V$ $V$, 2, (b) $-$,,, (1) : (2) : (3) : $r$ $R$ $r/r$, (4) : 3

,,, 2 ( ), $[2, 4]$, $[21, 25]$, $V$,, 31, 2, $V$, $V$ $V$, 2, (b) $-$,,, (1) : (2) : (3) : $r$ $R$ $r/r$, (4) : 3 1084 1999 124-134 124 3 1 (SUGIHARA Kokichi),,,,, 1, [5, 11, 12, 13], (2, 3 ), -,,,, 2 [5], 3,, 3, 2 2, -, 3,, 1,, 3 2,,, 3 $R$ ( ), $R$ $R$ $V$, $V$ $R$,,,, 3 2 125 1 3,,, 2 ( ), $[2, 4]$, $[21, 25]$,

More information

(2) Fisher α (α) α Fisher α ( α) 0 Levi Civita (1) ( 1) e m (e) (m) ([1], [2], [13]) Poincaré e m Poincaré e m Kähler-like 2 Kähler-like

(2) Fisher α (α) α Fisher α ( α) 0 Levi Civita (1) ( 1) e m (e) (m) ([1], [2], [13]) Poincaré e m Poincaré e m Kähler-like 2 Kähler-like () 10 9 30 1 Fisher α (α) α Fisher α ( α) 0 Levi Civita (1) ( 1) e m (e) (m) ([1], [], [13]) Poincaré e m Poincaré e m Kähler-like Kähler-like Kähler M g M X, Y, Z (.1) Xg(Y, Z) = g( X Y, Z) + g(y, XZ)

More information

4 4. A p X A 1 X X A 1 A 4.3 X p X p X S(X) = E ((X p) ) X = X E(X) = E(X) p p 4.3p < p < 1 X X p f(i) = P (X = i) = p(1 p) i 1, i = 1,,... 1 + r + r

4 4. A p X A 1 X X A 1 A 4.3 X p X p X S(X) = E ((X p) ) X = X E(X) = E(X) p p 4.3p < p < 1 X X p f(i) = P (X = i) = p(1 p) i 1, i = 1,,... 1 + r + r 4 1 4 4.1 X P (X = 1) =.4, P (X = ) =.3, P (X = 1) =., P (X = ) =.1 E(X) = 1.4 +.3 + 1. +.1 = 4. X Y = X P (X = ) = P (X = 1) = P (X = ) = P (X = 1) = P (X = ) =. Y P (Y = ) = P (X = ) =., P (Y = 1) =

More information

三石貴志.indd

三石貴志.indd 流通科学大学論集 - 経済 情報 政策編 - 第 21 巻第 1 号,23-33(2012) SIRMs SIRMs Fuzzy fuzzyapproximate approximatereasoning reasoningusing using Lukasiewicz Łukasiewicz logical Logical operations Operations Takashi Mitsuishi

More information

,.,. NP,., ,.,,.,.,,, (PCA)...,,. Tipping and Bishop (1999) PCA. (PPCA)., (Ilin and Raiko, 2010). PPCA EM., , tatsukaw

,.,. NP,., ,.,,.,.,,, (PCA)...,,. Tipping and Bishop (1999) PCA. (PPCA)., (Ilin and Raiko, 2010). PPCA EM., , tatsukaw ,.,. NP,.,. 1 1.1.,.,,.,.,,,. 2. 1.1.1 (PCA)...,,. Tipping and Bishop (1999) PCA. (PPCA)., (Ilin and Raiko, 2010). PPCA EM., 152-8552 2-12-1, tatsukawa.m.aa@m.titech.ac.jp, 190-8562 10-3, mirai@ism.ac.jp

More information

Optical Flow t t + δt 1 Motion Field 3 3 1) 2) 3) Lucas-Kanade 4) 1 t (x, y) I(x, y, t)

Optical Flow t t + δt 1 Motion Field 3 3 1) 2) 3) Lucas-Kanade 4) 1 t (x, y) I(x, y, t) http://wwwieice-hbkborg/ 2 2 4 2 -- 2 4 2010 9 3 3 4-1 Lucas-Kanade 4-2 Mean Shift 3 4-3 2 c 2013 1/(18) http://wwwieice-hbkborg/ 2 2 4 2 -- 2 -- 4 4--1 2010 9 4--1--1 Optical Flow t t + δt 1 Motion Field

More information

3. ( 1 ) Linear Congruential Generator:LCG 6) (Mersenne Twister:MT ), L 1 ( 2 ) 4 4 G (i,j) < G > < G 2 > < G > 2 g (ij) i= L j= N

3. ( 1 ) Linear Congruential Generator:LCG 6) (Mersenne Twister:MT ), L 1 ( 2 ) 4 4 G (i,j) < G > < G 2 > < G > 2 g (ij) i= L j= N RMT 1 1 1 N L Q=L/N (RMT), RMT,,,., Box-Muller, 3.,. Testing Randomness by Means of RMT Formula Xin Yang, 1 Ryota Itoi 1 and Mieko Tanaka-Yamawaki 1 Random matrix theory derives, at the limit of both dimension

More information

1 Kinect for Windows M = [X Y Z] T M = [X Y Z ] T f (u,v) w 3.2 [11] [7] u = f X +u Z 0 δ u (X,Y,Z ) (5) v = f Y Z +v 0 δ v (X,Y,Z ) (6) w = Z +

1 Kinect for Windows M = [X Y Z] T M = [X Y Z ] T f (u,v) w 3.2 [11] [7] u = f X +u Z 0 δ u (X,Y,Z ) (5) v = f Y Z +v 0 δ v (X,Y,Z ) (6) w = Z + 3 3D 1,a) 1 1 Kinect (X, Y) 3D 3D 1. 2010 Microsoft Kinect for Windows SDK( (Kinect) SDK ) 3D [1], [2] [3] [4] [5] [10] 30fps [10] 3 Kinect 3 Kinect Kinect for Windows SDK 3 Microsoft 3 Kinect for Windows

More information

Kalman ( ) 1) (Kalman filter) ( ) t y 0,, y t x ˆx 3) 10) t x Y [y 0,, y ] ) x ( > ) ˆx (prediction) ) x ( ) ˆx (filtering) )

Kalman ( ) 1) (Kalman filter) ( ) t y 0,, y t x ˆx 3) 10) t x Y [y 0,, y ] ) x ( > ) ˆx (prediction) ) x ( ) ˆx (filtering) ) 1 -- 5 6 2009 3 R.E. Kalman ( ) H 6-1 6-2 6-3 H Rudolf Emil Kalman IBM IEEE Medal of Honor(1974) (1985) c 2011 1/(23) 1 -- 5 -- 6 6--1 2009 3 Kalman ( ) 1) (Kalman filter) ( ) t y 0,, y t x ˆx 3) 10) t

More information

Microsoft PowerPoint - SSII_harada pptx

Microsoft PowerPoint - SSII_harada pptx The state of the world The gathered data The processed data w d r I( W; D) I( W; R) The data processing theorem states that data processing can only destroy information. David J.C. MacKay. Information

More information

24.15章.微分方程式

24.15章.微分方程式 m d y dt = F m d y = mg dt V y = dy dt d y dt = d dy dt dt = dv y dt dv y dt = g dv y dt = g dt dt dv y = g dt V y ( t) = gt + C V y ( ) = V y ( ) = C = V y t ( ) = gt V y ( t) = dy dt = gt dy = g t dt

More information

IPSJ SIG Technical Report Vol.2017-SLP-115 No /2/18 1,a) 1 1,2 Sakriani Sakti [1][2] [3][4] [5][6][7] [8] [9] 1 Nara Institute of Scie

IPSJ SIG Technical Report Vol.2017-SLP-115 No /2/18 1,a) 1 1,2 Sakriani Sakti [1][2] [3][4] [5][6][7] [8] [9] 1 Nara Institute of Scie 1,a) 1 1,2 Sakriani Sakti 1 1 1 1. [1][2] [3][4] [5][6][7] [8] [9] 1 Nara Institute of Science and Technology 2 Japan Science and Technology Agency a) ishikawa.yoko.io5@is.naist.jp 2. 1 Belief-Desire theory

More information

( ) (, ) arxiv: hgm OpenXM search. d n A = (a ij ). A i a i Z d, Z d. i a ij > 0. β N 0 A = N 0 a N 0 a n Z A (β; p) = Au=β,u N n 0 A

( ) (, ) arxiv: hgm OpenXM search. d n A = (a ij ). A i a i Z d, Z d. i a ij > 0. β N 0 A = N 0 a N 0 a n Z A (β; p) = Au=β,u N n 0 A ( ) (, ) arxiv: 1510.02269 hgm OpenXM search. d n A = (a ij ). A i a i Z d, Z d. i a ij > 0. β N 0 A = N 0 a 1 + + N 0 a n Z A (β; p) = Au=β,u N n 0 A-. u! = n i=1 u i!, p u = n i=1 pu i i. Z = Z A Au

More information

report-MSPC.dvi

report-MSPC.dvi Multivariate Statistical Process Control 4 1 5 6 Copyright cfl4-5 by Manabu Kano. All rights reserved. 1 1 3 3.1 : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : :

More information

研究シリーズ第40号

研究シリーズ第40号 165 PEN WPI CPI WAGE IIP Feige and Pearce 166 167 168 169 Vector Autoregression n (z) z z p p p zt = φ1zt 1 + φ2zt 2 + + φ pzt p + t Cov( 0 ε t, ε t j )= Σ for for j 0 j = 0 Cov( ε t, zt j ) = 0 j = >

More information

I A A441 : April 15, 2013 Version : 1.1 I Kawahira, Tomoki TA (Shigehiro, Yoshida )

I A A441 : April 15, 2013 Version : 1.1 I   Kawahira, Tomoki TA (Shigehiro, Yoshida ) I013 00-1 : April 15, 013 Version : 1.1 I Kawahira, Tomoki TA (Shigehiro, Yoshida) http://www.math.nagoya-u.ac.jp/~kawahira/courses/13s-tenbou.html pdf * 4 15 4 5 13 e πi = 1 5 0 5 7 3 4 6 3 6 10 6 17

More information

7 9 7..................................... 9 7................................ 3 7.3...................................... 3 A A. ω ν = ω/π E = hω. E

7 9 7..................................... 9 7................................ 3 7.3...................................... 3 A A. ω ν = ω/π E = hω. E B 8.9.4, : : MIT I,II A.P. E.F.,, 993 I,,, 999, 7 I,II, 95 A A........................... A........................... 3.3 A.............................. 4.4....................................... 5 6..............................

More information

(note-02) Rademacher 1/57

(note-02) Rademacher 1/57 (note-02) Rademacher 1/57 (x 1, y 1 ),..., (x n, y n ) X Y f : X Y Y = R f Y = {+1, 1}, {1, 2,..., G} f x y 1. (x 1, y 1 ),..., (x n, y n ) f(x i ) ( ) 2. x f(x) Y 2/57 (x, y) f(x) f(x) y (, loss) l(f(x),

More information

130 Oct Radial Basis Function RBF Efficient Market Hypothesis Fama ) 4) 1 Fig. 1 Utility function. 2 Fig. 2 Value function. (1) (2)

130 Oct Radial Basis Function RBF Efficient Market Hypothesis Fama ) 4) 1 Fig. 1 Utility function. 2 Fig. 2 Value function. (1) (2) Vol. 47 No. SIG 14(TOM 15) Oct. 2006 RBF 2 Effect of Stock Investor Agent According to Framing Effect to Stock Exchange in Artificial Stock Market Zhai Fei, Shen Kan, Yusuke Namikawa and Eisuke Kita Several

More information

9 8 7 (x-1.0)*(x-1.0) *(x-1.0) (a) f(a) (b) f(a) Figure 1: f(a) a =1.0 (1) a 1.0 f(1.0)

9 8 7 (x-1.0)*(x-1.0) *(x-1.0) (a) f(a) (b) f(a) Figure 1: f(a) a =1.0 (1) a 1.0 f(1.0) E-mail: takio-kurita@aist.go.jp 1 ( ) CPU ( ) 2 1. a f(a) =(a 1.0) 2 (1) a ( ) 1(a) f(a) a (1) a f(a) a =2(a 1.0) (2) 2 0 a f(a) a =2(a 1.0) = 0 (3) 1 9 8 7 (x-1.0)*(x-1.0) 6 4 2.0*(x-1.0) 6 2 5 4 0 3-2

More information

2003/3 Vol. J86 D II No.3 2.3. 4. 5. 6. 2. 1 1 Fig. 1 An exterior view of eye scanner. CCD [7] 640 480 1 CCD PC USB PC 2 334 PC USB RS-232C PC 3 2.1 2

2003/3 Vol. J86 D II No.3 2.3. 4. 5. 6. 2. 1 1 Fig. 1 An exterior view of eye scanner. CCD [7] 640 480 1 CCD PC USB PC 2 334 PC USB RS-232C PC 3 2.1 2 Curved Document Imaging with Eye Scanner Toshiyuki AMANO, Tsutomu ABE, Osamu NISHIKAWA, Tetsuo IYODA, and Yukio SATO 1. Shape From Shading SFS [1] [2] 3 2 Department of Electrical and Computer Engineering,

More information

PC PIN [4] PIN 2. 2.1 2 PIN n 10n 3 2 PIN Fig. 2 Feature indices for PIN input on touch-screen a = (a 1, a 2,, a i,, a 10n 3 ) (a i, i {1,, 2n 1}) : 2

PC PIN [4] PIN 2. 2.1 2 PIN n 10n 3 2 PIN Fig. 2 Feature indices for PIN input on touch-screen a = (a 1, a 2,, a i,, a 10n 3 ) (a i, i {1,, 2n 1}) : 2 マルチメディア 分散 協調とモバイル (DICOMO2014)シンポジウム 平成26年7月 PIN 入力タッチスクリーンバイオメトリクスにおける 識別手法の影響 泉 将之1 西村 友佑2 柏木 まもる3 佐村 敏治4 西村 治彦5 概要 本研究ではスマートフォンを用いた PIN 入力タッチスクリーンバイオメトリクスについて検討を行 なった 従来のキーストローク認証では得られなかったセンサやタッチスクリーンからの情報を利用した

More information

基礎数学I

基礎数学I I & II ii ii........... 22................. 25 12............... 28.................. 28.................... 31............. 32.................. 34 3 1 9.................... 1....................... 1............

More information

L. S. Abstract. Date: last revised on 9 Feb translated to Japanese by Kazumoto Iguchi. Original papers: Received May 13, L. Onsager and S.

L. S. Abstract. Date: last revised on 9 Feb translated to Japanese by Kazumoto Iguchi. Original papers: Received May 13, L. Onsager and S. L. S. Abstract. Date: last revised on 9 Feb 01. translated to Japanese by Kazumoto Iguchi. Original papers: Received May 13, 1953. L. Onsager and S. Machlup, Fluctuations and Irreversibel Processes, Physical

More information

A11 (1993,1994) 29 A12 (1994) 29 A13 Trefethen and Bau Numerical Linear Algebra (1997) 29 A14 (1999) 30 A15 (2003) 30 A16 (2004) 30 A17 (2007) 30 A18

A11 (1993,1994) 29 A12 (1994) 29 A13 Trefethen and Bau Numerical Linear Algebra (1997) 29 A14 (1999) 30 A15 (2003) 30 A16 (2004) 30 A17 (2007) 30 A18 2013 8 29y, 2016 10 29 1 2 2 Jordan 3 21 3 3 Jordan (1) 3 31 Jordan 4 32 Jordan 4 33 Jordan 6 34 Jordan 8 35 9 4 Jordan (2) 10 41 x 11 42 x 12 43 16 44 19 441 19 442 20 443 25 45 25 5 Jordan 26 A 26 A1

More information

h(n) x(n) s(n) S (ω) = H(ω)X(ω) (5 1) H(ω) H(ω) = F[h(n)] (5 2) F X(ω) x(n) X(ω) = F[x(n)] (5 3) S (ω) s(n) S (ω) = F[s(n)] (5

h(n) x(n) s(n) S (ω) = H(ω)X(ω) (5 1) H(ω) H(ω) = F[h(n)] (5 2) F X(ω) x(n) X(ω) = F[x(n)] (5 3) S (ω) s(n) S (ω) = F[s(n)] (5 1 -- 5 5 2011 2 1940 N. Wiener FFT 5-1 5-2 Norbert Wiener 1894 1912 MIT c 2011 1/(12) 1 -- 5 -- 5 5--1 2008 3 h(n) x(n) s(n) S (ω) = H(ω)X(ω) (5 1) H(ω) H(ω) = F[h(n)] (5 2) F X(ω) x(n) X(ω) = F[x(n)]

More information

z.prn(Gray)

z.prn(Gray) 1. 90 2 1 1 2 Friedman[1983] Friedman ( ) Dockner[1992] closed-loop Theorem 2 Theorem 4 Dockner ( ) 31 40 2010 Kinoshita, Suzuki and Kaiser [2002] () 1) 2) () VAR 32 () Mueller[1986], Mueller ed. [1990]

More information

kubostat2018d p.2 :? bod size x and fertilization f change seed number? : a statistical model for this example? i response variable seed number : { i

kubostat2018d p.2 :? bod size x and fertilization f change seed number? : a statistical model for this example? i response variable seed number : { i kubostat2018d p.1 I 2018 (d) model selection and kubo@ees.hokudai.ac.jp http://goo.gl/76c4i 2018 06 25 : 2018 06 21 17:45 1 2 3 4 :? AIC : deviance model selection misunderstanding kubostat2018d (http://goo.gl/76c4i)

More information

Kullback-Leibler

Kullback-Leibler Kullback-Leibler 206 6 6 http://www.math.tohoku.ac.jp/~kuroki/latex/206066kullbackleibler.pdf 0 2 Kullback-Leibler 3. q i.......................... 3.2........... 3.3 Kullback-Leibler.............. 4.4

More information

http://www2.math.kyushu-u.ac.jp/~hara/lectures/lectures-j.html 2 N(ε 1 ) N(ε 2 ) ε 1 ε 2 α ε ε 2 1 n N(ɛ) N ɛ ɛ- (1.1.3) n > N(ɛ) a n α < ɛ n N(ɛ) a n

http://www2.math.kyushu-u.ac.jp/~hara/lectures/lectures-j.html 2 N(ε 1 ) N(ε 2 ) ε 1 ε 2 α ε ε 2 1 n N(ɛ) N ɛ ɛ- (1.1.3) n > N(ɛ) a n α < ɛ n N(ɛ) a n http://www2.math.kyushu-u.ac.jp/~hara/lectures/lectures-j.html 1 1 1.1 ɛ-n 1 ɛ-n lim n a n = α n a n α 2 lim a n = 1 n a k n n k=1 1.1.7 ɛ-n 1.1.1 a n α a n n α lim n a n = α ɛ N(ɛ) n > N(ɛ) a n α < ɛ

More information

平成○○年度知能システム科学専攻修士論文

平成○○年度知能システム科学専攻修士論文 A Realization of Robust Agents in an Agent-based Virtual Market Makio Yamashige 3 7 A Realization of Robust Agents in an Agent-based Virtual Market Makio Yamashige Abstract There are many people who try

More information

untitled

untitled 18 1 2,000,000 2,000,000 2007 2 2 2008 3 31 (1) 6 JCOSSAR 2007pp.57-642007.6. LCC (1) (2) 2 10mm 1020 14 12 10 8 6 4 40,50,60 2 0 1998 27.5 1995 1960 40 1) 2) 3) LCC LCC LCC 1 1) Vol.42No.5pp.29-322004.5.

More information

通信容量制約を考慮したフィードバック制御 - 電子情報通信学会 情報理論研究会(IT) 若手研究者のための講演会

通信容量制約を考慮したフィードバック制御 -  電子情報通信学会 情報理論研究会(IT)  若手研究者のための講演会 IT 1 2 1 2 27 11 24 15:20 16:05 ( ) 27 11 24 1 / 49 1 1940 Witsenhausen 2 3 ( ) 27 11 24 2 / 49 1940 2 gun director Warren Weaver, NDRC (National Defence Research Committee) Final report D-2 project #2,

More information

x, y x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = 15 xy (x y) (x + y) xy (x y) (x y) ( x 2 + xy + y 2) = 15 (x y)

x, y x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = 15 xy (x y) (x + y) xy (x y) (x y) ( x 2 + xy + y 2) = 15 (x y) x, y x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = 15 1 1977 x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = 15 xy (x y) (x + y) xy (x y) (x y) ( x 2 + xy + y 2) = 15 (x y) ( x 2 y + xy 2 x 2 2xy y 2) = 15 (x y) (x + y) (xy

More information

チュートリアル:ノンパラメトリックベイズ

チュートリアル:ノンパラメトリックベイズ { x,x, L, xn} 2 p( θ, θ, θ, θ, θ, } { 2 3 4 5 θ6 p( p( { x,x, L, N} 2 x { θ, θ2, θ3, θ4, θ5, θ6} K n p( θ θ n N n θ x N + { x,x, L, N} 2 x { θ, θ2, θ3, θ4, θ5, θ6} log p( 6 n logθ F 6 log p( + λ θ F θ

More information

5 36 5................................................... 36 5................................................... 36 5.3..............................

5 36 5................................................... 36 5................................................... 36 5.3.............................. 9 8 3............................................. 3.......................................... 4.3............................................ 4 5 3 6 3..................................................

More information

example2_time.eps

example2_time.eps Google (20/08/2 ) ( ) Random Walk & Google Page Rank Agora on Aug. 20 / 67 Introduction ( ) Random Walk & Google Page Rank Agora on Aug. 20 2 / 67 Introduction Google ( ) Random Walk & Google Page Rank

More information

S I. dy fx x fx y fx + C 3 C dy fx 4 x, y dy v C xt y C v e kt k > xt yt gt [ v dt dt v e kt xt v e kt + C k x v + C C k xt v k 3 r r + dr e kt S dt d

S I. dy fx x fx y fx + C 3 C dy fx 4 x, y dy v C xt y C v e kt k > xt yt gt [ v dt dt v e kt xt v e kt + C k x v + C C k xt v k 3 r r + dr e kt S dt d S I.. http://ayapin.film.s.dendai.ac.jp/~matuda /TeX/lecture.html PDF PS.................................... 3.3.................... 9.4................5.............. 3 5. Laplace................. 5....

More information

163 KdV KP Lax pair L, B L L L 1/2 W 1 LW = ( / x W t 1, t 2, t 3, ψ t n ψ/ t n = B nψ (KdV B n = L n/2 KP B n = L n KdV KP Lax W Lax τ KP L ψ τ τ Cha

163 KdV KP Lax pair L, B L L L 1/2 W 1 LW = ( / x W t 1, t 2, t 3, ψ t n ψ/ t n = B nψ (KdV B n = L n/2 KP B n = L n KdV KP Lax W Lax τ KP L ψ τ τ Cha 63 KdV KP Lax pair L, B L L L / W LW / x W t, t, t 3, ψ t n / B nψ KdV B n L n/ KP B n L n KdV KP Lax W Lax τ KP L ψ τ τ Chapter 7 An Introduction to the Sato Theory Masayui OIKAWA, Faculty of Engneering,

More information

2019 1 5 0 3 1 4 1.1.................... 4 1.1.1......................... 4 1.1.2........................ 5 1.1.3................... 5 1.1.4........................ 6 1.1.5......................... 6 1.2..........................

More information