Overview: Gaussian Processes, GPLVM, GPDM



Overview: Gaussian Process regression; GPLVM; GPDM.

Gaussian Process regression: given pairs of an input x and an output y, learn a regressor from x to y. From the training data D = { (x^(n), y^(n)) }_{n=1}^N, predict the output y^(n+1) for a new input x^(n+1).

Linear regression:
y = w_0 + w_1 x_1 + w_2 x_2 + ε = (w_0 w_1 w_2)(1, x_1, x_2)^T + ε = w^T x + ε
The least-squares solution is ŵ = (X^T X)^{-1} X^T y.
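As a minimal sketch, the least-squares solution above on synthetic data (numpy; the data and weights are illustrative assumptions, and np.linalg.solve is used instead of forming the inverse explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = w0 + w1*x1 + w2*x2 + noise (illustrative assumption)
N = 100
X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, 2))])  # design matrix with bias column
w_true = np.array([1.0, 2.0, -0.5])
y = X @ w_true + 0.1 * rng.normal(size=N)

# Least squares: w_hat = (X^T X)^{-1} X^T y
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(w_hat)  # close to w_true
```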

Linear model with basis functions (GLM): a cubic fit is still linear in w:
y = w_0 + w_1 x + w_2 x^2 + w_3 x^3 + ε   (1)
  = (w_0 w_1 w_2 w_3)(1, x, x^2, x^3)^T + ε   (2)
  = w^T ϕ(x) + ε   (3)
The basis ϕ(x) can be anything!

(GLM) (2): for example, a Gaussian basis
ϕ(x) = ( exp(−(x − µ_1)^2/(2σ^2)), exp(−(x − µ_2)^2/(2σ^2)), …, exp(−(x − µ_K)^2/(2σ^2)) )   (4)
with centers µ = (µ_1, µ_2, …, µ_K) placed over the input space.
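A sketch of least-squares fitting with the Gaussian basis (4), assuming K = 10 centers µ_k on a uniform grid over [0, 1]; the data, the grid, and the small ridge term added for numerical stability are illustrative assumptions, not from the slides:

```python
import numpy as np

def gaussian_basis(x, mu, sigma):
    """Phi[n, k] = exp(-(x_n - mu_k)^2 / (2 sigma^2)), eq. (4)."""
    return np.exp(-(x[:, None] - mu[None, :]) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 50))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=50)

mu = np.linspace(0, 1, 10)            # K = 10 centers on a grid (assumption)
Phi = gaussian_basis(x, mu, 0.1)
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(10), Phi.T @ y)  # ridge for stability
y_fit = Phi @ w
```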

Regression in general: we want y = f(x) for x = (x_1, …, x_d) ∈ R^d and y ∈ R. Map x to features ϕ(x) and regress linearly:
y = w^T ϕ(x)   (5)
e.g. ϕ(x) = (ϕ_1(x), ϕ_2(x), …, ϕ_H(x))^T = (1, x_1, …, x_d, x_1^2, …, x_d^2)^T with w = (w_0, w_1, …, w_{2d})^T, so that
y = w^T ϕ(x) = w_0 + w_1 x_1 + … + w_d x_d + w_{d+1} x_1^2 + … + w_{2d} x_d^2.

GP (1): stack the outputs y^(1) … y^(N) into a vector; then y = Φw, where Φ is the N×H design matrix with entries Φ_{nh} = ϕ_h(x^(n)). Instead of estimating w, place a prior p(w) = N(0, α^{-1} I) and consider the induced distribution of y = Φw: its mean is 0 and its covariance is
⟨y y^T⟩ = ⟨(Φw)(Φw)^T⟩ = Φ ⟨w w^T⟩ Φ^T   (7)
         = α^{-1} Φ Φ^T.   (6)

GP (2): therefore
p(y) = N(y | 0, α^{-1} Φ Φ^T).   (8)
For any inputs (x_1, x_2, …, x_N), the outputs y = (y_1, y_2, …, y_N) follow this joint Gaussian. The covariance K = α^{-1} Φ Φ^T has entries given by the kernel
k(x, x') = α^{-1} ϕ(x)^T ϕ(x'):   (9)
k(x, x') measures the similarity between x and x'; similar inputs x yield correlated outputs y.

GP (3): add observation noise ε:
y = w^T ϕ(x) + ε, ε ~ N(0, β^{-1} I), i.e. p(y | f) = N(y | f, β^{-1} I) with f = w^T ϕ(x).   (10)
Marginalizing f,
p(y | x) = ∫ p(y | f) p(f | x) df   (11)
         = N(y | 0, C),   (12)
again Gaussian, with covariance
C(x_i, x_j) = k(x_i, x_j) + β^{-1} δ(i, j).   (13)
This defines a GP; the kernel k(x, x') carries the hyperparameters α, β.

Examples of covariance functions (the figures showed sample paths y(x) drawn from each prior):
- Gaussian: exp(−(x − x')^2 / l)
- Exponential: exp(−|x − x'| / l) (OU process)
- Periodic: exp(−2 sin^2((x − x')/2) / l^2)
- Periodic, longer length-scale: exp(−2 sin^2((x − x')/2) / (10l)^2)
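A sketch that draws sample paths from GP priors with the kernels above (the grid, length-scales, and the 1e-8 diagonal jitter for numerical stability are illustrative assumptions):

```python
import numpy as np

def gaussian_k(x1, x2, l=0.1):
    return np.exp(-(x1[:, None] - x2[None, :]) ** 2 / l)

def exponential_k(x1, x2, l=0.1):            # Ornstein-Uhlenbeck process
    return np.exp(-np.abs(x1[:, None] - x2[None, :]) / l)

def periodic_k(x1, x2, l=1.0):
    return np.exp(-2 * np.sin((x1[:, None] - x2[None, :]) / 2) ** 2 / l ** 2)

x = np.linspace(0, 2, 200)
rng = np.random.default_rng(0)
for k in (gaussian_k, exponential_k, periodic_k):
    K = k(x, x) + 1e-8 * np.eye(len(x))       # jitter keeps K positive definite
    y = rng.multivariate_normal(np.zeros(len(x)), K)  # one sample path from the prior
```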

Correlated Gaussians: the figures on these slides showed samples drawn from N(0, K) alongside the corresponding kernel matrix K, for increasingly many correlated dimensions.

A GP is an infinite-dimensional Gaussian: for any finite set of inputs (x_1, x_2, …, x_n), the corresponding outputs y = (y_1, y_2, …, y_n) are jointly Gaussian. The covariance matrix K has entries K_ij = k(x_i, x_j), given by the kernel function k.

Infinitely many RBF basis functions: place a basis function ϕ_h(x) = exp(−(x − h)^2 / r^2) at every center h:
k(x, x') = σ^2 Σ_{h=1}^H ϕ_h(x) ϕ_h(x')   (14)
→ σ^2 ∫ exp(−(x − h)^2 / r^2) exp(−(x' − h)^2 / r^2) dh   (15)
= σ^2 √(π r^2 / 2) exp(−(x − x')^2 / (2 r^2)) ≡ θ_1 exp(−(x − x')^2 / θ_2)   (16)
The result depends only on (x, x'): regression with the RBF kernel is equivalent to regression with infinitely many RBF basis functions, with hyperparameters θ_1, θ_2.

GP prediction: y_new and y are jointly Gaussian, so
p(y_new | x_new, X, y, θ) = p((y, y_new) | (X, x_new), θ) / p(y | X, θ)   (17)
∝ exp( −(1/2) (y, y_new) [K k; k^T k_*]^{-1} (y, y_new)^T + (1/2) y^T K^{-1} y )   (18), (19)
= N( k^T K^{-1} y, k_* − k^T K^{-1} k ),   (20)
where K = [k(x, x')] is the train-train kernel matrix, k = (k(x_new, x_1), …, k(x_new, x_N)), and k_* = k(x_new, x_new).
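A minimal implementation of the predictive mean and variance (20), folding the noise β^{-1} into K as in (13); the RBF kernel, data, and constants are illustrative assumptions:

```python
import numpy as np

def rbf(A, B, l=0.3):
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * l ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 20)                  # training inputs
y = np.sin(2 * np.pi * X) + 0.1 * rng.normal(size=20)
beta = 100.0                               # noise precision

K = rbf(X, X) + np.eye(20) / beta          # C in eq. (13)
Xs = np.linspace(0, 1, 100)                # test inputs
ks = rbf(X, Xs)                            # N x N* cross-covariances

alpha = np.linalg.solve(K, y)
mean = ks.T @ alpha                        # k^T K^{-1} y, eq. (20)
var = rbf(Xs, Xs).diagonal() - np.einsum('ij,ij->j', ks, np.linalg.solve(K, ks))
```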

GP compared with SVR, Ridge, etc., using an ARD kernel (Cohn+ 2013):
k(x, x') = σ_f^2 exp( −(1/2) Σ_k (x_k − x'_k)^2 / σ_k^2 )   (21)
Models compared by MAE and RMSE (the numeric values did not survive transcription): SVM Linear, ARD, Squared exp. isotropic, Squared exp. ARD, Rational quadratic ARD, Matérn(5,2), Neural network.

GP compared with SVR in the multi-task setting (Cohn+ 2013), same ARD kernel:
k(x, x') = σ_f^2 exp( −(1/2) Σ_k (x_k − x'_k)^2 / σ_k^2 )   (22)
Models compared by MAE and RMSE (numeric values lost): Independent SVMs, EasyAdapt SVM, Independent, Pooled, Pooled &{N}, Combined.

In these comparisons, GP > SVR (Cohn et al.)!

Weakness of GPs: prediction requires K^{-1}, which costs O(N^3) and is impractical for N > 1000. Remedy: sparse approximations that choose m points X_m from X and work mainly with X_m, reducing the cost to O(m^2 N).

Subset of Data: simply replace K by the m×m kernel matrix K_mm on m chosen points.   (23)
The cost drops to O(m^3), but the remaining data points are ignored.

Sparse approximation (2): Subset of Regressors (Silverman 1985): approximate the full kernel matrix through m points,
K ≈ K_nm K_mm^{-1} K_mn = K',   (25)
where K_nm is the N×m cross-kernel matrix. Cost: O(m^2 N).
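A sketch of the low-rank approximation (25), with the m points chosen at random (random selection and the diagonal jitter are baseline assumptions; the method itself does not prescribe how to pick them):

```python
import numpy as np

def rbf(A, B, l=0.3):
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * l ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 1000)
m = 50
Xm = rng.choice(X, size=m, replace=False)      # m inducing points (random subset)

Knm = rbf(X, Xm)                               # N x m
Kmm = rbf(Xm, Xm) + 1e-8 * np.eye(m)           # jitter for invertibility
K_approx = Knm @ np.linalg.solve(Kmm, Knm.T)   # K' = K_nm K_mm^{-1} K_mn, eq. (25)
err = np.abs(K_approx - rbf(X, X)).max()       # how close K' is to the full K
```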

Many such methods can be viewed as structured approximations of K; see (Quiñonero-Candela & Rasmussen 2005) for a unifying treatment.

Variational sparse approximation (Titsias 2009). By Jensen's inequality,
log ∫ p(x) f(x) dx ≥ ∫ p(x) log f(x) dx.
Introduce m inducing inputs X_m with GP values f_m; then
log p(y) = log ∫∫ p(y, f, f_m) df df_m   (27)
         = log ∫∫ q(f, f_m) [ p(y, f, f_m) / q(f, f_m) ] df df_m   (28)
         ≥ ∫∫ q(f, f_m) log [ p(y, f, f_m) / q(f, f_m) ] df df_m   (29)
for any variational distribution q(f, f_m).

Variational approximation (2): factor p(y, f, f_m) = p(y | f) p(f | f_m) p(f_m) and choose q(f, f_m) = p(f | f_m) q(f_m). Then
log p(y) ≥ ∫∫ q(f, f_m) log [ p(y, f, f_m) / q(f, f_m) ] df df_m   (30)
= ∫∫ p(f | f_m) q(f_m) log [ p(y | f) p(f | f_m) p(f_m) / (p(f | f_m) q(f_m)) ] df df_m   (31)
= ∫∫ p(f | f_m) q(f_m) log [ p(y | f) p(f_m) / q(f_m) ] df df_m   (32)
= ∫ q(f_m) [ G(f_m) + log p(f_m)/q(f_m) ] df_m,  where G(f_m) = ∫ p(f | f_m) log p(y | f) df.   (33)

Variational approximation (3): computing G(f_m):
G(f_m) = ∫ p(f | f_m) log p(y | f) df   (34)
= ∫ p(f | f_m) [ −(N/2) log(2πσ^2) − ‖y − f‖^2 / (2σ^2) ] df   (35)
= ∫ p(f | f_m) [ −(N/2) log(2πσ^2) − (1/(2σ^2)) tr(y^T y − 2 y^T f + f^T f) ] df   (36)
= −(N/2) log(2πσ^2) − (1/(2σ^2)) [ y^T y − 2 y^T α + α^T α + tr( K_nn − K_nm K_mm^{-1} K_mn ) ]   (37)
= log N(y | α, σ^2 I) − (1/(2σ^2)) tr( K_nn − K'_nn ),   (38)
where α = E[f | f_m] = K_nm K_mm^{-1} f_m and K'_nn = K_nm K_mm^{-1} K_mn.

Variational approximation (4): substituting back,
log p(y) ≥ ∫ q(f_m) [ G(f_m) + log p(f_m)/q(f_m) ] df_m   (39)
= ∫ q(f_m) [ log N(y | α, σ^2 I) − (1/(2σ^2)) tr(K_nn − K'_nn) + log p(f_m)/q(f_m) ] df_m   (40)
= ∫ q(f_m) log [ N(y | α, σ^2 I) p(f_m) / q(f_m) ] df_m − (1/(2σ^2)) tr(K_nn − K'_nn).   (41)
Now apply the Jensen bound ∫ p(x) log f(x) dx ≤ log ∫ p(x) f(x) dx to the first term; the bound is tight at the optimal q(f_m).   (42)

Variational approximation (5): this yields
log ∫ N(y | α, σ^2 I) p(f_m) df_m − (1/(2σ^2)) tr(K_nn − K'_nn),  with K'_nn = K_nm K_mm^{-1} K_mn.   (43)
Since α = E[f | f_m] = K_nm K_mm^{-1} f_m is linear in f_m,
∫ N(y | α, σ^2 I) p(f_m) df_m = N(y | 0, σ^2 I + K'_nn),   (44)
and therefore
log p(y) ≥ log N(y | 0, σ^2 I + K'_nn) − (1/(2σ^2)) tr(K_nn − K'_nn).   (45)

Variational approximation (6): interpreting the bound:
log N(y | 0, σ^2 I + K'_nn) − (1/(2σ^2)) tr(K_nn − K'_nn) = log N(y | 0, σ^2 I + K'_nn) − (1/(2σ^2)) tr( cov(f | f_m) ).   (46)
Term 1 measures how well the inducing values f_m explain the data y; Term 2 penalizes the posterior uncertainty in f that f_m fails to capture (how well f_m explains K_nn). The bound is maximized over the inducing inputs and hyperparameters; a numerical sketch follows.
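A sketch that evaluates the bound (45) numerically; here X_m is just a random subset, and the data and hyperparameters are illustrative assumptions. Maximizing this quantity over X_m and the hyperparameters gives Titsias's method:

```python
import numpy as np
from scipy.stats import multivariate_normal

def rbf(A, B, l=0.3):
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * l ** 2))

rng = np.random.default_rng(0)
N, m, sigma2 = 200, 20, 0.01
X = rng.uniform(0, 1, N)
y = np.sin(2 * np.pi * X) + np.sqrt(sigma2) * rng.normal(size=N)
Xm = rng.choice(X, size=m, replace=False)      # inducing inputs (random subset)

Knm = rbf(X, Xm)
Kmm = rbf(Xm, Xm) + 1e-8 * np.eye(m)
Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)        # K'_nn = K_nm K_mm^{-1} K_mn

# Eq. (45): log N(y | 0, sigma^2 I + K'_nn) - tr(K_nn - K'_nn) / (2 sigma^2)
fit = multivariate_normal(np.zeros(N), sigma2 * np.eye(N) + Qnn).logpdf(y)
penalty = np.trace(rbf(X, X) - Qnn) / (2 * sigma2)
bound = fit - penalty
```

A real implementation would use the Woodbury identity so that the cost stays O(m^2 N); the direct N×N form above is only for clarity.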

GP classification vs. SVM: for y ∈ {+1, −1}, take p(y | f) = σ(y f) (logit) or Ψ(y f) (probit). The MAP estimate minimizes
−log p(y | f) p(f | X) = (1/2) f^T K^{-1} f − Σ_{i=1}^N log p(y_i | f_i).   (47)
For the SVM, with Kα = f and w = Σ_i α_i x_i, we have ‖w‖^2 = α^T K α = f^T K^{-1} f, so the SVM solves
minimize: (1/2) ‖w‖^2 + C Σ_{i=1}^N (1 − y_i f_i)_+
        = (1/2) f^T K^{-1} f + C Σ_{i=1}^N (1 − y_i f_i)_+.   (48)
Same regularizer; the SVM simply replaces the log loss with the hinge loss.

Loss functions (Rasmussen & Williams, Figure 6.3: relationships between GPs and other models): (a) comparison of the hinge loss max(1 − z, 0) with the logistic loss log(1 + exp(−z)) and the probit loss −log Φ(z); (b) the ε-insensitive loss g_ε(z) used in SVR. The SVM is thus close to MAP estimation of a GP classifier, but its hinge loss is not a proper likelihood; the GP classifier yields predictive probabilities.

GP and DP: the Gaussian process and the Dirichlet process play analogous roles. [Analogy] GP: for any finite inputs (x_1, x_2, …), the values (y_1, y_2, …) are jointly Gaussian. DP: for any finite partition (X_1, X_2, …), the random measure follows Dir(α(X_1), α(X_2), …). The GP is the smoother counterpart.


Probabilistic PCA (Tipping & Bishop 1999): a linear-Gaussian latent variable model,
y_n = W x_n + ε,  ε ~ N(0, σ^2 I).   (49)
Marginalizing x_n ~ N(0, I), the log-likelihood is
L = Σ_n log p(y_n) = Σ_n log N(y_n | 0, C)   (50)
  = −(N/2) ( D log 2π + log |C| + tr(C^{-1} S) ),   (51)
where C = W W^T + σ^2 I   (52)
and S = (1/N) Y Y^T.   (53)

Probabilistic PCA (2): setting ∂L/∂W = 0 gives the maximum-likelihood solution (up to rotation)
Ŵ = U_q (Λ_q − σ^2 I)^{1/2}  (for σ^2 = 0, Ŵ = U_q Λ_q^{1/2}),   (54)
where Λ_q, U_q are the top q eigenvalues and eigenvectors of the sample covariance of Y. The limit σ^2 = 0 recovers ordinary PCA.
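A sketch of the closed-form solution (54) via eigendecomposition, assuming (as in Tipping & Bishop) that σ^2 is estimated as the average of the discarded eigenvalues; the synthetic data are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
D, q, N = 5, 2, 500
W_true = rng.normal(size=(D, q))
Y = W_true @ rng.normal(size=(q, N)) + 0.1 * rng.normal(size=(D, N))

S = Y @ Y.T / N                           # sample covariance (data assumed zero-mean)
lam, U = np.linalg.eigh(S)                # eigenvalues in ascending order
lam, U = lam[::-1], U[:, ::-1]            # sort descending

sigma2 = lam[q:].mean()                   # ML noise: mean of discarded eigenvalues
W_ml = U[:, :q] @ np.sqrt(np.diag(lam[:q]) - sigma2 * np.eye(q))  # eq. (54), up to rotation
```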

Gaussian Process Latent Variable Models (GPLVM). Probabilistic PCA (Tipping & Bishop 1999) integrates out the latent x and optimizes W:
p(y_n | W, β) = ∫ p(y_n | x_n, W, β) p(x_n) dx_n,  p(Y | W, β) = Π_n p(y_n | W, β).   (55)
The GPLVM (Lawrence, NIPS 2003) instead puts a prior on W,
p(W) = Π_{i=1}^D N(w_i | 0, α^{-1} I),   (56)
and integrates out W while keeping the latent X:
p(Y | X, β) = ∫ p(Y | X, W, β) p(W) dW   (57)
= 1 / ( (2π)^{DN/2} |K|^{D/2} ) exp( −(1/2) tr(K^{-1} Y Y^T) ).   (58)

GPLVM (2): the dual of PPCA:
log p(Y | X, β) = −(DN/2) log(2π) − (D/2) log |K| − (1/2) tr(K^{-1} Y Y^T)   (59)
K = α X X^T + β^{-1} I,   (60)
X = [x_1, …, x_N]^T.   (61)
Maximizing over X:
∂L/∂X = α K^{-1} Y Y^T K^{-1} X − α D K^{-1} X = 0   (62)
⇒ X = (1/D) Y Y^T K^{-1} X,   (63)
solved by X = U_Q L V^T, where U_Q (N×Q) holds the top Q eigenvectors of Y Y^T with eigenvalues λ_1 ≥ … ≥ λ_Q, V is an arbitrary rotation, and L = diag(l_1, …, l_Q) with l_i = ( λ_i/(αD) − 1/(αβ) )^{1/2}.

GPLVM (3): nonlinear kernel. Keep the same objective
log p(Y | X, β) = −(DN/2) log(2π) − (D/2) log |K| − (1/2) tr(K^{-1} Y Y^T),
K = α X X^T + β^{-1} I,   (64)
X = [x_1, …, x_N]^T,   (65)
but replace the linear kernel with a nonlinear one, e.g.
k(x_n, x_m) = α exp( −(γ/2) ‖x_n − x_m‖^2 ) + δ(n, m) β^{-1}.   (66)
There is no closed form now; optimize by gradients,
∂L/∂K = (1/2) ( K^{-1} Y Y^T K^{-1} − D K^{-1} ),  ∂L/∂x_{n,j} = tr( (∂L/∂K) (∂K/∂x_{n,j}) ),
using Scaled Conjugate Gradient. GPLVM in MATLAB: neill/gplvm/
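A sketch of one gradient evaluation for the RBF-kernel GPLVM via the chain rule above; a plain gradient-ascent loop stands in for scaled conjugate gradients, and the data and step size are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, alpha=1.0, gamma=1.0, beta=100.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return alpha * np.exp(-0.5 * gamma * d2) + np.eye(len(X)) / beta

def gplvm_grad(X, Y, alpha=1.0, gamma=1.0, beta=100.0):
    """Gradient of log p(Y|X,beta), eq. (59), w.r.t. the latent X (RBF kernel (66))."""
    D = Y.shape[1]
    K = rbf_kernel(X, alpha, gamma, beta)
    Kinv = np.linalg.inv(K)
    G = 0.5 * (Kinv @ Y @ Y.T @ Kinv - D * Kinv)          # dL/dK
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    E = alpha * np.exp(-0.5 * gamma * d2)                 # kernel without the noise term
    C = -2.0 * gamma * G * E                              # dK_nm/dx_n = -gamma E_nm (x_n - x_m); 2 from symmetry
    return C.sum(1, keepdims=True) * X - C @ X            # dL/dx_n = sum_m C_nm (x_n - x_m)

rng = np.random.default_rng(0)
Y = rng.normal(size=(30, 5))
X = 1e-2 * rng.normal(size=(30, 2))      # small random init (PCA init is used in practice)
for _ in range(100):                     # plain gradient ascent as a stand-in
    X += 1e-3 * gplvm_grad(X, Y)
```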

GPLVM (4): results. Figure 1: Visualisation of the oil data with (a) PCA (a linear GPLVM) and (b) a GPLVM using an RBF kernel. Crosses, circles and plus signs represent stratified, annular and homogeneous flows respectively. The greyscales in plot (b) indicate the precision with which the manifold was expressed in data-space for that latent point (the optimized kernel parameter values were given in the original figure). The GPLVM separates the classes better than PPCA and also provides a confidence; training is O(N^3), mitigated with an active set.

GPLVM (4): caveat. The latent X is initialized with PCA; Neil Lawrence's code adds 1e-2*randn(N,dims) noise to the initialization and then optimizes with scaled conjugate gradients.

GPLVM (5)-(6): further example visualizations, including a comparison with PCA (figures only).

GPLVM (7): MCMC. Gradient-based optimization can be trapped in local optima; MCMC over the latent X explores more globally (figure: samples with step size 0.2, 400 iterations; cf. GPDM below).

GPLVM (8): MCMC on the Oil Flow data: local vs. global structure of the sampled latent X (figure).

Gaussian Process Dynamical Model (GPDM) (Wang, Fleet & Hertzmann 2005). Code: jmwang/gpdm/. A GPLVM in which the latent x_n evolves from x_{n-1} through another GP, i.e. a nonlinear dynamical system with GP transition and observation maps.

GPDM (2): Formulation (1):
x_t = f(x_{t−1}; A) + ε_{x,t},  f ~ GP(0, K_x)   (67)
y_t = g(x_t; B) + ε_{y,t},     g ~ GP(0, K_y)   (68)
p(Y, X | α, β) = p(Y | X, β) p(X | α), with
p(Y | X, β) = |W|^N / ( (2π)^{ND/2} |K_Y|^{D/2} ) exp( −(1/2) tr(K_Y^{-1} Y W^2 Y^T) ),   (69)
the same form as the GPLVM, with per-dimension scales W; K_Y is an RBF kernel.

GPDM (3): Formulation (2): a first-order Markov model on X, with the transition map A integrated out:
p(X | α) = p(x_1) ∫ Π_{t=2}^N p(x_t | x_{t−1}, A, α) p(A | α) dA   (70)
= p(x_1) 1 / ( (2π)^{d(N−1)/2} |K_X|^d ) exp( −(1/2) tr(K_X^{-1} X_out X_out^T) ),   (71)
where X_out = [x_2, …, x_N]^T and K_X is the kernel matrix on x_1 … x_{N−1}, using an RBF + linear kernel:
k(x, x') = α_1 exp( −(α_2/2) ‖x − x'‖^2 ) + α_3 x^T x' + α_4^{-1} δ(x, x').   (72)
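The RBF-plus-linear dynamics kernel (72) in code (a small sketch; the default hyperparameter values are illustrative assumptions, and the α_4^{-1} δ term adds noise only on the diagonal of the symmetric case):

```python
import numpy as np

def gpdm_dynamics_kernel(X, X2=None, a1=1.0, a2=1.0, a3=0.1, a4=100.0):
    """k(x,x') = a1 exp(-a2/2 ||x-x'||^2) + a3 x^T x' + delta(x,x')/a4, eq. (72)."""
    same = X2 is None
    if same:
        X2 = X
    d2 = ((X[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    K = a1 * np.exp(-0.5 * a2 * d2) + a3 * (X @ X2.T)
    if same:
        K += np.eye(len(X)) / a4       # the delta(x, x') noise term
    return K
```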

GPDM (4): Formulation (3): the joint model
p(Y, X, α, β) = p(Y | X, β) p(X | α) p(α) p(β),   (73)
with priors p(α) ∝ Π_i α_i^{-1}, p(β) ∝ Π_i β_i^{-1}.   (74)
The negative log-posterior to minimize is
−log p(Y, X, α, β) = (1/2) tr(K_X^{-1} X_out X_out^T) + (1/2) tr(K_Y^{-1} Y W^2 Y^T) + (d/2) log |K_X| + (D/2) log |K_Y| − N log |W| + Σ_j log α_j + Σ_j log β_j + const.   (75), (76)

Gaussian Process Density Sampler (1). (Figure panels: (a) l_x = 1, l_y = 1, α = 1; (b) l_x = 1, l_y = 1, α = 10; (c) l_x = 0.2, l_y = 0.2, α = 5; (d) l_x = 0.1, l_y = 2, α = 5.) Can a GP prior define a distribution over densities?
p(x) = (1/Z(f)) Φ(f(x)) π(x),   (77)
where f(x) ~ GP; π(x) is a base density; and Φ maps to [0, 1], e.g. Φ(x) = 1/(1 + exp(−x)).

Gaussian Process Density Sampler (2): sampling by rejection:
p(x) = (1/Z(f)) Φ(f(x)) π(x)   (78)
1. Draw x ~ π(x).
2. Draw r ~ Uniform[0, 1].
3. If r < Φ(f(x)) then accept x; else reject.
With N accepted and M rejected points, the normalizer Z(f) never needs to be evaluated; since f is an infinite-dimensional GP sample, it is instantiated only where needed, jointly with the accept/reject variables, via MCMC. Related to infinite mixtures.
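The rejection step in code, assuming for illustration a fixed function f (in the real GPDS, f itself is sampled from the GP within MCMC, since Z(f) is intractable; the base density π = N(0,1) and f below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = lambda z: 1.0 / (1.0 + np.exp(-z))   # logistic squashing to [0, 1]
f = lambda x: np.sin(3 * x)                # stand-in for a GP sample (assumption)

samples = []
while len(samples) < 1000:
    x = rng.normal()                       # 1. draw x ~ pi(x), here pi = N(0, 1)
    r = rng.uniform()                      # 2. draw r ~ Uniform[0, 1]
    if r < Phi(f(x)):                      # 3. accept with probability Phi(f(x))
        samples.append(x)
```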

Summary: the Gaussian process gives Bayesian nonparametric regression and classification directly over functions, with kernel hyperparameters learned from data, and extends to latent-variable and dynamical models (GPLVM, GPDM).

Literature:
- Gaussian Process Dynamical Models. J. Wang, D. Fleet, and A. Hertzmann. NIPS 2005. Code: jmwang/gpdm/
- Gaussian Process Latent Variable Models for Visualization of High Dimensional Data. Neil D. Lawrence. NIPS 2003.
- The Gaussian Process Density Sampler. Ryan Prescott Adams, Iain Murray and David MacKay. NIPS 2008.
- Archipelago: Nonparametric Bayesian Semi-Supervised Learning. Ryan Prescott Adams and Zoubin Ghahramani. ICML 2009.

- Pattern Recognition and Machine Learning, Chapter 6. Christopher Bishop. Springer, 2006.
- Gaussian Processes for Machine Learning. Rasmussen and Williams. MIT Press, 2006.
- Gaussian Processes: A Replacement for Supervised Neural Networks?. David MacKay. Lecture notes at NIPS 1997.
- Videolectures.net: Gaussian Process Basics. mackay gpb/
- A Japanese introduction (1): tmasada/ pdf (link truncated in transcription)

Codes: GPML Toolbox (in MATLAB); GPy (in Python).
