On the Limited Sample Effect of the Optimum Classifier by Bayesian Approach: The Case of Independent Sample Size for Each Class. Xuexian HAN, Tetsushi WAKABAYASHI, Fumitaka KIMURA, and Yasuji MIYAKE


Journal article (学術雑誌論文): ベイズアプローチによる最適識別系の有限標本効果に関する考察 : 学習標本の大きさがクラス間で異なる場合 (論文小特集: パターン認識のための学習 : 基礎と応用) / On the Limited Sample Effect of the Optimum Classifier by Bayesian Approach: The Case of Independent Sample Size for Each Class (Special Section on Learning in Pattern Recognition: Fundamentals and Applications). Han, Xuexian (韓, 雪仙); Wakabayashi, Tetsushi (若林, 哲史); Kimura, Fumitaka (木村, 文隆); Miyake, Yasuji (三宅, 康二). The Transactions of the Institute of Electronics, Information and Communication Engineers, D-II (Information and Systems, II: Pattern Processing).

On the Limited Sample Effect of the Optimum Classifier by Bayesian Approach: The Case of Independent Sample Size for Each Class

Xuexian HAN, Tetsushi WAKABAYASHI, Fumitaka KIMURA, and Yasuji MIYAKE, Faculty of Engineering, Mie University, Tsu-shi, Japan

[Introductory text in Japanese: discusses prior work on the limited sample effect ([2], [5], [8], among others) and the Bayesian learning approaches of Aitchison and Dunsmore [7] and Keehn [6].]

In the Bayesian approach, the class-conditional density of an input X given the learning sample χ of the class is evaluated as the predictive density

$$p(X \mid \chi) = \int p(X \mid \theta)\, p(\theta \mid \chi)\, d\theta, \qquad (1)$$

where p(X | θ) is the class-conditional density with parameter θ and p(θ | χ) is the posterior density of θ obtained from a prior p(θ) [9]. The conventional plug-in approach instead substitutes a point estimate θ̂(χ) obtained from χ into the density,

$$p(X) \simeq p(X \mid \hat\theta(\chi)). \qquad (2)$$

When p(X | θ) is Gaussian and a conjugate prior is used, the predictive density (1) takes a multivariate t-type form [6]:

$$p(X \mid \chi) = (N\pi)^{-n/2}\,|\hat\Sigma|^{-1/2}\,\frac{\Gamma\!\left(\frac{N+1}{2}\right)}{\Gamma\!\left(\frac{N-n+1}{2}\right)}\left\{1 + \frac{1}{N}(X-M)^{T}\hat\Sigma^{-1}(X-M)\right\}^{-\frac{N+1}{2}}, \qquad (3)$$

$$\hat\Sigma = (1-\alpha)\Sigma + \alpha\Sigma_{0}, \qquad \alpha = \frac{N_{0}}{N+N_{0}},$$

where X is the n-dimensional feature vector, N the learning sample size of the class, M and Σ the sample mean vector and sample covariance matrix, Σ₀ the prior covariance matrix, N₀ the prior (pseudo) sample size, and Γ the gamma function. Taking −2 log of (3) multiplied by the a priori class probability P(ω) gives the discriminant function

$$g(X) = -2\ln\{p(X\mid\chi)P(\omega)\} = (N+1)\ln\left\{1 + \frac{1}{N}(X-M)^{T}\hat\Sigma^{-1}(X-M)\right\} + \ln|\hat\Sigma| - 2\ln D - 2\ln P(\omega), \qquad (4)$$

$$D = (N\pi)^{-n/2}\,\frac{\Gamma\!\left(\frac{N+1}{2}\right)}{\Gamma\!\left(\frac{N-n+1}{2}\right)},$$

and X is assigned to the class with the smallest g(X). When N₀ = 0 (no prior information), α = 0 and Σ̂ = Σ. In the following, the prior covariance is taken to be Σ₀ = σ²I, where I is the identity matrix.
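As a concrete illustration of the discriminant in Eqs. (3)-(4) as reconstructed above, the following NumPy sketch evaluates g(X) for one class. It is a minimal sketch, not the authors' code; the function and argument names are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def bayes_discriminant(X, M, Sigma, Sigma0, N, N0, prior):
    """g(X) of Eq. (4): -2 log of the predictive density times the class prior P(omega).

    X: (n,) input vector; M, Sigma: sample mean and covariance of the class;
    Sigma0: prior covariance (e.g. sigma**2 * np.eye(n)); N, N0: learning and
    prior sample sizes (N >= n assumed); prior: a priori class probability.
    """
    n = X.shape[0]
    alpha = N0 / (N + N0)                         # weight of the prior covariance, Eq. (3)
    Sig_hat = (1 - alpha) * Sigma + alpha * Sigma0
    d = X - M
    quad = d @ np.linalg.solve(Sig_hat, d)        # (X-M)^T Sig_hat^{-1} (X-M)
    # log D with D = (N*pi)^(-n/2) * Gamma((N+1)/2) / Gamma((N-n+1)/2)
    log_D = -0.5 * n * np.log(N * np.pi) + gammaln((N + 1) / 2) - gammaln((N - n + 1) / 2)
    logdet = np.linalg.slogdet(Sig_hat)[1]
    return (N + 1) * np.log1p(quad / N) + logdet - 2 * log_D - 2 * np.log(prior)
```

A test vector is assigned to the class whose g(X) is smallest; with N₀ = 0 the combined covariance reduces to the sample covariance, and for large N the first term approaches the quadratic form of the plug-in discriminant (8) below.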

When the true covariance matrix is nearly singular, [11] derives a modified discriminant function in which only the k dominant eigenvectors of Σ are retained. With Σ₀ = σ²I, (4) is approximated by (see the Appendix)

$$g(X) = (N+1)\ln\left[1 + \frac{1}{N\alpha\sigma^{2}}\left\{\|X-M\|^{2} - \sum_{i=1}^{k}\frac{(1-\alpha)\lambda_{i}}{(1-\alpha)\lambda_{i}+\alpha\sigma^{2}}\{\Phi_{i}^{T}(X-M)\}^{2}\right\}\right] + \sum_{i=1}^{k}\ln\{(1-\alpha)\lambda_{i}+\alpha\sigma^{2}\} + (n-k)\ln(\alpha\sigma^{2}) - 2\ln P(\omega), \qquad (5)$$

where λᵢ and Φᵢ are the i-th eigenvalue (in decreasing order) and the corresponding eigenvector of Σ. The quadratic part of (5), up to a positive factor, is

$$g(X) = \|X-M\|^{2} - \sum_{i=1}^{k}\frac{(1-\alpha)\lambda_{i}}{(1-\alpha)\lambda_{i}+\alpha\sigma^{2}}\{\Phi_{i}^{T}(X-M)\}^{2}, \qquad (6)$$

the modified projection distance. If, moreover, λᵢ ≫ (N₀/N)σ² for i ≤ k, the weights approach one and (6) reduces to the projection distance [12]

$$g(X) = \sum_{i=k+1}^{n}\{\Phi_{i}^{T}(X-M)\}^{2} = \|X-M\|^{2} - \sum_{i=1}^{k}\{\Phi_{i}^{T}(X-M)\}^{2}, \qquad (7)$$

i.e. the squared distance from X to the k-dimensional K-L subspace of the class (Fig. 1).

Fig. 1  Decision boundaries of projection distance and modified projection distance.

For comparison, the ordinary (plug-in) quadratic discriminant function is

$$g(X) = (X-M)^{T}\Sigma^{-1}(X-M) + \ln|\Sigma| - 2\ln P(\omega). \qquad (8)$$
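The truncated forms (5)-(7) only require the k dominant eigenpairs of the sample covariance. A minimal sketch, assuming the reconstruction of (5) and (7) given above (names are illustrative, not from the paper):

```python
import numpy as np

def modified_bayes_discriminant(X, M, Sigma, sigma2, N, N0, k, prior):
    """Eq. (5): discriminant using only the k dominant eigenpairs of Sigma,
    with prior covariance Sigma0 = sigma2 * I."""
    n = X.shape[0]
    alpha = N0 / (N + N0)
    lam, Phi = np.linalg.eigh(Sigma)                 # eigenvalues in ascending order
    lam, Phi = lam[::-1][:k], Phi[:, ::-1][:, :k]    # keep the k largest
    d = X - M
    proj2 = (Phi.T @ d) ** 2                         # {Phi_i^T (X-M)}^2, i = 1..k
    w = (1 - alpha) * lam / ((1 - alpha) * lam + alpha * sigma2)
    quad = (d @ d - np.sum(w * proj2)) / (N * alpha * sigma2)
    return ((N + 1) * np.log1p(quad)
            + np.sum(np.log((1 - alpha) * lam + alpha * sigma2))
            + (n - k) * np.log(alpha * sigma2)
            - 2 * np.log(prior))

def projection_distance(X, M, Sigma, k):
    """Eq. (7): squared distance from X to the k-dimensional K-L subspace of the class."""
    _, Phi = np.linalg.eigh(Sigma)
    Phi = Phi[:, ::-1][:, :k]
    d = X - M
    return d @ d - np.sum((Phi.T @ d) ** 2)
```

The second function corresponds to the limit in which the weights w in the first one approach 1.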

Fig. 2  Theoretical mean error rate (%) vs. sample size N₁ with fixed total sample size (N₁ + N₂ = 6, σ₁ = σ₂ = 1.0, univariate case).

Fig. 3  Theoretical mean error rate (%) vs. sample size N₁ with fixed total sample size (N₁ + N₂ = 6, σ₁ = 4.0, σ₂ = 0.5, univariate case).

Fig. 4  Mean error rate (%) vs. sample size N₁ with fixed total sample size (N₁ + N₂ = 40, 8-variate case).

The 8-dimensional two-class data used in the simulation experiment are specified by

$$\operatorname{diag}\Sigma = (8.41,\ 12.06,\ 0.12,\ 0.22,\ 1.49,\ 1.77,\ 0.35,\ 2.73), \qquad (9)$$

$$M_{1} = (0, 0, \ldots, 0)^{T}, \qquad M_{2} = (3.86,\ 3.10,\ 0.84,\ 0.84,\ 1.64,\ 1.08,\ 0.26,\ 0.01)^{T}, \qquad (10)$$

and the learning sample sizes are taken from N = 3, 6, 9, 12, 20, 32, 48, 64, 100, 144, 196, 256 as in [11] (Tables 1 and 2).

Table 1  Sample size of each class (case of nearly common learning sample size); the last column gives the total.

Table 2  Sample size of each class (case of independent learning sample size); the last column gives the total.

For the handwritten numeral experiment, the feature vector of each character image is obtained by a multi-step procedure that includes gradient computation with the Roberts operator, and a power transformation y = x^u is then applied to the features; the same set of per-class learning sample sizes is used.
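An experiment of this type can be reproduced by drawing learning and test samples from the two Gaussian classes of Eqs. (9)-(10) and measuring the error rate of, e.g., the plug-in quadratic discriminant (8). The sketch below is an illustration only: it uses the class statistics as reconstructed above and takes the class-1 covariance to be the identity matrix, which is an assumption on my part rather than something stated in the surviving text.

```python
import numpy as np

rng = np.random.default_rng(0)

M1 = np.zeros(8)
M2 = np.array([3.86, 3.10, 0.84, 0.84, 1.64, 1.08, 0.26, 0.01])            # Eq. (10)
S1 = np.eye(8)                                                             # assumption: class 1 ~ N(0, I)
S2 = np.diag([8.41, 12.06, 0.12, 0.22, 1.49, 1.77, 0.35, 2.73])            # Eq. (9)

def quadratic_g(x, M, S, prior=0.5):
    """Plug-in quadratic discriminant of Eq. (8)."""
    d = x - M
    return d @ np.linalg.solve(S, d) + np.linalg.slogdet(S)[1] - 2 * np.log(prior)

def error_rate(N1, N2, n_test=2000):
    """One trial: estimate (M, Sigma) from N1/N2 learning samples, test on fresh data."""
    est = []
    for M, S, N in [(M1, S1, N1), (M2, S2, N2)]:
        L = rng.multivariate_normal(M, S, size=N)
        est.append((L.mean(axis=0), np.cov(L, rowvar=False)))
    errors, total = 0, 0
    for label, (M, S) in enumerate([(M1, S1), (M2, S2)]):
        T = rng.multivariate_normal(M, S, size=n_test)
        pred = [np.argmin([quadratic_g(x, m, s) for m, s in est]) for x in T]
        errors += sum(p != label for p in pred)
        total += n_test
    return errors / total

print(error_rate(N1=20, N2=20))   # averaging over many trials gives one point of a curve like Fig. 4
```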

Fig. 5  Recognition rate of handwritten numeral recognition (case of nearly common learning sample size).

The handwritten numeral data are those used in [11], [13], [14]; the data sets contain 14,946, 44,838, and 9,877 samples.

Fig. 6  Recognition rate of optimum discriminant function (case of independent learning sample size).

The differences in recognition rate are of the order of 0.1 to 0.4% and 0.1 to 0.5% (cf. [15]).

Table 3  Ratio of computation cost.

Fig. 7  Recognition rate of handwritten numeral recognition (case of independent learning sample size).

[Discussion in Japanese, citing [2], [16], with points (1) through (5) on the choice of N₀ and Σ₀ (= σ²I).]

[Concluding remarks in Japanese (Sect. 6), summarizing findings (1) through (5).]

References
[2] J.M. Van Campenhout, "On the peaking of Hughes mean recognition accuracy: The resolution of an apparent paradox," IEEE Trans. Syst., Man & Cybern., vol.SMC-8, no.5, May 1978.
[3] W.G. Waller and A.K. Jain, "On the monotonicity of the performance of Bayesian classifiers," IEEE Trans. Inform. Theory, vol.IT-24, 1978.
[4] J.M. Van Campenhout, "Topics in Measurement Selection," Handbook of Statistics, vol.2, North-Holland Publishing Company, 1982.
[5] D. Lindley, "The Bayesian approach," Scand. J. Statist., vol.5, pp.1-26, 1978.
[6] D.G. Keehn, "A note on learning for Gaussian properties," IEEE Trans. Inform. Theory, vol.IT-11, no.1, pp.126-132, Jan. 1965.
[7] B.D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, 1996.
[8] S.J. Raudys and A.K. Jain, "Small sample size effects in statistical pattern recognition: Recommendations for practitioners," IEEE Trans. Pattern Analysis & Machine Intelligence, vol.13, no.3, pp.252-264, March 1991.
[9] R.O. Duda and P.E. Hart, Pattern Classification and Scene Analysis, John Wiley & Sons, Inc., New York, 1973.
[10] (in Japanese), vol.77, no.8, Aug.
[11] (in Japanese), IEICE Trans., vol.J77-D-II, no.10, Oct. 1994.
[12] (in Japanese), vol.4, no.1, Jan.
[13] (in Japanese), IEICE Technical Report, PRU9-33, Sept.
[14] (in Japanese), IEICE Technical Report, PRU93-46, Sept.
[15] K. Fukunaga and R.R. Hayes, "Effects of sample size in classifier design," IEEE Trans. Pattern Analysis & Machine Intelligence, vol.PAMI-11, no.8, pp.873-885, Aug. 1989.
[16] G.F. Hughes, "On the mean accuracy of statistical pattern recognizers," IEEE Trans. Inform. Theory, vol.IT-14, no.1, pp.55-63, Jan. 1968.

Appendix

With Σ₀ = σ²I, the combined covariance matrix is

$$\hat\Sigma = (1-\alpha)\Sigma + \alpha\sigma^{2}I. \qquad (A\,1)$$

Since

$$\{(1-\alpha)\Sigma + \alpha\sigma^{2}I\}\Phi_{i} = (1-\alpha)\Sigma\Phi_{i} + \alpha\sigma^{2}\Phi_{i} = \{(1-\alpha)\lambda_{i} + \alpha\sigma^{2}\}\Phi_{i} \quad (i = 1, 2, \ldots, n), \qquad (A\,2)$$

Σ̂ has the same eigenvectors Φᵢ as Σ, with eigenvalues (1-α)λᵢ + ασ² (i = 1, 2, ..., n). Hence

$$Y = (X-M)^{T}\hat\Sigma^{-1}(X-M) = \sum_{i=1}^{n}\frac{1}{(1-\alpha)\lambda_{i}+\alpha\sigma^{2}}\{\Phi_{i}^{T}(X-M)\}^{2}. \qquad (A\,3)$$
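A quick numerical check of (A1)-(A3) (a sketch of my own, not from the paper): the eigenvectors of the combined covariance coincide with those of Σ, and its eigenvalues are (1-α)λᵢ + ασ².

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, sigma2 = 5, 0.3, 0.8

A = rng.standard_normal((n, n))
Sigma = A @ A.T                                   # an arbitrary covariance matrix
lam, Phi = np.linalg.eigh(Sigma)                  # eigenpairs of Sigma

Sig_hat = (1 - alpha) * Sigma + alpha * sigma2 * np.eye(n)            # (A1)

# (A2): Sig_hat Phi_i = {(1-alpha) lam_i + alpha sigma^2} Phi_i
assert np.allclose(Sig_hat @ Phi, Phi * ((1 - alpha) * lam + alpha * sigma2))

# (A3): the quadratic form written in the eigenbasis of Sigma
X, M = rng.standard_normal(n), np.zeros(n)
d = X - M
Y_direct = d @ np.linalg.solve(Sig_hat, d)
Y_eigen = np.sum((Phi.T @ d) ** 2 / ((1 - alpha) * lam + alpha * sigma2))
assert np.allclose(Y_direct, Y_eigen)
```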

For i > k the eigenvalues λᵢ of Σ are assumed to be negligibly small, so that (1-α)λᵢ ≪ ασ², and (A3) is approximated by

$$Y \simeq \sum_{i=1}^{k}\frac{1}{(1-\alpha)\lambda_{i}+\alpha\sigma^{2}}\{\Phi_{i}^{T}(X-M)\}^{2} + \frac{1}{\alpha\sigma^{2}}\sum_{i=k+1}^{n}\{\Phi_{i}^{T}(X-M)\}^{2}, \qquad (A\,4)$$

which, using $\sum_{i=k+1}^{n}\{\Phi_{i}^{T}(X-M)\}^{2} = \|X-M\|^{2} - \sum_{i=1}^{k}\{\Phi_{i}^{T}(X-M)\}^{2}$, can be rewritten as

$$Y \simeq \frac{1}{\alpha\sigma^{2}}\left[\|X-M\|^{2} - \sum_{i=1}^{k}\frac{(1-\alpha)\lambda_{i}}{(1-\alpha)\lambda_{i}+\alpha\sigma^{2}}\{\Phi_{i}^{T}(X-M)\}^{2}\right]. \qquad (A\,5)$$

Similarly,

$$\ln|\hat\Sigma| = \sum_{i=1}^{n}\ln\{(1-\alpha)\lambda_{i}+\alpha\sigma^{2}\} \qquad (A\,6)$$

$$\simeq \sum_{i=1}^{k}\ln\{(1-\alpha)\lambda_{i}+\alpha\sigma^{2}\} + (n-k)\ln(\alpha\sigma^{2}). \qquad (A\,7)$$

Substituting (A5)-(A7) and α = N₀/(N + N₀) into (4) gives (5).

Next, consider the theoretical error rate in the univariate two-class case. When the two classes have the same learning sample size N and equal a priori probabilities, comparing the discriminant functions (4) of the two classes is equivalent to comparing

$$g_{i}(x) = \sigma_{i}^{2/(N+1)}\left\{1 + \frac{1}{N}\left(\frac{x-m_{i}}{\sigma_{i}}\right)^{2}\right\} \quad (i = 1, 2). \qquad (A\,8)$$

The decision boundaries are the roots α ≤ β of the quadratic equation h(x) = g₁(x) − g₂(x) = 0; (A9) gives the boundary for σ₁ = σ₂ and (A10) for σ₁ ≠ σ₂. Assuming σ₁ ≥ σ₂ and equal a priori probabilities, class ω₂ is decided for α < x < β and class ω₁ otherwise, so the error rate is

$$\varepsilon = P(\omega_{1})\varepsilon_{1} + P(\omega_{2})\varepsilon_{2} = P(\omega_{1})P(\mathrm{error}\mid\chi,\omega_{1}) + P(\omega_{2})P(\mathrm{error}\mid\chi,\omega_{2}) = A + B + C,$$

$$A = \int_{-\infty}^{\alpha} p(x\mid\chi,\omega_{2})P(\omega_{2})\,dx = \tfrac{1}{2}\,\Phi\!\left(\frac{\alpha-m_{2}}{\sigma_{2}}\right),$$

$$B = \int_{\alpha}^{\beta} p(x\mid\chi,\omega_{1})P(\omega_{1})\,dx = \tfrac{1}{2}\left\{\Phi\!\left(\frac{\beta-m_{1}}{\sigma_{1}}\right) - \Phi\!\left(\frac{\alpha-m_{1}}{\sigma_{1}}\right)\right\},$$

$$C = \int_{\beta}^{\infty} p(x\mid\chi,\omega_{2})P(\omega_{2})\,dx = \tfrac{1}{2}\left\{1 - \Phi\!\left(\frac{\beta-m_{2}}{\sigma_{2}}\right)\right\}, \qquad (A\,11)$$

where Φ(x₀) is the distribution function corresponding to the density t(x),

$$\Phi(x_{0}) = \int_{-\infty}^{x_{0}} t(x)\,dx. \qquad (A\,12)$$
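The three terms of (A11) translate directly into code. In the sketch below the distribution function Φ of (A12) is taken to be the standard normal one, which is an assumption on my part (the surviving text only defines Φ through the density t(x)); the priors are 1/2 as in (A11).

```python
from scipy.stats import norm

def mean_error(a, b, m1, s1, m2, s2, cdf=norm.cdf, P1=0.5, P2=0.5):
    """Error rate of Eq. (A11): class omega_2 is decided on (a, b), omega_1 elsewhere (a <= b)."""
    A = P2 * cdf((a - m2) / s2)                           # omega_2 mass below a
    B = P1 * (cdf((b - m1) / s1) - cdf((a - m1) / s1))    # omega_1 mass inside (a, b)
    C = P2 * (1.0 - cdf((b - m2) / s2))                   # omega_2 mass above b
    return A + B + C

# Example: two unit-variance classes with means 1.0 and 0.0; pushing a far to the left
# reduces (a, b) to a single boundary at 0.5 and recovers the usual Bayes error.
print(mean_error(a=-10.0, b=0.5, m1=1.0, s1=1.0, m2=0.0, s2=1.0))
```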

When the learning sample sizes N₁ and N₂ differ between the classes, the simplification (A8) is no longer possible, and the class-conditional predictive densities themselves are compared:

$$g_{i}(x) = \frac{D_{i}}{\sigma_{i}}\left\{1 + \frac{1}{N_{i}}\left(\frac{x-m_{i}}{\sigma_{i}}\right)^{2}\right\}^{-\frac{N_{i}+1}{2}}, \qquad D_{i} = (N_{i}\pi)^{-1/2}\,\frac{\Gamma\!\left(\frac{N_{i}+1}{2}\right)}{\Gamma\!\left(\frac{N_{i}}{2}\right)} \quad (i = 1, 2). \qquad (A\,13)$$

The decision boundaries are the roots of h(x) = g₁(x) − g₂(x) = 0, which are obtained numerically by the Newton iteration

$$x_{k+1} = x_{k} - \frac{h(x_{k})}{h'(x_{k})},$$

where h'(x) is the derivative of h(x), given in closed form in (A14). The mean error rate is then evaluated as in (A11).
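A sketch of the boundary search for the unequal-sample-size case, assuming the univariate predictive density (A13) as reconstructed above; for brevity the closed-form derivative of (A14) is replaced by a central difference, which is an implementation choice of mine rather than the paper's.

```python
import numpy as np
from scipy.special import gammaln

def g_uni(x, m, s, N):
    """Univariate predictive density of Eq. (A13)."""
    logD = -0.5 * np.log(N * np.pi) + gammaln((N + 1) / 2) - gammaln(N / 2)
    return np.exp(logD) / s * (1 + ((x - m) / s) ** 2 / N) ** (-(N + 1) / 2)

def newton_boundary(m1, s1, N1, m2, s2, N2, x0, iters=100, tol=1e-10):
    """Newton iteration x_{k+1} = x_k - h(x_k)/h'(x_k) for h(x) = g1(x) - g2(x) = 0."""
    h = lambda x: g_uni(x, m1, s1, N1) - g_uni(x, m2, s2, N2)
    x, eps = x0, 1e-6
    for _ in range(iters):
        hp = (h(x + eps) - h(x - eps)) / (2 * eps)   # numerical derivative in place of (A14)
        step = h(x) / hp
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: classes with different spreads and sample sizes; start near the midpoint.
print(newton_boundary(m1=0.0, s1=2.0, N1=5, m2=1.0, s2=0.5, N2=50, x0=0.6))
```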
