21世紀の統計科学 <Vol. III>


[Front matter and table of contents: the Japanese text was lost in extraction. The surviving keywords indicate that this volume of the series (Vol. I, Vol. II, Vol. III; Japan Statistical Society) covers computational statistics, statistical multivariate analysis, and statistical time series analysis, including B. Efron's bootstrap, the EM (expectation-maximization) algorithm and its variants (GEM, ECM, ECME, SEM; the Louis and Oakes methods), Bayesian statistics and posterior distributions, MCMC and Metropolis-Hastings (MH) sampling, statistical filtering, BLUP and EBLUP, and ARMA/ARIMA/ARFIMA time series models.]

[Chapter opening page; author contact: takemura@stat.t.u-tokyo.ac.jp]

[Section 1 (introduction): the Japanese text was lost in extraction. It surveys developments in mathematical statistics since the 20th century, citing Wald and Lehmann and Romano ([13]), and empirical likelihood (Owen [18]).]

[Further introductory remarks, citing [17] (propensity scores) and the causal modeling literature of the 1950s (Simon [22], Blalock [3]), were lost in extraction.]

2.1 [Sums of uniform random variables]

Consider the uniform density on the unit interval,
f(x) = f_1(x) = 1 if 0 <= x <= 1, and 0 otherwise.
Let X_1, ..., X_n be independent random variables with this density and put Y = X_1 + ... + X_n. For n = 2, P(X_1 + X_2 <= c) is the area of the part of the unit square [0,1]^2 cut off by the line x_1 + x_2 = c.

The density of Y = X_1 + X_2 is the triangular density
f_2(y) = y if 0 <= y <= 1, f_2(y) = 2 - y if 1 < y <= 2, and 0 otherwise. (2.1)
For X_1 + X_2 + X_3, convolution gives
f_3(y) = \int_0^1 f_2(y - x) f_1(x) dx = \int_0^1 f_2(y - x) dx = \int_{y-1}^{y} f_2(u) du. (2.2)
Iterating (2.2) yields the distribution for general n (Feller [6], Volume 2, Section I.9):
P(Y <= x) = U_n(x) = (1/n!) \sum_{\nu=0}^{n} (-1)^\nu \binom{n}{\nu} (x - \nu)_+^n,
u_{n+1}(x) = U_{n+1}'(x) = (1/n!) \sum_{\nu=0}^{n+1} (-1)^\nu \binom{n+1}{\nu} (x - \nu)_+^n,
where x_+ = max(x, 0). Feller obtains these by Laplace transform methods. Comparing the two displays and using (2.2),
u_{n+1}(x) = U_n(x) - U_n(x - 1) = P(x - 1 < X_1 + ... + X_n <= x), (2.3)
so at the integer points x = 1, ..., n,
u_{n+1}(1) + u_{n+1}(2) + ... + u_{n+1}(n) = P(0 < X_1 + ... + X_n <= n) = 1.
By (2.3), for k = 1, ..., n the value u_{n+1}(k) equals the probability that X_1 + ... + X_n falls in (k-1, k], that is, the volume of the slab of the unit cube [0,1]^n between the hyperplanes x_1 + ... + x_n = k - 1 and x_1 + ... + x_n = k.

Multiplying the slab volumes by n!, for x = k, k = 1, ..., n,
n! u_{n+1}(k) = A_n(k-1) = \sum_{j=0}^{k-1} (-1)^j \binom{n+1}{j} (k - j)^n.
The A_n(j) are the Eulerian numbers ([9], Chapter 1). The first rows of the Eulerian triangle A_1(0); A_2(0), A_2(1); A_3(0), A_3(1), A_3(2); A_4(0), ..., A_4(3) are
1
1 1
1 4 1
1 11 11 1
A permutation a_1, ..., a_n of 1, 2, ..., n is said to have a descent at position i (i = 1, ..., n-1) if a_i > a_{i+1}, and
A_n(j) = the number of permutations of 1, ..., n with exactly j descents. (2.4)
For n = 3 the 3! = 6 permutations split as 1 + 4 + 1 according to their number of descents, in agreement with (2.3) and (2.4); see Stanley [23]. To connect the two interpretations probabilistically, let X_1, ..., X_n be i.i.d. continuous random variables and let R(X_i) denote the rank of X_i within {X_1, ..., X_n}. Then (R(X_1), ..., R(X_n)) is uniformly distributed over the n! permutations of 1, ..., n, each with probability 1/n!, and a descent X_i > X_{i+1} of the sample is a descent of the rank sequence. The link to the slabs of [0,1]^n is the piecewise volume-preserving map ψ of [0,1]^n to itself, ψ(x_1, ..., x_n) = (y_1, ..., y_n), with
y_i = 1 + x_{i+1} - x_i if x_i > x_{i+1}, and y_i = x_{i+1} - x_i if x_i < x_{i+1},
under the convention x_{n+1} = 0.
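The alternating-sum formula for A_n(k-1) is easy to check numerically. The following short Python sketch (the function name `eulerian` is illustrative, not from the text) verifies that each row of the Eulerian triangle sums to n!, as it must, since every permutation has some number of descents.

```python
from math import comb, factorial

def eulerian(n, j):
    """Eulerian number A_n(j): number of permutations of 1..n with exactly
    j descents, via A_n(k-1) = sum_{i=0}^{k-1} (-1)^i C(n+1, i) (k-i)^n
    with k = j + 1."""
    k = j + 1
    return sum((-1) ** i * comb(n + 1, i) * (k - i) ** n for i in range(k))

# rows of the Eulerian triangle: 1; 1 1; 1 4 1; 1 11 11 1; ...
for n in range(1, 6):
    row = [eulerian(n, j) for j in range(n)]
    assert sum(row) == factorial(n)  # the n slab volumes of the cube sum to 1
```

Each row summing to n! is exactly the statement that the slabs of the unit cube partition its volume.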

Each coordinate y_i lies in [0, 1], and the two cases of the definition correspond to ascents and descents. Summing the coordinates telescopes:
y_1 + ... + y_n = (number of descents of the sequence R(x_1), ..., R(x_n)) + 1 - x_1,
where the extra 1 comes from the last position (x_n > x_{n+1} = 0 always holds). Since ψ has Jacobian of absolute value 1 on each piece and is almost everywhere a bijection of the cube, Y_1, ..., Y_n = ψ(X_1, ..., X_n) are again jointly uniform on [0,1]^n, and for j = 0, ..., n-1
P(j < Y_1 + ... + Y_n <= j + 1) = P(the rank sequence R(X_1), ..., R(X_n) has exactly j descents) = A_n(j)/n!.
For related geometric probability problems involving the Eulerian numbers, see Schmidt and Simion [20].

2.2 [The normalizing constant of the normal distribution]

Write
c = \int_{-\infty}^{\infty} e^{-x^2/2} dx,
so the claim is c = \sqrt{2π}. Let X and Y be independent, each with density e^{-x^2/2}/c, and change to polar coordinates X = r cos θ, Y = r sin θ. The Jacobian is
det( ∂x/∂r, ∂x/∂θ ; ∂y/∂r, ∂y/∂θ ) = r,
so (r, θ) has joint density
(1/c^2) r e^{-r^2/2}, r > 0, 0 <= θ <= 2π.

Since \int_0^∞ r e^{-r^2/2} dr = 1 and θ is uniform on [0, 2π), integrating the joint density gives 2π/c^2 = 1, hence c^2 = 2π and c = \sqrt{2π} (see [24] and [12], Section 4.4 for variants of this argument). The constant can also be determined from Stirling's formula n! ~ \sqrt{2π} n^{n+1/2} e^{-n} together with the Wallis product
π/2 = \prod_{n=1}^{∞} (2n)^2 / ((2n-1)(2n+1)),
which again yields c = \sqrt{2π}.

2.3 [Stirling's formula by the Laplace method] (Barndorff-Nielsen and Cox [2], Section 3.3; see also [7])

Starting from
n! = \int_0^∞ x^n e^{-x} dx,
substitute x = n + \sqrt{n} z, dx = \sqrt{n} dz:
n! = \sqrt{n} \int (n + \sqrt{n} z)^n e^{-n - \sqrt{n} z} dz = n^{n+1/2} e^{-n} \int_{-\sqrt{n}}^{∞} (1 + z/\sqrt{n})^n e^{-\sqrt{n} z} dz.
For fixed z, as n → ∞,
log[ (1 + z/\sqrt{n})^n e^{-\sqrt{n} z} ] = n log(1 + z/\sqrt{n}) - \sqrt{n} z = n( z/\sqrt{n} - z^2/(2n) + ... ) - \sqrt{n} z = -z^2/2 + o(1),

where the o(1) term tends to 0 as n → ∞. Hence
n! / (n^{n+1/2} e^{-n}) → \int_{-∞}^{∞} e^{-z^2/2} dz = \sqrt{2π},
which is Stirling's formula. The essential point is that log(x^n e^{-x}) attains its maximum at x = n and is approximately quadratic near the maximum. This generalizes to the Laplace method: for g(x) > 0, put h(x) = log g(x) and let x* be the maximizer of h. A second-order Taylor expansion gives
h(x) ≈ h(x*) - (1/2)(x - x*)^2 (-h''(x*)),
hence
g(x) ≈ e^{h(x*)} exp{ -(1/2)(x - x*)^2 (-h''(x*)) },
and for an interval [a, b] containing x*,
\int_a^b g(x) dx ≈ e^{h(x*)} \int_{-∞}^{∞} exp{ -(1/2)(x - x*)^2 (-h''(x*)) } dx = \sqrt{ 2π / (-h''(x*)) } e^{h(x*)}.
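The accuracy of the Laplace approximation to n! is easy to inspect numerically. This sketch (the function name is illustrative) uses nothing beyond the formula just derived.

```python
import math

def stirling(n):
    """Stirling's approximation sqrt(2*pi) * n**(n + 1/2) * exp(-n), obtained by
    applying the Laplace method to n! = integral_0^inf x^n e^{-x} dx,
    where h(x) = n*log(x) - x is maximized at x* = n with -h''(x*) = 1/n."""
    return math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n)

# the ratio n!/stirling(n) tends to 1 from above; the relative error is about 1/(12n)
ratio_100 = math.factorial(100) / stirling(100)
```

For n = 100 the relative error is already below 0.1 percent.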

3 [Examples of non-normal distributions]

3.1 Skew-normal distribution. The skew-normal distribution of Azzalini [1] (see also Genton [8]) has density
f(x) = 2 φ(x) Φ(αx), (3.5)
where φ(x) = (1/\sqrt{2π}) e^{-x^2/2} and Φ(x) = \int_{-∞}^{x} φ(u) du are the standard normal density and distribution function, and α is a shape parameter. More generally, let g(x) and h(x) be densities symmetric about 0, let G(x) = \int_{-∞}^{x} g(u) du, and let w(x) be an odd function. Then
f(x) = 2 h(x) G(w(x)) (3.6)
is again a density. To verify that f integrates to 1, let X ~ g and Y ~ h be independent. Since X is symmetric, Y is symmetric and w is odd, X - w(Y) is symmetrically distributed about 0, so
1/2 = P(X - w(Y) <= 0) = \iint_{x <= w(y)} h(y) g(x) dx dy = \int h(y) ( \int_{-∞}^{w(y)} g(x) dx ) dy = \int h(y) G(w(y)) dy.
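The normalization argument can be verified numerically for (3.5). The sketch below (function names are illustrative, not from the text) evaluates f(x) = 2 φ(x) Φ(αx) and integrates it by the midpoint rule.

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini skew-normal density (3.5): f(x) = 2 * phi(x) * Phi(alpha * x)."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * Phi

def total_mass(alpha, lo=-10.0, hi=10.0, steps=20000):
    """Midpoint-rule approximation of the integral of the density."""
    h = (hi - lo) / steps
    return sum(skew_normal_pdf(lo + (i + 0.5) * h, alpha) for i in range(steps)) * h
```

Note that f(0) = φ(0) for every α, since Φ(0) = 1/2; the total mass is 1 regardless of the sign or size of α.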

[Figure 1: skew-normal densities for α = 2.0 and α = 3.0.]
Taking h = φ, G = Φ and w(x) = αx in (3.6) recovers (3.5). The skew-normal also admits a stochastic representation: if X_0 and X_1 are independent standard normal variables, then
Y = (α / \sqrt{1 + α^2}) |X_0| + (1 / \sqrt{1 + α^2}) X_1 (3.7)
has the density (3.5). For α = 0 this reduces to the standard normal, and as α → ∞ it tends to the half-normal distribution of |X_0|. Figure 1 shows the densities for α = 2 and α = 3. The representation (3.7) is convenient for random number generation and for evaluating the moments E(Y^k) of (3.5). Replacing the normal ingredients of (3.6) by other symmetric distributions, such as the t-distribution, yields further skew-symmetric families.

3.2 [Normal mixtures] Consider the normal distribution N(µ, σ^2) with the parameters µ, σ^2 themselves randomized.

An important example is the generalized hyperbolic distribution (Eberlein [4]; [15]), with density
f_GH(x; λ, α, β, δ, µ) = a(λ, α, β, δ) ( δ^2 + (x-µ)^2 )^{(λ - 1/2)/2} K_{λ-1/2}( α \sqrt{δ^2 + (x-µ)^2} ) exp( β(x-µ) ),
a(λ, α, β, δ) = (α^2 - β^2)^{λ/2} / ( \sqrt{2π} α^{λ-1/2} δ^λ K_λ( δ \sqrt{α^2 - β^2} ) ),
where K_λ is the modified Bessel function
K_λ(z) = (π/2) ( I_{-λ}(z) - I_λ(z) ) / sin(λπ), I_λ(z) = (z/2)^λ \sum_{i=0}^{∞} (z/2)^{2i} / ( i! Γ(λ + i + 1) )
(for noninteger λ, with integer orders defined by continuity). The five parameters range over -∞ < µ < ∞, -∞ < λ < ∞, α > |β|, δ > 0, with the boundary cases {δ = 0, λ > 0} and {α = |β|, λ < 0} also allowed (Eberlein [4]). The distribution arises as a normal variance-mean mixture: let Y follow the generalized inverse Gaussian distribution
f_GIG(y; λ, δ, γ) = (γ/δ)^λ ( 1 / (2 K_λ(δγ)) ) y^{λ-1} exp( -(1/2)( δ^2/y + γ^2 y ) ), y > 0,
and, conditionally on Y = y, let X ~ N(µ + βy, y). Then
f_GH(x; λ, α, β, δ, µ) = \int_{y>0} (1/\sqrt{2πy}) exp( -(x - µ - βy)^2 / (2y) ) f_GIG(y; λ, δ, \sqrt{α^2 - β^2}) dy.

3.3 [Stable distributions] Let X_1, X_2, ... be i.i.d. (independently and identically distributed) and put S_n = X_1 + ... + X_n.

When Var(X_i) is finite, the central limit theorem gives
P( (S_n - E(S_n)) / \sqrt{Var(S_n)} <= x ) → Φ(x) (n → ∞).
The Cauchy distribution, with density and characteristic function
f(x) = 1 / ( π(1 + x^2) ), φ(t) = e^{-|t|},
has no mean, and instead S_n/n follows the same Cauchy distribution as each X_i. More generally, the symmetric stable distributions have characteristic function
φ(t) = exp( -|t|^α ), 0 < α <= 2,
with characteristic exponent α. The case α = 2 is the normal distribution; for α < 2 the variance is infinite and the tails are heavy, which makes the family attractive in financial modeling (Rachev [19]); for numerical evaluation of stable densities see Matsui and Takemura [16].

3.4 [Elliptically contoured distributions] (see Fang, Kotz and Ng [5]). The p-dimensional normal distribution N(0, Σ) has density
f(x) = (2π)^{-p/2} (det Σ)^{-1/2} exp( -(1/2) x' Σ^{-1} x ).
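Except for the closed-form cases (α = 1 Cauchy, α = 2 normal), the stable density must be computed numerically, for example by inverting the characteristic function. A minimal sketch (the function name is illustrative; this is not the method of [16]):

```python
import math

def stable_pdf(x, alpha, tmax=60.0, steps=120000):
    """Symmetric stable density by Fourier inversion of phi(t) = exp(-|t|**alpha):
    f(x) = (1/pi) * integral_0^inf exp(-t**alpha) * cos(t*x) dt,
    evaluated by the midpoint rule on [0, tmax]."""
    h = tmax / steps
    return sum(math.exp(-((i + 0.5) * h) ** alpha) * math.cos((i + 0.5) * h * x)
               for i in range(steps)) * h / math.pi
```

The two closed-form members serve as checks: α = 1 gives the Cauchy density 1/(π(1 + x²)), and α = 2 corresponds to N(0, 2) with density (1/(2√π)) e^{-x²/4}.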

[Figure 2: caption lost in extraction.]
Replacing the quadratic form in the exponent by a general function h of x'Σ^{-1}x gives the elliptically contoured densities
f(x) = h( x' Σ^{-1} x ). (3.8)
When Σ = I_p, such a density depends on x only through ||x||, so the direction x/||x|| is distributed uniformly on the unit sphere S^{p-1} = {x : ||x|| = 1}, independently of ||x||. The elliptical contours can be generalized further, for example to star-shaped families (Kamiya, Takemura and Kuriki [10]; Kamiya and Takemura [11]). As a two-dimensional example, densities of the form
f(x, y) = h( |x| + |y| )
have L_1-norm contours |x| + |y| = const, squares rotated by 45 degrees.

Another construction uses gradient maps. For a two-dimensional random vector Y = (Y_1, Y_2), take a smooth convex function ψ(x_1, x_2) and define X = (X_1, X_2) implicitly by
∂ψ/∂x_1 (x_1, x_2) = Y_1, ∂ψ/∂x_2 (x_1, x_2) = Y_2. (3.9)
McCann [14] established existence and uniqueness of such monotone (gradient) measure-preserving maps. As an illustration, take
ψ(x_1, x_2) = (1/2)(x_1^2 + x_2^2) - 0.9 sin x_1 sin x_2
(Figure 3(a)). When Y is bivariate standard normal, solving (3.9) for X gives the density shown in Figure 3(b),
p(x_1, x_2) = (1/(2π)) exp{ -(1/2)( (x_1 - 0.9 c_1 s_2)^2 + (x_2 - 0.9 s_1 c_2)^2 ) } ( (1 + 0.9 s_1 s_2)^2 - (0.9 c_1 c_2)^2 ),
where c_i = cos x_i, s_i = sin x_i. See Sei [21] for this gradient modeling of multivariate data.

References

[1] Azzalini, A. (1985). A class of distributions which includes the normal ones. Scand. J. Statist., 12.
[2] Barndorff-Nielsen, O.E. and Cox, D.R. (1989). Asymptotic Techniques for Use in Statistics. Chapman and Hall, London.
[3] Blalock, H. M. (1973). Causal Models in the Social Sciences. Aldine, Chicago.

[Figure 3: (a) ψ(x_1, x_2); (b) p(x_1, x_2).]

[4] Eberlein, E. (2001). Application of generalized hyperbolic Lévy motions to finance. In Lévy Processes (O.E. Barndorff-Nielsen and T. Mikosch, editors), Birkhäuser, Boston.
[5] Fang, K.T., Kotz, S. and Ng, K.W. (1990). Symmetric Multivariate and Related Distributions. Chapman and Hall, London.
[6] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Volume 2, 2nd edition. Wiley, New York.
[7] [Japanese-language reference] (2004).
[8] Genton, M.G. (2004). Skew-Elliptical Distributions and Their Applications: A Journey beyond Normality. Chapman & Hall, Boca Raton.
[9] [Japanese-language reference] (1997).
[10] Kamiya, H., Takemura, A. and Kuriki, S. (2008). Star-shaped distributions and their generalizations. Journal of Statistical Planning and Inference (Ogawa memorial volume).

[11] Kamiya, H. and Takemura, A. (2008). Hierarchical orbital decompositions and extended decomposable distributions. Journal of Multivariate Analysis, 99.
[12] [Japanese-language reference] (2003), Part I.
[13] Lehmann, E.L. and Romano, J.P. (2005). Testing Statistical Hypotheses, 3rd ed. Springer, New York.
[14] McCann, R. J. (1995). Existence and uniqueness of monotone measure-preserving maps. Duke Math. J., 80.
[15] [Japanese-language reference] (2002). GIG and GH distributions, 50-2.
[16] Matsui, M. and Takemura, A. (2006). Some improvements in numerical evaluation of symmetric stable density and its derivatives. Communications in Statistics, Theory and Methods, 35.
[17] [Japanese-language reference] (2004).
[18] Owen, A. B. (2001). Empirical Likelihood. Chapman & Hall, Boca Raton.
[19] Rachev, S.T. (2003). Handbook of Heavy Tailed Distributions in Finance. Elsevier, Amsterdam.
[20] Schmidt, F. and Simion, R. (1997). Some geometric probability problems involving the Eulerian numbers. Electronic Journal of Combinatorics, 4, Research Paper 18.
[21] Sei, T. (2011). Gradient modeling for multivariate quantitative data. Annals of the Institute of Statistical Mathematics, 63.
[22] Simon, H. A. (1953). Causal ordering and identifiability. In Studies in Econometric Method (W. C. Hood and T. C. Koopmans, editors), Wiley, New York.
[23] Stanley, R. (1977). Eulerian partitions of a unit hypercube. In Higher Combinatorics (M. Aigner, editor), NATO Adv. Study Inst. Series, D. Reidel, Dordrecht, p. 49.
[24] [Japanese-language reference] (1983), Vol. 3.

[Chapter opening page; author contact: kitagawa@rois.ac.jp]

1 [Introduction: the Japanese text was lost in extraction. It reviews 20th-century statistical modeling based on parametric models f(x|θ), the proposal of Akaike's AIC in the 1970s and its impact on model selection, and the development of AR and ARMA time series modeling.]

2 [Maximum likelihood and AIC]

AIC = -2 (maximum log-likelihood) + 2 (number of free parameters). (2.1)
Given n observations y_1, ..., y_n and a model with density f(y|θ) (Akaike), the likelihood is
L(θ) = \prod_{i=1}^{n} f(y_i|θ), (2.2)
and the log-likelihood is
l(θ) = log L(θ) = \sum_{i=1}^{n} log f(y_i|θ). (2.3)

The maximum likelihood estimate maximizes l(θ), and the maximized value enters the AIC. For the normal model with mean µ and variance σ^2, θ = (µ, σ^2)',
f(y_i|θ) = ( 1/(2πσ^2) )^{1/2} exp{ -(y_i - µ)^2 / (2σ^2) }, (2.4)
so by (2.3)
l(θ) = -(n/2) log(2πσ^2) - (1/(2σ^2)) \sum_{i=1}^{n} (y_i - µ)^2. (2.5)
Setting the derivatives with respect to µ and σ^2 to 0,
∂l/∂µ = (1/σ^2) \sum_{i=1}^{n} (y_i - µ) = 0, ∂l/∂σ^2 = -n/(2σ^2) + (1/(2(σ^2)^2)) \sum_{i=1}^{n} (y_i - µ)^2 = 0, (2.6)
gives the maximum likelihood estimates θ̂ = (µ̂, σ̂^2)',
µ̂ = (1/n) \sum_{i=1}^{n} y_i, σ̂^2 = (1/n) \sum_{i=1}^{n} (y_i - µ̂)^2. (2.7)
Explicit solutions are likewise available for the linear regression model
y_i = a_0 + \sum_{j=1}^{k} a_j x_{ij} + ε_i, ε_i ~ N(0, σ^2), (2.8)
but generally not for nonlinear models such as
y_i = a (x_i + b)^2 + ε_i, ε_i ~ N(0, σ^2). (2.9)
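The closed-form solution (2.7) and the AIC definition (2.1) combine into a few lines. A sketch (the function name is illustrative, not from the text):

```python
import math

def normal_mle_aic(y):
    """ML estimates (2.7) for the N(mu, sigma^2) model and the resulting
    AIC = -2 * (maximum log-likelihood) + 2 * (number of parameters),
    with 2 free parameters theta = (mu, sigma^2)."""
    n = len(y)
    mu = sum(y) / n
    s2 = sum((v - mu) ** 2 for v in y) / n
    # the maximized log-likelihood (2.5) simplifies to -(n/2)(log(2*pi*s2) + 1)
    loglik = -0.5 * n * (math.log(2 * math.pi * s2) + 1.0)
    return mu, s2, -2.0 * loglik + 2 * 2
```

Computing the AIC for several candidate models on the same data and choosing the smallest value is the model selection procedure the section describes.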

Likewise no closed form exists for the Cauchy model
f(y_i|θ) = (1/π) τ / ( (y_i - µ)^2 + τ^2 ) (2.10)
(see [11]). In such cases l(θ) is maximized numerically. Starting from an initial value θ_0, quasi-Newton methods iterate
θ_k = θ_{k-1} + λ_k H_{k-1}^{-1} g(θ_{k-1}), (2.11)
where g(θ) = ∂l(θ)/∂θ is the gradient, λ_k is a step size, and H_{k-1}^{-1} approximates the inverse of the negative Hessian, updated from the differences s_k = θ_k - θ_{k-1} and z_k = g(θ_k) - g(θ_{k-1}) by
H_k^{-1} = H_{k-1}^{-1} + s_{k-1} s_{k-1}' / ( s_{k-1}' z_{k-1} ) - H_{k-1}^{-1} z_{k-1} z_{k-1}' H_{k-1}^{-1} / ( z_{k-1}' H_{k-1}^{-1} z_{k-1} ). (2.12)
This is the DFP (Davidon-Fletcher-Powell) formula; the BFGS formula is a widely used alternative ([13]). Repeating maximum likelihood estimation over candidate models and comparing their AIC values carries out model selection.

3 [Time series modeling]

The autoregressive (AR) model expresses the current value through past values plus noise:
y_n = \sum_{j=1}^{m} a_j y_{n-j} + v_n, v_n ~ N(0, σ^2). (3.1)
The coefficients a_1, ..., a_m can be estimated by the Yule-Walker equations or by least squares. The Box-Jenkins approach extends this to the autoregressive moving average (ARMA) model
y_n = \sum_{j=1}^{m} a_j y_{n-j} + v_n - \sum_{j=1}^{l} b_j v_{n-j}. (3.2)
Exact maximum likelihood estimation of ARMA models was once a serious computational problem; a key step (Akaike) was the state space representation. From the 1960s the state space model for a k-dimensional state vector x_n,
x_n = F_n x_{n-1} + G_n v_n, v_n ~ N(0, Q_n),
y_n = H_n x_n + w_n, w_n ~ N(0, R_n), (3.3)
became central. Direct computation over x_1, ..., x_N costs O(k^3 N^3), but the Kalman filter reduces this drastically. Writing Y_{n-1} = {y_1, ..., y_{n-1}} for the observations up to time n-1, let x_{n|n-1}, V_{n|n-1} denote the mean and covariance of x_n given Y_{n-1} (prediction), and x_{n|n}, V_{n|n} those given Y_n (filtering).

Table 1: The Kalman filter.

[Prediction]
x_{n|n-1} = F_n x_{n-1|n-1}
V_{n|n-1} = F_n V_{n-1|n-1} F_n' + G_n Q_n G_n' (3.4)

[Filter]
K_n = V_{n|n-1} H_n' ( H_n V_{n|n-1} H_n' + R_n )^{-1}
x_{n|n} = x_{n|n-1} + K_n ( y_n - H_n x_{n|n-1} ) (3.5)
V_{n|n} = ( I - K_n H_n ) V_{n|n-1}

Each recursion costs O(k^3), so the total cost is linear in the data length N. For an ARMA model in state space form, the one-step-ahead predictive distribution of y_n given Y_{n-1} is normal with mean H_n x_{n|n-1} and variance H_n V_{n|n-1} H_n' + R_n, so the exact likelihood of the ARMA model is obtained by running the filter, and maximum likelihood estimation and AIC-based comparison of ARMA models become routine.
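A minimal scalar implementation of the recursions (3.4)-(3.5), specialized to the local-level model x_n = x_{n-1} + v_n, y_n = x_n + w_n (so F_n = G_n = H_n = 1); the parameter names q and r for the noise variances are illustrative:

```python
def kalman_filter(ys, q, r, x0=0.0, v0=10.0):
    """One-dimensional Kalman filter for the local-level model.
    Prediction: x_{n|n-1} = x_{n-1|n-1}, V_{n|n-1} = V_{n-1|n-1} + q.
    Filter: gain K_n = V_{n|n-1} / (V_{n|n-1} + r), then the update (3.5)."""
    x, v, out = x0, v0, []
    for y in ys:
        xp, vp = x, v + q            # prediction step (3.4)
        k = vp / (vp + r)            # Kalman gain
        x = xp + k * (y - xp)        # filter step (3.5)
        v = (1.0 - k) * vp
        out.append((x, v))
    return out
```

With q = 0 and a diffuse initial variance, the filter reproduces the running sample mean and its variance r/n, a standard sanity check.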

Many models (AR, ARMA, trend and seasonal component models, and others) can be cast in the state space form (3.3) by suitable choices of F_n, G_n, H_n, Q_n, R_n and the state dimension k. For nonlinear or non-Gaussian state space models, the filtering distributions are no longer normal and the recursions must be written for the densities themselves (Kitagawa (1987)). The one-step-ahead prediction density is
p(x_n | Y_{n-1}) = \int p(x_n | x_{n-1}) p(x_{n-1} | Y_{n-1}) dx_{n-1},

and the filtering density is obtained by Bayes' theorem:
p(x_n | Y_n) = p(y_n | x_n) p(x_n | Y_{n-1}) / p(y_n | Y_{n-1}). (3.6)
Here p(x_n | x_{n-1}) is determined by the system model and p(y_n | x_n) by the observation model. Numerical implementations approximate these densities either on a grid over the state space or by a set of particles, with cost growing only linearly, O(n), in the number of observations. The particle (sequential Monte Carlo) filter realizes the recursions by random draws (Gordon et al. 1993, Kitagawa 1996, Doucet et al. 2001).
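A minimal bootstrap particle filter for the local-level model, in the spirit of Gordon et al. (1993) and Kitagawa (1996); this is a generic sketch (all names are illustrative), not the specific algorithm of the text:

```python
import math, random

def particle_filter(ys, q, r, n_particles=2000, seed=0):
    """Bootstrap particle filter for x_n = x_{n-1} + v_n, y_n = x_n + w_n,
    v ~ N(0, q), w ~ N(0, r): propagate particles through the system model
    (prediction density), weight by the observation density p(y_n | x_n),
    then resample in proportion to the weights (Bayes step (3.6))."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        parts = [x + rng.gauss(0.0, math.sqrt(q)) for x in parts]   # predict
        ws = [math.exp(-(y - x) ** 2 / (2.0 * r)) for x in parts]   # weight
        total = sum(ws)
        means.append(sum(w * x for w, x in zip(ws, parts)) / total)
        parts = rng.choices(parts, weights=ws, k=n_particles)       # resample
    return means
```

Under repeated identical observations the filtered mean settles near the observed level, matching the behavior of the exact filter.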

4 [Computational methods: EM, bootstrap, MCMC]

4.1 EM algorithm. When part of the data is missing or latent, the EM algorithm of Dempster et al. (1977) maximizes the likelihood by iterating an expectation (E) step, which computes the conditional expectation of the complete-data log-likelihood given the observations and the current parameter value, and a maximization (M) step, which maximizes that expectation over the parameters. Each EM iteration never decreases the likelihood.

4.2 Bootstrap. Efron (1979) proposed assessing the variability of an estimator by resampling. Given data y = {y_1, ..., y_n} from an unknown distribution G(x), form the empirical distribution Ĝ(x), draw bootstrap samples y* = {y_1*, ..., y_n*} from Ĝ, and use the fluctuation of the estimator across bootstrap samples in place of its unknown sampling fluctuation under G. The bootstrap also yields an information criterion: EIC (Ishiguro et al. 1997; [12]) extends AIC by estimating the bias correction term D (the difference between the log-likelihood and the expected log-likelihood) by bootstrapping rather than by its asymptotic value, twice the number of parameters.
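To make the E- and M-steps of Section 4.1 concrete, here is a minimal EM iteration for a two-component normal mixture with unit variances (a standard textbook example chosen for illustration, not an application from the text; all names are illustrative):

```python
import math

def em_mixture(data, n_iter=200, mu=(-1.0, 1.0), pi=0.5):
    """EM for the mixture pi*N(mu1, 1) + (1-pi)*N(mu2, 1).
    E-step: responsibilities r_i = P(component 1 | x_i) under current params.
    M-step: weighted means and mixing proportion.
    Each iteration cannot decrease the observed-data likelihood."""
    mu1, mu2 = mu
    for _ in range(n_iter):
        # E-step
        r = []
        for x in data:
            a = pi * math.exp(-(x - mu1) ** 2 / 2.0)
            b = (1.0 - pi) * math.exp(-(x - mu2) ** 2 / 2.0)
            r.append(a / (a + b))
        # M-step
        s = sum(r)
        pi = s / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / s
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / (len(data) - s)
    return pi, mu1, mu2
```

On well-separated data the iteration recovers the two cluster centers and the mixing weight.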

[Text lost in extraction: further discussion of the bias term D and of the EIC.]
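Efron's resampling idea of Section 4.2 fits in a few lines. This is a generic sketch for the standard error of an arbitrary statistic (names are illustrative); EIC applies the same machinery with a log-likelihood difference as the statistic:

```python
import random, statistics

def bootstrap_se(data, stat, B=2000, seed=0):
    """Efron bootstrap: draw B resamples of the data with replacement,
    recompute the statistic on each, and report the standard deviation
    of the B replicates as the estimated standard error."""
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([data[rng.randrange(n)] for _ in range(n)]) for _ in range(B)]
    return statistics.stdev(reps)

# example: standard error of the sample mean of 0..9
se_mean = bootstrap_se(list(range(10)), lambda xs: sum(xs) / len(xs))
```

For the mean, the bootstrap answer approximates the plug-in value sqrt(sigma_hat^2 / n), about 0.91 for this data set.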

[Figure 2: bootstrap estimation of the bias term D.]

4.3 MCMC. Since the 1980s, Markov chain Monte Carlo (MCMC) methods have made Bayesian computation practical: posterior distributions that cannot be obtained in closed form are explored by simulating a Markov chain whose stationary distribution is the target posterior.
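A minimal random-walk Metropolis sampler illustrates the MCMC idea of Section 4.3 (targeting the standard normal purely for illustration; names are assumptions, not from the text):

```python
import math, random

def metropolis(n, step=1.0, seed=0):
    """Random-walk Metropolis chain targeting f(x) proportional to exp(-x^2/2):
    propose x' = x + U(-step, step) and accept with probability
    min(1, f(x')/f(x)); on rejection the chain stays at x."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        if rng.random() < math.exp((x * x - prop * prop) / 2.0):
            x = prop
        out.append(x)
    return out
```

Long-run averages of the chain approximate moments of the target, here mean 0 and variance 1.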

4.4 Statistical computing environments. Statistical computation long relied on FORTRAN and C programs; with the spread of personal computers, interactive environments such as S and R became standard. R combines an interactive language with a large library of packages and can call compiled FORTRAN and C code for speed. Web-based systems such as Web Decomp ([15], 1997) make time series decomposition available through a browser.

[1] Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Proc. 2nd International Symposium on Information Theory (B. N. Petrov and F. Csaki, eds.), Akademiai Kiado, Budapest.
[2] Akaike, H. (1978). Covariance matrix computation of the state variable of a stationary Gaussian process. Ann. Inst. Statist. Math., 30-B.
[3] Dempster, A., Laird, N. and Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B39.
[4] Doucet, A., de Freitas, N. and Gordon, N. (2001). Sequential Monte Carlo Methods in Practice. Springer, New York.
[5] Efron, B. (1979). Bootstrap methods: another look at the jackknife. Annals of Statistics, 7.
[6] Gordon, N. J., Salmond, D. J., and Smith, A. F. M. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F, 140(2).
[7] Ishiguro, M., Sakamoto, Y., and Kitagawa, G. (1997). Bootstrapping log-likelihood and EIC, an extension of AIC. Annals of the Institute of Statistical Mathematics, 49(3).
[8] [Japanese-language reference] (1983).
[9] Kitagawa, G. (1987). Non-Gaussian state space modeling of nonstationary time series (with discussion). Journal of the American Statistical Association, 82.

[10] Kitagawa, G. (1996). Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1).
[11] [Japanese-language reference] (2005).
[12] [Japanese-language reference] (2004).
[13] [Japanese-language reference] (1978). OR, 6.
[14] [Japanese-language reference] (1983).
[15] [Japanese-language reference] (1997). Web Decomp.

[Chapter opening page; the introduction draws on Lindsay et al. (Ed.) (2003); author contact: fujikoshi@yahoo.co.jp]

44 1,.,,,.,., 20,,,.,,.,,,,,.,,,,,., (1998) ,,,,,.,,,,,,,,,.,,,,,., (The National Science Foundation, NSF) , Lindsay et al. (Ed.)(2003). (2007),,.,,,,.,, Lindsay et al. (Ed.)(2003),., 2 34

45 .,,.,.,., Raftery et al. (Ed.)(2002)..,, Rao (Ed.)(1993) Cuadras and Rao(Ed.)(1995).,, (1993, 2010), (2007). 2,,,,,.,.,, 1 1,., 1930,,.,.,,,,,.,,.,,.,,,. Rao (1977). 3 35

46 + =, A θ., A., n A x., H 1 : θ 1/2., H 1 H 0 : θ = 1/2., c, x 1/2 c H 1., H 0 H 1,, H 0,,.,., H 1 : θ 1/2,,.,., Rao (1997) (1993, 2010),.,,,,,,.,,,., L. J. Savage (1993, 2010) )..,..,,.,,. 4 36

47 ,,,.,,.,.,,,,.,,,.,.,,,.,,,.,,, Rao (2006).,,.,,.,,.,,,,,.,,,.,,,,,.,., 5 37

48 ,,.,,,,.,,. 3,,,.,. 21 Lindsay et al. (Ed.)(2003), (the core of statistics),., (2006),,.,.,.,,,,,,.,,,.,.,,,.,.,,,,. 6 38

49 ,. (i),,,.,,. Huber (1994) (tiny)10 2, (Small)10 4, (medium)10 6, (large)10 8, (huge)10 10,., 100,,,.,,,,,.,. (ii) (reduction) (compression), Fisher,,, ( ).,.,,.,,. (iii) 7 39

50 .,,,.,,,.,.,,.,,,,.,. (iv), n p.,dna,,,. DNA,, 100., p n., n < p,,., 40,. 1,,. p n, p n,,,.,,, S W p (n, σ 2 I p ) S (Johnstone (2001)).,, 8 40

51 . (v) (biased),., 1990,,,,.,,. (vi),,,,..,.,,.,.,,.,.,,.,.,. 1949,.,

52 4 p, n.,,.,., p, n., p., n << p,, (iv), DNA,,, 100.,,.,,, p n & p < n, p n & n < p.,,,,.,. 4.1 p n & p < n, p n,. p << n, p.,, n = 100, p 10.,,,, p 10, p = 20. c = p/n c 0 (0, 1),., p/n 40 (Bai (1999) ).,. p S n l 1 l p, 10 42

Let l_1 >= ... >= l_p be the eigenvalues of the p x p sample covariance matrix S_n and define the empirical spectral distribution
F_n(x) = (1/p) #{ l_i : l_i <= x },
where #{ } denotes the number of elements; the interest is in the limit F of F_n. Consider
S_n = (1/n) \sum_{i=1}^{n} X_i X_i',
where X_i = (X_{i1}, ..., X_{ip})' and the X_{ij} are i.i.d. with mean 0 and variance σ^2. When p is fixed, S_n converges to σ^2 I_p and F_n degenerates at σ^2. In the high-dimensional framework c = p/n → c_0 ∈ (0, 1), however, F_n converges almost surely to the Marchenko-Pastur distribution F with density
f(x) = dF(x)/dx = ( 1/(2π c_0 x σ^2) ) \sqrt{ (x - a)(b - x) } for a < x < b, and 0 otherwise,
where
a = (1 - \sqrt{c_0})^2 σ^2, b = (1 + \sqrt{c_0})^2 σ^2.
Thus the sample eigenvalues do not concentrate at σ^2 but spread over the whole interval [a, b] (see Bai (1999) for a review). Consequently, linear spectral statistics
T_n = (1/p) { φ(l_1) + ... + φ(l_p) } = \int_0^∞ φ(x) dF_n(x)
converge to \int_0^∞ φ(x) dF(x).
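The Marchenko-Pastur density can be checked numerically: it should integrate to 1 over its support and have mean σ². A sketch (names are illustrative):

```python
import math

def mp_pdf(x, c, sigma2=1.0):
    """Marchenko-Pastur density f(x) = sqrt((x-a)(b-x)) / (2*pi*c*sigma2*x)
    on (a, b), a = (1-sqrt(c))**2*sigma2, b = (1+sqrt(c))**2*sigma2,
    for c = lim p/n in (0, 1); zero outside the support."""
    a = (1.0 - math.sqrt(c)) ** 2 * sigma2
    b = (1.0 + math.sqrt(c)) ** 2 * sigma2
    if x <= a or x >= b:
        return 0.0
    return math.sqrt((x - a) * (b - x)) / (2.0 * math.pi * c * sigma2 * x)

# midpoint-rule check for c = 0.5, sigma2 = 1: total mass 1, mean 1
c = 0.5
a, b = (1.0 - math.sqrt(c)) ** 2, (1.0 + math.sqrt(c)) ** 2
h = (b - a) / 100000
mass = mean = 0.0
for i in range(100000):
    x = a + (i + 0.5) * h
    f = mp_pdf(x, c)
    mass += f * h
    mean += x * f * h
```

The mean equaling σ² reflects the fact that the average sample eigenvalue estimates tr(Σ)/p even in high dimensions, while the spread [a, b] does not shrink.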

54 . N(0, σ 2 ), ns n W p (n, σ 2 I p )., Johnstone (2001) l 1. Painlevé II. Tracy and Widom (1996)., MANOVA (Johnstone (2008)).,. S N(µ, Σ) N, S Σ l 1 > > l p > 0, λ 1 λ p > 0., H 0 : λ q+1 = = λ p = λ, m = p q / { } m p 1 p V = l j l j m j=q+1 j=q+1. p N c m,n log V m(m + 1)/2 1 χ 2 (Anderson (2003)). c m,n = {n (2m /m)/6 + λ 2 q j=1 (λ j λ) 2 }, n = N q 1 n m, n m log V, Fujikoshi et al. (2007),. C(0): c = m/n c 0 (0, 1). C(1): λ j = O(n), ρ j = λ j / tr Σ (0, 1), j = 1,..., q., Z m,n = (log V µ m,n )/σ m,n ( mn ) ( n ) + ψ m, 2 2 µ m,n = m log m mψ σm,n 2 = ( n ) ψ m 2 m 2 ψ ( mn 2 ψ() ψ m (a) = m j=1 ψ ( a 1 2 (j 1)) 12 ) 44

55 , H 0 1 q λ j /λ, j = 1,..., q λ j /λ λ j λ j = ρ j m/(1 q k=1 ρ k), j = 1,..., q, N = 100, q = 2, ρ 1 = 0.56, ρ 2 = , 000, Z m,n p = p = p = p = p = p = p = c m,n log V χ p = p = p = p = p = p = p = p = 10 χ 2,, N p (Fujikoshi and Sakurai (2009)). N = n + 1, p 13 45

56 q i r i, ρ i. ρ i., p, q, N n { f(r 2 i ) f(ρ 2 i ) } N(0, σ 2 (ρ 2 i )), in dist.., in dist.", σ 2 (ρ 2 i ) = 4ρ 2 i (1 ρ 2 i ) 2 f (ρ 2 i ) 2. : q;, p, m = n p, c = p/n c 0 (0, 1). n { f(r 2 i ) f( ρ 2 i ) } N(0, τ 2 ( ρ 2 i )) in dist.., ρ 2 i = ρ 2 i + c(1 ρ 2 i ), τ 2 ( ρ 2 i ) = 2(1 c)(1 ρ 2 i ) 2 [2ρ 2 i + c(1 2ρ 2 i )]f ( ρ 2 i ) 2., c = 0 ρ 2 i = ρ 2 i, τ( ρ 2 i ) = σ(ρ 2 i ),. 1 z = 1 2 log 1 + ri 2 c(1 r2 i )/(2 c) 1 ri 2 c(1 r2 i )/(2 c)., z, r 2 i ρ2 i ζ n(1 c)/(1 c/2)(z ζ) N(0, 1) in dist.., c = 0, Fisher z- { 1 n 2 log 1 + r i 1 1 r i 2 log 1 + ρ } i N(0, 1) in dist. 1 ρ i , q = 3, ρ 1 = 0.9, ρ 2 = 0.5, ρ 3 = 0.3, N, p 4.3. z, r 2 i < c/2,,., N, p,

57 4.3. z 95 ρ 1 = 0.9 ρ 2 = 0.5 ρ 3 = 0.3 N p ,,, MANOVA, (Fujikoshi (2000), Fujikoshi et al. (2008), Ledoit and Wolf (2002), Wakaki, Fujikoshi and Ulyanov (2003), ). 4.2 p n & n < p n << p,..,, S (1) S tr S (Dempster (1958)), (2), (Friedman (1989), Hastie, Buja and Tibshirani (1994), Ghosh (2003)), (3) S Moore-Penrose S + (Srivastava (2006)), 15 47

58 ., p/n c (1, ). lim p/n c lim lim n p,.,,. Fisher. p X = (X 1,..., X p ) q. n x 1,..., x n, n 1 1, n q q. Fisher m( min(k 1, p)) h ix H = [h 1,..., h m ]. Z n 0, 1 n q., m Θ q m Y = ZΘ., A = A(Θ, H) = tr(zθ XH) (ZΘ XH).,, Y = ZΘ., A(Θ, H) Θ, H Fisher., X = (X 1,..., X p ) T : X T = (T 1 (X),..., T s (X))., A = A (Θ, H, λ) = tr(zθ XH) (ZΘ XH) + tr H ΩH,., Ω = λi p. Ghosh (2003). 2,. Dudoit et al. (2002),., S = (S ij ) D S = diag(s 11,..., S pp ) 16 48

59 . Srivastava and Kubokawa (2007) S ( ˆΣ B = c S + tr S ) min(n, p) I p. 1, ˆΣ B Σ 1, c., Fujikoshi, Himeno and Wakaki (2004), Fujikoshi et al. (2010), Ledoit and Wolf (2002), Schoot (2005, 2005), Srivastava (2007). 4.3 p n.,, n, p. Hall et al. (2005) n,., p X = (X 1,..., X p )., X 1,..., X p N(0, 1). X n X i = (X i1,..., X ip ), i = 1,..., n, X i, X i X j (i j),, X i X j, p,. (1) X i = p + O(1), i = 1,..., n. (2) X i X j = 2p + O(1), i, j = 1,..., n, i j. (3) ang(x i, X j ) = 1 2 π + O(p 1/2 ), i, j = 1,..., n, i j.,,. (2),, (3) 90.,. (C1),

60 (C2) σ 2, 1 p p k=1 Var(x k) σ 2. (C3) ρ mixing ; r, sup E(X i X j ) ρ(r) 0 i j r. (C3), Ahn et al. (2007)., p. Π 1, Π 2 p X, Y. X, Y., (2) σ 2 τ 2., 1 p {E(X k ) E(Y k )} 2 µ 2 p k=1. X n X 1,..., X n, Y m Y 1,..., Y m., σ 2 /n τ 2 /m, µ 2 > σ 2 /n τ 2 /m, Π 1 p, 1., ( Hastie, Buja. and Tibshirani (1994) ). p X = (X 1,..., X p ) h 1 (X),..., h m (X), b 0 + b 1 h 1 (X) + + b m h m (X)., b 0, b 1,..., b m. m,.,

61 5 : ( ), 2.,,.,., Anderson (2003), Siotani et al.(1985), (1990), Fujikoshi et al.(2010). 5.1 p X 1,..., X p p X = (X 1,..., X p ). X, E(X) = µ = (µ 1,..., µ p ), Var(X) = Σ = (σ ij )., E(X i ) = µ i, Cov(X i, X j ) = σ ij., X. γ 1 (X 1 µ 1 ) + + γ p (X p µ p ),.,, , p,. Σ λ 1... λ p > 0, γ 1,, γ p. Σγ i = λ i γ i, γ iγ j = δ ij, i, j = 1,..., p., δ ij, i = j 1, i j 0., i Y i = γ 1i (X 1 µ 1 ) + + γ pi (X p µ p ) = γ i(x µ), i = 1,, p., γ i = (γ 1i,..., γ pi ). i λ i i 19 51

62 E(Y i ) = 0, Var(Y i ) = λ i, Cov(Y i, Y j ) = 0 (i j). i λ i i Y i, λ i /(λ 1 + λ p ) i. q (λ λ q )/(λ λ p ) q i, γ 1i σ 11,..., γ pi σ pp., Y i X j ρ(y i, X j ) = Cov(Y i, X j ) Var(Yi )Var(X j ) = λj γ ji σjj. ρ(y i, X j ) Y i X j. N = n + 1( p) X S S l 1,, l p (l 1 l p > 0), h 1,, h p. i Y i = h 1i (X 1 X 1 ) + + h pi (X p X p ) = h i(x X), i = 1,, p, h i = (h 1i,, h pi ).,,,., λ i l i. ( ). i j Y ij = h 1j (X i1 X 1 ) + + h pj (X ip X p ) = h j(x i X), i = 1,..., n; j = 1,..., p.,,., 2, j (Y j1, Y j2 )

63 ,., 0.8., λ q+1 = = λ p, q (0 q < p 1).,, q. 5.2 X = (X 1,, X p ) Y = (Y 1,, Y q ),. p q Var( ( X Y X, Y ) ) = Σ = ( Σ 11 Σ 12 Σ 21 Σ 22 ). ξ = α 1 X α p X p = α X, η = β 1 Y β q Y q = β y., α = (α 1,..., α p ), β = (β 1,..., β q )., (1), (2), (3) ξ i = α ix, (i = 1,, p), η j = β jy, (j = 1,, q)., α i = (α 1i,..., α pi ), β j = (β 1j,..., β qj ).,, Var(ξ) = 1, Var(η) = 1,. (1) ρ(ξ 1, η 1 ) = max α, β ρ(ξ, η). (2) k p ρ(ξ, ξ i ) = ρ(η, η i ) = 0, i = 1,, k 1 ρ(ξ, η) α, β α = α k, β = β k. (3) k > p, ρ(η k, η i ) = 0, i = 1,, k 1. ξ i = α ix, η i = β jy ρ(ξ i, η i ) = ρ i ρ 1 ρ p 0, ρ i i (ξ i, η i ) i 21 53

64 p = q = 1, p = 1 < q. ρ 2 1 ρ 2 p Σ 12 Σ 1 22 Σ 21 ρ 2 Σ 11 = 0 α i, β j Σ 12 Σ 1 22 Σ 21 α i = ρ 2 i Σ 11 α i, α iσ 11 α j = δ ij, Σ 21 Σ 1 11 Σ 12 β j = ρ 2 jσ 22 β j, β iσ 22 β j = δ ij ρ p+1 = = ρ q = 0, rank(σ 12 ),., k (ρ ρ 2 k)/(ρ ρ 2 p), k X = (X 1,, X p ) Y = (Y 1,, Y q ), N = n + 1 S, R,. S = ( S 11 S 12 S 21 S 22 ), R = ( R 11 R 12 R 21 R 22, S 12 : p q, R 12 : p q., µ, Σ X, S., S R, r 1 >... > r p > 0., S 12 S 1 22 S 21 r 2 S 11 = 0 R 12 R 1 22 R 21 r 2 R 11 = 0 ). 5.3, ( )

65 ,,.,,. (1), 2. (2),. (3) A, B, C, D 4.,. (4), 2,. (5),,,,,,. (6),., (p )., n, p., (6),,p = 3, 000 n = 100,., n << p,.,.. G i (i = 1, 2) n i G 1 ; X (1) 1, X (1) 2,..., X (1) n 1, G 2 ; X (2) 1, X (2) 2,..., X (2) n 2, X G 1 G 2., G 1 G 2.,, X (i), S (i) (i = 1, 2), S = (1/n){(n 1 1)S (1) + (n 2 1)S (2) }., n = n 1 + n 2 2., W = ( X (1) X (2) ) S {X 1 1 } 2 ( X (1) + X (2) ) 23 55

66 , W 0 X G 1, W < 0 X G 2.., G 1 G 2 p(2 1), G 2 G 1 p(1 2), p(2 1) = P(W < 0 X G 1 ), p(1 2) = P(W 0 X G 2 ). p(2 1), G 1 n 1 W,.,, (n 1 + n 2 1),. cross-validation,,., n 1 n 2.,, p(2 1) Φ ( 12 ) D + ϕ ( 1 2 D ) [ p 1 n 1 D + D { }] 4(4p 1) D 2 32(n 2)., Φ, ϕ, D D 2 = ( X (1) X (2) ) S 1 ( X (1) X (2) ). p(1 2) n 1 n 2., X G i d 2 i = (X X (i) ) (S (i) ) 1 (X X(i) ),.., S (i) S,..,,., q G i (i = 1,..., q) n i

67 S b, S w. S b = n 1 ( X (1) X)( X (1) X) + + n 1 ( X (q) X)( X (q) X), S w = (n 1 1)S (1) + + (n q 1)S (q)., X. p X = (X 1,, X p ) Z = a 1 X a p X p = a X. a = (a 1,..., a p )., Z = a X, a S b a, a S w a., (a S b a)/(a S w a) a,., S 1 w S b., S 1 w S b s ( min(p, q 1)) l 1 >... > l s > 0, a 1, a 2,..., a s., S b a i = l i S w a i, a is w a j = nδ ij., n = n n q., Z i = a ix, i = 1,, s i., a = a 1 (a S b a)/(a S w a)., Z k = a kx, z 1,..., z k 1 a S w a i = 0, i = 1,..., k 1,, (a S b a)/(a S w a). k X (i) j Z (i) j = Z (i) j1. Z (i) jk = a 1. a k X (i) j, j = 1,..., n i; i = 1,..., q., k = 2,., X 25 57

68 , Z = (Z 1,..., Z k ) = (a 1,..., a k ) X = AX, Z (i) = A X (i),., Z 1,..., Z k,., d i = Z Z (i), i = 1,..., q min{d 1,..., d q } = d i X G i. q = 2,. 5.4,.,., p W n p U j N p (µ j, Σ), j = 1,, n W = n U j U j j=1 W n, = µ 1 µ µ n µ n p W W p (n, Σ; ) = O µ j = 0, j = 1,, n W W W p (n, Σ), N = n + 1 N p (µ, Σ)., S, ns W p (n, Σ). i S i., i S i.,., (p + q) S, S 11 : p p, S 12 : p q, S 22 : q q, S 1 11 S 12 S 1 22 S 21 S 1 22 S 21 S 1 11 S

69 , S b S w., S 1 w S b. G i µ i Σ., S b S w, S b W p (q 1, Σ; Ω), S w W p (n q, Σ)., n = n n q, µ = (1/n)(n 1 µ n 1 µ q ) Ω = q i=1 n i(µ i µ)(µ i µ).,, µ 1 = = µ q,. (i) ; T LR = (n + d 1 ) log( S w / S w + S b ). (ii) ; T LH = (n + d 2 ) tr S b S 1 w. (iii) ; T BNP = (n + d 3 ) tr S b (S w + S b ) 1., d j d 1 = (p+q+2)/2, d 2 = (p+q+1), d 3 = 1 l 1.,,.,., (Anderson (2003), (2003), Fujikoshi et al. (2010) ).,.. [1] Ahn, J., Marron, J. S., Muller, K. M. and Chi, Y.-Y. (2007). The high-dimensional, low-sample-size geometric representation holds under mild conditions. Biometrika, 94, [2] Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). John Wiley & Sons, New York

[3] Bai, Z. D. (1999). Methodologies in spectral analysis of large dimensional random matrices, a review. Statistica Sinica, 9.
[4] Cuadras, C. M. and Rao, C. R. (Ed.) (1995). Multivariate Analysis 2: Future Directions. North-Holland, Amsterdam.
[5] Dempster, A. P. (1958). A high dimensional two sample significance test. Ann. Math. Statist., 29.
[6] Dudoit, S., Fridlyand, J. and Speed, T. P. (2002). Comparison of discrimination methods for the classification of tumors using gene expression data. J. Amer. Stat. Assoc., 97.
[7] Friedman, J. H. (1989). Regularized discriminant analysis. J. Amer. Statist. Assoc., 84.
[8] [Japanese translation] (1993) of: Rao, C. R. Statistics and Truth. World Scientific.
[9] [Japanese translation] (2010) of: Rao, C. R. Statistics and Truth. World Scientific.
[10] Fujikoshi, Y. (2000). Error bounds for asymptotic approximations of the linear discriminant function when the sample size and dimensionality are large. J. Multivariate Anal., 73.
[11] [Japanese-language reference] (2003), 33.
[12] Fujikoshi, Y., Himeno, T. and Wakaki, H. (2004). Asymptotic results of a high dimensional MANOVA test and power comparison when the dimension is large. J. Japan Statist. Soc., 34.
[13] Fujikoshi, Y., Yamada, T., Watanabe, D. and Sugiyama, T. (2007). Asymptotic distribution of the LR statistic for equality of the smallest eigenvalues in high-dimensional principal component analysis. J. Multivariate Anal., 98.
[14] Fujikoshi, Y., Himeno, T. and Wakaki, H. (2008). Asymptotic results in MANOVA model when the dimension is large compared to the sample size. J. Statist. Plann. Inf., 138.

[15] Fujikoshi, Y. and Sakurai, T. (2009). High-dimensional asymptotic expansions of the distributions of canonical correlations. J. Multivariate Anal., 100.
[16] Fujikoshi, Y., Ulyanov, V. V. and Shimizu, R. (2010). Multivariate Statistics: High-Dimensional and Large-Sample Approximations. Wiley, Hoboken, New Jersey.
[17] Ghosh, D. (2003). Penalized discriminant methods for the classification of tumors from gene expression data. Biometrics, 59.
[18] Hall, P., Marron, J. S. and Neeman, A. (2005). Geometric representation of high dimension, low sample size data. J. R. Statist. Soc. B, 67.
[19] Hastie, T., Buja, A. and Tibshirani, R. (1994). Penalized discriminant analysis. Ann. Statist., 23.
[20] [Japanese-language reference] (2007), 36.
[21] Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal component analysis. Ann. Statist., 29.
[22] Johnstone, I. M. (2008). Multivariate analysis and Jacobi ensembles: Largest eigenvalue, Tracy-Widom limits and rates of convergence. Ann. Statist., 36.
[23] Ledoit, O. and Wolf, M. (2002). Some hypothesis tests for the covariance matrix when the dimension is large compared to the sample size. Ann. Statist., 30.
[24] Raftery, A. E., Tanner, M. A. and Wells, M. T. (Ed.) (2002). Statistics in the 21st Century. Chapman & Hall/CRC.
[25] Rao, C. R. (1997). Statistics and Truth (2nd Ed.). World Scientific.
[26] Rao, C. R. (Ed.) (1993). Multivariate Analysis: Future Directions. North-Holland, Amsterdam.
[27] Rao, C. R. (2006). The past, present and future of statistics. IMS Bulletin, 35-2, 4-5.
[28] Schott, J. R. (2005). Testing for complete independence in high dimensions. Biometrika, 92.

72 [29] Schott, J. R. (2006). A high-dimensional test for the equality of the smallest eigenvalues of a covariance matrix. J. Multivariate Anal., 97, [30] (1990)... [31] Siotani, M., Hayakawa, T., and Fujikoshi, Y. (1985). Modern Multivariate Statistical Analysis: A Graduate Course and Handbook. American Sciences Press, Columbus, Ohio. [32] Srivastava, M. (2007). Multivariate analysis for analyzing high dimensional data. J. Japan Statist. Soc., 37, [33] Srivastava, M. S. and Kubokawa, T. (2007). Comparison of discrimination methods for high dimensional data. J. Japan Statist. Soc., 37, [34] (1998).., ,. [35],,,, C. R. (2007)... [36] Tracy, C. A. and Widom, H. (1996). On orthogonal and symplectic matrix ensembles. Comm. Math. Phys., 177, [37] Wakaki, H., Fujikoshi, Y. and Ulyanov, V. (2003). Asymptotic expansions of the distributions of MANOVA test statistics when the dimension is large. TR 02-9, Statistical Research Group, Hiroshima Univ., Japan

[Chapter opening page; author contact: tatsuya@e.u-tokyo.ac.jp]

74 1 (Linear Mixed Model, LMM) (Best Linear Unbiased Predictor, BLUP) C.R. Henderson 50 LMM (Generalized Linear Mixed Model, GLMM) LMM LMM (Empirical Best Linear Unbiased Predictor, EBLUP) LMM LMM LMM 64

75 EBLUP EBLUP LMM 2 LMM BLUP 3 EBLUP 4 LMM LMM GLMM LMM GLMM (1992), McCulloch and Searle (2001), McCulloch (2003), (2007), Searle, Casella and McCulloch (1992), Demidenko (2004), Rao (2003) [1]. Battese, Harter and Fuller (1988) k ( ) k 250h (segment) n i i j y ij, LANDSAT, 0.45h 65

Auxiliary satellite information is available as pixel (picture element) counts: for county i and segment j, x_{1ij} and x_{2ij} denote pixel counts from LANDSAT data, and y_ij is the surveyed value. To relate them, consider the regression
y_ij = x_ij' β + u_ij, i = 1, ..., k, j = 1, ..., n_i,
with x_ij = (1, x_{1ij}, x_{2ij})' and β = (β_0, β_1, β_2)', so that x_ij'β = β_0 + x_{1ij}β_1 + x_{2ij}β_2. Segments in the same county share county-specific effects, so the errors u_ij within a county are correlated; this is modeled by the decomposition
u_ij = v_i + e_ij, (2.1)
where v_i is a county (random) effect, e_ij an individual error, and v_i and e_ij are independent with
v_i ~ N(0, σ_v^2), e_ij ~ N(0, σ_e^2).
The resulting model is
y_ij = x_ij' β + v_i + e_ij, i = 1, ..., k, j = 1, ..., n_i, (2.2)
where σ_v^2 and σ_e^2 are unknown variance components and β is the unknown regression coefficient. Model (2.2) is a linear mixed model (LMM), called a variance component model or, in this form, the nested error regression model.
[2] In general x_ij and β are p x 1 vectors (in the example above p = 3, the first element of x_ij being 1).

Stack the observations as
y_i = (y_{i1}, ..., y_{i n_i})', y = (y_1', ..., y_k')', X_i = (x_{i1}, ..., x_{i n_i})', X = (X_1', ..., X_k')', e_i = (e_{i1}, ..., e_{i n_i})', e = (e_1', ..., e_k')',
let j_{n_i} be the n_i x 1 vector of ones and, with block diag( ) denoting a block diagonal matrix, put
Z = block diag( j_{n_1}, ..., j_{n_k} ), v = (v_1, ..., v_k)'.
Then with N = \sum_{i=1}^{k} n_i, model (2.2) is written
y = Xβ + Zv + e, (2.3)
v ~ N(0, σ_v^2 I_k), e ~ N(0, σ_e^2 I_N).
The covariance matrix of y_i is
Cov(y_i) = Σ_i(σ_e^2, σ_v^2) = σ_e^2 I_{n_i} + σ_v^2 J_{n_i},
where I_{n_i} is the n_i x n_i identity matrix and J_{n_i} = j_{n_i} j_{n_i}' the n_i x n_i matrix of ones, and
Cov(y) = Σ(σ_e^2, σ_v^2) = block diag( Σ_1(σ_e^2, σ_v^2), ..., Σ_k(σ_e^2, σ_v^2) ).
If the v_i were fixed effects, Cov(y_i) would be σ_e^2 I_{n_i}; treating them as random induces the intraclass correlation structure σ_e^2 I_{n_i} + σ_v^2 J_{n_i}. (Estimation may also be carried out in a Bayesian way using MCMC.)

[3] Generalizing (2.3), the general linear mixed model is
y = Xβ + Zv + e, (2.4)
v ~ N_q(0, G), e ~ N_N(0, R),
where y is N x 1, X is N x p and Z is N x q, so that
Cov(y) = Σ = R + Z G Z'. (2.5)
The covariance matrices G and R are typically parameterized by unknown parameters α = (α_1, ..., α_m)', giving Σ = Σ(α).

2.2 Best linear unbiased prediction (BLUP)

[1] BLUP. In (2.4) both the fixed effects β and the random effects v are of interest. With G and R known, the best linear unbiased predictor (BLUP) of v and the associated estimator of β are obtained, following Henderson (1950), as the solution of the mixed model equations
( X'R^{-1}X, X'R^{-1}Z ; Z'R^{-1}X, Z'R^{-1}Z + G^{-1} ) ( β̂ ; v̂ ) = ( X'R^{-1}y ; Z'R^{-1}y ), (2.6)
whose solution is
β̂ = ( X'Σ^{-1}X )^{-} X'Σ^{-1} y, v̂ = G Z' Σ^{-1} ( y - Xβ̂ ), (2.7)
where ( X'Σ^{-1}X )^{-} is a generalized inverse of X'Σ^{-1}X; β̂ is the generalized least squares (GLS) estimator of β. For a in R^p and b in R^q, the BLUP of the mixed quantity µ = a'β + b'v is
µ̂ = a'β̂ + b' G Z' Σ^{-1} ( y - Xβ̂ ). (2.8)

To derive (2.7) from (2.6), solve the second block row,
Z'R^{-1}X β̂ + ( Z'R^{-1}Z + G^{-1} ) v̂ = Z'R^{-1} y,
for v̂:
v̂ = ( Z'R^{-1}Z + G^{-1} )^{-1} Z'R^{-1} ( y - Xβ̂ ). (2.9)
The leading matrix simplifies:
( Z'R^{-1}Z + G^{-1} )^{-1} Z'R^{-1}
= G Z'R^{-1} - G { ( Z'R^{-1}Z + G^{-1} ) - G^{-1} } ( Z'R^{-1}Z + G^{-1} )^{-1} Z'R^{-1}
= G Z'R^{-1} - G Z'R^{-1}Z ( Z'R^{-1}Z + G^{-1} )^{-1} Z'R^{-1}
= G Z' { R^{-1} - R^{-1}Z ( G^{-1} + Z'R^{-1}Z )^{-1} Z'R^{-1} }
= G Z' Σ^{-1},
using the matrix inversion identity
Σ^{-1} = ( Z G Z' + R )^{-1} = R^{-1} - R^{-1} Z ( G^{-1} + Z'R^{-1}Z )^{-1} Z' R^{-1}. (2.10)
Substituting into (2.9) gives v̂ in (2.7). Substituting v̂ into the first block row,
X'R^{-1}X β̂ + X'R^{-1} Z G Z' Σ^{-1} ( y - Xβ̂ ) = X'R^{-1} y,
and rearranging gives X'R^{-1}( Σ - Z G Z' )Σ^{-1} X β̂ = X'R^{-1}( Σ - Z G Z' )Σ^{-1} y; since Σ = Z G Z' + R implies R^{-1}( Σ - Z G Z' ) = I, this is X'Σ^{-1}X β̂ = X'Σ^{-1} y, the GLS equation of (2.7).

[2] The equations (2.6) also arise from the joint density of (y, v), which is proportional to
|G|^{-1/2} |R|^{-1/2} exp{ -(1/2) [ v'G^{-1}v + ( y - Xβ - Zv )' R^{-1} ( y - Xβ - Zv ) ] }.

Writing the exponent as -(1/2) h(β, v) with
h(β, v) = v'G^{-1}v + ( y - Xβ - Zv )' R^{-1} ( y - Xβ - Zv ),
maximizing the joint density over (β, v) means minimizing h. The derivatives are
∂h/∂β = -2 X'R^{-1}( y - Xβ - Zv ), ∂h/∂v = 2 G^{-1}v - 2 Z'R^{-1}( y - Xβ - Zv ),
and setting them to 0 gives exactly the mixed model equations (2.6). A second interpretation is through the conditional distribution: since
Cov(y, v) = ( Σ, ZG ; GZ', G ), (2.11)
the conditional mean of v given y is E[v|y] = G Z' Σ^{-1} ( y - Xβ ), and indeed
v | y ~ N_q( G Z' Σ^{-1} ( y - Xβ ), G - G Z' Σ^{-1} Z G )
(using (2.10)). Marginally,
y ~ N_N( Xβ, Σ ), with density proportional to |Σ|^{-1/2} exp{ -(1/2)( y - Xβ )' Σ^{-1} ( y - Xβ ) }, (2.12)
and maximizing (2.12) over β gives the GLS estimator β̂; substituting β̂ for β in E[v|y] = G Z' Σ^{-1} ( y - Xβ ) reproduces v̂ = G Z' Σ^{-1} ( y - Xβ̂ ).

(The same quantities may also be derived via the EM algorithm, treating v as missing data.)

[3] Example (BLUP in the nested error regression model). In model (2.2), consider the BLUP of µ_i = x̄_i'β + v_i, where x̄_i = \sum_{j=1}^{n_i} x_ij / n_i. Here G(σ_v^2) = σ_v^2 I_k, Σ_i(σ_e^2, σ_v^2) = σ_e^2 I_{n_i} + σ_v^2 J_{n_i}, Σ(σ_e^2, σ_v^2) = block diag( Σ_1, ..., Σ_k ), and
Σ_i^{-1} = (1/σ_e^2) ( I_{n_i} - ( σ_v^2 / ( σ_e^2 + n_i σ_v^2 ) ) J_{n_i} ).
With θ = σ_v^2/σ_e^2, the BLUP (2.8) of µ_i reduces to
µ̂_i(θ) = x̄_i'β̂(θ) + ( θ n_i / (1 + θ n_i) ) ( ȳ_i - x̄_i'β̂(θ) ), (2.13)
where ȳ_i = n_i^{-1} \sum_{j=1}^{n_i} y_ij, and β̂(θ) is the GLS estimator
β̂(θ) = { \sum_{i=1}^{k} ( \sum_{j=1}^{n_i} x_ij x_ij' - ( n_i^2 θ / (1 + n_i θ) ) x̄_i x̄_i' ) }^{-1} \sum_{i=1}^{k} ( \sum_{j=1}^{n_i} x_ij y_ij - ( n_i θ / (1 + n_i θ) ) x̄_i y_i· ),
with y_i· = \sum_{j=1}^{n_i} y_ij.
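The shrinkage form of the BLUP (2.13) is easy to compute. The sketch below specializes to the intercept-only model y_ij = β + v_i + e_ij, a simplification chosen for illustration (names are illustrative); in that case the GLS estimator reduces to a weighted mean of the group means with weights n_i/(1 + n_i θ).

```python
def eblup_means(ybar, nsizes, theta):
    """BLUP (2.13) for the intercept-only nested error model:
    mu_hat_i = beta_hat + (n_i*theta/(1 + n_i*theta)) * (ybar_i - beta_hat),
    with the GLS estimate beta_hat = sum(w_i*ybar_i)/sum(w_i),
    w_i = n_i/(1 + n_i*theta), theta = sigma_v^2/sigma_e^2."""
    w = [n / (1.0 + n * theta) for n in nsizes]
    beta = sum(wi * m for wi, m in zip(w, ybar)) / sum(w)
    return [beta + (n * theta / (1.0 + n * theta)) * (m - beta)
            for m, n in zip(ybar, nsizes)]
```

The two limits match the discussion of shrinkage: θ = 0 collapses every area estimate to the pooled regression estimate, while θ → ∞ returns the direct group means.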

2.3 [Shrinkage interpretation of BLUP]

In small area estimation based on (2.2), the target is the area mean µ_i = x̄_i'β + v_i, for which the direct estimator ȳ_i is unreliable when n_i is small (say, between 1 and 5). The BLUP µ̂_i(θ) in (2.13) is a weighted combination of the direct estimator ȳ_i and the regression (synthetic) estimator x̄_i'β̂(θ): when n_i is small the weight θ n_i/(1 + θ n_i) on ȳ_i is small and the BLUP borrows strength from the regression part, while for large n_i the BLUP approaches ȳ_i. Two remarks explain this form.
[1] Suppose β is known and consider predicting v_i from ȳ_i. Since
Cov( ȳ_i, v_i ) = ( σ_v^2 + σ_e^2/n_i, σ_v^2 ; σ_v^2, σ_v^2 ),
the conditional expectation is
E[ v_i | ȳ_i ] = θ n_i (1 + θ n_i)^{-1} ( ȳ_i - x̄_i'β ),
which is exactly the shrinkage term in (2.13), a pull of ȳ_i toward x̄_i'β.
[2] Under (2.2), E[ȳ_i] = x̄_i'β, so all areas share the common regression structure; the GLS estimator β̂(θ) pools all the data y_1, ..., y_k, and the degree of pooling is governed by σ_v^2.

This borrowing of strength is in the spirit of Efron and Morris (1975).

2.4 Variance component estimation and EBLUP

[1] Maximum likelihood (ML) and restricted maximum likelihood (REML). In (2.4) let G and R depend on unknown parameters α = (α_1, ..., α_m)', so Σ(α) = R(α) + Z G(α) Z'. The BLUP (2.8) becomes
µ̂(α) = a'β̂(α) + b' G(α) Z' { Σ(α) }^{-1} { y - Xβ̂(α) };
replacing α by an estimate α̂ gives the empirical best linear unbiased predictor (EBLUP) µ̂(α̂). The standard estimators of α are ML and REML. From (2.12), y ~ N_N( Xβ, Σ(α) ); profiling out β by the GLS estimator β̂(α), the ML estimator of α minimizes
log |Σ(α)| + ( y - Xβ̂(α) )' Σ(α)^{-1} ( y - Xβ̂(α) ).
For REML, let r = rank(X) and let K be an N x (N - r) matrix with K'X = 0; then K'y ~ N_{N-r}( 0, K'Σ(α)K ), and the REML estimator minimizes
log |K'Σ(α)K| + y'K ( K'Σ(α)K )^{-1} K'y.
Define
P(α) = Σ(α)^{-1} - Σ(α)^{-1} X { X'Σ(α)^{-1}X }^{-} X'Σ(α)^{-1}.
Then
y'P(α)y = ( y - Xβ̂(α) )' Σ(α)^{-1} ( y - Xβ̂(α) ), P(α) = K ( K'Σ(α)K )^{-1} K',
and, using the derivative formulas
(∂/∂α_i) log |Σ| = tr( Σ^{-1} ∂Σ/∂α_i ), ∂P/∂α_i = -P ( ∂Σ/∂α_i ) P, (∂/∂α_i) log |K'ΣK| = tr( P ∂Σ/∂α_i ),

the ML and REML likelihood equations become, for i = 1, ..., m,
[ML] tr( Σ(α)^{-1} ∂Σ(α)/∂α_i ) = y' P(α) ( ∂Σ(α)/∂α_i ) P(α) y, (2.14)
[REML] tr( P(α) ∂Σ(α)/∂α_i ) = y' P(α) ( ∂Σ(α)/∂α_i ) P(α) y, (2.15)
where
y' P(α) ( ∂Σ(α)/∂α_i ) P(α) y = ( y - Xβ̂(α) )' Σ(α)^{-1} ( ∂Σ(α)/∂α_i ) Σ(α)^{-1} ( y - Xβ̂(α) ) = -( y - Xβ̂(α) )' ( ∂Σ(α)^{-1}/∂α_i ) ( y - Xβ̂(α) ).
(On ML versus REML see McCulloch and Searle (2001), Section 6.10.)
[2] Besides ML and REML based on (2.14), (2.15), simple unbiased estimators are available for model (2.2). An unbiased estimator of σ_e^2 is
σ̂_e^{2UB} = S_1 / ( N - k - p + λ ), S_1 = \sum_{i=1}^{k} \sum_{j=1}^{n_i} { ( y_ij - ȳ_i ) - ( x_ij - x̄_i )' β̂_1 }^2, (2.16)
where β̂_1 is the least squares estimator based on the within-group deviations and λ (equal to 1 or 0 according to the rank of \sum_i \sum_j ( x_ij - x̄_i )( x_ij - x̄_i )' relative to the model) adjusts the degrees of freedom. For σ_v^2, let β̂_0 = ( X'X )^{-1} X'y, S = ( y - Xβ̂_0 )'( y - Xβ̂_0 ), and N_* = N - tr{ ( X'X )^{-1} \sum_{i=1}^{k} n_i^2 x̄_i x̄_i' }. Since E[S] = ( N - p ) σ_e^2 + N_* σ_v^2, an unbiased estimator of σ_v^2 is
σ̂_v^{2UB} = N_*^{-1} { S - ( N - p ) σ̂_e^{2UB} }.

Since σ̂_v^{2UB} can take negative values, truncated versions have been studied (Kubokawa (2000)). For θ = σ_v^2/σ_e^2, a truncated estimator is
θ̂ = max[ N_*^{-1} { S/σ̂_e^{2UB} - ( N - p ) }, k^{-2/3} ], (2.17)
and substituting θ̂ into (2.13) gives the EBLUP µ̂_i(θ̂). The remainder of this chapter treats the properties of the EBLUP: its mean squared error, the estimation of that error, and confidence intervals.

3 [The Fay-Herriot model]

3.1 The model. Fay and Herriot (1979) proposed the area level model
y_i = x_i'β + v_i + e_i, i = 1, ..., k, (3.1)
with e_i ~ N(0, σ_e^2/n_i). This arises from (2.2) by averaging within areas; in Fay and Herriot's original formulation the sampling variances are treated as known, whereas here they are expressed through σ_e^2. Here y_i and x_i play the roles of the area means ȳ_i = \sum_j y_ij/n_i and x̄_i = \sum_j x_ij/n_i.

Stacking y = (y_1, ..., y_k)', X = (x_1, ..., x_k)', e = (e_1, ..., e_k)',
y = Xβ + v + e, (3.2)
v ~ N(0, σ_v^2 I_k), e ~ N(0, D), D = diag( σ_e^2/n_1, ..., σ_e^2/n_k ).
In (3.1), estimate θ = σ_v^2/σ_e^2 by θ̂ = θ̂(y_1, ..., y_k); the EBLUP of µ_i = x_i'β + v_i is then, as in (2.13),
µ̂_i(θ̂) = y_i - γ̂_i ( y_i - x_i'β̂(θ̂) ), γ̂_i = γ_i(θ̂) = (1 + n_i θ̂)^{-1}, (3.3)
with the GLS estimator
β̂(θ) = ( \sum_{i=1}^{k} n_i x_i x_i' / (1 + n_i θ) )^{-1} \sum_{i=1}^{k} n_i x_i y_i / (1 + n_i θ). (3.4)
The ML and REML equations (2.14), (2.15) for θ reduce to
[ML] σ_e^2 \sum_{i=1}^{k} n_i/(1 + n_i θ) = \sum_{i=1}^{k} n_i^2 { y_i - x_i'β̂(θ) }^2 / (1 + n_i θ)^2,
[REML] σ_e^2 \sum_{i=1}^{k} n_i/(1 + n_i θ) - σ_e^2 tr[ ( \sum_{i=1}^{k} n_i x_i x_i'/(1 + n_i θ) )^{-1} \sum_{i=1}^{k} n_i^2 x_i x_i'/(1 + n_i θ)^2 ] = \sum_{i=1}^{k} n_i^2 { y_i - x_i'β̂(θ) }^2 / (1 + n_i θ)^2,
whose nonnegative solutions give θ̂_ML and θ̂_REML. Fay and Herriot (1979) instead proposed solving
[FH] σ_e^2 ( k - p ) = \sum_{i=1}^{k} n_i { y_i - x_i'β̂(θ) }^2 / (1 + n_i θ).

The truncated solution is denoted θ̂^{FH}. Also, with β̂_2 = (Σ_{j=1}^k n_j x̄_j x̄_j')^{−1} Σ_{j=1}^k n_j x̄_j ȳ_j and S_2 = Σ_{i=1}^k n_i(ȳ_i − x̄_i'β̂_2)², the analogue of the truncated estimator (2.16), (2.17) is

  [TR]  θ̂^{TR} = max[ (1/n*){S_2/σ_e² − (k − p)}, 1/k^{2/3} ],

where n* = N − tr[(Σ_{i=1}^k n_i x̄_i x̄_i')^{−1} Σ_{i=1}^k n_i² x̄_i x̄_i']. In the balanced case n_1 = … = n_k = n, the ML equation gives θ̂ = n^{−1} max{n Σ_{i=1}^k {ȳ_i − x̄_i'β̂(θ̂)}²/(kσ_e²) − 1, 0} and REML gives θ̂ = n^{−1} max{n Σ_{i=1}^k {ȳ_i − x̄_i'β̂(θ̂)}²/((k − p)σ_e²) − 1, 0}; the REML and FH solutions then coincide, and θ̂^{TR} differs from them only in the truncation point.

3.2 Mean squared error of the EBLUP

To measure the uncertainty of the EBLUP of μ_i = x̄_i'β + v_i, consider the (scaled) mean squared error

  M_i(θ, μ̂_i(θ̂)) = E[{μ̂_i(θ̂) − μ_i}²]/σ_e².  (3.5)

Since ȳ_i ~ N(x̄_i'β, σ_e²/n_i + σ_v²), the EBLUP is closely related to Stein-type shrinkage estimation, and its MSE cannot be evaluated in closed form. Asymptotic evaluations as the number of areas k grows, with the n_i bounded, are developed in Datta, Kubokawa, Rao and Molina (2011) and the literature cited there; since the true MSE depends on the unknown θ, an estimator of the MSE itself is also required, and the answer depends on which estimator θ̂ is used.

Write Bias_θ(θ̂) = E[θ̂ − θ] and Var_θ(θ̂) = E[(θ̂ − E[θ̂])²]. All of θ̂^{ML}, θ̂^{REML}, θ̂^{FH} and θ̂^{TR} satisfy Bias_θ(θ̂) = O(k^{−1}). Define

  g_{1i}(θ) = n_i^{−1} − n_i^{−1}γ_i(θ),
  g_{2i}(θ) = {γ_i(θ)}² x̄_i'{Σ_{j=1}^k γ_j n_j x̄_j x̄_j'}^{−1} x̄_i,
  g_{3i}(θ) = n_i{γ_i(θ)}³ Var_θ(θ̂).

Then the MSE of the EBLUP is approximated for large k by

  M_i(θ, μ̂_i(θ̂)) = g_{1i}(θ) + g_{2i}(θ) + g_{3i}(θ) + O(k^{−3/2}),

where g_{1i} is the leading term and g_{2i}, g_{3i} account for the estimation of β and θ, respectively. A second-order unbiased estimator of the MSE is

  M̂_i^U(θ̂) = g_{1i}(θ̂) + g_{2i}(θ̂) + 2g_{3i}(θ̂) − Bias_{θ̂}(θ̂){γ_i(θ̂)}²,  (3.6)

which satisfies E[M̂_i^U(θ̂)] = M_i(θ, μ̂_i(θ̂)) + O(k^{−3/2}). For the four estimators of θ,

  Var_θ(θ̂^{ML}) = Var_θ(θ̂^{REML}) = 2/Σ_{i=1}^k (n_iγ_i)² + O(k^{−3/2}),
  Var_θ(θ̂^{FH}) = 2k/(Σ_{i=1}^k n_iγ_i)² + O(k^{−3/2}),
  Var_θ(θ̂^{TR}) = 2Σ_{i=1}^k γ_i^{−2}/N*² + O(k^{−3/2}),

and

  Bias_θ(θ̂^{REML}) = Bias_θ(θ̂^{TR}) = O(k^{−3/2}),
  Bias_θ(θ̂^{ML}) = −tr[(Σ_i n_iγ_i x̄_i x̄_i')^{−1} Σ_i (n_iγ_i)² x̄_i x̄_i'] / Σ_i (n_iγ_i)² + O(k^{−3/2}),
  Bias_θ(θ̂^{FH}) = 2{k Σ_i (n_iγ_i)² − (Σ_i n_iγ_i)²}/(Σ_i n_iγ_i)³ + O(k^{−3/2}).

Since the choice of θ̂ enters the MSE M_i(θ, μ̂_i(θ̂)) only through Var_θ(θ̂) in g_{3i}(θ), an estimator with smaller variance yields smaller MSE; by the Cauchy-Schwarz inequality Var_θ(θ̂^{ML}) = Var_θ(θ̂^{REML}) ≤ Var_θ(θ̂^{FH}) to this order, so the FH estimator never beats ML or REML in MSE. In the MSE estimator (3.6), the bias correction term drops out when Bias_θ(θ̂) = O(k^{−3/2}), as for REML and TR.
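A minimal sketch of the second-order MSE estimate (3.6) for the REML-based EBLUP follows; the bias term is dropped because Bias_θ(θ̂^{REML}) is of smaller order, and the numerical inputs are purely illustrative.

```python
import numpy as np

def mse_estimate_reml(theta, xbar, n):
    """Second-order unbiased MSE estimate (3.6) for the REML-based EBLUP,
    in units of sigma_e^2; the bias correction vanishes for REML."""
    gamma = 1.0 / (1.0 + n * theta)
    g1 = (1.0 - gamma) / n                        # g_{1i} = n_i^{-1} - n_i^{-1} gamma_i
    A = (xbar.T * (gamma * n)) @ xbar             # sum_j gamma_j n_j x_j x_j'
    g2 = gamma**2 * np.einsum('ij,jk,ik->i', xbar, np.linalg.inv(A), xbar)
    var_theta = 2.0 / np.sum((n * gamma) ** 2)    # leading term of Var(theta_hat^REML)
    g3 = n * gamma**3 * var_theta
    return g1 + g2 + 2.0 * g3

n = np.array([2.0, 5.0, 20.0])                    # illustrative area sizes
xbar = np.column_stack([np.ones(3), np.array([-1.0, 0.0, 1.0])])
m = mse_estimate_reml(0.5, xbar, n)
```

The leading term g_{1i} = θ/(1 + n_iθ) makes the estimated MSE decrease as n_i grows, which the example exhibits.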

For MSE approximation and estimation in more general models, see Datta et al. (2011), Rao (2003), Datta, Rao and Smith (2005) and Kubokawa (2011b); for bootstrap and related approaches, see Butar and Lahiri (2003), Hall and Maiti (2006a,b) and Kubokawa and Nagashima (2011).

3.3 Confidence intervals based on the EBLUP

Consider interval estimation of the area mean μ_i = x̄_i'β + v_i, with σ_e² treated as known. Since ȳ_i ~ N(μ_i, σ_e²/n_i), the direct confidence interval is

  I_i*: ȳ_i ± z_{α/2}√(σ_e²/n_i),  (3.7)

where z_{α/2} is the upper α/2 point of N(0, 1). Its coverage is exactly 1 − α, but for small n_i it is too wide to be informative. Under model (3.1), with v ~ N(0, σ_v²I_k), the mean μ_i has the prior distribution μ_i ~ N(x̄_i'β, σ_v²), and with γ_i = (1 + n_iθ)^{−1} the posterior distribution of μ_i given ȳ_i is

  μ_i | ȳ_i ~ N(μ̂_i^B(β, θ), (σ_e²/n_i)(1 − γ_i)),  μ̂_i^B(β, θ) = x̄_i'β + (1 − γ_i)(ȳ_i − x̄_i'β),

so the Bayesian credible interval for μ_i with content 1 − α is

  I_i^B(β, θ): μ̂_i^B(β, θ) ± z_{α/2}√{(σ_e²/n_i)(1 − γ_i)}.

Since β and θ are unknown, substitute the estimator θ̂ = θ̂(ȳ_1, …, ȳ_k) and the GLS estimator β̂(θ̂) of (3.4); this yields the empirical Bayes estimator

  μ̂_i^{EB}(θ̂) = x̄_i'β̂(θ̂) + (1 − γ̂_i)(ȳ_i − x̄_i'β̂(θ̂)),  γ̂_i = (1 + n_iθ̂)^{−1}.

The corresponding empirical Bayes interval, obtained from I_i^B(β, θ) by plugging in the estimates, is

  I_i^{EB}(θ̂): μ̂_i^{EB}(θ̂) ± z_{α/2}√{(σ_e²/n_i)(1 − γ̂_i)}.

Because μ̂_i^{EB}(θ̂) is more stable than ȳ_i, this interval is much shorter than I_i*, but ignoring the variability of (β̂, θ̂) makes its coverage fall below the nominal 1 − α. Figure 1 reports a simulation with k = 50, p = 3 and 1 − α = 0.95, with β, the n_i and the x̄_i fixed and coverage plotted against θ: the coverage of I_i^{EB}(θ̂) stays below 0.95.

[Figure 1: coverage probabilities of I_i*, I_i^{EB} and I_i^{AEB} as functions of θ.]

Corrections that restore the coverage to the order o(k^{−1}) have been obtained by asymptotic expansion for large k: Basu, Ghosh and Mukerjee (2003) treated the balanced case n_1 = … = n_k with σ_e² known, and extensions to the unbalanced case with estimated variances are given in Kubokawa (2010). Following Basu et al. (2003), replace z_{α/2} by z_{α/2}{1 + (2k)^{−1}h(θ̂)}, giving the adjusted empirical Bayes interval

  I_i^{AEB}: μ̂_i^{EB}(θ̂) ± z_{α/2}[1 + (2k)^{−1}h(θ̂)]√{(σ_e²/n_i)(1 − γ̂_i)},  (3.8)

where the correction term is

  h(θ̂) = [k n_iγ̂_i²/(1 − γ̂_i)] Bias_{θ̂}(θ̂) + (1 + z²_{α/2})[k n_i²γ̂_i⁴/{4(1 − γ̂_i)²}] Var_{θ̂}(θ̂)
      + [k n_iγ̂_i²/(1 − γ̂_i)]{ x̄_i'(Σ_{j=1}^k n_j x̄_j x̄_j'/(1 + n_jθ̂))^{−1} x̄_i + 2n_iγ̂_i Var_{θ̂}(θ̂) }.  (3.9)

If Bias_θ(θ̂) = O(k^{−1}) and ∂θ̂/∂ȳ_i = O_p(k^{−1}), then

  P[μ_i ∈ I_i^{AEB}] = 1 − α + o(k^{−1})  (k → ∞).

Under model (3.1) the estimator θ̂^{TR} satisfies these conditions, with Bias_θ(θ̂^{TR}) = o(k^{−1}) and Var_θ(θ̂^{TR}) = 2Σ_{i=1}^k (1 + n_iθ)²/N*² + o(k^{−1}). In the balanced case n_1 = … = n_k = n, (3.9) reduces to

  h(θ̂) = (1 + z²_{α/2})/(2n²θ̂²) + (2/(nθ̂)) k x̄_i'(Σ_{j=1}^k x̄_j x̄_j')^{−1} x̄_i.

As Figure 1 shows, I_i^{AEB} attains coverage of at least 95% except for θ very close to 0, and Figure 2 shows that it remains much shorter than the direct interval I_i*. Extensions of such corrected intervals to general linear mixed models are given in Kubokawa (2011b); parametric bootstrap constructions are studied by Chatterjee, Lahiri and Li (2008), Hall and Maiti (2006b) and Kubokawa and Nagashima (2011).

3.4 An empirical example: average land prices
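The adjusted interval (3.8) is mechanical to compute once h(θ̂) is in hand; the sketch below therefore takes h(θ̂) as an input supplied by the caller, since (3.9) depends on the chosen estimator of θ. The numerical inputs are illustrative.

```python
import numpy as np

def aeb_interval(mu_eb, gamma_hat, sigma_e2, n_i, h_val, k, z=1.959964):
    """Adjusted EB interval (3.8): the naive EB half-width is inflated by the
    factor 1 + h(theta_hat)/(2k), restoring 1 - alpha coverage to o(1/k).
    h_val is h(theta_hat) from (3.9); z is the normal alpha/2 point (alpha = 0.05)."""
    half = z * (1.0 + h_val / (2.0 * k)) * np.sqrt(sigma_e2 / n_i * (1.0 - gamma_hat))
    return mu_eb - half, mu_eb + half

# naive EB interval (h = 0) versus the corrected one for a single area
lo0, hi0 = aeb_interval(10.0, 0.25, 1.0, 4.0, 0.0, 50)
lo1, hi1 = aeb_interval(10.0, 0.25, 1.0, 4.0, 2.0, 50)
```

The correction widens the interval only slightly (by a factor 1 + h/(2k)), which is why I_i^{AEB} stays far shorter than the direct interval I_i* while recovering the nominal coverage.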

[Figure 2: lengths of the intervals I_i*, I_i^{EB} and I_i^{AEB} as functions of θ.]

As an illustration, the EBLUP and the measures of uncertainty above are applied to data on average land prices. The areas are k = 48 districts indexed by i, with n_i observed sites in district i, and for site j in district i the response y_ij is modeled by the nested error regression model (2.2):

  y_ij = β_0 + x_{1i}β_1 + x_{2ij}β_2 + x_{3ij}β_3 + v_i + e_ij,

where x_{1i} is an area level covariate of district i and x_{2ij}, x_{3ij} are unit level covariates of site (i, j). Write x_ij = (1, x_{1i}, x_{2ij}, x_{3ij})' and x̄_i = (1, x_{1i}, x̄_{2i}, x̄_{3i})'. Because the intercept and x_{1i} are constant within each area, the matrix Σ_{i=1}^kΣ_{j=1}^{n_i}(x_ij − x̄_i)(x_ij − x̄_i)' has rank 2, so λ = 2 in (2.16) when computing σ̂_e². The quantity of interest is the area mean

  μ_i = β_0 + x_{1i}β_1 + x̄_{2i}β_2 + x̄_{3i}β_3 + v_i,

and θ is estimated by the estimator θ̂^{TR} of Section 3.1. Fitting yields θ̂^{TR}, β̂(θ̂^{TR}) = (12.927, …, …, …)' and σ̂_e².

[Table 1: average land prices per 1 m² by district; columns: district No., n_i, v̂_i, ȳ_i, EBLUP_i, x̄_i'β̂(θ̂^{TR}), 1/n_i, M̂_i^U, and the EBLUP based on (4.6).]

[Figure 3: sample means ȳ_i and EBLUP_i, with the MSE measures 1/n_i and M̂_i^U, plotted against n_i for districts No. 1 to No. 48.]

Table 1 and Figure 3 compare the direct estimates ȳ_i with the EBLUPs for the 48 districts. The EBLUP_i of (3.3) shrinks ȳ_i toward the regression value x̄_i'β̂(θ̂^{TR}), and the shrinkage is stronger the smaller n_i is; for large n_i the EBLUP stays close to ȳ_i. As for the uncertainty measured by the MSE (3.5), the direct estimate ȳ_i of μ_i = x̄_i'β + v_i has variance 1/n_i (in units of σ_e², ignoring v_i), while the EBLUP has the second-order unbiased MSE estimate M̂_i^U of (3.6). Figure 3 shows that M̂_i^U is smaller than 1/n_i throughout districts No. 1 to No. 48: the gain is largest for districts with small n_i, for which 1/n_i is large while M̂_i^U remains stable, and as n_i grows both measures decrease toward 0.

Table 1 also reports the predicted random effects v̂_i, which include values such as 1.42 and 2.31; districts with large |v̂_i| are those whose price level deviates markedly from the regression surface. Figure 4 draws, for districts No. 1 to No. 48, the direct confidence intervals I_i* of (3.7) and the corrected empirical Bayes intervals I_i^{AEB} of (3.8) around ȳ_i, and Figure 5 plots the same intervals against n_i. The intervals I_i^{AEB} are substantially shorter than I_i*, particularly for districts with small n_i.

[Figure 4: upper and lower limits of I_i* and I_i^{AEB} for districts No. 1 to No. 48.]

[Figure 5: the intervals I_i* and I_i^{AEB} plotted against n_i (districts No. 1 to No. 48).]

4 Extensions of the linear mixed model

This section describes extensions of the basic models of Section 2.3: linear mixed models for repeated measures with correlated errors, models with time-varying random effects, generalized linear mixed models, and hierarchical Bayesian formulations.

4.1 Models for repeated measures

For inference in linear mixed models with correlated errors, see Laird and Ware (1982), Tsimikas and Ledolter (1997) and Das, Jiang and Rao (2004); book-length treatments of longitudinal data analysis include Diggle, Liang and Zeger

(1994), Verbeke and Molenberghs (2000), McCulloch and Searle (2001), Demidenko (2004), Fitzmaurice, Laird and Ware (2004) and Molenberghs and Verbeke (2006); for the econometric panel data literature, see Hsiao (2003).

Suppose that model (3.1) is observed repeatedly over t = 1, …, T (repeated measures, or longitudinal, data): area i yields responses y_{i1}, …, y_{iT} with covariate vectors x_{i1}, …, x_{iT}. Writing y_i = (y_{i1}, …, y_{iT})' and X_i = (x_{i1}, …, x_{iT})', consider

  y_i = X_iβ + j_T v_i + e_i,  (4.1)

where v_i and e_i are mutually independent with e_i ~ N_T(0, (σ_e²/n_i)Q) and v_i ~ N(0, σ_v²). Componentwise, with e_i = (e_{i1}, …, e_{iT})',

  y_{it} = x_{it}'β + v_i + e_{it},  i = 1, …, k, t = 1, …, T,

where e_{is} and e_{it}, s ≠ t, are correlated through the matrix Q. Two typical choices of Q are the intraclass (uniform) correlation structure and the AR(1) structure; for T = 4 and |ρ| < 1,

  Q_1 = [[1, ρ, ρ, ρ], [ρ, 1, ρ, ρ], [ρ, ρ, 1, ρ], [ρ, ρ, ρ, 1]] = (1 − ρ)I_4 + ρJ_4,

  Q_2 = (1 − ρ²)^{−1}[[1, ρ, ρ², ρ³], [ρ, 1, ρ, ρ²], [ρ², ρ, 1, ρ], [ρ³, ρ², ρ, 1]] = (1 − ρ²)^{−1}(ρ^{|i−j|}).

Stacking the k areas as y = (y_1', …, y_k')', X = (X_1', …, X_k')', v = (v_1, …, v_k)', e = (e_1', …, e_k')' and Z = block diag(j_T, …, j_T) gives

  y = Xβ + Zv + e.  (4.2)

For a p × q matrix A = (a_ij) and an r × s matrix B, the Kronecker product is A ⊗ B = (a_ij B); in this notation Z = I_k ⊗ j_T. Writing Cov(v) = G and D = diag(σ_e²/n_1, …, σ_e²/n_k), the covariance matrix of y is Σ = ZGZ' + D ⊗ Q, where

  ZGZ' = (I_k ⊗ j_T)(G ⊗ 1)(I_k ⊗ j_T') = G ⊗ J_T,

so Σ = G ⊗ J_T + D ⊗ Q. A correlated structure for the random effects such as G = σ_v²{(1 − ρ_v)I_k + ρ_vJ_k} can be handled in the same framework, but in what follows we assume, as in (2.2) and (3.1), that Cov(v) = σ_v²I_k. Then

  Σ = σ_v²I_k ⊗ J_T + D ⊗ Q = diag(Σ_1, …, Σ_k),  Σ_i = σ_v²J_T + (σ_e²/n_i)Q,  i = 1, …, k,

so Σ^{−1} = diag(Σ_1^{−1}, …, Σ_k^{−1}), and with θ = σ_v²/σ_e² each block inverse has the closed form

  Σ_i^{−1} = (n_i/σ_e²){Q^{−1} − n_iθ Q^{−1}j_Tj_T'Q^{−1}/(1 + n_iθ j_T'Q^{−1}j_T)}.
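The block-inverse formula for Σ_i^{−1} (a Sherman-Morrison identity) can be verified numerically against a direct matrix inversion. The sketch below uses the intraclass matrix Q_1 with illustrative parameter values.

```python
import numpy as np

T, ni, sigma_e2, sigma_v2, rho = 4, 5, 1.0, 0.5, 0.3   # illustrative values
theta = sigma_v2 / sigma_e2
J = np.ones((T, T))
j = np.ones(T)
Q1 = (1 - rho) * np.eye(T) + rho * J                   # intraclass structure Q_1

Sigma_i = sigma_v2 * J + (sigma_e2 / ni) * Q1          # Cov(y_i)
Qinv = np.linalg.inv(Q1)
# closed form via the Sherman-Morrison identity in the text
closed = (ni / sigma_e2) * (
    Qinv - ni * theta * np.outer(Qinv @ j, j @ Qinv) / (1 + ni * theta * j @ Qinv @ j)
)
direct = np.linalg.inv(Sigma_i)
```

For Q_1 the row sums of the inverse also satisfy j_T'Q_1^{−1} = j_T'/{1 + (T − 1)ρ}, which the test below checks as well.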

The BLUP of v = (v_1, …, v_k)' is v̂ = (v̂_1, …, v̂_k)' = GZ'Σ^{−1}(y − Xβ̂), with components

  v̂_i = σ_v² j_T'Σ_i^{−1}(y_i − X_iβ̂) = [n_iθ/(1 + n_iθ j_T'Q^{−1}j_T)] j_T'Q^{−1}(y_i − X_iβ̂),

where β̂ is the GLS estimator

  β̂ = (Σ_{i=1}^k X_i'Σ_i^{−1}X_i)^{−1} Σ_{i=1}^k X_i'Σ_i^{−1}y_i.

If the quantity of interest is the area mean over the T periods, μ_i = j_T'X_iβ/T + v_i, its predictor is μ̂_i = j_T'X_iβ̂/T + v̂_i. For the intraclass structure Q = Q_1 = (1 − ρ)I_T + ρJ_T,

  Q_1^{−1} = (1/(1 − ρ)){I_T − ρJ_T/(1 + (T − 1)ρ)},  j_T'Q_1^{−1} = {1 + (T − 1)ρ}^{−1} j_T',

so that

  v̂_i = [n_iθ/{1 + (T − 1)ρ + n_iθT}] Σ_{t=1}^T (y_{it} − x_{it}'β̂).

For the AR(1) structure Q = Q_2 = (1 − ρ²)^{−1}(ρ^{|i−j|}), the inverse is tridiagonal; for T = 4,

  Q_2^{−1} = [[1, −ρ, 0, 0], [−ρ, 1 + ρ², −ρ, 0], [0, −ρ, 1 + ρ², −ρ], [0, 0, −ρ, 1]],

and in general

  j_T'Q_2^{−1} = (1 − ρ)(1, 1 − ρ, …, 1 − ρ, 1) = (1 − ρ)²{j_T' + ρ(1 − ρ)^{−1}(1, 0, …, 0, 1)},
  j_T'Q_2^{−1}j_T = (1 − ρ)²{T + 2ρ/(1 − ρ)},

which yields

  v̂_i = [n_iθ/{(1 − ρ)^{−2} + n_iθ(T + 2ρ/(1 − ρ))}] {Σ_{t=1}^T (y_{it} − x_{it}'β̂) + (ρ/(1 − ρ))[(y_{i1} − x_{i1}'β̂) + (y_{iT} − x_{iT}'β̂)]}.

For ρ = 0 both forms reduce to [n_iθ/(1 + n_iθT)] Σ_{t=1}^T (y_{it} − x_{it}'β̂), and as n_i → ∞, v̂_i approaches the weighted residual average j_T'Q^{−1}(y_i − X_iβ̂)/(j_T'Q^{−1}j_T), which for Q_1 is the simple average Σ_{t=1}^T (y_{it} − x_{it}'β̂)/T.

4.2 Prediction of the current characteristic

In (4.1) the random effect v_i is constant over t = 1, …, T, so v̂_i is driven by the total residual Σ_{t=1}^T (y_{it} − x_{it}'β̂). When interest centers on the most recent period, it is natural to let the random effect vary over time: replacing j_Tv_i in (4.1) by a vector v_i = (v_{i1}, …, v_{iT})' ~ N_T(0, σ_v²I_T) gives

  y_i = X_iβ + v_i + e_i,  (4.3)

with Cov(y_i) = Σ_i = σ_v²I_T + (σ_e²/n_i)Q, i = 1, …, k. The target is the current (period-T) characteristic μ_{iT} = x_{iT}'β + v_{iT}. Since E[v_{iT} | y_i] = σ_v²(0, …, 0, 1)Σ_i^{−1}(y_i − X_iβ), the predictor is

  μ̂_{iT} = x_{iT}'β̂ + σ_v²(0, …, 0, 1)Σ_i^{−1}(y_i − X_iβ̂).

Writing Σ_i = (σ_e²/n_i)(n_iθI_T + Q) and partitioning off the last row and column,

  A = n_iθI_T + Q = [[A_11, a_12], [a_21, a_22]],  A^{−1} = [[A^{11}, a^{12}], [a^{21}, a^{22}]],

with a_{22.1} = a_22 − a_21A_11^{−1}a_12, a^{21} = −a_21A_11^{−1}/a_{22.1} and a^{22} = 1/a_{22.1}, the predictor becomes

  μ̂_{iT} = x_{iT}'β̂ + (n_iθ/a_{22.1}){(y_{iT} − x_{iT}'β̂) − (a_21A_11^{−1}, 0)(y_i − X_iβ̂)},  (4.4)

so that the current residual y_{iT} − x_{iT}'β̂ is corrected by a weighted combination of the previous T − 1 residuals. For Q = Q_1 = (1 − ρ)I_T + ρJ_T, direct calculation gives

  a_{22.1} = (n_iθ + 1 − ρ)(n_iθ + 1 + (T − 1)ρ)/(n_iθ + 1 + (T − 2)ρ),  a_21A_11^{−1} = {ρ/(n_iθ + 1 + (T − 2)ρ)}j_{T−1}'.

Substituting these expressions into (4.4) gives

  μ̂_{iT} = x_{iT}'β̂ + [n_iθ(n_iθ + 1 + (T − 2)ρ)/{(n_iθ + 1 − ρ)(n_iθ + 1 + (T − 1)ρ)}] {(y_{iT} − x_{iT}'β̂) − [ρ/(n_iθ + 1 + (T − 2)ρ)] Σ_{t=1}^{T−1} (y_{it} − x_{it}'β̂)}.  (4.5)

As n_i → ∞, μ̂_{iT} tends to y_{iT}, and as ρ → 0 it reduces to x_{iT}'β̂ + {n_iθ/(1 + n_iθ)}(y_{iT} − x_{iT}'β̂). For the AR(1) structure Q = Q_2 = (1 − ρ²)^{−1}(ρ^{|i−j|}), evaluating (4.4) for T = 3 gives

  μ̂_{iT} = x_{iT}'β̂ + [n_iθ{(n_iθ + 1)² − ρ²(n_iθ)²}/{(n_iθ + 1)[(n_iθ + 1)² − ρ²n_iθ(n_iθ − 1)]}] {(y_{iT} − x_{iT}'β̂) − [ρ/{(n_iθ + 1)² − ρ²(n_iθ)²}][n_iθρ(y_{i,T−2} − x_{i,T−2}'β̂) + (n_iθ + 1)(y_{i,T−1} − x_{i,T−1}'β̂)]}.  (4.6)

In contrast to (4.5), where all past residuals enter with equal weight, in (4.6) the weight n_iθρ on the residual at time T − 2 is smaller than the weight n_iθ + 1 on the residual at time T − 1, reflecting the decay of the AR(1) correlation. Again, as n_i → ∞, μ̂_{iT} tends to y_{iT}, and as ρ → 0 it reduces to x_{iT}'β̂ + {n_iθ/(1 + n_iθ)}(y_{iT} − x_{iT}'β̂).

The parameters θ and ρ can be estimated by ML or REML, but simple estimators are also available. At each period t, let β̂_t = (Σ_{j=1}^k n_jx_{jt}x_{jt}')^{−1} Σ_{j=1}^k n_jx_{jt}y_{jt} and ê_{it} = y_{it} − x_{it}'β̂_t. For θ, the analogue of (2.16) is, with

  S_2 = Σ_{t=1}^T Σ_{i=1}^k n_iê_{it}²,  n* = Σ_{t=1}^T {Σ_{i=1}^k n_i − tr[(Σ_{i=1}^k n_ix_{it}x_{it}')^{−1} Σ_{i=1}^k n_i²x_{it}x_{it}']},

the truncated estimator

  θ̂^{TR} = max[ (1/n*){S_2/σ_e² − T(k − p)}, 1/k^{2/3} ].
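The closed form (4.5) can be checked against the matrix expression σ_v²(0, …, 0, 1)Σ_i^{−1}(y_i − X_iβ̂) from which it was derived. In the sketch below the residual vector r stands in for y_i − X_iβ̂ and is an arbitrary illustrative input; only the random-effect part of the predictor is compared.

```python
import numpy as np

T, ni, rho, theta, sigma_e2 = 4, 3, 0.4, 0.6, 1.0   # illustrative values
sigma_v2 = theta * sigma_e2
Q1 = (1 - rho) * np.eye(T) + rho * np.ones((T, T))  # intraclass Q_1
Sigma_i = sigma_v2 * np.eye(T) + (sigma_e2 / ni) * Q1

rng = np.random.default_rng(2)
r = rng.normal(size=T)                              # stands in for y_i - X_i beta_hat

# matrix form: last component of E[v_i | y_i] under model (4.3)
pred_matrix = sigma_v2 * (np.linalg.inv(Sigma_i)[-1] @ r)

# closed form (4.5) for Q = Q_1 (random-effect part only)
b = ni * theta
coef = b * (b + 1 + (T - 2) * rho) / ((b + 1 - rho) * (b + 1 + (T - 1) * rho))
pred_closed = coef * (r[-1] - rho / (b + 1 + (T - 2) * rho) * r[:-1].sum())
```

The two expressions agree identically in r, which is what the partitioned-inverse derivation of (4.4), (4.5) asserts.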

For ρ, a simple moment-type estimator is ρ̂ = Σ_{i=1}^k n_iρ̂_i/N, where ρ̂_i is an area-wise estimate based on ē_i = Σ_{t=1}^T ê_{it}/T: for the intraclass structure Q_1,

  ρ̂_i = 2Σ_{t=1}^{T−1}Σ_{s=t+1}^T (ê_{is} − ē_i)(ê_{it} − ē_i)/{(T − 1)Σ_{t=1}^T (ê_{it} − ē_i)²},

and for the AR(1) structure Q_2,

  ρ̂_i = Σ_{t=2}^T (ê_{it} − ē_i)(ê_{i,t−1} − ē_i)/Σ_{t=1}^T (ê_{it} − ē_i)²,

truncated so that |ρ̂| < 1; ML or REML estimation is also possible.

These methods are applied to the land price data of Section 3.4, observed over T = 5 periods ending in 2001, with the AR(1) structure Q_2 and the predictors (4.4)-(4.6), giving the estimates θ̂^{TR} and ρ̂. The last column of Table 1 reports the resulting current-period EBLUPs, and Figure 6 plots the predicted time-varying effects v̂_{it} for selected districts (Nos. 1, 3, 4, 13, 14, 33); the trajectories of v̂_{it} for Nos. 1, 3, 4 and for Nos. 13, 14, 33 show contrasting movements over time.

[Figure 6: time plots of the predicted effects v̂_{it} for districts Nos. 1, 3, 4, 13, 14 and 33.]

4.3 Generalized linear mixed models

Mixed models extend beyond normal responses. Suppose again that there are k areas and that area i yields n_i observations y_{i1}, …, y_{in_i}. Conditionally on the random effect v_i, the y_ij are independent with the exponential family density

  f(y_ij | v_i) = exp{[y_ijθ_ij − b(θ_ij)]/τ_ij + c(y_ij, τ_ij)},  j = 1, …, n_i; i = 1, …, k,

where θ_ij is the natural parameter and τ_ij (> 0) is a scale parameter. The conditional mean E[y_ij | v_i] = μ_ij is related to the covariates x_ij through a link function g(·):

  g(μ_ij) = x_ij'β + v_i,  v_i ~ N(0, σ_v²).

This is the generalized linear mixed model (GLMM). For GLMMs, see McCullagh and Nelder (1989, Section 14.5), Fahrmeir and Tutz (2001), McCulloch (2003) and McCulloch and Searle (2001); for applications to disease mapping, see Lawson, Browne and Vidal Rodeiro (2003) and Lawson (2006). A typical use of the GLMM in small area inference is the smoothing of

standardized mortality rates (Standardized Mortality Rate, SMR) in disease mapping: the SMR of an area is the ratio (observed number of deaths)/(expected number of deaths), and for areas with small populations the crude SMR is unstable, so a GLMM with area random effects is used to produce stabilized estimates.

4.4 Hierarchical Bayesian models

The linear mixed model can be rewritten as a hierarchical Bayesian model. Model (2.2) states that

  y_ij = μ_ij + e_ij,  μ_ij = x_ij'β + v_i,

that is, y_ij | (μ_ij, σ_e²) ~ N(μ_ij, σ_e²) and μ_ij | (β, σ_v²) ~ N(x_ij'β, σ_v²). From the Bayesian viewpoint the second stage is a prior distribution for the (i, j)-th mean μ_ij, and the description is completed by placing priors on the remaining parameters: a first-stage prior π_1(μ, σ_e² | β, σ_v²) in which μ_ij | (β, σ_v²) ~ N(x_ij'β, σ_v²) and σ_e² has the improper density dσ_e²/σ_e², and a second-stage prior π_2(β, σ_v²) in which σ_v² has the uniform density dσ_v² and β has, for example, one of the priors

  (1) the uniform prior dβ;  (2) β | σ_v² ~ N(β_0, σ_v²A);

  (3) β | (σ_v², λ) ~ N(β_0, λσ_v²A) with λ ~ π_3(λ),

where β_0 and A are specified. Minimaxity and admissibility of the resulting hierarchical Bayes estimators are studied in Kubokawa and Strawderman (2007); for hierarchical Bayesian modeling and analysis of spatial data, see Banerjee, Carlin and Gelfand (2004).

5 Concluding remarks

We close by noting the connection between the EBLUP and the shrinkage estimation initiated by C. Stein, who showed that the usual estimator of a multivariate normal mean is inadmissible under mean squared error (MSE) in three or more dimensions. In the balanced case n_1 = … = n_k = n of model (3.1), with μ_i = x̄_i'β + v_i, consider estimating μ = (μ_1, …, μ_k)' by the Stein-type estimator

  μ̂^S = Xβ̂ + max{1 − (k − p − 2)σ_e²/(n‖y − Xβ̂‖²), 0}(y − Xβ̂),

where β̂ = (X'X)^{−1}X'y is the OLS estimator. For k ≥ p + 3, μ̂^S dominates the direct estimator y under MSE. Comparing μ̂^S with the EBLUP (3.3) based on θ̂^{REML} shows that the two have exactly the same form except that the constant (k − p − 2) in μ̂^S is replaced by (k − p). It is remarkable that Henderson's BLUP, which grew out of animal breeding applications around 1950, and Stein's shrinkage estimation, developed independently in mathematical statistics at about the same time, lead to essentially the same estimator; the EBLUP thereby inherits the risk gains of the Stein estimator.
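The Stein-type estimator μ̂^S can be sketched directly; the simulated design and parameter values below are illustrative assumptions.

```python
import numpy as np

def stein_shrunk(y, X, sigma_e2, n):
    """Stein-type estimator of mu in the balanced case (n_1 = ... = n_k = n):
    shrink the direct estimates y toward the OLS fit X beta_hat, with the
    shrinkage factor truncated at zero."""
    k, p = X.shape
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    res = y - X @ beta_hat
    factor = max(1.0 - (k - p - 2) * sigma_e2 / (n * (res @ res)), 0.0)
    return X @ beta_hat + factor * res

rng = np.random.default_rng(3)
k, n = 30, 4
X = np.column_stack([np.ones(k), rng.normal(size=k)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.7, size=k)
mu_s = stein_shrunk(y, X, 1.0, n)
```

Because the truncated factor lies in [0, 1], every component of μ̂^S lies between the direct estimate and the regression fit, just like the EBLUP (3.3).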

The connection extends to variance estimation. In the balanced case n_1 = … = n_k = n of model (2.2), the statistics S_1 and S_2 underlying (2.16) in Section 2.4 satisfy S_1/σ_1² ~ χ²_{m_1} and S_2/σ_2² ~ χ²_{m_2} for suitable degrees of freedom m_1, m_2, with σ_1² = σ_e² and σ_2² = σ_e² + nσ_v², so that σ_1² < σ_2²; improved estimation of the variance components under this order restriction is studied in Srivastava and Kubokawa (1999) and Kubokawa and Tsai (2006). For variable and model selection in linear mixed models, Jiang, Rao, Gu and Nguyen (2008) proposed the fence method, Kubokawa and Srivastava (2010) an empirical Bayes information criterion, and Vaida and Blanchard (2005) a conditional Akaike information criterion (AIC); further developments are given in Kubokawa (2011a) and Kubokawa and Nagashima (2011).

Finally, the author's interest in linear mixed models and small area estimation owes much to Professor John N.K. Rao of Carleton University, whose long collaboration with Statistics Canada and whose PhD students have shaped much of the field surveyed here.

References

[1] Banerjee, S., Carlin, B.P. and Gelfand, A.E. (2004). Hierarchical Modeling and Analysis for Spatial Data. Chapman and Hall, New York.
[2] Basu, R., Ghosh, J.K., and Mukerjee, R. (2003). Empirical Bayes prediction intervals in a normal regression model: higher order asymptotics. Statist. Prob. Letters, 63,
[3] Battese, G.E., Harter, R.M. and Fuller, W.A. (1988). An error-components model for prediction of county crop areas using survey and satellite data. J. Amer. Statist. Assoc., 83,
[4] Butar, F.B. and Lahiri, P. (2003). On measures of uncertainty of empirical Bayes small-area estimators. J. Statist. Plan. Inf., 112,
[5] Chatterjee, S., Lahiri, P., and Li, H. (2008). Parametric bootstrap approximation to the distribution of EBLUP and related prediction intervals in linear mixed models. Ann. Statist., 36,
[6] Das, K., Jiang, J. and Rao, J.N.K. (2004). Mean squared error of empirical predictor. Ann. Statist., 32,
[7] Datta, G.S., Kubokawa, T., Rao, J.N.K., and Molina, I. (2011). Estimation of mean squared error of model-based small area estimators. Test, an Official Journal of the Spanish Society of Statistics and Operations Research, 20,
[8] Datta, G.S., Rao, J.N.K. and Smith, D.D. (2005). On measuring the variability of small area estimators under a basic area level model. Biometrika, 92,
[9] Demidenko, E. (2004). Mixed Models: Theory and Applications. Wiley.
[10] Diggle, P., Liang, K.-Y., and Zeger, S.L. (1994). Longitudinal Data Analysis. Oxford Univ. Press.

[11] Efron, B. and Morris, C. (1975). Data analysis using Stein's estimator and its generalizations. J. Amer. Statist. Assoc., 70,
[12] Fahrmeir, L. and Tutz, G. (2001). Multivariate Statistical Modelling Based on Generalized Linear Models. 2nd ed. Springer, New York.
[13] Fay, R.E. and Herriot, R. (1979). Estimates of income for small places: An application of James-Stein procedures to census data. J. Amer. Statist. Assoc., 74,
[14] Fitzmaurice, G.M., Laird, N.M., and Ware, J.H. (2004). Applied Longitudinal Analysis. Wiley.
[15] Hall, P. and Maiti, T. (2006a). Nonparametric estimation of mean-squared prediction error in nested-error regression models. Ann. Statist., 34,
[16] Hall, P. and Maiti, T. (2006b). On parametric bootstrap methods for small area prediction. J. Royal Statist. Soc., 68,
[17] Henderson, C.R. (1950). Estimation of genetic parameters. Ann. Math. Statist., 21,
[18] Hsiao, C. (2003). Analysis of Panel Data. Cambridge University Press. (2007)
[19] Jiang, J., Rao, J.S., Gu, Z., and Nguyen, T. (2008). Fence methods for mixed model selection. Ann. Statist., 36,
[20] Kubokawa, T. (2000). Estimation of variance and covariance components in elliptically contoured distributions. J. Japan Statist. Soc., 30,
[21] Kubokawa, T. (2010). Corrected empirical Bayes confidence intervals in nested error regression models. J. Korean Statist. Soc., 39,
[22] Kubokawa, T. (2011a). Conditional and unconditional methods for selecting variables in linear mixed models. J. Multivariate Analysis, 102,

[23] Kubokawa, T. (2011b). On measuring uncertainty of small area estimators with higher order accuracy. J. Japan Statist. Soc., to appear.
[24] Kubokawa, T., and Nagashima, B. (2011). Parametric bootstrap methods for bias correction in linear mixed models. Discussion Paper Series, CIRJE-F-801.
[25] Kubokawa, T., and Srivastava, M.S. (2010). An empirical Bayes information criterion for selecting variables in linear mixed models. J. Japan Statist. Soc., 40,
[26] Kubokawa, T. and Strawderman, W.E. (2007). On minimaxity and admissibility of hierarchical Bayes estimators. J. Multivariate Analysis, 98,
[27] Kubokawa, T. and Tsai, M.-T. (2006). Estimation of covariance matrices in fixed and mixed effects linear models. J. Multivariate Analysis, 97,
[28] Laird, N.M. and Ware, J.H. (1982). Random-effects models for longitudinal data. Biometrics, 38,
[29] Lawson, A.B. (2006). Statistical Methods in Spatial Epidemiology. 2nd ed. Wiley, England.
[30] Lawson, A.B., Browne, W.J. and Vidal Rodeiro, C.L. (2003). Disease Mapping with WinBUGS and MLwiN. Wiley, England.
[31] McCulloch, C.E. (2003). Generalized Linear Mixed Models. NSF-CBMS Regional Conference Series in Probability and Statistics, Volume 7. IMS, USA.
[32] McCulloch, C.E. and Searle, S.R. (2001). Generalized, Linear and Mixed Models. Wiley, New York.
[33] Molenberghs, G. and Verbeke, G. (2006). Models for Discrete Longitudinal Data. Springer.
[34] Rao, J.N.K. (2003). Small Area Estimation. Wiley, New Jersey.

21世紀の統計科学 <Vol. III> (Japan Statistical Society, HP edition, October 2011; author contact: tatsuya@e.u-tokyo.ac.jp)


S I. dy fx x fx y fx + C 3 C dy fx 4 x, y dy v C xt y C v e kt k > xt yt gt [ v dt dt v e kt xt v e kt + C k x v + C C k xt v k 3 r r + dr e kt S dt d S I.. http://ayapin.film.s.dendai.ac.jp/~matuda /TeX/lecture.html PDF PS.................................... 3.3.................... 9.4................5.............. 3 5. Laplace................. 5....

More information

1 (1997) (1997) 1974:Q3 1994:Q3 (i) (ii) ( ) ( ) 1 (iii) ( ( 1999 ) ( ) ( ) 1 ( ) ( 1995,pp ) 1

1 (1997) (1997) 1974:Q3 1994:Q3 (i) (ii) ( ) ( ) 1 (iii) ( ( 1999 ) ( ) ( ) 1 ( ) ( 1995,pp ) 1 1 (1997) (1997) 1974:Q3 1994:Q3 (i) (ii) ( ) ( ) 1 (iii) ( ( 1999 ) ( ) ( ) 1 ( ) ( 1995,pp.218 223 ) 1 2 ) (i) (ii) / (iii) ( ) (i ii) 1 2 1 ( ) 3 ( ) 2, 3 Dunning(1979) ( ) 1 2 ( ) ( ) ( ) (,p.218) (

More information

2 4 (four-dimensional variational(4dvar))(talagrand and Courtier(1987), Courtier et al.(1994)) (Ensemble Kalman Filter( EnKF))(Evensen(1994), Evensen(

2 4 (four-dimensional variational(4dvar))(talagrand and Courtier(1987), Courtier et al.(1994)) (Ensemble Kalman Filter( EnKF))(Evensen(1994), Evensen( 1,3 2,3 2,3 ; ; ; 1. (Wunsch(1996), Daley(1991), Bennett(2002), (1997)) 1 106-8569 4-6-7 2 106-8569 4-6-7 3 (JST) (CREST) 2 4 (four-dimensional variational(4dvar))(talagrand and Courtier(1987), Courtier

More information

X X X Y R Y R Y R MCAR MAR MNAR Figure 1: MCAR, MAR, MNAR Y R X 1.2 Missing At Random (MAR) MAR MCAR MCAR Y X X Y MCAR 2 1 R X Y Table 1 3 IQ MCAR Y I

X X X Y R Y R Y R MCAR MAR MNAR Figure 1: MCAR, MAR, MNAR Y R X 1.2 Missing At Random (MAR) MAR MCAR MCAR Y X X Y MCAR 2 1 R X Y Table 1 3 IQ MCAR Y I (missing data analysis) - - 1/16/2011 (missing data, missing value) (list-wise deletion) (pair-wise deletion) (full information maximum likelihood method, FIML) (multiple imputation method) 1 missing completely

More information

dvi

dvi Recent Advances in Statistical Inference - in Honor of Professor Masafumi Akahira 2008 12 16 Shrinkage estimators for covariance matrices in multivariate complex normal distributions December 12, 2008

More information

t14.dvi

t14.dvi version 1 1 (Nested Logit IIA(Independence from Irrelevant Alternatives [2004] ( [2004] 2 2 Spence and Owen[1977] X,Y,Z X Y U 2 U(X, Y, Z X Y X Y Spence and Owen Spence and Owen p X, p Y X Y X Y p Y p

More information

( 30 ) 30 4 5 1 4 1.1............................................... 4 1.............................................. 4 1..1.................................. 4 1.......................................

More information

2 (March 13, 2010) N Λ a = i,j=1 x i ( d (a) i,j x j ), Λ h = N i,j=1 x i ( d (h) i,j x j ) B a B h B a = N i,j=1 ν i d (a) i,j, B h = x j N i,j=1 ν i

2 (March 13, 2010) N Λ a = i,j=1 x i ( d (a) i,j x j ), Λ h = N i,j=1 x i ( d (h) i,j x j ) B a B h B a = N i,j=1 ν i d (a) i,j, B h = x j N i,j=1 ν i 1. A. M. Turing [18] 60 Turing A. Gierer H. Meinhardt [1] : (GM) ) a t = D a a xx µa + ρ (c a2 h + ρ 0 (0 < x < l, t > 0) h t = D h h xx νh + c ρ a 2 (0 < x < l, t > 0) a x = h x = 0 (x = 0, l) a = a(x,

More information

ばらつき抑制のための確率最適制御

ばらつき抑制のための確率最適制御 ( ) http://wwwhayanuemnagoya-uacjp/ fujimoto/ 2011 3 9 11 ( ) 2011/03/09-11 1 / 46 Outline 1 2 3 4 5 ( ) 2011/03/09-11 2 / 46 Outline 1 2 3 4 5 ( ) 2011/03/09-11 3 / 46 (1/2) r + Controller - u Plant y

More information

untitled

untitled K-Means 1 5 2 K-Means 7 2.1 K-Means.............................. 7 2.2 K-Means.......................... 8 2.3................... 9 3 K-Means 11 3.1.................................. 11 3.2..................................

More information

(m/s)

(m/s) ( ) r-taka@maritime.kobe-u.ac.jp IBIS2009 15 20 25 30 1900 1920 1940 1960 1980 2000 (m/s) 1900 1999 -2-1 0 1 715900 716000 716100 716200 Daily returns of the S&P 500 index. 1960 Gilli & Këllezi (2006).

More information

0 1-4. 1-5. (1) + b = b +, (2) b = b, (3) + 0 =, (4) 1 =, (5) ( + b) + c = + (b + c), (6) ( b) c = (b c), (7) (b + c) = b + c, (8) ( + b)c = c + bc (9

0 1-4. 1-5. (1) + b = b +, (2) b = b, (3) + 0 =, (4) 1 =, (5) ( + b) + c = + (b + c), (6) ( b) c = (b c), (7) (b + c) = b + c, (8) ( + b)c = c + bc (9 1-1. 1, 2, 3, 4, 5, 6, 7,, 100,, 1000, n, m m m n n 0 n, m m n 1-2. 0 m n m n 0 2 = 1.41421356 π = 3.141516 1-3. 1 0 1-4. 1-5. (1) + b = b +, (2) b = b, (3) + 0 =, (4) 1 =, (5) ( + b) + c = + (b + c),

More information

211 kotaro@math.titech.ac.jp 1 R *1 n n R n *2 R n = {(x 1,..., x n ) x 1,..., x n R}. R R 2 R 3 R n R n R n D D R n *3 ) (x 1,..., x n ) f(x 1,..., x n ) f D *4 n 2 n = 1 ( ) 1 f D R n f : D R 1.1. (x,

More information

Talk 2. Local asymptotic power of self-weighted GEL method and choice of weighting function Kouchi International Seminar on Recent Developments of Qua

Talk 2. Local asymptotic power of self-weighted GEL method and choice of weighting function Kouchi International Seminar on Recent Developments of Qua TALKS Invited Talks 2018 Oct. 3 Robust statistical inference for nonstandard time series models and related topics Quantitative Finance Seminar, Tokyo Metropolitan University Sep. 4-5 Robust statistical

More information

,.,. NP,., ,.,,.,.,,, (PCA)...,,. Tipping and Bishop (1999) PCA. (PPCA)., (Ilin and Raiko, 2010). PPCA EM., , tatsukaw

,.,. NP,., ,.,,.,.,,, (PCA)...,,. Tipping and Bishop (1999) PCA. (PPCA)., (Ilin and Raiko, 2010). PPCA EM., , tatsukaw ,.,. NP,.,. 1 1.1.,.,,.,.,,,. 2. 1.1.1 (PCA)...,,. Tipping and Bishop (1999) PCA. (PPCA)., (Ilin and Raiko, 2010). PPCA EM., 152-8552 2-12-1, tatsukawa.m.aa@m.titech.ac.jp, 190-8562 10-3, mirai@ism.ac.jp

More information

kato-kuriki-2012-jjas-41-1.pdf

kato-kuriki-2012-jjas-41-1.pdf Vol. 41, No. 1 (2012), 1 14 2 / JST CREST T 2 T 2 2 K K K K 2,,,,,. 1. t i y i 2 2 y i = f (t i ; c) + ε i, f (t; c) = c h t h = c ψ(t), i = 1,...,N (1) h=0 c = (c 0, c 1, c 2 ), ψ(t) = (1, t, t 2 ) 3

More information

2004 2 µ i ν it IN(0, σ 2 ) 1 i ȳ i = β x i + µ i + ν i (2) 12 y it ȳ i = β(x it x i ) + (ν it ν i ) (3) 3 β 1 µ i µ i = ȳ i β x i (4) (least square d

2004 2 µ i ν it IN(0, σ 2 ) 1 i ȳ i = β x i + µ i + ν i (2) 12 y it ȳ i = β(x it x i ) + (ν it ν i ) (3) 3 β 1 µ i µ i = ȳ i β x i (4) (least square d 2004 1 3 3.1 1 5 1 2 3.2 1 α = 0, λ t = 0 y it = βx it + µ i + ν it (1) 1 (1995)1998Fujiki and Kitamura (1995). 2004 2 µ i ν it IN(0, σ 2 ) 1 i ȳ i = β x i + µ i + ν i (2) 12 y it ȳ i = β(x it x i ) +

More information

(pdf) (cdf) Matlab χ ( ) F t

(pdf) (cdf) Matlab χ ( ) F t (, ) (univariate) (bivariate) (multi-variate) Matlab Octave Matlab Matlab/Octave --...............3. (pdf) (cdf)...3.4....4.5....4.6....7.7. Matlab...8.7.....9.7.. χ ( )...0.7.3.....7.4. F....7.5. t-...3.8....4.8.....4.8.....5.8.3....6.8.4....8.8.5....8.8.6....8.9....9.9.....9.9.....0.9.3....0.9.4.....9.5.....0....3

More information

1 1 ( ) ( 1.1 1.1.1 60% mm 100 100 60 60% 1.1.2 A B A B A 1

1 1 ( ) ( 1.1 1.1.1 60% mm 100 100 60 60% 1.1.2 A B A B A 1 1 21 10 5 1 E-mail: qliu@res.otaru-uc.ac.jp 1 1 ( ) ( 1.1 1.1.1 60% mm 100 100 60 60% 1.1.2 A B A B A 1 B 1.1.3 boy W ID 1 2 3 DI DII DIII OL OL 1.1.4 2 1.1.5 1.1.6 1.1.7 1.1.8 1.2 1.2.1 1. 2. 3 1.2.2

More information

三石貴志.indd

三石貴志.indd 流通科学大学論集 - 経済 情報 政策編 - 第 21 巻第 1 号,23-33(2012) SIRMs SIRMs Fuzzy fuzzyapproximate approximatereasoning reasoningusing using Lukasiewicz Łukasiewicz logical Logical operations Operations Takashi Mitsuishi

More information

Kalman ( ) 1) (Kalman filter) ( ) t y 0,, y t x ˆx 3) 10) t x Y [y 0,, y ] ) x ( > ) ˆx (prediction) ) x ( ) ˆx (filtering) )

Kalman ( ) 1) (Kalman filter) ( ) t y 0,, y t x ˆx 3) 10) t x Y [y 0,, y ] ) x ( > ) ˆx (prediction) ) x ( ) ˆx (filtering) ) 1 -- 5 6 2009 3 R.E. Kalman ( ) H 6-1 6-2 6-3 H Rudolf Emil Kalman IBM IEEE Medal of Honor(1974) (1985) c 2011 1/(23) 1 -- 5 -- 6 6--1 2009 3 Kalman ( ) 1) (Kalman filter) ( ) t y 0,, y t x ˆx 3) 10) t

More information

dvi

dvi 2017 65 2 217 234 2017 Covariate Balancing Propensity Score 1 2 2017 1 15 4 30 8 28 Covariate Balancing Propensity Score CBPS, Imai and Ratkovic, 2014 1 0 1 2 Covariate Balancing Propensity Score CBPS

More information

.3. (x, x = (, u = = 4 (, x x = 4 x, x 0 x = 0 x = 4 x.4. ( z + z = 8 z, z 0 (z, z = (0, 8, (,, (8, 0 3 (0, 8, (,, (8, 0 z = z 4 z (g f(x = g(

.3. (x, x = (, u = = 4 (, x x = 4 x, x 0 x = 0 x = 4 x.4. ( z + z = 8 z, z 0 (z, z = (0, 8, (,, (8, 0 3 (0, 8, (,, (8, 0 z = z 4 z (g f(x = g( 06 5.. ( y = x x y 5 y 5 = (x y = x + ( y = x + y = x y.. ( Y = C + I = 50 + 0.5Y + 50 r r = 00 0.5Y ( L = M Y r = 00 r = 0.5Y 50 (3 00 0.5Y = 0.5Y 50 Y = 50, r = 5 .3. (x, x = (, u = = 4 (, x x = 4 x,

More information

基礎から学ぶトラヒック理論 サンプルページ この本の定価 判型などは, 以下の URL からご覧いただけます. このサンプルページの内容は, 初版 1 刷発行時のものです.

基礎から学ぶトラヒック理論 サンプルページ この本の定価 判型などは, 以下の URL からご覧いただけます.   このサンプルページの内容は, 初版 1 刷発行時のものです. 基礎から学ぶトラヒック理論 サンプルページ この本の定価 判型などは, 以下の URL からご覧いただけます. http://www.morikita.co.jp/books/mid/085221 このサンプルページの内容は, 初版 1 刷発行時のものです. i +α 3 1 2 4 5 1 2 ii 3 4 5 6 7 8 9 9.3 2014 6 iii 1 1 2 5 2.1 5 2.2 7

More information

DSGE Dynamic Stochastic General Equilibrium Model DSGE 5 2 DSGE DSGE ω 0 < ω < 1 1 DSGE Blanchard and Kahn VAR 3 MCMC 2 5 4 1 1 1.1 1. 2. 118

DSGE Dynamic Stochastic General Equilibrium Model DSGE 5 2 DSGE DSGE ω 0 < ω < 1 1 DSGE Blanchard and Kahn VAR 3 MCMC 2 5 4 1 1 1.1 1. 2. 118 7 DSGE 2013 3 7 1 118 1.1............................ 118 1.2................................... 123 1.3.............................. 125 1.4..................... 127 1.5...................... 128 1.6..............

More information

4 2 p = p(t, g) (1) r = r(t, g) (2) p r t g p r dp dt = p dg t + p g (3) dt dr dt = r dg t + r g dt 3 p t p g dt p t 3 2 4 r t = 3 4 2 Benefit view dp

4 2 p = p(t, g) (1) r = r(t, g) (2) p r t g p r dp dt = p dg t + p g (3) dt dr dt = r dg t + r g dt 3 p t p g dt p t 3 2 4 r t = 3 4 2 Benefit view dp ( ) 62 1 1 47 2 3 47 2 e-mail:miyazaki@ngu.ac.jp 1 2000 2005 1 4 2 p = p(t, g) (1) r = r(t, g) (2) p r t g p r dp dt = p dg t + p g (3) dt dr dt = r dg t + r g dt 3 p t p g dt p t 3 2 4 r t = 3 4 2 Benefit

More information

untitled

untitled 17 5 13 1 2 1.1... 2 1.2... 2 1.3... 3 2 3 2.1... 3 2.2... 5 3 6 3.1... 6 3.2... 7 3.3 t... 7 3.4 BC a... 9 3.5... 10 4 11 1 1 θ n ˆθ. ˆθ, ˆθ, ˆθ.,, ˆθ.,.,,,. 1.1 ˆθ σ 2 = E(ˆθ E ˆθ) 2 b = E(ˆθ θ). Y 1,,Y

More information

Stepwise Chow Test * Chow Test Chow Test Stepwise Chow Test Stepwise Chow Test Stepwise Chow Test Riddell Riddell first step second step sub-step Step

Stepwise Chow Test * Chow Test Chow Test Stepwise Chow Test Stepwise Chow Test Stepwise Chow Test Riddell Riddell first step second step sub-step Step Stepwise Chow Test * Chow Test Chow Test Stepwise Chow Test Stepwise Chow Test Stepwise Chow Test Riddell Riddell first step second step sub-step Stepwise Chow Test a Stepwise Chow Test Takeuchi 1991Nomura

More information

数理統計学Iノート

数理統計学Iノート I ver. 0/Apr/208 * (inferential statistics) *2 A, B *3 5.9 *4 *5 [6] [],.., 7 2004. [2].., 973. [3]. R (Wonderful R )., 9 206. [4]. ( )., 7 99. [5]. ( )., 8 992. [6],.., 989. [7]. - 30., 0 996. [4] [5]

More information

1 Stata SEM LightStone 4 SEM 4.. Alan C. Acock, Discovering Structural Equation Modeling Using Stata, Revised Edition, Stata Press 3.

1 Stata SEM LightStone 4 SEM 4.. Alan C. Acock, Discovering Structural Equation Modeling Using Stata, Revised Edition, Stata Press 3. 1 Stata SEM LightStone 4 SEM 4.. Alan C. Acock, 2013. Discovering Structural Equation Modeling Using Stata, Revised Edition, Stata Press 3. 2 4, 2. 1 2 2 Depress Conservative. 3., 3,. SES66 Alien67 Alien71,

More information

ii

ii ii iii 1 1 1.1..................................... 1 1.2................................... 3 1.3........................... 4 2 9 2.1.................................. 9 2.2...............................

More information

講義のーと : データ解析のための統計モデリング. 第5回

講義のーと :  データ解析のための統計モデリング. 第5回 Title 講義のーと : データ解析のための統計モデリング Author(s) 久保, 拓弥 Issue Date 2008 Doc URL http://hdl.handle.net/2115/49477 Type learningobject Note この講義資料は, 著者のホームページ http://hosho.ees.hokudai.ac.jp/~kub ードできます Note(URL)http://hosho.ees.hokudai.ac.jp/~kubo/ce/EesLecture20

More information

ウェーブレット分数を用いた金融時系列の長期記憶性の分析

ウェーブレット分数を用いた金融時系列の長期記憶性の分析 TOPIX E-mail: masakazu.inada@boj.or.jp wavelet TOPIX Baillie Gourieroux and Jasiak Elliott and Hoek TOPIX I (0) I (1) I (0) I (1) TOPIX ADFAugmented Dickey-Fuller testppphillips-perron test I (1) I (0)

More information

II 2 3.,, A(B + C) = AB + AC, (A + B)C = AC + BC. 4. m m A, m m B,, m m B, AB = BA, A,, I. 5. m m A, m n B, AB = B, A I E, 4 4 I, J, K

II 2 3.,, A(B + C) = AB + AC, (A + B)C = AC + BC. 4. m m A, m m B,, m m B, AB = BA, A,, I. 5. m m A, m n B, AB = B, A I E, 4 4 I, J, K II. () 7 F 7 = { 0,, 2, 3, 4, 5, 6 }., F 7 a, b F 7, a b, F 7,. (a) a, b,,. (b) 7., 4 5 = 20 = 2 7 + 6, 4 5 = 6 F 7., F 7,., 0 a F 7, ab = F 7 b F 7. (2) 7, 6 F 6 = { 0,, 2, 3, 4, 5 },,., F 6., 0 0 a F

More information

example2_time.eps

example2_time.eps Google (20/08/2 ) ( ) Random Walk & Google Page Rank Agora on Aug. 20 / 67 Introduction ( ) Random Walk & Google Page Rank Agora on Aug. 20 2 / 67 Introduction Google ( ) Random Walk & Google Page Rank

More information

kubostat2015e p.2 how to specify Poisson regression model, a GLM GLM how to specify model, a GLM GLM logistic probability distribution Poisson distrib

kubostat2015e p.2 how to specify Poisson regression model, a GLM GLM how to specify model, a GLM GLM logistic probability distribution Poisson distrib kubostat2015e p.1 I 2015 (e) GLM kubo@ees.hokudai.ac.jp http://goo.gl/76c4i 2015 07 22 2015 07 21 16:26 kubostat2015e (http://goo.gl/76c4i) 2015 (e) 2015 07 22 1 / 42 1 N k 2 binomial distribution logit

More information

橡同居選択における所得の影響(DP原稿).PDF

橡同居選択における所得の影響(DP原稿).PDF ** *** * 2000 13 ** *** (1) (2) (1986) - 1 - - 2 - (1986) Ohtake (1991) (1993) (1994) (1996) (1997) (1997) Hayashi (1997) (1999) 60 Ohtake (1991) 86 (1996) 89 (1997) 92 (1999) 95 (1993) 86 89 74 79 (1986)

More information

S I. dy fx x fx y fx + C 3 C vt dy fx 4 x, y dy yt gt + Ct + C dt v e kt xt v e kt + C k x v k + C C xt v k 3 r r + dr e kt S Sr πr dt d v } dt k e kt

S I. dy fx x fx y fx + C 3 C vt dy fx 4 x, y dy yt gt + Ct + C dt v e kt xt v e kt + C k x v k + C C xt v k 3 r r + dr e kt S Sr πr dt d v } dt k e kt S I. x yx y y, y,. F x, y, y, y,, y n http://ayapin.film.s.dendai.ac.jp/~matuda n /TeX/lecture.html PDF PS yx.................................... 3.3.................... 9.4................5..............

More information

A5 PDF.pwd

A5 PDF.pwd Average Treatment Effect; ATE attributes Randomized Factorial Survey Experiment; RFSE cues ATE ATE Hainmueller et al. 2014 Average Marginal Component Effect ATE 67 4 2017 2 845 , ;, ATE, ;, ;, W 846 67

More information

Centralizers of Cantor minimal systems

Centralizers of Cantor minimal systems Centralizers of Cantor minimal systems 1 X X X φ (X, φ) (X, φ) φ φ 2 X X X Homeo(X) Homeo(X) φ Homeo(X) x X Orb φ (x) = { φ n (x) ; n Z } x φ x Orb φ (x) X Orb φ (x) x n N 1 φ n (x) = x 1. (X, φ) (i) (X,

More information

3 3.3. I 3.3.2. [ ] N(µ, σ 2 ) σ 2 (X 1,..., X n ) X := 1 n (X 1 + + X n ): µ X N(µ, σ 2 /n) 1.8.4 Z = X µ σ/ n N(, 1) 1.8.2 < α < 1/2 Φ(z) =.5 α z α

3 3.3. I 3.3.2. [ ] N(µ, σ 2 ) σ 2 (X 1,..., X n ) X := 1 n (X 1 + + X n ): µ X N(µ, σ 2 /n) 1.8.4 Z = X µ σ/ n N(, 1) 1.8.2 < α < 1/2 Φ(z) =.5 α z α 2 2.1. : : 2 : ( ): : ( ): : : : ( ) ( ) ( ) : ( pp.53 6 2.3 2.4 ) : 2.2. ( ). i X i (i = 1, 2,..., n) X 1, X 2,..., X n X i (X 1, X 2,..., X n ) ( ) n (x 1, x 2,..., x n ) (X 1, X 2,..., X n ) : X 1,

More information

要旨 1. 始めに PCA 2. 不偏分散, 分散, 共分散 N N 49

要旨 1. 始めに PCA 2. 不偏分散, 分散, 共分散 N N 49 要旨 1. 始めに PCA 2. 不偏分散, 分散, 共分散 N N 49 N N Web x x y x x x y x y x y N 三井信宏 : 統計の落とし穴と蜘蛛の糸,https://www.yodosha.co.jp/jikkenigaku/statistics_pitfall/pitfall_.html 50 標本分散 不偏分散 図 1: 不偏分散のほうが母集団の分散に近付くことを示すシミュレーション

More information

12/1 ( ) GLM, R MCMC, WinBUGS 12/2 ( ) WinBUGS WinBUGS 12/2 ( ) : 12/3 ( ) :? ( :51 ) 2/ 71

12/1 ( ) GLM, R MCMC, WinBUGS 12/2 ( ) WinBUGS WinBUGS 12/2 ( ) : 12/3 ( ) :? ( :51 ) 2/ 71 2010-12-02 (2010 12 02 10 :51 ) 1/ 71 GCOE 2010-12-02 WinBUGS kubo@ees.hokudai.ac.jp http://goo.gl/bukrb 12/1 ( ) GLM, R MCMC, WinBUGS 12/2 ( ) WinBUGS WinBUGS 12/2 ( ) : 12/3 ( ) :? 2010-12-02 (2010 12

More information

5 Armitage x 1,, x n y i = 10x i + 3 y i = log x i {x i } {y i } 1.2 n i i x ij i j y ij, z ij i j 2 1 y = a x + b ( cm) x ij (i j )

5 Armitage x 1,, x n y i = 10x i + 3 y i = log x i {x i } {y i } 1.2 n i i x ij i j y ij, z ij i j 2 1 y = a x + b ( cm) x ij (i j ) 5 Armitage. x,, x n y i = 0x i + 3 y i = log x i x i y i.2 n i i x ij i j y ij, z ij i j 2 y = a x + b 2 2. ( cm) x ij (i j ) (i) x, x 2 σ 2 x,, σ 2 x,2 σ x,, σ x,2 t t x * (ii) (i) m y ij = x ij /00 y

More information

kubostat2018d p.2 :? bod size x and fertilization f change seed number? : a statistical model for this example? i response variable seed number : { i

kubostat2018d p.2 :? bod size x and fertilization f change seed number? : a statistical model for this example? i response variable seed number : { i kubostat2018d p.1 I 2018 (d) model selection and kubo@ees.hokudai.ac.jp http://goo.gl/76c4i 2018 06 25 : 2018 06 21 17:45 1 2 3 4 :? AIC : deviance model selection misunderstanding kubostat2018d (http://goo.gl/76c4i)

More information

yasi10.dvi

yasi10.dvi 2002 50 2 259 278 c 2002 1 2 2002 2 14 2002 6 17 73 PML 1. 1997 1998 Swiss Re 2001 Canabarro et al. 1998 2001 1 : 651 0073 1 5 1 IHD 3 2 110 0015 3 3 3 260 50 2 2002, 2. 1 1 2 10 1 1. 261 1. 3. 3.1 2 1

More information

2 1,2, , 2 ( ) (1) (2) (3) (4) Cameron and Trivedi(1998) , (1987) (1982) Agresti(2003)

2 1,2, , 2 ( ) (1) (2) (3) (4) Cameron and Trivedi(1998) , (1987) (1982) Agresti(2003) 3 1 1 1 2 1 2 1,2,3 1 0 50 3000, 2 ( ) 1 3 1 0 4 3 (1) (2) (3) (4) 1 1 1 2 3 Cameron and Trivedi(1998) 4 1974, (1987) (1982) Agresti(2003) 3 (1)-(4) AAA, AA+,A (1) (2) (3) (4) (5) (1)-(5) 1 2 5 3 5 (DI)

More information