Count data take the non-negative integer values 0, 1, 2, 3, .... Four examples of such data, (1)-(4), are listed. On regression models for count data, see Cameron and Trivedi (1998); on the analysis of categorical data, see Agresti (2003); see also (1974), (1987), and (1982).
Ordered categorical data take a finite number of ranked categories, such as bond credit ratings (AAA, AA+, A, ...). Five examples, (1)-(5), are listed, among them the diffusion index (DI), which is expressed in %; see (2006).
The median is the value below which 50% of the observations fall. The quartiles divide the ordered data at the 25%, 50%, and 75% points, and quantiles generalize this to other division points such as 20%, 40%, 60%, and 80% (see (1991, pp. 21-22)). The mode is the most frequently occurring value. On quantile regression, see Koenker (2005).
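The location statistics above (median, quartiles, quantiles, mode) can be computed directly with Python's standard library; the sample below is hypothetical, used only for illustration.

```python
import statistics

# Hypothetical sample, used only to illustrate the location statistics
# discussed above (median, quartiles, general quantiles, mode).
data = [42, 55, 55, 61, 68, 70, 70, 70, 74, 81, 88, 95]

median = statistics.median(data)             # the 50% point
quartiles = statistics.quantiles(data, n=4)  # cuts at 25%, 50%, 75%
quintiles = statistics.quantiles(data, n=5)  # cuts at 20%, 40%, 60%, 80%
mode = statistics.mode(data)                 # most frequent value

print(median, quartiles, quintiles, mode)
```

Note that `statistics.quantiles` returns the cut points between the groups, so `n=4` yields three quartile values and `n=5` yields four.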
The relationship between two variables x and y can first be examined with a scatter plot, for which three typical patterns (positive association, negative association, and no clear association) can be distinguished. A single numerical summary of the strength of linear association is the correlation coefficient r_xy. For the normal distribution, the interval within plus or minus three standard deviations of the mean contains 99.73% of the probability, so only about 0.3% lies outside it; distributions whose tails carry more probability than the normal are called fat-tailed.
The correlation coefficient between x and y is defined as

r_{xy} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}   (1)

and always satisfies -1 \le r_{xy} \le 1. Values near 1 indicate a strong positive linear relation between x and y, values near -1 a strong negative relation, and values near 0 little linear relation, corresponding to the three scatter-plot patterns above (see also (1991, pp. 54-55)).
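Equation (1) can be checked with a direct implementation; the two small data sets below are hypothetical, constructed to have exact positive and negative linear relations.

```python
import math

def corr(x, y):
    # Sample correlation coefficient r_xy as in equation (1).
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = (math.sqrt(sum((xi - xbar) ** 2 for xi in x))
           * math.sqrt(sum((yi - ybar) ** 2 for yi in y)))
    return num / den

x = [1, 2, 3, 4, 5]
y_pos = [2, 4, 6, 8, 10]   # exact positive linear relation
y_neg = [10, 8, 6, 4, 2]   # exact negative linear relation

r_pos = corr(x, y_pos)     # -> 1.0
r_neg = corr(x, y_neg)     # -> -1.0
```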
A continuous random variable X is described by its probability density function f(x), which gives probabilities via

P(a \le X \le b) = \int_a^b f(x)\,dx.   (2)

A density satisfies f(x) \ge 0 and \int_{-\infty}^{+\infty} f(x)\,dx = 1. The cumulative distribution function of X is

F(x) = P(X \le x) = \int_{-\infty}^{x} f(u)\,du,   (3)

with F'(x) = f(x). The distribution function has three basic properties: (1) it is nondecreasing, so x_1 < x_2 implies F(x_1) \le F(x_2); (2) \lim_{x \to -\infty} F(x) = 0 and \lim_{x \to +\infty} F(x) = 1; (3) it is right-continuous, \lim_{x \to a+0} F(x) = F(a). The expectation of X is

E(X) = \int_{-\infty}^{\infty} x f(x)\,dx,   (4)

written E(X) = \mu.
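Equations (2)-(4) can be illustrated numerically. The density below, f(x) = 2x on [0, 1], is a hypothetical example (not from the text) chosen because its normalization, distribution function F(x) = x^2, and mean E(X) = 2/3 are known exactly.

```python
# Numerical illustration of equations (2)-(4) for the hypothetical
# density f(x) = 2x on [0, 1].

def f(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def integrate(g, a, b, n=100000):
    # Trapezoidal rule on [a, b].
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        s += g(a + i * h)
    return s * h

total = integrate(f, 0.0, 1.0)                  # normalization: ~1
F_half = integrate(f, 0.0, 0.5)                 # F(0.5) = 0.5**2 = 0.25
mean = integrate(lambda x: x * f(x), 0.0, 1.0)  # E(X) = 2/3
```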
The variance is

V(X) = \int (x-\mu)^2 f(x)\,dx = E(X^2) - 2\mu E(X) + \mu^2 = E(X^2) - (E(X))^2,   (5)

and the standard deviation is D(X) = \sqrt{V(X)}. The skewness SK is

SK = E(X-\mu)^3 / \sigma^3.   (6)

A distribution with SK > 0 has a long right tail, while SK = 0 for a symmetric distribution. The kurtosis KT is

KT = E(X-\mu)^4 / \sigma^4.   (7)

For the normal distribution KT = 3, so the excess kurtosis (EK = excess kurtosis) is defined as

EK = KT - 3.   (8)

EK > 0 indicates tails heavier than the normal, and EK = 0 matches the normal.

If each of n independent trials results in event A with probability p, the number of occurrences X follows the binomial distribution with probability function

f(x) = {}_n C_x\, p^x (1-p)^{n-x}, \quad x = 0, 1, 2, \ldots, n,   (9)

with E(X) = np and V(X) = np(1-p). Letting n grow large and p shrink toward 0 with np \to \lambda yields the Poisson distribution

f(x) = e^{-\lambda} \lambda^x / x!, \quad x = 0, 1, 2, \ldots,   (10)

written Po(\lambda), with E(X) = \lambda and V(X) = \lambda.
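The moment formulas for (9) and (10) can be verified by summing the probability functions directly; n, p, and lambda below are arbitrary illustrative values.

```python
import math

# Check E(X) = np and V(X) = np(1-p) for the binomial pmf (9), and
# E(X) = V(X) = lambda for the Poisson pmf (10). The Poisson moments
# are computed over a truncated but very wide support.

n, p = 10, 0.3
binom_pmf = [math.comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
binom_mean = sum(x * q for x, q in enumerate(binom_pmf))
binom_var = sum((x - binom_mean) ** 2 * q for x, q in enumerate(binom_pmf))

lam = 1.344  # arbitrary illustrative rate
pois_pmf = [math.exp(-lam) * lam**x / math.factorial(x) for x in range(50)]
pois_mean = sum(x * q for x, q in enumerate(pois_pmf))
pois_var = sum((x - pois_mean) ** 2 * q for x, q in enumerate(pois_pmf))
```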
The normal distribution has density

f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right], \quad -\infty < x < \infty,   (11)

with E(X) = \mu and V(X) = \sigma^2, written N(\mu, \sigma^2). The normalizing constant 1/(\sqrt{2\pi}\sigma) follows from

\int \exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right] dx = \sqrt{2\pi}\,\sigma.   (12)

For k random variables jointly distributed with density f(x_1, x_2, \ldots, x_k) \ge 0,

\int \cdots \int f(x_1, x_2, \ldots, x_k)\, dx_1\, dx_2 \cdots dx_k = 1,   (13)

and for any region A of the sample space S,

P((x_1, x_2, \ldots, x_k) \in A) = \int \cdots \int_A f(x_1, x_2, \ldots, x_k)\, dx_1\, dx_2 \cdots dx_k.   (14)

Each X_i has a marginal distribution function F(x_i).

Footnotes: (15) cites count data from the 2002 FIFA World Cup (48, with mean 1.344) as a real-world example. (16) Standardizing X via Z = (X - \mu)/\sigma gives \mu = 0 and \sigma = 1, that is, N(0, 1). (17) On density estimation, see (2003) and Silverman (1986).
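The normalization in (12), and the 99.73% figure for the plus-or-minus three-sigma interval quoted earlier, can both be checked numerically; mu and sigma below are arbitrary illustrative values.

```python
import math

# Numerical check of equation (12): integrating the unnormalized kernel
# exp(-(x-mu)^2 / (2 sigma^2)) gives sqrt(2 pi) * sigma, which is why
# the density (11) divides by that constant.

mu, sigma = 1.0, 2.0  # arbitrary illustrative values

def kernel(x):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def trapz(g, a, b, n=100000):
    # Trapezoidal rule on [a, b].
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + i * h) for i in range(1, n))
                + 0.5 * g(b))

target = math.sqrt(2.0 * math.pi) * sigma
# Tails beyond +-8 sigma are negligible.
total = trapz(kernel, mu - 8 * sigma, mu + 8 * sigma)

# Probability mass within +-3 sigma of the mean: the 99.73% figure.
inside3 = trapz(kernel, mu - 3 * sigma, mu + 3 * sigma) / target
```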
X_i and X_j are mutually independent when the joint distribution function factorizes into the product of the marginals:

F(x_1, x_2, \ldots, x_k) = F_1(x_1) F_2(x_2) F_3(x_3) \cdots F_k(x_k).   (15)

The conditional density of X_j given X_i = x_i is

g(x_j \mid x_i) = \frac{f(x_j, x_i)}{h(x_i)},   (16)

where h(x_i) is the marginal density of X_i. Integrating over x_j,

\int g(x_j \mid x_i)\, dx_j = \frac{\int f(x_j, x_i)\, dx_j}{h(x_i)} = \frac{h(x_i)}{h(x_i)} = 1,   (17)

so g is itself a proper density in x_j. The conditional mean and conditional variance of X_j given x_i are

E(X_j \mid x_i) = \int x_j\, g(x_j \mid x_i)\, dx_j = \mu_{x_j \mid x_i},   (18)

V(X_j \mid x_i) = \int (x_j - \mu_{x_j \mid x_i})^2\, g(x_j \mid x_i)\, dx_j.   (19)
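A discrete analogue makes (16)-(18) concrete (sums replacing the integrals); the joint probability table below is hypothetical.

```python
# Discrete illustration of equations (16)-(18): a hypothetical joint
# pmf f(x_j, x_i), its marginal h(x_i), the conditional pmf
# g(x_j | x_i) = f(x_j, x_i) / h(x_i), and the conditional mean.

joint = {  # f(x_j, x_i) on {0, 1, 2} x {0, 1}
    (0, 0): 0.10, (1, 0): 0.20, (2, 0): 0.10,
    (0, 1): 0.05, (1, 1): 0.15, (2, 1): 0.40,
}

def marginal_i(xi):
    # h(x_i): sum the joint pmf over x_j.
    return sum(p for (xj, xi2), p in joint.items() if xi2 == xi)

def conditional(xj, xi):
    # g(x_j | x_i) as in equation (16).
    return joint[(xj, xi)] / marginal_i(xi)

# Equation (17): the conditional pmf sums to one.
total_given_1 = sum(conditional(xj, 1) for xj in (0, 1, 2))

# Equation (18): conditional mean E(X_j | x_i = 1).
cond_mean_1 = sum(xj * conditional(xj, 1) for xj in (0, 1, 2))
```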
Given data y = (y_1, \ldots, y_n)' and a parameter vector \theta = (\theta_1, \theta_2, \ldots, \theta_p)', the likelihood L(\theta) is the joint density of the data regarded as a function of \theta. The maximum likelihood estimator \hat\theta maximizes L(\theta), or equivalently \log L(\theta); see (1992). For the linear model

y_i = \beta x_i + \varepsilon_i, \quad i = 1, \ldots, n,   (20)

with \varepsilon_i \sim N(0, \sigma^2), the log-likelihood is

\log L(\beta) = \log\left[(2\pi\sigma^2)^{-n/2} \exp\left\{-\frac{(y - \beta x)'(y - \beta x)}{2\sigma^2}\right\}\right] = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}(y - \beta x)'(y - \beta x).   (21)

Maximizing with respect to \beta gives

\hat\beta = x'y / x'x.   (22)

The second derivative of \log L(\beta) is

\frac{\partial^2 \log L(\beta)}{\partial \beta^2} = -x'x/\sigma^2.   (23)

More generally, when the data y have density f_\theta(y), the Fisher information is

I(\theta) = -E\left\{\frac{\partial^2 \log L(\theta)}{\partial \theta^2}\right\} = -E\left\{\frac{\partial^2 \log f_\theta(y)}{\partial \theta^2}\right\}.   (24)
For an unbiased estimator t(y) of \theta, the Cramer-Rao inequality bounds the variance from below by the inverse of the Fisher information:

V\{t(y)\} \ge \frac{1}{I(\theta)}.   (25)

To test H_0: \theta = \theta_0, note that \sqrt{n}(\hat\theta - \theta_0) is asymptotically distributed as N(0, 1/I_1(\theta_0)), where I_1(\theta) = I(\theta)/n is the information per observation. Against the one-sided alternative H_1: \theta_1 > \theta_0, H_0 is rejected when

\sqrt{n I_1(\theta_0)}\,(\hat\theta - \theta_0) > Z_\alpha,   (26)

where Z_\alpha is the upper \alpha point of the standard normal distribution. Alternatively, for H_0: \theta = \theta_0 against H_1: \theta \ne \theta_0, the likelihood ratio statistic is asymptotically \chi^2(1) under H_0, and H_0 is rejected when

2 \log \frac{L(\hat\theta)}{L(\theta_0)} > \chi^2_\alpha(1)\ (= Z^2_{\alpha/2}),   (27)

using the relation \chi^2_\alpha(1) = Z^2_{\alpha/2}. See also (2005). On empirical likelihood, see Mittelhammer, Judge and Miller (2000) and Owen (2001).
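A hedged one-parameter sketch of (26) and (27), using a Poisson sample with hypothetical data: for Po(lambda) the MLE is the sample mean and the per-observation information is I_1(lambda) = 1/lambda, so the statistic in (26) becomes sqrt(n / lambda_0) (lambda_hat - lambda_0).

```python
import math

# Illustrative Poisson sample (not from the text); H0: lambda = 1.
y = [0, 2, 1, 3, 1, 0, 2, 4, 1, 2, 2, 1, 0, 3, 2, 1, 1, 2, 0, 1]
n = len(y)
lam_hat = sum(y) / n   # MLE of lambda: the sample mean
lam0 = 1.0             # null value

# Statistic in (26) with I1(lambda0) = 1/lambda0.
wald = math.sqrt(n / lam0) * (lam_hat - lam0)

# Likelihood ratio statistic (27); the log(y_i!) terms cancel in the
# ratio L(lam_hat) / L(lam0), leaving this closed form.
lr = 2.0 * n * (lam0 - lam_hat + lam_hat * math.log(lam_hat / lam0))

# Compare wald with Z_alpha (1.645 at alpha = 0.05, one-sided) and
# lr with chi^2_alpha(1) (3.841 at alpha = 0.05, two-sided).
```

With this sample lam_hat = 1.45, so the one-sided statistic exceeds 1.645 while the likelihood ratio statistic falls just short of 3.841, showing that the two tests need not agree in finite samples.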
[1] (1987)
[2] (2006)
[3] (2005)
[4] (1991)
[5] (1992)
[6] (1982)
[7] (1974)
[8] (2003)
[9] Cameron, A.C. and Trivedi, P.K. (1998) Regression Analysis of Count Data, Cambridge University Press.
[10] Cameron, A.C. and Trivedi, P.K. (2005) Microeconometrics: Methods and Applications, Cambridge University Press.
[11] Davidson, Russell and MacKinnon, James G. (2004) Econometric Theory and Methods, Oxford University Press.
[12] Koenker, Roger (2005) Quantile Regression, Cambridge University Press.
[13] Mittelhammer, Ron C., Judge, George G. and Miller, Douglas J. (2000) Econometric Foundations, Cambridge University Press.
[14] Owen, Art B. (2001) Empirical Likelihood, Chapman & Hall.
[15] Silverman, B.W. (1986) Density Estimation for Statistics and Data Analysis, Chapman & Hall.
[16] Winkelmann, Rainer and Boes, Stefan (2005) Analysis of Microdata, Springer.
[17] Wooldridge, Jeffrey M. (2003) Econometric Analysis of Cross Section and Panel Data, The MIT Press.