Markov Chain Monte Carlo (MCMC): A Tutorial (2004)
Part I. MCMC — Outline: 1. Introduction 2. Bayes' theorem 3. Gibbs sampler 4. MH algorithm 5. Convergence diagnostics 6. MCMC software
Part II. Applications — Outline: 1. Linear regression model 2. Tobit model 3. Probit model 4. Logit model 5. SUR model
Part I. MCMC
1. Introduction to MCMC
2. Bayes' theorem. Example: an event A has prior probability P(A) = 0.02. For a related event B, the conditional probabilities are Pr(B|A) = 0.8 and Pr(B|A^c) = 0.1.

Bayes' theorem gives the posterior probability of A given that B has occurred:

  Pr(A|B) = Pr(B|A)P(A) / [Pr(B|A)P(A) + Pr(B|A^c)P(A^c)]
          = (0.8)(0.02) / [(0.8)(0.02) + (0.1)(0.98)]
          = 0.016 / 0.114 ≈ 0.140
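The computation above can be checked directly (a minimal sketch, using only the probabilities stated on the slide):

```python
# Posterior probability Pr(A|B) via Bayes' theorem, with the slide's numbers
p_a = 0.02      # prior Pr(A)
p_b_a = 0.8     # Pr(B|A)
p_b_ac = 0.1    # Pr(B|A^c)

p_b = p_b_a * p_a + p_b_ac * (1 - p_a)   # law of total probability
p_a_b = p_b_a * p_a / p_b                # Bayes' theorem

print(round(p_a_b, 4))  # 0.1404
```

Note how a B occurrence raises the probability of A from 0.02 to about 0.14, but A remains unlikely because its prior probability is so small.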
More generally, for a parameter θ with prior density π(θ) and data x with likelihood f(x|θ), the posterior density is

  π(θ|x) ∝ f(x|θ) π(θ).
Example 1 (normal mean, known variance). With prior μ ~ N(μ₀, σ₀²), the posterior of μ given normal data is again normal, μ | x ~ N(μ₁, σ₁²): the normal prior is conjugate.
Example 2 (normal variance, known mean). With prior σ² ~ IG(n₀/2, n₀S₀/2), the posterior is σ² | x ~ IG(n₁/2, n₁S₁/2): the inverse gamma prior is conjugate.
Example 3 (mean and variance both unknown). Take the conditionally conjugate priors μ | σ² ~ N(μ₀, σ²/m₀) and σ² ~ IG(n₀/2, n₀S₀/2). Here the joint posterior of (μ, σ²) is still tractable, but in more complicated models it is not; this motivates sampling-based methods such as MCMC.
3. Gibbs sampler. To sample from the joint posterior π(μ, σ² | x), iterate draws from the full conditional distributions:

  1. Choose initial values μ⁽⁰⁾, σ²⁽⁰⁾.
  2. Draw μ⁽¹⁾ ~ π(μ | σ²⁽⁰⁾, x), then σ²⁽¹⁾ ~ π(σ² | μ⁽¹⁾, x).
  3. Repeat: for i = 1, 2, …, draw μ⁽ⁱ⁺¹⁾ ~ π(μ | σ²⁽ⁱ⁾, x) and σ²⁽ⁱ⁺¹⁾ ~ π(σ² | μ⁽ⁱ⁺¹⁾, x).
  4. Discard the first draws as burn-in; the remaining (μ⁽ⁱ⁾, σ²⁽ⁱ⁾), i = n, n+1, …, N, are (correlated) draws from π(μ, σ² | x).
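The steps above can be sketched in Python (a minimal sketch, not from the slides: hypothetical data mimicking the X_i ~ N(10, 25) example, with the independent vague priors of the WinBUGS example, μ ~ N(0, 1000) and σ² ~ IG(0.001, 0.001)):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data, mimicking the slides' example X_i ~ N(10, 25)
x = rng.normal(10.0, 5.0, size=30)
n = x.size

# Independent vague priors: mu ~ N(mu0, s0sq), sigma^2 ~ IG(a0, b0)
mu0, s0sq = 0.0, 1000.0
a0, b0 = 0.001, 0.001

n_iter, burn = 11000, 1000
mu, sig2 = 0.0, 1.0
draws = np.empty((n_iter, 2))

for i in range(n_iter):
    # mu | sigma^2, x ~ N(posterior mean, posterior variance)
    prec = 1.0 / s0sq + n / sig2
    mean = (mu0 / s0sq + x.sum() / sig2) / prec
    mu = rng.normal(mean, np.sqrt(1.0 / prec))
    # sigma^2 | mu, x ~ IG(a0 + n/2, b0 + sum((x-mu)^2)/2), drawn as 1/Gamma
    a = a0 + 0.5 * n
    b = b0 + 0.5 * np.sum((x - mu) ** 2)
    sig2 = 1.0 / rng.gamma(a, 1.0 / b)
    draws[i] = mu, sig2

post = draws[burn:]
print(post.mean(axis=0))  # posterior means of (mu, sigma^2)
```

With independent priors both full conditionals are standard distributions, so every draw is accepted; this is what makes the Gibbs sampler so convenient here.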
3. Gibbs sampler (continued): WinBUGS code.

  model{
    # likelihood: x[i] ~ N(mu, sigma^2), with precision tau = 1/sigma^2
    for (i in 1:N) {
      x[i] ~ dnorm(mu, tau)        # x ~ N(mu, sigma^2)
    }
    # priors on (mu, sigma^2)
    tau ~ dgamma(1.0E-3, 1.0E-3)   # sigma^2 ~ IG(0.001, 0.001)
    mu ~ dnorm(0.0, 1.0E-3)        # mu ~ N(0, 1000)
    var <- 1/tau
  }

WinBUGS is available at http://www.mrc-bsu.cam.ac.uk/bugs/
3. Gibbs sampler (continued): data and initial values. Simulated data X_1,…,X_30 ~ N(10, 25):

  list(x=c(13.08, 2.44, 13.82, 14.14, 15.63, 18.99, 4.37, 11.41, 13.53, 9.43,
           12.69, 5.30, 17.73, 4.31, 11.42, -2.72, 14.15, 9.56, 18.29, 12.09,
           11.16, 8.93, 6.14, 4.72, 7.99, 7.80, 11.35, 11.76, 13.62, 14.47), N=30)

Initial values: list(mu=10, tau=1/25). Burn-in period: 1000 iterations; then 10000 sampling iterations.
3. Gibbs sampler (continued). [Figure: trace plots of mu (about 5.0–15.0) and var (about 0–80) over iterations 1001–10000.]
3. Gibbs sampler (continued). [Figure: posterior density estimates from 10000 samples: mu (about 5.0–17.5) and var (about 0–100).]
3. Gibbs sampler (continued). Posterior summary:

            mean    sd      MC error   2.5%    median   97.5%
  mu        10.56   0.918   0.0093     8.76    10.56    12.38
  sigma^2   26.40   7.415   0.0681     15.62   25.22    42.26
3. Gibbs sampler (continued): prediction. Given the posterior draws, a draw from the posterior predictive distribution of a new observation is X_new⁽ⁱ⁾ ~ N(μ⁽ⁱ⁾, σ²⁽ⁱ⁾), one per posterior draw. [Figure: histogram of 10000 predictive draws of x_new, roughly -20 to 40.]
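The predictive step can be sketched in Python (hypothetical stand-ins for the stored MCMC draws, with shapes roughly matching the slides' posterior summary):

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-ins for stored MCMC draws of (mu, sigma^2); in practice these come
# from the Gibbs sampler output (posterior means ~10.56 and ~26.4 on the slides)
mu_draws = rng.normal(10.56, 0.918, 10000)
sig2_draws = 1.0 / rng.gamma(15.0, 1.0 / 370.0, 10000)  # IG(15, 370), mean ~26.4

# One predictive draw per posterior draw: X_new^(i) ~ N(mu^(i), sigma^2(i))
x_new = rng.normal(mu_draws, np.sqrt(sig2_draws))
print(round(float(x_new.mean()), 2), round(float(x_new.std()), 2))
```

The predictive spread exceeds the spread of the data-generating distribution at the posterior mean, because parameter uncertainty is mixed in automatically.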
3. Gibbs sampler (general case). To sample from π(θ₁, …, θ_m | x):

  1. Choose an initial value θ⁽⁰⁾ = (θ₁⁽⁰⁾, …, θ_m⁽⁰⁾).
  2. For i = 0, 1, 2, …:
     a. draw θ₁⁽ⁱ⁺¹⁾ ~ π(θ₁ | θ₂⁽ⁱ⁾, …, θ_m⁽ⁱ⁾, x)
     b. draw θ₂⁽ⁱ⁺¹⁾ ~ π(θ₂ | θ₁⁽ⁱ⁺¹⁾, θ₃⁽ⁱ⁾, …, θ_m⁽ⁱ⁾, x)
     c. …
     d. draw θ_m⁽ⁱ⁺¹⁾ ~ π(θ_m | θ₁⁽ⁱ⁺¹⁾, …, θ_{m-1}⁽ⁱ⁺¹⁾, x)
  3. Discard the first n draws as burn-in and use the θ⁽ⁱ⁾, i > n.

The draws θ⁽ⁱ⁾ (i = 1, 2, …) form a Markov chain. The Gibbs sampler is a special case of the MH algorithm (next section) in which every proposal is accepted.
4. MH algorithm. The Metropolis–Hastings (MH) algorithm draws from a target distribution by generating candidates from a proposal distribution and accepting or rejecting them.
4. MH algorithm (general case). To sample from π(θ | x), θ = (θ₁, …, θ_m):

  1. Choose an initial value θ⁽⁰⁾ = (θ₁⁽⁰⁾, …, θ_m⁽⁰⁾).
  2. For i = 0, 1, 2, …:
     a. Given θ⁽ⁱ⁾, draw a candidate θ* from a proposal distribution Q with density q(θ⁽ⁱ⁾, θ* | x).
     b. Accept the candidate with probability
        α(θ⁽ⁱ⁾, θ* | x) = min{1, [π(θ*|x) q(θ*, θ⁽ⁱ⁾|x)] / [π(θ⁽ⁱ⁾|x) q(θ⁽ⁱ⁾, θ*|x)]};
        if accepted, set θ⁽ⁱ⁺¹⁾ = θ*, otherwise θ⁽ⁱ⁺¹⁾ = θ⁽ⁱ⁾.
  3. Discard the first n draws as burn-in and use the θ⁽ⁱ⁾, i > n.
4. MH algorithm (properties). The acceptance probability involves only the ratio π(θ*|x)/π(θ⁽ⁱ⁾|x), so the target density is needed only up to a normalizing constant. The resulting chain satisfies the detailed balance equation, which guarantees that the target is its stationary distribution.
4. MH algorithm (special case 1): random-walk MH. Propose θ* = θ⁽ⁱ⁾ + z with z ~ f(z), so that q(θ⁽ⁱ⁾, θ* | x) = f(θ* - θ⁽ⁱ⁾). If f is symmetric about 0, the proposal densities cancel in the acceptance probability. The scale of z must be tuned: if it is too small the chain moves slowly through the support; if it is too large, proposals are rarely accepted and the chain sticks. In either case the draws θ⁽ⁱ⁾ (i = 1, 2, …) are highly autocorrelated.
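A random-walk MH sketch in Python (illustrative only, not from the slides: the target here is a standard normal known up to a constant, and the step size 1.0 is an arbitrary tuning choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(th):
    # log density of the target, up to an additive constant (here: N(0,1))
    return -0.5 * th * th

n_iter, step = 20000, 1.0
th = 0.0
chain = np.empty(n_iter)
accepted = 0
for i in range(n_iter):
    prop = th + step * rng.normal()                 # theta* = theta + z
    log_alpha = log_target(prop) - log_target(th)   # symmetric proposal: q cancels
    if np.log(rng.uniform()) < log_alpha:
        th = prop
        accepted += 1
    chain[i] = th

rate = accepted / n_iter
print(round(rate, 3))  # acceptance rate
```

Re-running with step = 0.01 or step = 50 illustrates the tuning trade-off described above: acceptance near 1 with tiny moves, or near 0 with large ones.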
4. MH algorithm (special case 2): independence MH. The candidate is drawn independently of the current state: q(θ⁽ⁱ⁾, θ* | x) = f(θ* | x). The acceptance probability then depends on the ratio of importance weights w(θ) = π(θ|x)/f(θ|x), so f should approximate the target well and have sufficiently heavy tails.
4. MH algorithm (continued). A common proposal is a normal approximation θ* ~ N(m, V), obtained by expanding log π(θ|x) to second order around its mode; the mode m and curvature V can be found numerically.
4. MH algorithm (example: AR(1) model). Consider y_t = φ y_{t-1} + ε_t, ε_t ~ N(0, σ²), with stationary initial condition y₀ ~ N(0, σ²/(1-φ²)) and priors φ ~ U(-1, 1), σ² ~ IG(n₀/2, n₀S₀/2).
4. MH algorithm (example, continued). The full conditional of σ² is of standard form: σ² | φ, y ~ IG(n₁/2, n₁S₁/2).
4. MH algorithm (example, continued). The full conditional of φ is not of standard form: it is proportional to a truncated normal TN_(-1,1)(μ_φ, σ_φ²) multiplied by the factor √(1-φ²) coming from the stationary initial distribution. So draw a candidate φ* ~ TN_(-1,1)(μ_φ, σ_φ²) and accept it with the MH probability min{1, √(1-φ*²)/√(1-φ⁽ⁱ⁾²)}.
4. MH algorithm (example, continued). Algorithm:

  1. Choose initial values φ⁽⁰⁾, σ²⁽⁰⁾.
  2. For i = 0, 1, 2, …:
     a. draw σ²⁽ⁱ⁺¹⁾ ~ IG(n₁/2, n₁S₁/2) given φ⁽ⁱ⁾ and y;
     b. draw φ* ~ TN_(-1,1)(μ_φ, σ_φ²) given σ²⁽ⁱ⁺¹⁾ and y, and set φ⁽ⁱ⁺¹⁾ = φ* with probability min{1, √(1-φ*²)/√(1-φ⁽ⁱ⁾²)}, otherwise φ⁽ⁱ⁺¹⁾ = φ⁽ⁱ⁾.
  3. Discard the first n draws as burn-in and use (φ⁽ⁱ⁾, σ²⁽ⁱ⁾), i > n.
4. MH algorithm (example, continued). Ox code (full code available at http://www.e.u-tokyo.ac.jp/~omori ):

  for(i=0;i<10000;++i){
    // sigma^2 | phi, y ~ IG(n1/2, n1*S1/2)
    a = 0.5*(0.001+nobs+1);
    b = 0.5*(0.001+(1-phi^2)*y[0]^2+sumsqrc(y[1:nobs-1]-phi*y[0:nobs-2]));
    var_e = 1/rangamma(1,1,a,b);
    // mean and variance of the truncated-normal proposal for phi
    mu_phi  = (y[0:nobs-2]'*y[1:nobs-1])/sumsqrc(y[0:nobs-2]);
    var_phi = var_e/sumsqrc(y[0:nobs-2]);
    // candidate phi* ~ TN(-1,1)(mu_phi, var_phi)
    do{phi_n=mu_phi+rann(1,1)*sqrt(var_phi);}while(fabs(phi_n)>=1);
    // MH acceptance step
    if(ranu(1,1)<=(0.5*sqrt(1-phi_n^2))/(0.5*sqrt(1-phi^2))){phi=phi_n;}
  }
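A Python sketch of the same sampler (not from the slides; the data are simulated here with φ = 0.9, σ² = 1.0 as in the slides' experiment, and the vague IG prior constants mirror the Ox code):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate AR(1) data: y_t = phi*y_{t-1} + eps_t, stationary start
T, phi_true = 1000, 0.9
y = np.empty(T)
y[0] = rng.normal(0.0, 1.0 / np.sqrt(1 - phi_true**2))
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal()

phi, sig2 = 0.0, 1.0
n_iter, burn = 6000, 1000
out = np.empty((n_iter, 2))
for i in range(n_iter):
    # sigma^2 | phi, y ~ IG, including the stationary initial term
    a = 0.5 * (0.001 + T)
    resid = y[1:] - phi * y[:-1]
    b = 0.5 * (0.001 + (1 - phi**2) * y[0] ** 2 + resid @ resid)
    sig2 = 1.0 / rng.gamma(a, 1.0 / b)
    # candidate phi* ~ TN(-1,1)(m, v), accepted with prob sqrt(1-phi*^2)/sqrt(1-phi^2)
    m = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    v = sig2 / (y[:-1] @ y[:-1])
    while True:
        phi_new = m + np.sqrt(v) * rng.normal()
        if abs(phi_new) < 1:
            break
    if rng.uniform() <= np.sqrt(1 - phi_new**2) / np.sqrt(1 - phi**2):
        phi = phi_new
    out[i] = phi, sig2

post = out[burn:]
print(post.mean(axis=0))  # posterior means of (phi, sigma^2)
```

The truncated-normal candidate is drawn by simple rejection, exactly as in the Ox do-while loop.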
4. MH algorithm (example, continued). Simulation: data y₀, …, y₁₀₀₀ generated with φ = 0.9, σ² = 1.0. MH acceptance rate: 96.3%. Posterior summary:

            mean    sd      2.5%    97.5%
  phi       0.899   0.014   0.873   0.927
  sigma^2   0.976   0.044   0.892   1.065
4. MH algorithm (example, continued). [Figures: sample ACF (lags 0–50), sample path (10000 iterations), and posterior density for PHI; the same three panels for SIGMA2.]
4. MH algorithm (example, continued). WinBUGS code (for this model WinBUGS uses a slice sampler). Model: 1000 observations y₀, …, y₉₉₉.

  model{
    # likelihood: y_t = phi*y_{t-1} + eps_t, tau = 1/sigma^2
    tau0 <- tau*(1-phi*phi)
    y[1] ~ dnorm(0, tau0)           # y_0 ~ N(0, sigma^2/(1-phi^2))
    for (i in 2:N) {
      ymu[i] <- phi*y[i-1]          # y_t ~ N(phi*y_{t-1}, sigma^2)
      y[i] ~ dnorm(ymu[i], tau)
    }
    # priors on (phi, sigma^2)
    phi ~ dunif(-1, 1)              # phi ~ U(-1,1)
    tau ~ dgamma(1.0E-3, 1.0E-3)    # sigma^2 ~ IG(0.001, 0.001)
    var <- 1/tau
  }
4. MH algorithm (example, continued). Data (generated with φ = 0.9, σ² = 1.0):

  list(N=1000, y=c(-1.4958246547, …, -0.2742255020))

Initial values: list(phi=0, tau=1). Burn-in period: 1000 iterations; then 10000 sampling iterations.
4. MH algorithm (example, continued). [Figures: trace plots of phi (about 0.8–0.95) and var (about 0.8–1.2) over iterations 1001–10000; posterior densities of phi and var from 10000 samples.]
4. MH algorithm (example, continued). WinBUGS posterior summary:

            mean    sd      MC error   2.5%    median   97.5%
  phi       0.899   0.014   0.0001     0.873   0.899    0.927
  sigma^2   0.978   0.044   0.0004     0.895   0.977    1.068
5. Convergence diagnostics. After running the chain, one must check whether it has reached the stationary distribution and how autocorrelated the draws are.
5. Convergence diagnostics (continued). [Figure: sample autocorrelations of mu and var up to lag 40, from the example in Section 3.]
5. Convergence diagnostics (continued). [Figure: trace plots of mu and var over iterations 1401–10000.]
5. Convergence diagnostics (continued). Geweke (1992) compares the means of a function g(θ) over an early portion of the chain (e.g. the first 10%) and a late portion (e.g. the last 50%) using a Z-score. Other diagnostics: Gelfand and Smith (1990), Gelman and Rubin (1992), Liu and Liu (1993), Raftery and Lewis (1992), Heidelberger and Welch (1983), Mykland et al. (1995).
5. Convergence diagnostics (continued). Cowles and Carlin (1996) review 13 convergence diagnostics; see also Robert and Casella (1999) and Mengersen, Robert and Guihenneuc-Jouyaux (1999). Software: CODA (Convergence Diagnostic and Output Analysis Software) and BOA (Bayesian Output Analysis) for S-Plus and R implement the Geweke, Gelman–Rubin, Raftery–Lewis, and Heidelberger–Welch diagnostics, together with autocorrelations, summary statistics, and 95% intervals.
5. Convergence diagnostics (continued). Example: BOA output for the Geweke (1992) diagnostic.

  GEWEKE CONVERGENCE DIAGNOSTIC:
  ==============================
  Fraction in first window = 0.1
  Fraction in last window  = 0.5
  Chain: sample1
  --------------
             mu           var
  Z-Score    0.9682253    -0.9140632
  p-value    0.3329318     0.3606836
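The idea behind the Geweke Z-score can be sketched in Python. This is a simplified version (my simplification, not BOA's implementation): it uses plain sample variances, whereas the real diagnostic estimates the two variances from spectral densities to account for autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(3)

def geweke_z(chain, first=0.1, last=0.5):
    """Compare the mean of the first 10% of the chain with the mean of the
    last 50%; a large |Z| suggests the chain has not converged.  Simplified:
    plain sample variances instead of spectral-density estimates."""
    n = len(chain)
    a = chain[: int(first * n)]
    b = chain[int((1 - last) * n):]
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)

chain = rng.normal(size=10000)   # a well-mixed (here: i.i.d.) "chain"
z = geweke_z(chain)
print(round(float(z), 3))
```

For a converged chain Z is approximately standard normal, which is how BOA's p-values above are computed.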
6. MCMC software. General-purpose: WinBUGS; also BayesX, and some Bayesian features in TSP and STATA. Diagnostics: CODA and BOA (for R and S-Plus; see http://www.public-health.uiowa.edu/boa/ ). Matrix languages for custom samplers: Ox, Matlab, GAUSS; compiled languages: C, Fortran.
Part II. Applications — 1. Linear regression model 2. Tobit model 3. Probit model 4. Logit model 5. SUR model
1. Linear regression model. For observations y_i, i = 1, …, n, with regressors x_i = (x_{i1}, …, x_{iK})′ and coefficients β = (β₁, …, β_K)′:

  y_i = x_i′β + ε_i,   ε_i ~ i.i.d. N(0, σ²).
1. Linear regression (example). [Figure: scatter plot of the dependent variable (Y, about 300–900) against the regressor (X, about 0–30).]
1. Linear regression (example, continued). WinBUGS code:

  model{
    # likelihood: y_i ~ N(mu_i, sigma^2), tau = 1/sigma^2
    for (i in 1:n) {                  # n observations
      mu[i] <- beta[1]+beta[2]*x[i]   # mu_i = beta1 + beta2*x_i
      y[i] ~ dnorm(mu[i], tau)        # y_i ~ N(mu_i, sigma^2)
    }
    # priors on (beta, sigma^2)
    for (i in 1:K) {                  # K = 2
      beta[i] ~ dnorm(0, 1.0E-6)      # beta_i ~ N(0, 10^6)
    }
    tau ~ dgamma(1.0E-6, 1.0E-6)      # sigma^2 ~ IG(10^-6, 10^-6)
    sigma2 <- 1/tau
  }
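The corresponding Gibbs sampler can be sketched in Python (not from the slides: hypothetical data loosely mimicking the example's posterior scale, with the same vague priors as the WinBUGS model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data for y_i = beta1 + beta2*x_i + eps_i
n = 100
x = rng.uniform(0, 30, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([430.0, 9.0])
y = X @ beta_true + rng.normal(0, 100.0, n)

# Vague priors: beta_j ~ N(0, 1e6), sigma^2 ~ IG(1e-6, 1e-6)
B0inv = np.eye(2) * 1e-6          # prior precision of beta
a0 = b0 = 1e-6

beta, sig2 = np.zeros(2), 1.0
n_iter, burn = 6000, 1000
out = np.empty((n_iter, 3))
for i in range(n_iter):
    # beta | sigma^2, y ~ N(b1, B1)
    B1 = np.linalg.inv(X.T @ X / sig2 + B0inv)
    b1 = B1 @ (X.T @ y / sig2)
    beta = rng.multivariate_normal(b1, B1)
    # sigma^2 | beta, y ~ IG(a0 + n/2, b0 + RSS/2)
    resid = y - X @ beta
    sig2 = 1.0 / rng.gamma(a0 + 0.5 * n, 1.0 / (b0 + 0.5 * resid @ resid))
    out[i] = beta[0], beta[1], sig2

post = out[burn:]
print(post.mean(axis=0))  # posterior means of (beta1, beta2, sigma^2)
```

Both full conditionals are standard (normal and inverse gamma), so the sampler alternates two exact draws, just as WinBUGS does internally for this model.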
1. Linear regression (example, continued). Posterior summary:

            mean    sd      MC error   2.5%    median   97.5%
  beta1     429.2   78.34   0.833      271.3   429.9    584.7
  beta2     8.813   4.305   0.045      0.286   8.763    17.42
  sigma^2   11760   6550    78.13      4593    10150    28860
1. Linear regression (example, continued). [Figure: posterior densities from 10000 samples: beta[1], beta[2], sigma2.]
1. Linear regression (prediction). For a new observation with regressor x_{n+1}, y_{n+1} | β, σ², x_{n+1} ~ N(x_{n+1}′β, σ²); predictive draws are obtained by sampling this distribution at each posterior draw of (β, σ²).
1. Linear regression (extensions). The error assumption ε_i ~ i.i.d. N(0, σ²) can be relaxed, for example to heavier-tailed t-distributed errors, within the same MCMC framework.
2. Tobit (censored regression) model. The observation y_i is censored at 0: with latent variable y_i* = x_i′β + ε_i, ε_i ~ i.i.d. N(0, σ²), we observe

  y_i = y_i*  if y_i* > 0,   y_i = 0  otherwise,

with regressors x_i = (x_{i1}, …, x_{iK})′ and coefficients β = (β₁, …, β_K)′.
2. Tobit (example). Dependent variable: y, censored at 0. Regressors: x1, x2, x3, x4.
2. Tobit (example, continued). Ox code:

  for(cn=-cburn;cn<crepeat;++cn){
    // beta | sigma^2, y* ~ N(b1, B1)
    mb1=invert((1/dsigma2)*mx'*mx+invert(mb0));
    vb1=mb1*((1/dsigma2)*mx'*vystar+invert(mb0)*vb0);
    vb=vb1+choleski(mb1)*rann(cm,1);
    // y*_i for censored observations: truncated normal on (-inf, 0)
    for(i=0;i<cnobs;++i){
      if(vy[i]==0){
        do{vystar[i]=mx[i][]*vb+rann(1,1)*sqrt(dsigma2);
        }while(vystar[i]>=0);
      }
      else{vystar[i]=vy[i];}
    }
    // sigma^2 | beta, y* ~ IG
    da=0.5*(0.002+cnobs);
    db=0.5*(0.002+(vystar-mx*vb)'*(vystar-mx*vb));
    dsigma2=1/rangamma(1,1,da,db);
  }
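A Python sketch of the same data-augmentation Gibbs sampler (not from the slides: the data are hypothetical, and the censored latent values are drawn by rejection sampling exactly as in the Ox do-while loop):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical censored data: y* = x'beta + eps, observe y = y* if y* > 0, else 0
n = 100
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, n)])
beta_true = np.array([0.5, 1.0])
ystar_true = X @ beta_true + rng.normal(size=n)
y = np.where(ystar_true > 0, ystar_true, 0.0)

B0inv = np.eye(2) * 1e-3          # vague N(0, 1000 I) prior on beta
a0 = b0 = 0.001                   # vague IG prior on sigma^2
beta, sig2 = np.zeros(2), 1.0
ystar = y.copy()
censored = np.where(y == 0)[0]
n_iter, burn = 3000, 500
out = np.empty((n_iter, 2))
for i in range(n_iter):
    # 1. beta | sigma^2, y* ~ N(b1, B1)
    B1 = np.linalg.inv(X.T @ X / sig2 + B0inv)
    beta = rng.multivariate_normal(B1 @ (X.T @ ystar / sig2), B1)
    mu = X @ beta
    # 2. y*_j | beta, sigma^2 ~ N(x_j'beta, sigma^2) truncated to y* < 0
    for j in censored:
        draw = mu[j] + np.sqrt(sig2) * rng.normal()
        while draw >= 0:          # rejection sampling, as in the Ox code
            draw = mu[j] + np.sqrt(sig2) * rng.normal()
        ystar[j] = draw
    # 3. sigma^2 | beta, y* ~ IG(a0 + n/2, b0 + RSS/2)
    resid = ystar - mu
    sig2 = 1.0 / rng.gamma(a0 + 0.5 * n, 1.0 / (b0 + 0.5 * resid @ resid))
    out[i] = beta

post = out[burn:]
print(post.mean(axis=0))  # posterior means of (beta1, beta2)
```

Once the censored y_i* are imputed, the update for (β, σ²) is the ordinary linear-regression Gibbs step.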
2. Tobit (example, continued). Posterior summary:

            mean      sd        2.5%      97.5%
  beta1     12.49     18.77     -24.03    49.01
  beta2     -6.15     3.53      -13.38    0.56
  beta3     -0.95     0.35      -1.66     -0.30
  beta4     3.46      1.45      0.70      6.38
  beta5     1.57E-4   1.95E-4   -2.31E-4  5.44E-4
  sigma^2   679.6     215.5     369.3     1195.1
3. Probit model. The dependent variable y_i is binary, taking the value 0 or 1: y_i = 1 if one alternative is chosen, y_i = 0 otherwise.
3. Probit model (continued). Let F be a distribution function; the binary-choice model is Pr(y_i = 1) = F(x_i′β). Equivalently, introduce a latent variable y_i* = x_i′β + ε_i and set y_i = 1 if y_i* > 0, y_i = 0 otherwise. The probit model takes ε_i ~ N(0, 1), so F is the standard normal distribution function.
3. Probit (example). Data: 95 observations from Pindyck and Rubinfeld (1998). Y: binary outcome (1 or 0); regressors X_1–X_5, with X_2 and X_3 binary (1/0) dummies and X_4, X_5 continuous.
3. Probit (example, continued). Ox code:

  for(cn=-cburn;cn<crepeat;++cn){
    // beta | y* ~ N(b1, B1)
    vb1=mb1*(mx'*vystar+invert(mb0)*vb0);
    vb=vb1+choleski(mb1)*rann(cm,1);
    // y*_i | beta, y_i: truncated standard normal around x_i'beta
    for(i=0;i<cnobs;++i){
      if(vy[i]==0){
        do{vystar[i]=mx[i][]*vb+rann(1,1);}while(vystar[i]>=0);
      }
      else{
        do{vystar[i]=mx[i][]*vb+rann(1,1);}while(vystar[i]<0);
      }
    }
  }
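A Python sketch of this probit data-augmentation sampler (hypothetical data, not the Pindyck–Rubinfeld set; the error variance is fixed at 1, so only β is updated):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical binary data from a probit model: Pr(y=1) = Phi(x'beta)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, n)])
beta_true = np.array([0.3, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

B0inv = np.eye(2) * 1e-3              # vague N(0, 1000 I) prior on beta
z = np.where(y == 1, 0.5, -0.5)       # latent utilities, initialized by sign
B1 = np.linalg.inv(X.T @ X + B0inv)   # constant, since the error variance is 1
n_iter, burn = 2500, 500
out = np.empty((n_iter, 2))
for i in range(n_iter):
    # beta | z ~ N(b1, B1)
    beta = rng.multivariate_normal(B1 @ (X.T @ z), B1)
    # z_j | beta, y_j: N(x_j'beta, 1), truncated to z > 0 if y=1, z <= 0 if y=0
    mu = X @ beta
    for j in range(n):
        draw = mu[j] + rng.normal()
        while (draw <= 0) == (y[j] == 1):   # reject until the sign matches y_j
            draw = mu[j] + rng.normal()
        z[j] = draw
    out[i] = beta

post = out[burn:]
print(post.mean(axis=0))  # posterior means of (beta1, beta2)
```

This is the same two-block structure as the Tobit sampler, except that every observation's latent variable is truncated (to the positive or negative half-line according to y_i) and σ² is fixed at 1 for identification.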
3. Probit (example, continued). Posterior summary:

            mean    sd     2.5%    97.5%
  beta1     -4.91   3.73   -12.3   2.43
  beta2     -0.39   0.44   -1.23   0.48
  beta3     2.05    0.89   0.55    4.00
  beta4     1.44    0.45   0.62    2.36
  beta5     -1.33   0.60   -2.55   -0.18
4. Logit model. The binary-choice model with a logistic link: Pr(y_i = 1) = exp(x_i′β)/(1 + exp(x_i′β)), i.e. logit(p_i) = x_i′β.
4. Logit model (estimation). The full conditional of β is not of standard form, so an MH step is used with a normal proposal N(m, V). The proposal mean and variance (m, V) can be taken from a second-order expansion of the log posterior around its mode, and candidates are accepted or rejected by the MH rule.
4. Logit model (continued). WinBUGS code:

  model {
    for (i in 1:n) {
      y[i] ~ dbern(p[i])
      logit(p[i]) <- b[1] + b[2]*x[i,1] + b[3]*x[i,2] + b[4]*x[i,3] + b[5]*x[i,4]
    }
    #prior
    for (j in 1:K) {b[j] ~ dnorm(0,0.001)}
  }

(WinBUGS chooses its own sampling method for this model.)
4. Logit model (continued). Ox code (independence MH; h(β) is the second-order approximation of the log posterior around the mode β̂):

  for(cn=-cburn;cn<crepeat;++cn){
    vb_o=vb;flk(vb_o,&dl_o,0,0);          // log posterior at current beta
    dh_o=dlhat+vl1hat'*(vb_o-vbhat)       // approximation h at current beta
        +0.5*(vb_o-vbhat)'*ml2hat*(vb_o-vbhat);
    vb_n=vb1+mb1root*rann(cm,1);          // candidate beta*
    flk(vb_n,&dl_n,0,0);                  // log posterior at candidate
    dh_n=dlhat+vl1hat'*(vb_n-vbhat)       // approximation h at candidate
        +0.5*(vb_n-vbhat)'*ml2hat*(vb_n-vbhat);
    dfrac=exp(dl_n+dh_o-dl_o-dh_n);       // acceptance probability
    if(ranu(1,1)<=dfrac){vb=vb_n;}
  }

MH acceptance rate: 72.75%.
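An MH sampler for the logit posterior can also be sketched in Python. This sketch swaps the slides' normal-approximation proposal for a plain random walk, which avoids the mode-finding step (hypothetical data; the step size 0.15 is an arbitrary tuning choice):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical binary data from a logit model: Pr(y=1) = 1/(1+exp(-x'beta))
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.uniform(size=n) < p).astype(int)

def log_post(b):
    # logit log-likelihood plus a vague N(0, 1000 I) prior
    eta = X @ b
    return np.sum(y * eta - np.log1p(np.exp(eta))) - 0.5 * b @ b / 1000.0

beta = np.zeros(2)
lp = log_post(beta)
n_iter, burn, step = 6000, 1000, 0.15
out = np.empty((n_iter, 2))
acc = 0
for i in range(n_iter):
    prop = beta + step * rng.normal(size=2)       # random-walk candidate
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # symmetric proposal: q cancels
        beta, lp = prop, lp_prop
        acc += 1
    out[i] = beta

post = out[burn:]
rate = acc / n_iter
print(post.mean(axis=0), round(rate, 3))
```

The independence proposal of the Ox code typically mixes better when the normal approximation is good; the random walk is the more robust fallback.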
4. Logit (example, continued). Posterior summary:

            mean    sd     2.5%    97.5%
  beta1     -9.66   6.91   -23.2   3.87
  beta2     -0.64   0.76   -2.12   0.84
  beta3     3.55    1.56   0.92    6.95
  beta4     2.51    0.81   1.00    4.18
  beta5     -2.15   1.06   -4.30   -0.20
5. SUR (Seemingly Unrelated Regression) model. For m×1 observation vectors y_i, i = 1, …, n, with m×K regressor matrices X_i and coefficients β = (β₁, …, β_K)′:

  y_i = X_i β + ε_i,   ε_i ~ i.i.d. N_m(0, Σ).
5. SUR (example). General Electric and Westinghouse: gross investment (Y) regressed on value of the firm (V) and stock of plant and equipment (K),

  y_i = β₁ + β₂ V_i + β₃ K_i + ε_i.

Grunfeld Investment Data, 100 yearly observations, 1935–1954.
5. SUR (example, continued). WinBUGS code:

  model {
    for (t in 1:T) {
      y[t,1:2] ~ dmnorm(mu[t,], omega[,]);
      mu[t,1] <- beta[1,1]+beta[1,2]*v[t,1]+beta[1,3]*k[t,1]
      mu[t,2] <- beta[2,1]+beta[2,2]*v[t,2]+beta[2,3]*k[t,2]
    }
    # priors on regression coefficients
    for (i in 1:M) {for (j in 1:3) {beta[i,j] ~ dnorm(0,0.001)}}
    # Wishart prior on the M x M (M = 2) precision matrix
    omega[1:M, 1:M] ~ dwish(R[,], 2)
    Sigma[1:M, 1:M] <- inverse(omega[,])
  }
5. SUR (example, continued). [Figures: posterior densities from 50000 samples: beta[1,1], beta[1,2], beta[1,3], beta[2,1], beta[2,2], beta[2,3], and the cross-equation correlation rho.]
5. SUR (example, continued). Posterior summary:

            mean     sd      MC error   2.5%     median   97.5%
  beta11    -14.21   20.78   0.65       -54.88   -14.05   26.65
  beta12    0.03     0.01    3.7E-4     0.01     0.03     0.05
  beta13    0.13     0.03    4.9E-4     0.08     0.14     0.18
  beta21    1.13     6.67    0.21       -11.71   1.03     14.33
  beta22    0.05     0.01    4.8E-4     0.03     0.05     0.06
  beta23    0.06     0.08    0.001      -0.05    0.07     0.18
  rho       0.74     0.12    0.001      0.46     0.76     0.90
References:
Chen, M.-H., Shao, Q.-M. and Ibrahim, J.G. (2000) Monte Carlo Methods in Bayesian Computation. Springer.
Gamerman, D. (1997) Markov Chain Monte Carlo. Chapman & Hall.
Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B. (1995) Bayesian Data Analysis. Chapman & Hall.
Ibrahim, J.G., Chen, M.-H. and Sinha, D. (2001) Bayesian Survival Analysis. Springer.
Koop, G. (2003) Bayesian Econometrics. Wiley.
Liu, J.S. (2001) Monte Carlo Strategies in Scientific Computing. Springer.