[Front matter and table of contents: chapter headings and the lists of tables and figures reporting Bias, RMSE, and correlation results for the 2PLM, CCM, 2PLCM, BTM, and GRM conditions, together with the MCMC estimation sections and the appendix of R and WinBUGS code. Page references are omitted here.]
Consider a test A made up of J items administered to N examinees (Table 1.1). The resulting data form an N × J matrix whose rows correspond to examinees 1, 2, ..., N and whose columns correspond to items 1, 2, ..., J; each entry is coded 1 when examinee i answers item j correctly and 0 otherwise. Item response theory (IRT) provides statistical models for such binary response matrices, relating each examinee's responses to a latent trait and to characteristics of the individual items.
IRT is used not only for ability and achievement testing but also for personality and clinical questionnaires (e.g., Kan, van der Ven, Breteler, & Zitman, 2001; Simms, Goldberg, Roberts, Watson, Welte, & Rotterman, 2011).

For item j, the probability of a correct response is written as a function of the latent trait θ; this function is called the item characteristic function (ICF). A standard ICF for binary items is the two-parameter logistic model (2PLM),

  P_j(θ_i) = 1 / (1 + exp[−1.7 a_j (θ_i − b_j)]),  (1.1)

where P_j(θ_i) is the probability that an examinee with trait value θ_i answers item j correctly, and a_j and b_j are the discrimination and difficulty parameters of item j. The discrimination parameter a_j determines the slope of the ICF around θ_i = b_j: the larger a_j, the more sharply item j separates examinees below and above b_j. Setting θ_i = b_j in (1.1) gives P_j(θ_i) = 0.5, so the difficulty parameter b_j is the trait value at which the probability of a correct response is 0.5. Constraining the discrimination parameters in (1.1) to a common value yields the one-parameter logistic model (1PLM). Adding a lower-asymptote (guessing) parameter c_j yields the three-parameter logistic model (3PLM),

  P_j(θ_i) = c_j + (1 − c_j) / (1 + exp[−1.7 a_j (θ_i − b_j)]).  (1.2)
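As a concrete illustration of (1.1) and (1.2), the following R sketch evaluates the 2PLM and 3PLM item characteristic functions. The parameter values are arbitrary examples chosen only for illustration, not values used anywhere in this thesis.

# Hypothetical illustration of (1.1) and (1.2); all parameter values are arbitrary.
icf_2pl <- function(theta, a, b) {
  # two-parameter logistic ICF with the usual scaling constant 1.7
  1 / (1 + exp(-1.7 * a * (theta - b)))
}
icf_3pl <- function(theta, a, b, c) {
  # three-parameter logistic ICF: lower asymptote c, approaching 1 for large theta
  c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))
}

theta <- seq(-3, 3, by = 0.1)
p2 <- icf_2pl(theta, a = 1.0, b = 0.0)            # P = 0.5 at theta = b
p3 <- icf_3pl(theta, a = 1.0, b = 0.0, c = 0.2)   # P = (1 + c)/2 at theta = b
plot(theta, p2, type = "l", ylim = c(0, 1), ylab = "P(correct)")
lines(theta, p3, lty = 2)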
13 (three-parameter logistic model, 3PLM),, (12) θ i θ i =, P j (θ i ) = c j, c j j, j, A B, A, A 100%, B 0%, i j ( ) π ij,, θ i π ij, Lord (1980, pp ), P j (θ i ), 1 j, θ i 2 θ i j 3 j, θ i 3, (1984), P j (θ i ) π ij,, θ i π ij, P j (θ i ),, P j (θ i ) = E i θi [π ij ] (13) 13 (local independence) (Lord & Novick, 1968, p360), 2 j k, θ j k u j,u k,, U j, j U j = 1, j U j = 0,
Local independence (Lord & Novick, 1968, p. 360) is the assumption that, conditional on the latent trait, responses to different items are statistically independent. Let U_j denote the response variable for item j, with U_j = 1 for a correct answer and U_j = 0 otherwise. For two items j and k, local independence states that

  Prob(U_j = u_j, U_k = u_k | θ_i) = Prob(U_j = u_j | θ_i) Prob(U_k = u_k | θ_i)
   = P_j(θ_i)^{u_j} (1 − P_j(θ_i))^{1−u_j} P_k(θ_i)^{u_k} (1 − P_k(θ_i))^{1−u_k},  (1.4)

where Prob(U_j = u_j, U_k = u_k | θ_i) is the joint probability of the response pattern (u_j, u_k) given the trait value θ_i, and Prob(U_j = u_j | θ_i) and Prob(U_k = u_k | θ_i) are the corresponding marginal probabilities. Under (1.4), the conditional probability of u_j given u_k and θ_i reduces to

  Prob(u_j | u_k, θ_i) = Prob(u_k, u_j | θ_i) / Prob(u_k | θ_i)
   = Prob(u_j | θ_i) Prob(u_k | θ_i) / Prob(u_k | θ_i) = Prob(u_j | θ_i),  (1.5)

so that, once θ is fixed, knowing the response to item k carries no information about the response to item j. A related notion is experimental independence (Lord & Novick, 1968, p. 44): writing u_ij and u_ik for the responses of a particular examinee i to items j and k, and π_ij and π_ik for the corresponding response probabilities,

  Prob(u_ij, u_ik) = Prob(u_ij) Prob(u_ik) = π_ij^{u_ij} (1 − π_ij)^{1−u_ij} π_ik^{u_ik} (1 − π_ik)^{1−u_ik}.  (1.6)

For an examinee with trait value θ_i these probabilities coincide with the model probabilities, *1

  P_j(θ_i) = π_ij,  (1.7)
  P_k(θ_i) = π_ik.  (1.8)

*1 Lord & Novick (1968, p. 539).
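A minimal R sketch of (1.4), assuming arbitrary example parameters: under local independence the joint probability of a response pattern is simply the product of the item-level probabilities.

# Hypothetical example of (1.4): joint probability of (u_j, u_k) given theta
p_item <- function(theta, a, b) 1 / (1 + exp(-1.7 * a * (theta - b)))

joint_li <- function(u_j, u_k, theta, a_j, b_j, a_k, b_k) {
  p_j <- p_item(theta, a_j, b_j)
  p_k <- p_item(theta, a_k, b_k)
  (p_j^u_j * (1 - p_j)^(1 - u_j)) * (p_k^u_k * (1 - p_k)^(1 - u_k))
}

# the four pattern probabilities sum to 1 for any theta
patterns <- expand.grid(u_j = 0:1, u_k = 0:1)
probs <- mapply(joint_li, patterns$u_j, patterns$u_k,
                MoreArgs = list(theta = 0.5, a_j = 1.2, b_j = -0.3,
                                a_k = 0.8, b_k = 0.6))
sum(probs)  # 1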
Under these assumptions, the joint probability of the responses u_j and u_k of an examinee with trait value θ_i to items j and k is

  Prob(u_j, u_k | θ_i) = P_j(θ_i)^{u_j} (1 − P_j(θ_i))^{1−u_j} P_k(θ_i)^{u_k} (1 − P_k(θ_i))^{1−u_k}.  (1.9)

When this factorization fails, items j and k are said to be locally dependent. Haebara (2000) formulates the assumption in terms of the response probabilities themselves: regarding the probabilities π_ij and π_ik as random variables Π_j and Π_k, local independence requires that, given θ_i,

  Prob(Π_j = π_j, Π_k = π_k | θ_i) = Prob(Π_j = π_j | θ_i) Prob(Π_k = π_k | θ_i).  (1.10)

For example, if the probability that an examinee answers item 1 correctly is P_1(θ) and, regardless of the outcome on item 1, the probability of answering item 2 correctly is P_2(θ), then the probability of answering both correctly is P_1(θ)P_2(θ); when the outcome on item 1 changes the probability for item 2, this product form, and hence (1.10), no longer holds.
16 ,, Ferrara, Huynh, & Michaels (1999), Hoskens & De Boeck (1997), Kreiner & Christensen (2004), Yen (1993), (local dependence),, 3 ( I) ( II) ( III) I,,, 1 X 2 X, X N x 1,,x N, 2 X X M X S 2 X S 2 X = 1 N N (x i M X ) 2 (111) i=1,, 1 2, I, 13, II, 1 ( ),, j k, ( ) j k ( ), θ i
17 1 15 j k j k, II, 13, III, θ θ,,,,, *2, j k θ θ, θ θ j,k, θ i π ij π ik, Π j Π k, III, 13, 15 14,,,, 3 1 ( a) ( b) ( c) a, J J d(j), a, (graded response model, GRM) *2,, θ i, (Differential Item Functioning, DIF)
In the graded response model (GRM; Samejima, 1969), the probability that an examinee with trait value θ_i obtains score r on testlet d(j) is

  P_{d(j)}(r | θ_i) = P*_{d(j)}(r | θ_i) − P*_{d(j)}(r+1 | θ_i),  (1.12)

where P*_{d(j)}(r | θ_i) is the probability of obtaining score r or higher on testlet d(j). Modeling each boundary probability with a 2PLM-type function (cf. (1.1)) gives

  P_{d(j)}(r | θ_i) = 1 / (1 + exp[−1.7 a_{d(j)}(θ_i − b_r)]) − 1 / (1 + exp[−1.7 a_{d(j)}(θ_i − b_{r+1})]),  (1.13)

where b_r and b_{r+1} are boundary (difficulty) parameters with b_r ≤ b_{r+1}, and a_{d(j)} is the discrimination parameter of testlet d(j); a single discrimination a_{d(j)} is shared by all score categories of the testlet. Related polytomous models include the nominal response model (Bock, 1972) and the generalized partial credit model (Muraki, 1992); Sireci, Thissen & Wainer (1991) used this testlet-as-polytomous-item approach to study the reliability of testlet-based tests. Because the item responses within a testlet are collapsed into a single score, however, information about the individual items, such as their difficulties b, is lost.

A second approach (b) keeps the individual items and extends the 2PLM with a person-by-testlet random effect: the two-parameter Bayesian testlet model (BTM; Bradlow, Wainer, & Wang, 1999),

  P_{j|d(j)}(θ_i) = 1 / (1 + exp[−1.7 a_{j|d(j)}(θ_i − b_{j|d(j)} − γ_{id(j)})]),  (1.14)

where P_{j|d(j)}(θ_i) is the probability that an examinee with trait value θ_i answers item j in testlet d(j) correctly, and a_{j|d(j)} and b_{j|d(j)} are discrimination and difficulty parameters interpreted as in the 2PLM: when γ_{id(j)} = 0, the probability equals 0.5 at θ_i = b_{j|d(j)}, and a_{j|d(j)} governs the slope there. The exponent in (1.14) can be rewritten as

  −1.7 a_{j|d(j)}(θ_i − b_{j|d(j)} − γ_{id(j)}) = −1.7 a_{j|d(j)}((θ_i − γ_{id(j)}) − b_{j|d(j)}),  (1.15)

so the testlet effect γ_{id(j)} acts as a person-specific shift of θ_i shared by all items in testlet d(j).
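The following R sketch evaluates (1.13) and (1.14) for illustration; all parameter values here are arbitrary assumptions, not estimates from this thesis.

# Hypothetical illustration of the GRM (1.13) and the two-parameter BTM (1.14)
grm_cat_prob <- function(theta, a, b_bounds, r) {
  # b_bounds: boundary parameters b_1 < ... < b_K; score categories r = 0, ..., K
  pstar <- function(bnd) 1 / (1 + exp(-1.7 * a * (theta - bnd)))
  upper <- c(1, pstar(b_bounds))   # P*(r or higher) for r = 0, ..., K
  lower <- c(pstar(b_bounds), 0)   # P*(r+1 or higher); P*(K+1) = 0
  (upper - lower)[r + 1]
}

btm_item_prob <- function(theta, a, b, gamma) {
  # one item inside a testlet; gamma is the person-by-testlet effect
  1 / (1 + exp(-1.7 * a * (theta - b - gamma)))
}

grm_cat_prob(theta = 0.2, a = 1.0, b_bounds = c(-1, 0, 1), r = 1)
btm_item_prob(theta = 0.2, a = 1.0, b = 0.0, gamma = 0.5)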
The testlet effect γ_{id(j)} in (1.14) is a random effect for examinee i on testlet d(j) and is assumed to follow γ_{id(j)} ~ N(0, σ²_{γd(j)}). *3 When σ²_{γd(j)} = 0 the items in testlet d(j) are locally independent, and as σ²_{γd(j)} increases the local dependence among the items of testlet d(j) becomes stronger; σ²_{γd(j)} therefore expresses the magnitude of the local dependence within testlet d(j). Because γ_{id(j)} shifts θ_i only within testlet d(j), the model retains item-level discrimination and difficulty parameters, unlike the approach that collapses a testlet into a single polytomous item. A three-parameter extension, the three-parameter Bayesian testlet model (3PBTM), was proposed by Wang, Bradlow, & Wainer (2002). The BTM can also be viewed as a constrained multidimensional item response model (MIRM); see Nandakumar (1990) and Li, Bolt & Fu (2006) on the relation between testlet structure and multidimensionality.

A third approach (c) models the joint probabilities of item pairs directly, extending the 2PLM with an interaction term: the constant combination model (CCM; Hoskens & De Boeck, 1997),

  P(u_j, u_k | θ_i) = exp[u_j Z_j + u_k Z_k − u_j u_k b_jk] / (1 + exp[Z_j] + exp[Z_k] + exp[Z_j + Z_k − b_jk]),  (1.16)
  Z_j = 1.7 a_{j|d(j)}(θ_i − b_{j|d(j)}),  (1.17)
  Z_k = 1.7 a_{k|d(j)}(θ_i − b_{k|d(j)}),  (1.18)

where P(u_j, u_k | θ_i) is the joint probability of the responses u_j and u_k of an examinee with trait value θ_i to items j and k, a_{j|d(j)} and b_{j|d(j)} in (1.17) are the discrimination and difficulty parameters of item j, and a_{k|d(j)} and b_{k|d(j)} in (1.18) are those of item k. When b_jk = 0, (1.16) reduces to two locally independent 2PLM items, so the marginal probability of a correct response to item j equals 0.5 at θ_i = b_{j|d(j)}, and similarly for item k at θ_i = b_{k|d(j)}.

*3 See Section 2.2.3.
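A small R sketch of (1.16)–(1.18); the parameter values are arbitrary assumptions chosen only to illustrate the computation (a data-generation version for the simulations appears in the appendix).

# Hypothetical illustration of the CCM (1.16)-(1.18) for one item pair
ccm_joint <- function(theta, a_j, b_j, a_k, b_k, b_jk) {
  z_j <- 1.7 * a_j * (theta - b_j)
  z_k <- 1.7 * a_k * (theta - b_k)
  denom <- 1 + exp(z_j) + exp(z_k) + exp(z_j + z_k - b_jk)
  # joint probabilities of the patterns (0,0), (1,0), (0,1), (1,1)
  c(p00 = 1, p10 = exp(z_j), p01 = exp(z_k),
    p11 = exp(z_j + z_k - b_jk)) / denom
}

p <- ccm_joint(theta = 0, a_j = 1, b_j = 0, a_k = 1, b_k = 0, b_jk = -2)
sum(p)                                               # 1
log((p["p11"] * p["p00"]) / (p["p10"] * p["p01"]))   # equals -b_jk, cf. (1.19)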
When b_jk ≠ 0, the association between the two items does not vanish even after conditioning on θ. Given θ_i, the log odds ratio between items j and k is

  ω_jk = ln[ {P(U_j=1, U_k=1 | θ_i) / P(U_j=0, U_k=1 | θ_i)} / {P(U_j=1, U_k=0 | θ_i) / P(U_j=0, U_k=0 | θ_i)} ]
     = ln[ P(U_j=1, U_k=1 | θ_i) P(U_j=0, U_k=0 | θ_i) / {P(U_j=1, U_k=0 | θ_i) P(U_j=0, U_k=1 | θ_i)} ]
     = −b_jk.  (1.19)

Under local independence ω_jk = 0 for every θ_i, whereas under (1.16) the conditional log odds ratio takes the constant value −b_jk, which does not depend on θ_i; b_jk thus summarizes the strength of the local dependence of the pair (j, k), with negative b_jk corresponding to positive dependence. Like the BTM, the CCM keeps item-level parameters.

Another pairwise model of this type is the two-parameter logistic copula model (2PLCM), obtained by combining the Rasch copula model of Braeken, Tuerlinckx, & De Boeck (2007) with the 2PLM. For two items j and k with responses u_j and u_k, the 2PLCM specifies

  P(u_j, u_k | θ_i) = u_j u_k + (−1)^{2−u_j} u_k Q_j(θ_i) + (−1)^{2−u_k} u_j Q_k(θ_i) + (−1)^{u_j+u_k} C(Q_j(θ_i), Q_k(θ_i)),  (1.20)

where Q_j(θ_i) is the probability that an examinee with trait value θ_i answers item j incorrectly,

  Q_j(θ_i) = exp[−1.7 a_j (θ_i − b_j)] / (1 + exp[−1.7 a_j (θ_i − b_j)]),  (1.21)

and C(Q_j(θ_i), Q_k(θ_i)) is a copula function joining Q_j(θ_i) and Q_k(θ_i); here the Frank copula (Frank, 1979) is used,

  C(Q_j(θ_i), Q_k(θ_i)) = −(1/δ_jk) log[ 1 − W(Q_j(θ_i)) W(Q_k(θ_i)) / W(1) ],  (1.22)
  W(x) = 1 − exp[−δ_jk x].  (1.23)

The parameter δ_jk in (1.22) and (1.23) expresses the residual dependence between items j and k given θ_i.
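The following R sketch evaluates the 2PLCM pattern probabilities via the Frank copula in (1.20)–(1.23); the item parameters and δ_jk are arbitrary assumptions for illustration.

# Hypothetical illustration of the 2PLCM (1.20)-(1.23) for one item pair
q_item <- function(theta, a, b) {            # probability of an incorrect answer
  e <- exp(-1.7 * a * (theta - b))
  e / (1 + e)
}
frank_copula <- function(u, v, delta) {      # Frank copula, cf. (1.22)-(1.23)
  w <- function(x) 1 - exp(-delta * x)
  -(1 / delta) * log(1 - w(u) * w(v) / w(1))
}
plcm_joint <- function(theta, a_j, b_j, a_k, b_k, delta) {
  qj <- q_item(theta, a_j, b_j)
  qk <- q_item(theta, a_k, b_k)
  cop <- frank_copula(qj, qk, delta)
  c(p00 = cop, p10 = qk - cop, p01 = qj - cop, p11 = 1 - qj - qk + cop)
}

p <- plcm_joint(theta = 0, a_j = 1, b_j = 0, a_k = 1, b_k = 0, delta = 3)
sum(p)   # 1; larger delta gives stronger positive dependence within the pair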
21 1 19, δ jk, θ i j k, δ jk 0 ω jk 0, δ jk j k, d(j),,, θ i, c, c,, 3 (three parameter constant combination model, 3PCCM, Chen & Wang, 2007) hybrid kernel (Ip, 2002), (locally dependent linear logistic test model, LDLLTM, Ip, Smits, & De Boeck, 2009), (conjunctive item response model, CIRM, Jannarone, 1986),, (2005), CIRM 16, (, 2005;, 1992;, 2001),,, (eg, ),,,,,, 14,,, 15,,,,,, (eg, Yang & Gao, 2008 ),,,,
22 1 20,,,,,,,,,,,,,,, 2,, 3,,, 4,, 5,,,,,, 6,
23 ,, *1,, Bradlow et al (1999), 2 BTM, 2PLM,,,,, 95% (mean 95% posterior interval width, M95%PIW),, M95%PIW,, *1, (2010),
24 2 22, Junker (1991),,, (2013),, 2PLM,,,,,,,,,,,,, Bradlow et al (1999), 1000, 60, (2013), 1000, 12,,,,,,,, ( ),,,,,,,,,,,
This chapter examines, by simulation, how ability estimation under the 2PLM is affected when the data actually contain local dependence. Accuracy is evaluated with the bias of the ability estimate (Bias(θ̂_i)), its root mean square error (RMSE(θ̂_i)), and the correlation between true and estimated abilities (cor(θ, θ̂)), defined below in (2.4)–(2.6).

The estimation model (Section 2.2.1) is the 2PLM of (1.1),

  P_j(θ_i) = 1 / (1 + exp[−1.7 a_j (θ_i − b_j)]).  (2.1)

Data are generated from the two pairwise locally dependent models of Section 1.5, approach (c): the CCM of (1.16) and the 2PLCM of (1.20). The CCM used for data generation is

  P(U_j, U_k | θ_i) = exp[U_j Z_j + U_k Z_k − U_j U_k b_jk] / (1 + exp[Z_j] + exp[Z_k] + exp[Z_j + Z_k − b_jk]),
  Z_j = 1.7 a_{j|d(j)}(θ_i − b_{j|d(j)}),
  Z_k = 1.7 a_{k|d(j)}(θ_i − b_{k|d(j)}).  (2.2)
The 2PLCM used for data generation is

  P(U_j, U_k | θ_i) = U_j U_k + (−1)^{2−U_j} U_k Q_j(θ_i) + (−1)^{2−U_k} U_j Q_k(θ_i) + (−1)^{U_j+U_k} C(Q_j(θ_i), Q_k(θ_i)),  (2.3)
  Q_j(θ_i) = 1 − 1 / (1 + exp[−1.7 a_j (θ_i − b_j)]),
  C(Q_j(θ_i), Q_k(θ_i)) = −(1/δ_jk) log[ 1 − W(Q_j(θ_i)) W(Q_k(θ_i)) / W(1) ],
  W(x) = 1 − exp[−δ_jk x].

The simulation design (Section 2.2.2) crossed the number of examinees, N = 100, 300, 500, 1000, with the number of items, J = 10, 30, 50, giving 12 conditions. Items 2j−1 and 2j (j = 1, 2, ..., J/2) were paired and local dependence was introduced within each pair, with b_jk = −2 for the CCM and δ_jk = 30 for the 2PLCM. To confirm that these values produce a substantial amount of local dependence, responses to an item pair were generated repeatedly and the joint patterns (1,1), (1,0), (0,1), (0,0) were tabulated; Tables 2.1 and 2.2 report the percentages. With the CCM, items 1 and 2 received the same response, (1,1) or (0,0), in 84% of cases (73% + 11%); with the 2PLCM the corresponding figure was 88% (52% + 36%).
Table 2.1  Joint response percentages for a CCM item pair, patterns (1,1), (1,0), (0,1), (0,0)
            U_2 = 1   U_2 = 0
  U_1 = 1     73%       10%
  U_1 = 0      6%       11%

Table 2.2  Joint response percentages for a 2PLCM item pair, patterns (1,1), (1,0), (0,1), (0,0)
            U_2 = 1   U_2 = 0
  U_1 = 1     52%       12%
  U_1 = 0      0%       36%

With reference to the b_jk values reported by Hoskens & De Boeck (1997), b_jk was set to −2, and δ_jk was set to 30 so that the 2PLCM produced a comparable degree of dependence.

Data generation and 2PLM estimation proceeded as follows; locally independent baseline data were also generated from the 2PLM, and all estimation was carried out by Markov chain Monte Carlo (MCMC; the algorithm is described in Section 2.2.3) using R, with the code listed in the appendix.

1. True values θ_i (i = 1, 2, ..., N), a_j, and b_j (j = 1, 2, ..., J) were drawn from N(0, 1), U(0.3, 1.5), and U(−2.0, 2.0), respectively.
2. From these values and the 2PLM of (2.1), an N × J matrix A of correct-response probabilities was computed.
3. An N × J matrix B of U(0,1) random numbers was generated.
4. A and B were compared element by element to produce the response matrix U: u_ij = 1 if a_ij ≥ b_ij and u_ij = 0 otherwise.
5. For the CCM data, the joint probabilities of the patterns (0,0), (1,0), (0,1), (1,1) were computed for each item pair from (2.2), giving N × (J/2) matrices D, E, F, G.
6. An N × (J/2) matrix H of U(0,1) random numbers was generated.
7. D, E, F, G were compared with H to produce the N × J matrix U′: for pair j, if d_ij ≥ h_ij then (u_{i(2j−1)}, u_{i(2j)}) = (0,0); if d_ij + e_ij ≥ h_ij > d_ij then (1,0); and so on for the remaining patterns.
8. The same procedure as in steps 5–7 was applied with the 2PLCM probabilities of (2.3) to produce the N × J matrix U″.
9. U, U′, and U″ were each analyzed with the 2PLM, and θ was estimated by MCMC (Section 2.2.3).

Steps 1–9 were replicated 100 times, and the following indices were computed:

  Bias(θ̂_i) = (1/100) Σ_{r=1}^{100} (θ̂_ir − θ_i),  (2.4)
  RMSE(θ̂_i) = sqrt[ (1/100) Σ_{r=1}^{100} (θ̂_ir − θ_i)² ],  (2.5)
  cor(θ, θ̂) = (1/100) Σ_{r=1}^{100} cor(θ, θ̂_r),  (2.6)

where θ_i is the true trait value of examinee i fixed in step 1, θ̂_i its estimate, θ̂_ir the estimate obtained in replication r, θ the vector of the N true values, and θ̂_r the vector of estimates in replication r; cor(θ, θ̂_r) in (2.6) is the correlation between θ and θ̂_r.
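A minimal R sketch of (2.4)–(2.6), assuming the true values and the replicated estimates are held in a vector and a matrix; the objects here are illustrative stand-ins, not the thesis code (which appears in the appendix).

# Hypothetical computation of (2.4)-(2.6) from R replications of estimates.
# theta_true: length-N vector; theta_hat: N x R matrix, column r = replication r.
set.seed(1)
N <- 100; R <- 100
theta_true <- rnorm(N)
theta_hat  <- theta_true + matrix(rnorm(N * R, sd = 0.3), N, R)  # fake estimates

bias_i <- rowMeans(theta_hat - theta_true)              # Bias(theta_hat_i), (2.4)
rmse_i <- sqrt(rowMeans((theta_hat - theta_true)^2))    # RMSE(theta_hat_i), (2.5)
cor_tt <- mean(apply(theta_hat, 2, cor, theta_true))    # cor(theta, theta_hat), (2.6)

c(mean_bias = mean(bias_i), mean_rmse = mean(rmse_i), cor = cor_tt)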
Bias(θ̂_i) measures the systematic error of θ̂_i, RMSE(θ̂_i) measures the overall magnitude of the estimation error of θ̂_i, and cor(θ, θ̂) measures the strength of the linear relation between the estimates and the true values.

Section 2.2.3 concerns estimation. The 2PLM was estimated by Markov chain Monte Carlo (MCMC) methods, summarized here following the exposition in (2008). Consider a sequence of random variables X_t (t = 0, 1, 2, ...) with realized values x_t. The sequence is a Markov chain when the distribution of the next state depends on the past only through the current state,

  Prob(X_{t+1} = x_{t+1} | X_0 = x_0, X_1 = x_1, ..., X_t = x_t) = Prob(X_{t+1} = x_{t+1} | X_t = x_t).  (2.7)

Let Ω denote the state space of X_t, and write the transition probability Prob(X_{t+1} = x_{t+1} | X_t = x_t) as p_{x_t x_{t+1}}, assumed not to depend on t; the matrix of these probabilities is the transition matrix P. If π_t denotes the distribution of X_t over Ω, then the distribution after one step is

  π_{t+1} = π_t P,  (2.8)

and, applying (2.8) repeatedly from the initial distribution π_0,

  π_{t+1} = π_0 P^{t+1}.  (2.9)

In particular, if the chain starts at t = 0 from a single state ω (ω ∈ Ω), π_0 places all of its probability on that state.
Whatever the initial distribution π_0, under suitable regularity conditions the distribution π_{t+1} converges, as t increases, to a stationary distribution π that satisfies

  π = π P.  (2.10)

MCMC methods construct a Markov chain whose stationary distribution is the posterior distribution of the parameters. Collect all parameters of the 2PLM in the vector

  λ = (θ_1, θ_2, ..., θ_N, a_1, a_2, ..., a_J, b_1, b_2, ..., b_J).  (2.11)

Given the response matrix U, Bayes' theorem *4 gives the posterior distribution Prob(λ | U), from which point estimates such as the posterior mode (MAP estimate) or the posterior mean (EAP estimate) of λ are obtained:

  Prob(λ | U) = L(U | λ) Prob(λ) / Prob(U) = L(U | λ) Prob(λ) / ∫ L(U | λ) Prob(λ) dλ,  (2.12)

where L(U | λ) is the likelihood of the data U regarded as a function of λ and Prob(λ) is the prior distribution. Under local independence the likelihood is

  L(U | λ) = Π_{i=1}^{N} Π_{j=1}^{J} P_j(θ_i)^{u_ij} Q_j(θ_i)^{1−u_ij}.  (2.13)

The integral in the denominator of (2.12) is generally intractable, and MCMC methods approximate the posterior by sampling from a Markov chain whose stationary distribution is Prob(λ | U).

*4 For events A and B, Bayes' theorem states Prob(B | A) = Prob(A | B) Prob(B) / Prob(A); Prob(B | A) is the posterior probability of B given A, Prob(B) the prior probability of B, and Prob(A | B) the likelihood of A under B.
From the MCMC draws of λ, MAP or EAP estimates are computed. Well-known MCMC algorithms include the Gibbs sampler (Geman & Geman, 1984), data augmentation and Gibbs sampling (Tanner & Wong, 1987), and the Metropolis–Hastings algorithm (Hastings, 1970); here the Metropolis–Hastings within Gibbs algorithm of Patz & Junker (1999) was used. For each data matrix generated in step 9 of Section 2.2.2 (U, U′, U″), λ was estimated and its EAP estimate retained. The priors were θ_i ~ N(0, 1), a_j ~ N(1, 0.25), and b_j ~ N(0, 1). The algorithm proceeds as follows.

1. From U, compute for each examinee the standardized total score z_i, and for each item the item–total correlation r_j and the proportion correct p_j; use these as initial values θ⁰_i = z_i, a⁰_j = r_j, b⁰_j = 1 − p_j, so that

  λ⁰ = (λ⁰_1, λ⁰_2, ..., λ⁰_{N+2J}) = (θ⁰_1, ..., θ⁰_N, a⁰_1, ..., a⁰_J, b⁰_1, ..., b⁰_J)
    = (z_1, ..., z_N, r_1, ..., r_J, 1 − p_1, ..., 1 − p_J).  (2.14)

2. Draw a candidate λ*_1 for the first element from a normal proposal density centered at λ⁰_1,

  h(λ*_1 | λ⁰_1) = (1/√(2πσ²)) exp[−(λ*_1 − λ⁰_1)² / (2σ²)].  (2.15)

3. Compute the acceptance probability of λ*_1 against λ⁰_1,

  α(λ*_1, λ⁰_1) = min( [Prob(λ*_1 | λ⁰_{−1}, U) h(λ⁰_1 | λ*_1)] / [Prob(λ⁰_1 | λ⁰_{−1}, U) h(λ*_1 | λ⁰_1)], 1 ),  (2.16)

where λ⁰_{−1} denotes λ⁰ with its first element removed,

  λ⁰_{−1} = (λ⁰_2, ..., λ⁰_{N+2J}).  (2.17)

4. Draw a U(0,1) random number; if it does not exceed α in (2.16), set λ¹_1 = λ*_1, otherwise λ¹_1 = λ⁰_1.
5. Apply steps 2–4 in turn to λ⁰_2, ..., λ⁰_{N+2J}, giving

  λ¹ = (λ¹_1, λ¹_2, ..., λ¹_{N+2J}).  (2.18)

6. Return to step 2 with λ¹ in place of λ⁰, generating the chain λ⁰, λ¹, λ², ....

The proposal variance σ² in (2.15) was tuned so that, for each parameter λ_l (l = 1, ..., N + 2J), the acceptance rate in step 4 fell roughly between 25% and 50%. Each chain was run for 20,000 iterations, the first 3,000 of which were discarded as burn-in, and the EAP estimates were computed from the remaining draws.

Section 2.3 reports the Bias, RMSE, and cor(θ, θ̂) obtained when the 2PLM is fitted to the 2PLM, CCM, and 2PLCM data. The Bias and RMSE entries in the tables are the averages of Bias(θ̂_i) and RMSE(θ̂_i) of Section 2.2.2 over the N examinees, and cor(θ, θ̂) is the average over replications defined in (2.6).
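As a compact illustration of the Metropolis–Hastings update in steps 2–4 above (the full simulation code is in the appendix), the following R sketch updates a single θ_i with the item parameters held fixed; the prior, the proposal standard deviation, and the data here are arbitrary assumptions.

# Hypothetical single Metropolis-Hastings update for one examinee's theta,
# holding the item parameters fixed (cf. steps 2-4 above).
log_post_theta <- function(theta, u, a, b) {
  p <- 1 / (1 + exp(-1.7 * a * (theta - b)))
  sum(u * log(p) + (1 - u) * log(1 - p)) + dnorm(theta, 0, 1, log = TRUE)
}

mh_update_theta <- function(theta_cur, u, a, b, prop_sd = 0.5) {
  theta_cand <- rnorm(1, mean = theta_cur, sd = prop_sd)  # symmetric proposal (2.15)
  log_alpha <- log_post_theta(theta_cand, u, a, b) -
               log_post_theta(theta_cur,  u, a, b)        # the h terms cancel here
  if (log(runif(1)) <= log_alpha) theta_cand else theta_cur
}

set.seed(1)
a <- runif(10, 0.5, 1.5); b <- rnorm(10)                  # example item parameters
u <- rbinom(10, 1, 1 / (1 + exp(-1.7 * a * (0.8 - b))))   # one examinee's responses
theta <- 0
for (t in 1:2000) theta <- mh_update_theta(theta, u, a, b)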
33 2 31 RMSE, θ i 222,, θ i N(0,1), θ i 1, Bias RMSE,, θ i 231 2PLM, 2PLM Bias,RMSE,cor(θ, ˆθ),, CCM, CCM Bias,RMSE,cor(θ, ˆθ),, 24, CCM, 2PLM, Bias,RMSE,cor(θ, ˆθ),, 25, Bias, RMSE, cor(θ, ˆθ), Bias,RMSE,cor(θ, ˆθ) 2PLM, CCM Bias, RMSE, cor(θ, ˆθ), 21, 22, 23, CCM Bias, RMSE, cor(θ, ˆθ), 24, 25, 26, CCM,,, 25, N = 100,, 2PLM, CCM Bias 01 (θ i 10%),, N = 300 N = 500, ( 10 ), CCM Bias 01, 21 24,,, Bias
34 2 32,,, Bias, Bias, 25, N = 100 J = 50, 2PLM, RMSE 015 (θ i 15%),, 22, N = 100, 2PLM RMSE, 22, N 300, J = 10 J = 30 RMSE, J = 30 J = 50 RMSE, RMSE 2,, J = 10 J = 30, J = 30 J = 50 25,, CCM, 2PLM,,,,, 26,, J = 10 J = 30 cor(θ, ˆθ), J = 30 J = 50 cor(θ, ˆθ),, 3,,,, 11,,,,,
35 2 33,,,, N = 100,,,,,,,,,, 233 2PLCM, 2PLCM Bias,RMSE,cor(θ, ˆθ),, 26, 2PLCM Bias, RMSE, cor(θ, ˆθ),, 27, 2PLCM Bias, RMSE, cor(θ, ˆθ), 27, 28, 29, 2PLCM Bias, RMSE, cor(θ, ˆθ), 210, 211, 212, 2PLCM,,, 27,, Bias 2PLM 27,, 2PLM, RMSE, J = 10, RMSE 010 (θ i 10%),,,
36 ,, 2PLM, cor(θ, ˆθ),, J = 10,, 010 cor(θ, ˆθ), 212,, cor(θ, ˆθ) 2PLM, 29, cor(θ, ˆθ), J = 10, N = 1000, 2PLM cor(θ, ˆθ),,,,,,,,,,,,, ,,,,,,, Bradlow et al (1999) 2, 2 BTM
37 2 35 Bradlow et al (1999),, CCM,,, Bias,, M95%PIW,,,, M95%PIW, M95%PIW,, 3,, Junker (1991),, CCM, 2PLCM,,, 235, ( ),, Bradlow et al (1999),,, CCM, 2PLCM, (2013),,,, Bias,,, RMSE, cor(θ, ˆθ)
[Tables 2.3–2.7 (numerical entries not reproduced). Table 2.3: Bias, RMSE, and cor(θ, θ̂) for the 2PLM-generated data; Tables 2.4 and 2.5: Bias, RMSE, and cor(θ, θ̂) for the CCM-generated data; Tables 2.6 and 2.7: Bias, RMSE, and cor(θ, θ̂) for the 2PLCM-generated data. Rows are the twelve conditions N = 100, 300, 500, 1000 crossed with J = 10, 30, 50.]
[Figures 2.1–2.12 (plots not reproduced). Figures 2.1–2.6: Bias, RMSE, and cor(θ, θ̂) for the CCM-generated data; Figures 2.7–2.12: Bias, RMSE, and cor(θ, θ̂) for the 2PLCM-generated data.]
53 ,,, *1,, Bradlow et al (1999), 2 BTM, 2PLM BTM,,, 2PLM BTM 2PLM BTM, M95%PIW,, 2PLM, BTM, M95%PIW,, 2PLM, BTM, 2PLM,, Chen & Wang (2007), 3PCCM, 3PLM, *1, (2012b),
54 3 52,,,,, Jiao Kamatani, Wang, & Jin (2012),,, (Rasch, 1960),,, Looney & Spray (1992),,,, Tuerlinckx & De Boeck (2001), CCM, 2PLM,,,, Wainer & Wang (2000), TOEFL, 3PLM 3PBTM,,, (2012), 2PLM GRM, 2PLM GRM,,,,,,,, (Braeken, 2011; Braeken, et al, 2007; Ip, 2010; Ip, Smits, & De Boeck, 2009), Bradlow et al (1999), 2 BTM, 2PLM BTM, 15, BTM 2PLM,,, Chen & Wang (2007) Tuerlinckx & De
55 3 53 Boeck (2001), CCM, 2PLM 3PLM, 12 15,,,,,,,, Ip (2010), 2 BTM 2PLM,,,,,,,,,, 2 BTM, 2PLM, Ip (2010),,,,, 32,,,, 321, 31, (114) 2 BTM ( ) P j d(j) (θ i ) = 1 1+exp [ 17a j d(j) (θ i b j d(j) γ ad(j) ] (31)
56 , 31, (11) 2PLM ( ) P j (θ i ) = 1 1+exp[ 17a j (θ i b j )] (32) 2 BTM ( (31) ) BTM 15, (32) a j θ i = b j (32), (32) b j P j (θ i ) = 05 θ i, (31) a j d(j) γ ad(j) 0 θ i = b j d(j) (31), b j d(j) γ ad(j) 0 P j d(j) (θ i ) = 05 θ i, 2PLM 2 BTM, 31 Ip (2010), (31) γ ad(j) 2 BTM 2PLM, (31) 2 BTM γ ad(j) (marginalized testlet item response function, MIRF), MIRF
Marginalizing over γ in (3.1) yields an approximate 2PLM (the marginalized testlet item response function, MIRF) whose discrimination and difficulty parameters are

  a*_j = τ a_{j|d(j)},  (3.3)
  τ = 1 / sqrt[ κ² (1.7 a_{j|d(j)})² σ²_{γd(j)} + 1 ],  (3.4)
  κ = 16√3 / (15π),  (3.5)
  b*_j = b_{j|d(j)},  (3.6)

so that the two-parameter BTM implies a 2PLM with parameters a*_j and b*_j. *2 Equations (3.3)–(3.6) show that the marginal discrimination a*_j is attenuated relative to a_{j|d(j)} by a factor that grows with σ²_{γd(j)}, whereas the difficulty is unchanged.

Section 3.2.4 describes the simulation design: the number of examinees was N = 300 or 1000; each simulated test consisted of 20 items containing four testlets of J_d(j) = 5, 3, or 2 items each (the remaining items, if any, standing alone); and the number of testlets with local dependence was varied over 4, 3, 2, 1, and 0.

*2 For the derivation of the MIRF, see Ip (2010).
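A small R sketch of the conversion (3.3)–(3.6); the values of a_{j|d(j)} and σ²_{γd(j)} are arbitrary, and the constant κ is written out under the form assumed above.

# Hypothetical illustration of (3.3)-(3.6): marginal 2PLM discrimination implied
# by BTM item parameters (kappa as assumed above; a and sigma2 are arbitrary).
marginal_a <- function(a_testlet, sigma2_gamma, kappa = 16 * sqrt(3) / (15 * pi)) {
  tau <- 1 / sqrt(kappa^2 * (1.7 * a_testlet)^2 * sigma2_gamma + 1)
  tau * a_testlet          # (3.3): a*_j = tau * a_{j|d(j)}
}

a_testlet <- c(0.6, 1.0, 1.4)
marginal_a(a_testlet, sigma2_gamma = 0.155)  # weak testlet effect: little shrinkage
marginal_a(a_testlet, sigma2_gamma = 1.5)    # strong testlet effect: marked shrinkage
# (3.6): the difficulty is unchanged, b*_j = b_{j|d(j)}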
58 ( d(j) 1) 4 ( d(j) 0),, 2 BTM Bradlow et al (1999) 2 BTM 3 (Bradlow et al, 1999; Li, Bolt, & Fu, 2005; Li, Bolt, & Fu, 2006), 3 17 σγ 2 d(j) ˆσ γ 2 d(j) 4 (14075) σγ 2 d(j), 4 (0155) σγ 2 d(j),, (2 ) (3 ) (5 ) 30, *3 1 θ i (i = 1,2,,N) γ ad(j) (a = 1,2,,N; d(j) = 1,2,3,4) a j d(j) b j d(j) (j = 1,2,,20) N(0,1), N(0,σγ 2 d(j) ), U(05,15), N(0,1) *4 2 1 (31), N 20 A 3 U(0,1) N 20 N 20 B 4 A,B U, a ij b ij u ij = 1, a ij < b ij u ij = 0 5 U R Ip (2010), 2 BTM 2PLM 8 7 R *3 R *4, R
  Bias(λ̂_j) = (1/R) Σ_{r=1}^{R} (λ̂_jr − λ_j),  (3.7)
  RMSE(λ̂_j) = sqrt[ (1/R) Σ_{r=1}^{R} (λ̂_jr − λ_j)² ],  (3.8)
  cor(λ, λ̂) = (1/R) Σ_{r=1}^{R} cor(λ, λ̂_r),  (3.9)

where λ_j is the true value of an item parameter (a_j or b_j) of item j, λ̂_j its estimate, λ̂_jr the estimate obtained in replication r, λ the vector of the 20 true values of a_j or b_j, and λ̂_r the vector of estimates in replication r; cor(λ, λ̂_r) in (3.9) is the correlation between λ and λ̂_r, and R is the number of replications (R = 50 for N = 1000 and R = 100 for N = 300). Estimation was carried out by MCMC (cf. Section 2.2.3) using WinBUGS 1.4 (Spiegelhalter, Thomas, & Best, 2003), which samples by slice sampling (Neal, 1997). The priors were: for the 2PLM, θ_i ~ N(0, 1), a_j ~ N(1, 0.25), b_j ~ N(0, 1); for the two-parameter BTM, θ_i ~ N(0, 1), a_{j|d(j)} ~ N(1, 0.25), b_{j|d(j)} ~ N(0, 1), γ_{id(j)} ~ N(0, σ²_{γd(j)}), and σ²_{γd(j)} ~ Γ(3, 1). For both models the first 1000 iterations were discarded as burn-in; convergence was judged by the criterion recommended in the WinBUGS User Manual (Spiegelhalter, Thomas, Best, & Lunn, 2003), namely that the Monte Carlo error be less than 5% of the posterior standard deviation, and 5000 iterations were retained after burn-in for both the 2PLM and the BTM. The initial values were θ_i = 0, a_j = a_{j|d(j)} = 1, b_j = b_{j|d(j)} = 0, γ_{id(j)} = 0, and σ²_{γd(j)} = 3.
61 ,,,, Bias,RMSE,cor(a,â),, 2PLM 2 BTM Bias RMSE, a j, 324, a j d(j) U(05,15), a j a j d(j) 323 (33) (35), a j,, a j a j d(j) U(05,15), a j 0289, Bias RMSE, 0289, a j 2 BTM, 2 BTM Bias,RMSE,cor(a,â),, 31 2PLM, 2PLM Bias,RMSE,cor(a,â),, 32, 2PLM, 2 BTM Bias,RMSE,cor(a,â),, 33, Bias, RMSE, cor(a,â), 2PLM Bias,RMSE,cor(a,â) 2 BTM, 2PLM, Bias RMSE a j,, 34, Bias/σ a j RMSE/σ a j,, Bias RMSE a j, 2PLM Bias/σ a j, RMSE/σ a j, cor(a,â)
62 3 60, 31, 32, 33, 2PLM Bias/σ a j, RMSE/σ a j, cor(a,â), 34, 35, 36, 2PLM Bias/σ a j, RMSE/σ a j, cor(a,â) 4, 37, 38, 39, 2PLM,,, 34,, 2PLM Bias σ a j 01 (a [ j ] 10% ), 2 BTM Bias, 34,,, Bias/σ a j,, 2PLM Bias, 2 BTM, 34,, 2PLM RMSE BTM RMSE,, N = 300, σ a j 01 (a j 10% ), 32,, RMSE/σ a j, 2PLM, 2PLM RMSE, 2 BTM, 2PLM,, 35,,, RMSE/σ a j, 2PLM RMSE, BTM
63 ,, 2PLM cor(a,â) BTM,, 5,,, cor(a,â) -015, 36,,, cor(a,â),,,, 2PLM cor(a,â), BTM,, 39, 5,, cor(a,â) 0,, cor(a,â), 4,, 2PLM, BTM, 4,, 2PLM, 5,,,,,,,,,,,,,,,,, 2PLM, a j d(j) a j, Bias,RMSE,cor(a,â)
64 3 62,, 35,, Bias, RMSE, cor(a,â),, 36, Bias/σ a j RMSE/σ a j,, 37,,,, 37,,, 2PLM Bias BTM Bias,, 37,,, 2PLM RMSE BTM RMSE,, N = 300, σ a j 01 (a j 10% ),,, 36,,, cor(a,â),, 3 2, cor(a,â) 01,,,
65 3 63,,,,,, 332,,,, Bias,RMSE,cor(b,ˆb),, 2PLM 2 BTM Bias RMSE, b j, 324, b j d(j) N(0,1), 323 (36), b j N(0,1), Bias,RMSE,, b j 2 BTM, 2 BTM Bias,RMSE,cor(b,ˆb),, 38 2PLM, 2PLM Bias,RMSE,cor(b,ˆb),, 39, 2PLM, Bias, RMSE, cor(b,ˆb),, 310, 2PLM, Bias, RMSE, cor(b,ˆb), 310, 311, 312, 2PLM, Bias, RMSE, cor(b,ˆb), 313, 314, 315, 2PLM,
66 3 64 Bias, RMSE, cor(b,ˆb) 4, 316, 317, 318, 2PLM,,, 310,, Bias, 2PLM 2 BTM 310,, 2PLM 2 BTM, RMSE, J d(j) = 5, d(j) 4,3,2, 2PLM 2 BTM, RMSE 005 (b j 5% ), 314,, RMSE,, 2PLM RMSE, 2 BTM, 317,, RMSE, 2PLM RMSE, BTM 310,, cor(b,ˆb), 2PLM BTM, 318, 5, cor(b,ˆb) 0,, cor(b,ˆb), 4,,2PLM, BTM,,,, 5,
Section 3.3.3 offers an explanation of these results in terms of the classical relation, discussed by Lord & Novick (1968), between the 2PLM discrimination parameter a_j and the biserial correlation r_bj of the response to item j with the latent trait,

  a_j = r_bj / sqrt(1 − r²_bj).  (3.10)

Under the two-parameter BTM, the response to an item in testlet d(j) depends not only on θ_i but also on the testlet effect γ_{id(j)}, which varies over examinees independently of θ_i. When the 2PLM is fitted to such data, the variation due to γ_{id(j)} acts as additional noise with respect to θ_i, which lowers the correlation r_bj and, through (3.10), the implied discrimination.
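The following R sketch illustrates this attenuation numerically: it simulates responses to one item with and without a testlet effect, computes the item–trait correlation, and converts it through (3.10). The sample size and parameter values are arbitrary assumptions, and the point-biserial correlation with θ is used here as a rough stand-in for r_bj.

# Hypothetical illustration of (3.10): a testlet effect lowers the item-trait
# correlation and hence the discrimination implied by the 2PLM.
set.seed(123)
n <- 5000
theta <- rnorm(n)
gamma <- rnorm(n, sd = 1.0)   # person-by-testlet effect (variance assumed)

p_no <- 1 / (1 + exp(-1.7 * 1.0 * (theta - 0)))           # locally independent item
p_tl <- 1 / (1 + exp(-1.7 * 1.0 * (theta - 0 - gamma)))   # item inside a testlet
u_no <- rbinom(n, 1, p_no)
u_tl <- rbinom(n, 1, p_tl)

r_no <- cor(u_no, theta)   # correlation with the trait
r_tl <- cor(u_tl, theta)   # smaller: gamma adds trait-unrelated variation
c(r_no = r_no, r_tl = r_tl)
c(a_no = r_no / sqrt(1 - r_no^2), a_tl = r_tl / sqrt(1 - r_tl^2))  # cf. (3.10)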
68 3 66, (310), a j r bj, 2 BTM 2PLM, â j, 2PL M Bias, 2, 2PLM RMSE,, â j ,,,, 2 BTM, 2PLM, BTM, Ip (2010),, 2 BTM 2PLM Bradlow et al (1999),,,, Bradlow et al (1999),,,,,, M95%PIW,,, Chen & Wang (2007) Jiao et al (2012),,,,
69 3 67,,,
[Tables 3.1–3.10 (numerical entries not reproduced). 3.1: BTM — Bias, RMSE, cor(a, â). 3.2: 2PLM — Bias, RMSE, cor(a, â). 3.3: Bias, RMSE, cor(a, â). 3.4: Bias/σ_{a_j}, RMSE/σ_{a_j}. 3.5: Bias, RMSE, cor(a, â). 3.6: Bias, RMSE, cor(a, â). 3.7: Bias/σ_{a_j}, RMSE/σ_{a_j}. 3.8: BTM — Bias, RMSE, cor(b, b̂). 3.9: 2PLM — Bias, RMSE, cor(b, b̂). 3.10: Bias, RMSE, cor(b, b̂). Rows in each table are the 30 conditions N = 300, 1000 × J_d(j) = 5, 3, 2 × number of locally dependent testlets 0–4.]
[Figures 3.1–3.18 (plots not reproduced). Figures 3.1–3.9: Bias/σ_{a_j}, RMSE/σ_{a_j}, and cor(a, â) for the discrimination parameters, by condition. Figures 3.10–3.18: Bias, RMSE, and cor(b, b̂) for the difficulty parameters, by condition.]
98 ,,,,, Ip (2010), National Assessment of Educational Progress (NAEP),,, (2001), 2000 GRM 2PLM, 2PLM GRM, Keller, Swaminathan, & Sireci (2003),,, Wainer & Wang (2000), TOEFL,,,, Ip (2000),,, Lee (2000),,,, Wainer et al (2007, p 182),,
99 4 97,,, Anastasi (1961, p 121) Thorndike (1951, p 585), Guilford (1936, p417), Kelly (1924),,, Wainer (1995), Low School Admission Test (LSAT),, Wang & Wilson (2005), ,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 42,,,,,,,,,,
100 , (114) 2 BTM ( ) P j d(j) (θ i ) = 1 1+exp [ 17a j d(j) (θ i b j d(j) γ id(j) ] (41) 15, 2 BTM b a c,, a d(j),,, c, 3,,, 2 (Braeken et al, 2007; Hoskens & De Boeck, 1997; Ip, 2002; Ip et al, 2009),,, b 2 BTM 422,, 2,, (11) 2PLM ( ) P j (θ i ) = 1 1+exp[ 17a j (θ i b j )] (42),, 2 BTM ( (41) )
101 ,,,,,,, U i u i ( i ) i θ i,, θ i I(θ i ), I(θ i ) ((43) ) [ ( ) 2 ] I(θ i ) = E logl(u i θ i ) θ i θ i, ˆθ i,,, θ i ˆθ i σ 2ˆθi I(θ θ i ) (44) i (43) σ 2ˆθi θ i = 1 I(θ i ) (44),, I(θ i ) ˆθ i,,,, 2PLM 2PLM, (45) ( Lord & Novick (1968) (2002) ) J I(θ i ) = (17) 2 a 2 j P j(θ i )Q j (θ i ) (45) j=1, (45) P j (θ i ) 2PLM ((42) ), Q j (θ i ) 1 P j (θ i ), 2PLM (45),, 2 BTM 2 BTM,
102 4 100 (46) ( Wainer, Bradlow, & Du (2000) Ip (2010) ) I(θ i ) = J j=1 [ ] (17a j d(j) ) 2 exp(17a j d(j) (θ i b j d(j) γ id(j) )) (1+exp(17a j d(j) (θ i b j d(j) γ id(j) ))) 2 dγ id(j) (46), 2 BTM (46), 424,, 4, 5, 41,,,,,,, 4, 300, ,, 3 5 (J d(j) = 5) 3 (J d(j) = 3) 2 (J d(j) = 2),, ( d(j) 4) 4 3 ( d(j) 3) 4 2 ( d(j) 2) 4 1 ( d(j) 1) 4 ( d(j) 0)
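A short R sketch comparing (4.5) and (4.6); the integral in (4.6) is approximated by Monte Carlo over γ ~ N(0, σ²_γ), and all parameter values are arbitrary assumptions.

# Hypothetical comparison of the 2PLM information (4.5) with the BTM
# information (4.6), the latter approximated by Monte Carlo integration.
info_2pl <- function(theta, a, b) {
  p <- 1 / (1 + exp(-1.7 * a * (theta - b)))
  sum(1.7^2 * a^2 * p * (1 - p))
}

info_btm <- function(theta, a, b, sigma2_gamma, n_mc = 20000) {
  gamma <- rnorm(n_mc, 0, sqrt(sigma2_gamma))
  total <- 0
  for (j in seq_along(a)) {
    e <- exp(1.7 * a[j] * (theta - b[j] - gamma))
    total <- total + mean((1.7 * a[j])^2 * e / (1 + e)^2)
  }
  total
}

set.seed(1)
a <- runif(5, 0.5, 1.5); b <- rnorm(5)   # one five-item testlet, arbitrary values
info_2pl(0, a, b)                        # information ignoring the testlet effect
info_btm(0, a, b, sigma2_gamma = 1.0)    # smaller: the testlet effect is averaged over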
103 4 101,, 3, *1 1 θ i (i = 1,2,,N) γ id(j) (a = 1,2,,N; d(j) = 1,2,3,4) a j d(j) b j d(j) (j = 1,2,,20) N(0,1), N(0,σγ 2 d(j) ), U(05,15), N(0,1) * BTM ((41) ), N 20 A 3 U(0,1) N 20 N 20 B 4 A,B U, a ij b ij u ij = 1, a ij < b ij u ij = 0 5 U , R 8 7 R, Bias(Î(θ i)) = 1 R Î r (θ i ) I(θ i ) (47) R r=1 RMSE(Î(θ i)) = 1 R ) 2 (Îr (θ i ) I(θ i ) (48) R, (47), (48) Î(θ i ) I(θ i ), Îr(θ i ) I(θ i ) r r=1, 7 R,, 1000 R = 50, 300 R = 100 *1, R *2, R
104 MCMC, 5 MCMC MCMC (Neal, 1997), WinBUGS 14 (Spiegelhalter et al, 2003) MCMC,,, 2PLM θ i N(0,1) a j N(1,025) b j N(0,1) 2 BTM θ i N(0,1) a j d(j) N(1,025) b j d(j) N(0,1) γ id(j) N(0,σγ 2 d(j) ) σγ 2 d(j) Γ(3, 1),,, 2PLM θ i 0 a j 1 b j 0 2 BTM θ i 0 a j d(j) 1 b j d(j) 0 γ id(j) 0 σγ 2 d(j) 3,, λ,, burn-in
105 PLM 1000 burn-in 2 BTM 1000 burn-in,, WinBUGS User Manual (Spiegelhalter et al, 2003), 5%, 2PLM BTM ,,, 2PLM 2 BTM Bias,RMSE,, Bias, θ i = 300, 275,,275, Bias(Î(θ i)),, 425,, θ i N(0,1), N(0,1) θ i 0125 ( , ), Bias,, RMSE, Bias, θ i = 300, 275,,275,300 RMSE(Î(θ i)),,,,, 2PLM 2 BTM Bias RMSE, ( ),,, θ i = 300, 275,,275,300
106 4 104 Bias,RMSE, I, Bias,RMSE, I, 431,, I, BTM, 2 BTM Bias,RMSE,, PLM, 2PLM Bias,RMSE,, 43, 2PLM, 2 BTM Bias,RMSE,, 44, Bias, RMSE, 2PLM Bias,RMSE 2 BTM, 2PLM, Bias RMSE I,, 45, Bias/I RMSE/I, Bias RMSE I, 2PLM Bias/I, RMSE/I, 41, 42, 2PLM Bias/I, RMSE/I, 43, 44, 2PLM Bias/I, RMSE/I 4, 45, 46, 2PLM,,
107 ,, 2PLM, 2 BTM, Bias,, d(j) 2,1,0,,, Bias I 01, 43,, Bias/I, 45,,, Bias/I, Bias,, 41,, Bias/I,,, Bias 45,, 2PLM, 2 BTM, RMSE,, d(j) 2,1,0,,, RMSE I 01, 44,, RMSE/I, 46,,, RMSE/I, RMSE,, 42,, RMSE/I,, RMSE 423,,,,,,,
108 4 106,,, d(j) 2,1,0,,,,,,,,, ,,,,,,,,,, 4,, Ip (2010), (2001), Keller et al (2003), Wainer & Wang (2000),,,,,
[Tables 4.1–4.5 (numerical entries not reproduced). 4.1: true test information I by condition. 4.2: BTM — Bias and RMSE of the estimated information. 4.3: 2PLM — Bias and RMSE. 4.4: 2PLM — Bias and RMSE. 4.5: 2PLM — Bias/I and RMSE/I. Rows are the 30 conditions N = 300, 1000 × J_d(j) = 5, 3, 2 × number of locally dependent testlets 0–4.]
[Figures 4.1–4.6 (plots not reproduced): Bias/I and RMSE/I of the estimated test information, by condition.]
120 ,,,,,,,, *1,, Keller et al (2003), Lee (2000), Lee, Kolen, Frisbie, & Ankenmann (2001), Reise, Horan, & blanchard (2011), Tuerlinckx & De Boeck (1999), Wainer & Wang (2000), Wang, Cheng, & Wilson (2005), Zhang (2010),,, Braedlow et al (1999), DeMars (2006), (2001), (2013), Wainer et al (2007, pp ), Bradlow et al (1999), 2 BTM, 2PLM BTM, BTM, 2PLM, BTM, 2PLM, M95%PIW *1, (2012a),
121 5 119 BTM, 2PLM, M95%PIW BTM 2PLM,, DeMars (2006), b, a, b,,,,, (2001), 2000 GRM 2PLM, 0987, (2013), 2PLM GRM, , Wainer et al (2007, pp ), b, b,,,,, 15, a, b, c 3,, 3,,,,, 3 ( d ),,, 4,,,,,,,,,
Four ways of estimating θ from testlet data (approaches a, b, and c of Section 1.5, plus the 2PLM, approach d) are compared on the same generated data, and the accuracy of the resulting θ̂_i is evaluated with Bias(θ̂_i), RMSE(θ̂_i), and cor(θ, θ̂) computed from the replicated estimates θ̂_ir. As in Section 4.2.1, the data-generating model is the two-parameter BTM of (1.14),

  P_{j|d(j)}(θ_i) = 1 / (1 + exp[−1.7 a_{j|d(j)}(θ_i − b_{j|d(j)} − γ_{id(j)})]).  (5.1)

The four estimation models (Section 5.2.2) are as follows. Approach a treats each testlet as a single polytomous item and uses the GRM of (1.12)–(1.13),

  P_{d(j)}(r | θ_i) = 1 / (1 + exp[−1.7 a_{d(j)}(θ_i − b_r)]) − 1 / (1 + exp[−1.7 a_{d(j)}(θ_i − b_{r+1})]).  (5.2)

Approach b fits the two-parameter BTM of (5.1) itself. Approach c fits the CCM of (1.16),

  P(U_j, U_k | θ_i) = exp[U_j Z_j + U_k Z_k − U_j U_k b_jk] / (1 + exp[Z_j] + exp[Z_k] + exp[Z_j + Z_k − b_jk]),  (5.3)
  Z_j = 1.7 a_{j|d(j)}(θ_i − b_{j|d(j)}),  (5.4)
  Z_k = 1.7 a_{k|d(j)}(θ_i − b_{k|d(j)}).  (5.5)

Approach d fits the 2PLM of (1.1),

  P_j(θ_i) = 1 / (1 + exp[−1.7 a_j (θ_i − b_j)]).  (5.6)

The simulation design (Section 5.2.3) is the same as in Chapters 3 and 4: N = 300 or 1000 examinees; a 20-item test containing four testlets of J_d(j) = 5, 3, or 2 items; and the number of locally dependent testlets varied over 4, 3, 2, 1, and 0.
124 ( d(j) 0),, 3 *2 1 θ i (i = 1,2,,N) γ id(j) (i = 1,2,,N; d(j) = 1,2,3,4) a j d(j), b j d(j) (j = 1,2,,20) N(0,1), N(0,σγ 2 d(j) ), U(05,15), N(0,1) * BTM ((51) ), N 20 A 3 U(0,1) N 20, N 20 B 4 A,B U, a ij b ij u ij = 1, a ij < b ij u ij = 0 5 U R 7 6 R ˆθ ir,, Bias(ˆθ i ),RMSE(ˆθ i ),cor(θ, ˆθ) ( ) Bias(ˆθ i ) = 1 R ˆθ ir θ i (57) R r=1 RMSE(ˆθ i ) = 1 R ) 2 (ˆθir θ i (58) R cor(θ, ˆθ) = 1 R r=1 R cor(θ, ˆθ r ) (59) r=1, 6 R, ˆθ ir, 1000 R = 50, 300 R = 100, c, 421, 2, c,, J d(j) = 2, *2, R *3, R
125 , 5 MCMC MCMC (Neal, 1997), WinBUGS 14 (Spiegelhalter et al, 2003) MCMC,,, a θ i N(0,1) a d(j) N(1,025) b r N(0,1) b θ i N(0,1) a j d(j) N(1,025) b j d(j) N(0,1) γ id(j) N(0,σγ 2 d(j) ) σγ 2 d(j) Γ(3, 1) c θ i N(0,1) a j d(j) N(1,025) b j d(j) N(0,1) b jk N( 2,1) d θ i N(0,1) a j N(1,025) b j N(0,1),,, a θ i 0 a d(j) 1 b r r = 1,2,3,4,5, 0, 01, 02, 03, 04 b
126 5 124 θ i 0 a j d(j) 1 b j d(j) 0 γ id(j) 0 σγ 2 d(j) 3 c θ i 0 a j d(j) 1 b j d(j) 0 b jk -2 d θ i 0 a j 1 b j 0,, λ,, burn-in a 1000 burn-in b 1000 burn-in c 1000 burn-in d 1000 burn-in,, WinBUGS User Manual (Spiegelhalter et al, 2003), 5%, a J d(j) = , 4000 b 5000
127 5 125 c 4000 d ,,, 2PLM GRM, BTM, CCM Bias,RMSE,cor(θ, ˆθ), 2PLM GRM, BTM, CCM Bias RMSE, θ i 523,, θ i N(0,1), θ i 1, Bias RMSE,, θ i 531 2PLM, 2PLM Bias,RMSE,cor(θ, ˆθ),, GRM, GRM Bias,RMSE,cor(θ, ˆθ),, 52, GRM Bias, RMSE, cor(θ, ˆθ),, 53, Bias, RMSE, cor(θ, ˆθ), GRM Bias,RMSE,cor(θ, ˆθ) 2PLM Bias,RMSE,cor(θ, ˆθ), GRM Bias, RMSE, cor(θ, ˆθ), 51, 52, 53, GRM Bias, RMSE, cor(θ, ˆθ)
128 5 126, 54, 55, 56, GRM Bias, RMSE, cor(θ, ˆθ) 4, 57, 58, 59, GRM,,, 57,, 2PLM,, 2PLM GRM,, 51,, Bias, 2PLM,, GRM,, 57,, 2PLM,, 2PLM GRM,, 58, 2 3, RMSE, 5,,, RMSE 2 3, 2PLM RMSE, GRM, RMSE, 5, 57, 4 2PLM, GRM,, 2PLM GRM 57,, 2PLM,, 2PLM GRM,
129 5 127, 59, 2 3,, cor(θ, ˆθ), cor(θ, ˆθ), GRM 2PLM, 59, 5, 4 4, cor(θ, ˆθ), 4,, 2PLM, GRM, 4,,, 5,,,,, a,,,,, a BTM, BTM Bias,RMSE,cor(θ, ˆθ),, 54, BTM Bias, RMSE, cor(θ, ˆθ),, 55, BTM Bias, RMSE, cor(θ, ˆθ), 510, 511, 512,
130 5 128 BTM Bias, RMSE, cor(θ, ˆθ), 513, 514, 515, BTM Bias, RMSE, cor(θ, ˆθ) 4, 516, 517, 518, BTM,,, 59, N = 1000, 2PLM 2 BTM,, 2PLM 2 BTM,, 510,, Bias,, 2PLM,, 2 BTM 59,, 2PLM 2 BTM,, 2PLM 2 BTM,, 514,, RMSE, RMSE, 2 BTM 2PLM,, 517, 5, 4, 4 RMSE, 5, 59, 4 2PLM 2 BTM, 2PLM BTM
131 ,, 2PLM 2 BTM,, 2PLM 2 BTM,, 518, 5, 4 4, cor(θ, ˆθ), 4,, 2PLM, BTM, 4,,, 5,,,, b,,,,,,, b, 534 CCM, CCM Bias,RMSE,cor(θ, ˆθ),, 56, CCM Bias, RMSE, cor(θ, ˆθ),, 57, CCM Bias, RMSE, cor(θ, ˆθ), 519, 520, 521, CCM Bias, RMSE, cor(θ, ˆθ) 4, 522, 523,
132 , CCM,,, 511,, 2PLM CCM,, 2PLM CCM,, 519, Bias,,,, 2PLM Bias, CCM, 522,, Bias,, CCM Bias, 2PLM, 511,, 2PLM CCM,, 2PLM CCM,, 520,, RMSE, CCM 2PLM, RMSE 511,, 2PLM CCM,, 2PLM CCM,,,,, c
133 5 131,,,,,,, c,,,, CCM,, 535, 535 CCM Bias, CCM b jk, -1714, b jk -0612, 2 BTM CCM, 2 j,k 100, σγ 2 d(j) = BTM j,k (1,1),(1,0),(0,1),(0,0) (100 ) 58, b jk = 1714 CCM 59, 2 BTM CCM, 2 j,k (N = 100), σγ 2 d(j) = BTM 510, b jk = 0612 CCM ,, 59, 510, 511, 2 BTM,, j,k, j,k CCM,, j,k,, j,k,,, 2 BTM CCM,,, CCM Bias
134 ,,, 3, GRM, BTM, CCM, 2PLM,, BTM 2PLM, Bradlow et al (1999), BTM,, BTM, 2PLM, M95%PIW BTM 2PLM,, DeMars (2006), a, b, d,,,,,,, Wainer et al (2007, pp ), b d,,, BTM,,, 537,,, 3,,,
135 5 133,,,,,,,, 4,,,,,,,
[Tables 5.1–5.7 (numerical entries not reproduced). 5.1: 2PLM — Bias, RMSE, cor(θ, θ̂). 5.2–5.3: GRM — Bias, RMSE, cor(θ, θ̂). 5.4–5.5: BTM — Bias, RMSE, cor(θ, θ̂). 5.6–5.7: CCM — Bias, RMSE, cor(θ, θ̂) (J_d(j) = 2 conditions only). Rows are the conditions N = 300, 1000 × J_d(j) = 5, 3, 2 × number of locally dependent testlets 0–4.]
Table 5.8  Joint response percentages for an item pair generated from the BTM, patterns (1,1), (1,0), (0,1), (0,0) (larger value of σ²_{γd(j)})
            U_k = 1   U_k = 0
  U_j = 1     38%       20%
  U_j = 0     13%       29%

Table 5.9  Joint response percentages for an item pair generated from the CCM with b_jk = −1.714
            U_k = 1   U_k = 0
  U_j = 1     46%        5%
  U_j = 0      6%       43%

Table 5.10  Joint response percentages for an item pair generated from the BTM with σ²_{γd(j)} = 0.155
            U_k = 1   U_k = 0
  U_j = 1     38%       23%
  U_j = 0     15%       24%

Table 5.11  Joint response percentages for an item pair generated from the CCM with b_jk = −0.612
            U_k = 1   U_k = 0
  U_j = 1     32%        8%
  U_j = 0      8%       52%
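These 2 × 2 tables can be summarized by a log odds ratio. The R sketch below computes the marginal log odds ratio from the joint proportions of Table 5.9; note that this marginal value reflects both the common dependence on θ and the residual (local) dependence, and so differs from the conditional value −b_jk of (1.19).

# Marginal log odds ratio computed from a 2x2 table of joint response proportions.
log_odds_ratio <- function(p11, p10, p01, p00) log((p11 * p00) / (p10 * p01))

# Example: the proportions reported in Table 5.9 (CCM data, b_jk = -1.714)
log_odds_ratio(p11 = 0.46, p10 = 0.05, p01 = 0.06, p00 = 0.43)  # about 4.2
# This marginal association mixes the common dependence on theta with the
# residual dependence; the conditional log odds ratio of (1.19) is -b_jk = 1.714.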
[Figures 5.1–5.24 (plots not reproduced). Figures 5.1–5.9: GRM — Bias, RMSE, cor(θ, θ̂) by condition. Figures 5.10–5.18: BTM — Bias, RMSE, cor(θ, θ̂) by condition. Figures 5.19–5.24: CCM — Bias, RMSE, cor(θ, θ̂) by condition.]
167 , 2,,,,,,,,,,,,,, 3,,,,,,,,,,,,,,
168 6 166,,,,,,,,,,, 4,,,,,,, 5,,,,,,,,,,,,, 62,,,,,,,,
169 6 167,,,,,
170 168 1 Anastasi, A (1961) Psychological testing (2nd ed) New York: Macmillian 2 (2005), 1, Bock, R D (1972) Estimating item parameters and latent ability when responses are scored in two or more latent categories Psychometrika, 37, Bradlow, E T, Wainer, H, & Wang, X H (1999) A Bayesian random effects model for testlets Psychometrika, 64, Braeken, J (2011) A boundary mixture approach to violation of conditional independence Psychometrika, 64, Braeken, J, Tuerlinckx, F, & De Boeck, P (2007) Copula functions for residual dependency Psychometrika, 72, Chen, C, & Wang, W (2007) Effect of ignoring item interaction on item parameter estimation and detection of interacting items Applied Psychological Measurement, 31, DeMars, C E (2006) Application of the bi-factor multidimensional item response theory model to testlet-based tests Journal of Educational Measurement, 43, Ferrara, S, Huynh, H, & Michaels, H (1999) Contextual explanation of local dependence in item clusters in a large scale hands-on science performance assessment Journal of Educational Measurement, 6, Frank, M J (1979) On the simultaneous associativity of F(x,y) and x + y F(x,y) Aequationes Mathematica, 19, Geman, S, & Geman, D (1984) Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images Institute of Electorical and Electorics Engineers, Transactions on Pattern Analysis and Machine Intelligence, 6,
171 Guilford, J P (1936) Psychometric methods New York: McGraw-Hill 13 (1984), 26, (2000) < tokyoacjp/ haebara/local ind/> ( ) 15 Hastings, W K (1970) Monte Carlo sampling methods using Markov chains and their applications Biometrika, 57, Hoskens, M, & De Boeck, P (1997) A parametric model for local dependence among test items Psychological Methods, 2, (1992) 18 Ip E H (2000) Adjusting for information inflation due to local dependency in moderately large item clusters Psychometrika, 65, Ip, E H (2002) Locally dependent latent trait model and the Dutch identity revisited Psychometrika, 67, Ip, E H (2010) Interpretation of the three parameter testlet response model and information function Applied Psychological Measurement, 34, Ip, E H, Smits, D J M, & De Boeck, P (2009) Locally dependent linear logistic test model with person covariates Applied Psychological Measurement, 33, (2001), 30, (2013) 2PLM, 9, Jannarone, R J (1986) Conjunctive item response theory kernels Psychometrika, 51, Jiao, H, Kamata, A, Wang, S, &Jin, Ying (2012) Amultileveltestlet model for dual local dependence Journal of Educational Measurement, Junker, B W (1991) Essential independence and likelihood-based ability estimation for polytomous items Psychometrika, 56, Kan, C C, van der Ven, A H G S, Breteler, M H M, & Zitman, F G (2001) Latent trait standardization of the Benzodiazepine dependence selfreport questionnaire using the Rasch scaling model Comprehensive Psychiatry, 42,
172 Keller, L A, Swaminathan, H, & Sireci, S G (2003) Evaluating scoring procedures for context-dependent item sets Applied Measurement in Education, 16, Kelly, T L (1924) Note on the reliability of a test: A reply to Dr Crumm s criticism Journal of Educational Psychology, 15, Kreiner, S, & Christensen, K B (2004) Analysis of local dependence and multidimensionality in graphical loglinear Rasch models Communications in Statistics: Theory and Methods, 33, Lee, G (2000) A comparison of methods of estimating conditional standard errors of measurement for testlet-based test scores using simulation techniques Journal of Educational Measurement, 36, Lee, G, Kolen, M J, Frisbie, D A, & Ankenmann, R D (2001) Comparison of dichotomous and polytomous item response models in equating scores form tests composed of testlets Applied Psychological Measurement, 25, Li, Y, Bolt, D M, & Fu, J (2005) A test characteristic curve linking method for the testlet model Applied Psychological Measurement, 29, Li, Y, Bolt, D M, & Fu, J (2006) A comparison of alternative models for testlets Applied Psychological Measurement, 30, Looney, M A, & Spray, J A (1992) Effects of violating local independence on IRT parameter estimation for the binomial trials model Research Quarterly for Exercise and Sport, 63, Lord, F M (1980) Applications of item response theory to practical testing problems Hillsdale, NJ: Lawrence Erlbaum Associate 37 Lord, F M, & Novick M R (1968) Statistical theories of mental test scores Reading, MA: Addison-Wesley 38 Muraki, E (1992) A generalized partial credit model: Application of an EM algorithm Applied Psychological Measurement, 16, Nandakumar, R (1990) Traditional dimensionality versus essential dimensionality Journal of Educational Measurement, 28, Neal, R (1997) Markov chain Monte Carlo methods based on slicing the density function Technical Report 9722, Department of Statistics, University of Toronto, Canada 41 Patz, R J, & Junker, B W (1999) A straightforward approach to Markov chain Monte Carlo methods for item response models Journal of Educational and Behavioral Statistics, 24,
173 Rasch, G (1960) Probabilistic models for some intelligence and achievement tests Copenhagen: Danish Institute for Educational Research 43 Reise, S P, Horan, W P, & Blanchard, J J (2011) The challenges of fitting an item response theory model to the social anhedonia scale Personality Assessment, 93, Journal of 44 Samejima, F (1969) Estimation of latent ability using a response pattern of graded scores Psychometrika Monograph Supplement, 34, Simms, L J, Goldberg, L R, Roberts, J E, Watson, D, Welte, J, & Rotterman, J H (2011) Computerized adaptive assessment of personality disorder: Introducing the CAT-PD project Journal of Personality Assessment, 93, Sireci, S G, Thissen, D, & Wainer, H (1991) On the reliability of testletbased tests Journal of Educational Measurement, 28, Spiegelhalter, D J, Thomas, A, & Best, N (2003) WinBUGS 14 [Computer Program] Cambridge, UK: MRC Biostatistics Unit, Institute of Public Health 48 Spiegelhalter, D J, Thomas, A, Best, N, & Lunn, D (2003) WinBUGS User Manual Version 14 Cambridge, UK: MRC Biostatistics Unit, Institute of Public Health 49 Tanner, M A, & Wong, W H (1987) The calculation of posterior distributions by data augmentation Journal of the American Statistical Association, 82, Thorndike, R L (1951) Reliability In E F Lindquist (Ed), Educational Measurement Washington, DC: American Council on Education 51 (2010), 6, (2012a), 8, (2012b), 39, (2002) [ ] 55 ( ) (2005) [ ] 56 ( ) (2008) 57 Tuerlinckx, F, & De Boeck, P (1999) Distinguishing constant and dimensiondependent interaction: A simulation study Applied Psychological Measure ment, 23,
174 Tuerlinckx, F, & De Boeck, P (2001) The effect of ignoring item interactions on the estimated discrimination parameters in item response theory Psychological Methods, 6, Wainer, H (1995) Precision and differential item functioning on a testletbased test: The 1991 Law School Admission Test as an example Applied Measurement in Education, 8, Wainer, H Bradlow, E T, & Du, Z (2000) Testlet response theory: An analog for the 3-PL useful in testlet-based adaptive testing In W J van der Linden, & C A W Glas (Eds), Computerized adaptive testing, theory and practice Boston, MA: Kluwer-Nijhoff pp Wainer, H, Bradlow E T, & Wang X H (2007) Testlet response theory and its applications USA: Cambridge University Press 62 Wainer, H, & Wang, X H (2001) Using a new statistical model for testlets to score TOEFL TOEFL Technical Report No 16 Princeton, NJ: Educational Testing Service 63 Wang, X, Bradlow, E T, & Wainer, H (2002) A general Bayesian model for testlets: Theory and applications Applied Psychological Measurement, 26, Wang, C W, Cheng, Y Y, & Wilson, M (2005) Local item dependence for items across tests connected by common stimuli Educational and Psychological Measurement, 65, Wang, C W, & Wilson, M (2005) Exploring local item dependence using a random-effect facet model Applied Psychological Measurement, 29, (2012) 54, Yang, W L, & Gao, R (2008) Invariance of score linkings across gender groups for forms of a testlet-based college-level examination program examination Applied Psychological Measurement, 32, (2013) 11, Yen, W M (1993) Scaling performance assessments: Strategies for managing local item dependence Journal of Educational Measurement, 30, Zhang, B (2010) Assessing the accuracy and consistency of language proficiency classification under competing measurement models Language Testing,
176 PLM ((N = 1000,J = 50) ) # setseed(1010) # 100 t1 <- rnorm(n = 100) # bt1 <- runif(n = 30, min = -20, max = 20) # at1 <- runif(n = 30, min = 03, max = 15) # 900 t2 <- rnorm(n = 900) t <- c( t1, t2) # 20 bt2 <- runif(n = 20, min = -20, max = 20) bt <- c(bt1, bt2) at2 <- runif(n = 20, min = 03, max = 15) at <- c(at1, at2) # (2 ) tvec <- rep( t, 50) tmat <- matrix( tvec, nrow = 50, ncol = 1000, byrow = TRUE) ft <- tmat - bt et <- exp(-10 * at * ft) Pt <- 1 / (1 + et) Qt <- 1 - Pt #
177 175 m <- matrix(nrow = 1000, ncol = 100) am <- matrix(nrow = 50, ncol = 100) bm <- matrix(nrow = 50, ncol = 100) Xe <- matrix(nrow = 1000, ncol = 100) DPe <- matrix(nrow = 50, ncol = 100) PRe <- matrix(nrow = 50, ncol = 100) pc <- matrix(nrow = 3, ncol = 100) kc <- matrix(nrow = 3, ncol = 100) sc <- matrix(nrow = 3, ncol = 100) #MCMC 100 for(r in 1:100){ # <- runif(n = 1000 * 50, min = 0, max = 1) <- matrix(, nrow = 50, ncol = 1000) U <- ifelse(pt >=, 1, 0) # gc() gc() # # e <- matrix(nrow = 1000, ncol = 20000) be <- matrix(nrow = 50, ncol = 20000) ae <- matrix(nrow = 50, ncol = 20000) # X <- colsums(u) Xe[, r] <- X <- mean(x) <- mean((x - ) ^ 2) <- sqrt( ) e[, 1] <- (X - ) / PR <- rowmeans(u) PRe[, r] <- PR be[, 1] <- 1 - PR DP <- apply(u, 1, cor, X) DPe[, r] <- DP ae[, 1] <- DP for(i in 1:19999){ # (2 ) vec <- rep( e[, i], 50) mat <- matrix( vec, nrow = 50, ncol = 1000, byrow = TRUE)
178 176 f <- mat - be[, i] e <- exp(-10 * ae[, i] * f) P <- 1 / (1 + e) Q <- 1 - P # ( ) t <- rnorm(n = 1000, mean = 0, sd = 17) c <- e[, i] + t # c (2 ) cvec <- rep( c, 50) cmat <- matrix( cvec, nrow = 50, ncol = 1000, byrow = TRUE) fc <- cmat - be[, i] ec <- exp(-10 * ae[, i] * fc) Pc <- 1 / (1 + ec) Qc <- 1 - Pc # c P <- dnorm( e[, i], mean = 0, sd = 1) P c <- dnorm( c, mean = 0, sd = 1) L <- (P ^ U) * (Q ^ (1 - U)) L c <- (Pc ^ U) * (Qc ^ (1 - U)) LP <- log(p ) LP c <- log(p c) LL <- log(l ) LL c <- log(l c) LL <- colsums(ll ) LL c <- colsums(ll c) L <- (LL c + LP c) - (LL + LP ) c <- exp(l ) dim( c) <- c(1, 1000) <- apply( c, 2, min, 1) urn <- runif(n = 1000, min = 0, max = 1) logic <- >= urn e[which( logic), i+1] <- c[which( logic)] e[which(! logic), i+1] <- e[which(! logic), i] # (2 ) vec <- rep( e[, i+1], 50) mat <- matrix( vec, nrow = 50, ncol = 1000, byrow = TRUE) f <- mat - be[, i] e <- exp(-10 * ae[, i] * f) P <- 1 / (1 + e) Q <- 1 - P
179 177 # a ( ) ta <- rnorm(n = 50, mean = 0, sd = 025) ac <- ae[, i] + ta # ac (2 ) ec <- exp(-10 * ac * f) Pc <- 1 / (1 + ec) Qc <- 1 - Pc #ac Pa <- dnorm(ae[, i], mean = 1, sd = 05) Pac <- dnorm(ac, mean = 1, sd = 05) La <- (P ^ U) * (Q ^ (1 - U)) Lac <- (Pc ^ U) * (Qc ^ (1 - U)) LPa <- log(pa) LPac <- log(pac) LLa <- log(la) LLac <- log(lac) LLA <- rowsums(lla) LLAc <- rowsums(llac) L <- (LLAc + LPac) - (LLA + LPa) c <- exp(l ) dim( c) <- c(1, 50) <- apply( c, 2, min, 1) urn <- runif(n = 50, min = 0, max = 1) logic <- >= urn ae[which( logic), i+1] <- ac[which( logic)] ae[which(! logic), i+1] <- ae[which(! logic), i] # a (2 ) vec <- rep( e[, i+1], 50) mat <- matrix( vec, nrow = 50, ncol = 1000, byrow = TRUE) f <- mat - be[, i] e <- exp(-10 * ae[, i+1] * f) P <- 1 / (1 + e) Q <- 1 - P # b ( ) tb <- rnorm(n = 50, mean = 0, sd = 03) bc <- be[, i] + tb # bc (2 )
180 178 fc <- mat - bc ec <- exp(-10 * ae[, i+1] * fc) Pc <- 1 / (1 + ec) Qc <- 1 - Pc #bc Pb <- dnorm(be[, i], mean = 0, sd = 1) Pbc <- dnorm(bc, mean = 0, sd = 1) Lb <- (P ^ U) * (Q ^ (1 - U)) Lbc <- (Pc ^ U) * (Qc ^ (1 - U)) LPb <- log(pb) LPbc <- log(pbc) LLb <- log(lb) LLbc <- log(lbc) LLB <- rowsums(llb) LLBc <- rowsums(llbc) L <- (LLBc + LPbc) - (LLB + LPb) c <- exp(l ) dim( c) <- c(1, 50) <- apply( c, 2, min, 1) urn <- runif(n = 50, min = 0, max = 1) logic <- >= urn be[which( logic), i+1] <- bc[which( logic)] be[which(! logic), i+1] <- be[which(! logic), i] } # gc() gc() # a b e <- e[, -c(1:3000)] ae <- ae[, -c(1:3000)] be <- be[, -c(1:3000)] <- rowmeans( e) a <- rowmeans(ae) b <- rowmeans(be) m[, r] <- am[, r] <- a bm[, r] <- b pc[1, r] <- cor( t,, method = c("pearson")) pc[2, r] <- cor(at, a, method = c("pearson")) pc[3, r] <- cor(bt, b, method = c("pearson")) kc[1, r] <- cor( t,, method = c("kendall")) kc[2, r] <- cor(at, a, method = c("kendall")) kc[3, r] <- cor(bt, b, method = c("kendall"))
sc[1, r] <- cor(tt, th, method = c("spearman"))
sc[2, r] <- cor(at, a, method = c("spearman"))
sc[3, r] <- cor(bt, b, method = c("spearman"))
# free memory
gc()
gc()
}
# Bias and RMSE of theta, a, b over the 100 replications
Biast <- rowMeans(tm) - tt
Biasa <- rowMeans(am) - at
Biasb <- rowMeans(bm) - bt
RMSEt1 <- (tm - tt) ^ 2
RMSEt2 <- rowMeans(RMSEt1)
RMSEt <- sqrt(RMSEt2)
RMSEa1 <- (am - at) ^ 2
RMSEa2 <- rowMeans(RMSEa1)
RMSEa <- sqrt(RMSEa2)
RMSEb1 <- (bm - bt) ^ 2
RMSEb2 <- rowMeans(RMSEb1)
RMSEb <- sqrt(RMSEb2)
# means and SDs of Bias and RMSE
Bt <- mean(Biast)
Ba <- mean(Biasa)
Bb <- mean(Biasb)
Rt <- mean(RMSEt)
Ra <- mean(RMSEa)
Rb <- mean(RMSEb)
sBt <- sqrt(mean((Biast - Bt) ^ 2))
sBa <- sqrt(mean((Biasa - Ba) ^ 2))
sBb <- sqrt(mean((Biasb - Bb) ^ 2))
sRt <- sqrt(mean((RMSEt - Rt) ^ 2))
sRa <- sqrt(mean((RMSEa - Ra) ^ 2))
sRb <- sqrt(mean((RMSEb - Rb) ^ 2))
# write out the results
write.table(tm, file = "theta(10).xls", sep = "\t")
write.table(am, file = "a(10).xls", sep = "\t")
write.table(bm, file = "b(10).xls", sep = "\t")
write.table(Xe, file = "X(10).xls", sep = "\t")
write.table(DPe, file = "DP(10).xls", sep = "\t")
182 180 writetable(pre, file = " PR(10) xls", sep = "\t") writetable(pc, file = " pc(10) xls", sep = "\t") writetable(kc, file = " kc(10) xls", sep = "\t") writetable(sc, file = " sc(10) xls", sep = "\t") writetable(bias, file = " Bias (10) xls", sep = "\t") writetable(biasa, file = " Biasa(10) xls", sep = "\t") writetable(biasb, file = " Biasb(10) xls", sep = "\t") writetable(rmse, file = " RMSE (10) xls", sep = "\t") writetable(rmsea, file = " RMSEa(10) xls", sep = "\t") writetable(rmseb, file = " RMSEb(10) xls", sep = "\t") sv <- cbind( B, Ba, Bb, R, Ra, Rb, B, Ba, Bb, R, Ra, Rb) writetable(sv, file = " sv(10) xls", sep = "\t") CCM ((N = 500,J = 30) ) # setseed(1010) # 100 t1 <- rnorm(n = 100) # bt <- runif(n = 30, min = -20, max = 20) # at <- runif(n = 30, min = 03, max = 15) # 400 t2 <- rnorm(n = 400) t <- c( t1, t2) # (2 ) tvec <- rep( t, 30)
tmat <- matrix(tvec, nrow = 30, ncol = 500, byrow = TRUE)
ft <- tmat - bt
et <- exp(-1.0 * at * ft)
Pt <- 1 / (1 + et)
Qt <- 1 - Pt
# joint probabilities for each adjacent item pair (item1-item2, ..., item29-item30)
b12 <- (-2)
ec <- at * ft
eec <- exp(ec)
d1 <- eec[seq(1, 29, by = 2), ]
d2 <- eec[seq(2, 30, by = 2), ]
d3 <- exp(ec[seq(1, 29, by = 2), ] + ec[seq(2, 30, by = 2), ] - b12)
d <- 1 + d1 + d2 + d3
P00 <- 1 / d
P10 <- d1 / d
P01 <- d2 / d
P11 <- d3 / d

Data generation for the 2PLM and 2PLCM (N = 100, J = 10)

# set the random seed
set.seed(1010)
# true theta for 100 examinees
tt <- rnorm(n = 100)
# true item difficulties
bt <- runif(n = 30, min = -2.0, max = 2.0)
# true item discriminations
at <- runif(n = 30, min = 0.3, max = 1.5)
# use the first 10 items
bt <- bt[c(1:10)]
at <- at[c(1:10)]
# expand theta into a 10 x 100 (items x examinees) matrix
tvec <- rep(tt, 10)
tmat <- matrix(tvec, nrow = 10, ncol = 100, byrow = TRUE)
ft <- tmat - bt
et <- exp(-1.0 * at * ft)
Pt <- 1 / (1 + et)
Qt <- 1 - Pt
# joint probabilities for each adjacent item pair (item1-item2, ..., item9-item10) via the copula
lam <- 3.00
C1 <- -lam * Qt
C2 <- 1 - exp(C1)
C3 <- log(C2)
C4 <- matrix(nrow = 5, ncol = 100)
for(i in 1:5){
C4[i, ] <- colSums(C3[c((2 * i - 1), (2 * i)), ])
}
C5 <- C4 - log(1 - exp(-lam))
C6 <- 1 - exp(C5)
C7 <- log(C6)
Cop <- (-1 / lam) * C7
P00 <- Cop
P10 <- Qt[seq(2, 10, by = 2), ] - Cop
P01 <- Qt[seq(1, 9, by = 2), ] - Cop
P11 <- 1 - Qt[seq(1, 9, by = 2), ] - Qt[seq(2, 10, by = 2), ] + Cop

Data generation for the 2PLM with testlet structure used in the simulations of Chapters 3, 4, and 5 (N = 1000, d(j) = 5 items per testlet, testlet effects present)

# set the random seed
set.seed(1010)
# true parameters (testlet-effect SD = 0.394)
tt <- rnorm(n = 1000, mean = 0, sd = 1)
bt <- rnorm(n = 20, mean = 0, sd = 1)
at <- runif(n = 20, min = 0.5, max = 1.5)
te1t <- rnorm(n = 1000, mean = 0, sd = 0.394)
te2t <- rnorm(n = 1000, mean = 0, sd = 0.394)
te3t <- rnorm(n = 1000, mean = 0, sd = 0.394)
te4t <- rnorm(n = 1000, mean = 0, sd = 0.394)
tet <- cbind(te1t, te2t, te3t, te4t)
# helper function
subtract <- function(x, y){x - y}
# theta minus item difficulty minus testlet effect (testlets of 5 items)
c1 <- apply(as.matrix(tt), 1, subtract, bt)
c2 <- numeric()
for(l in 1:4){
zp <- t(c1)[, (5*l-4):(5*l)] - tet[, l]
c2 <- cbind(c2, zp)
} #l
Pt <- 1/(1 + exp((-1.7)*t(t(c2)*at)))
# generate 50 response data sets U and testlet scores S (row sums within each testlet)
U <- vector(mode = "list", length = 50)
S <- vector(mode = "list", length = 50)
for(i in 1:50){
u <- matrix(runif(n = 20000, min = 0, max = 1), nrow = 1000, ncol = 20)
U[[i]] <- ifelse(Pt >= u, 1, 0)
z <- numeric()
for(l in 1:4){
zp <- rowSums(U[[i]][, (5*l-4):(5*l)])
z <- cbind(z, zp)
} #l
S[[i]] <- z
} #i
# write the data sets to text files
for(l in 1:50){
write.table(t(as.vector(t(U[[l]]))), paste("U_", formatC(l, format = "d", width = 2, flag = "0"), ".txt", sep = ""), sep = ",", row.names = FALSE, col.names = FALSE)
write.table(t(as.vector(t(S[[l]]))), paste("S_", formatC(l, format = "d", width = 2, flag = "0"), ".txt", sep = ""), sep = ",", row.names = FALSE, col.names = FALSE)
} #l

WinBUGS model file (GRM, d(j) = 5)

#model file (GRM)
#5item ver
model{
for(i in 1:N){
for(j in 1:J){
Ps[i, j, 6] <- 0
for(k in 1:5){
Ps[i, j, k] <- 1/(1 + exp(-1.7*a[j]*(t[i]-b[j, k])))
P[i, j, k+1] <- Ps[i, j, k] - Ps[i, j, k+1]
} #k
P[i, j, 1] <- 1 - Ps[i, j, 1]
U[i, j] <- S[i, j] + 1
U[i, j] ~ dcat(P[i, j, ])
} #j
t[i] ~ dnorm(0, 1)
} #i
for(j in 1:J){
a[j] ~ dnorm(1, 4)
b[j, 1] ~ dnorm(0, 1)
for(k in 2:5){
b[j, k] ~ dnorm(0, 1)I(b[j, k-1], )
} #k
} #j
}

WinBUGS model file (BTM, d(j) = 3)

#model file (TEM)
#3item ver
model{
for(i in 1:N){
for(l in 1:4){
for(j in (5*l-4):(5*l-2)){
P[i, j] <- 1/(1+exp(-1.7*a[j]*(t[i]-b[j]-g[i, l])))
U[i, j] ~ dbern(P[i, j])
} #j
for(j in (5*l-1):(l*5)){
P[i, j] <- 1/(1+exp(-1.7*a[j]*(t[i]-b[j])))
U[i, j] ~ dbern(P[i, j])
} #j
g[i, l] ~ dnorm(0, tau[l])
} #l
t[i] ~ dnorm(0, 1)
} #i
for(l in 1:4){
tau[l] ~ dgamma(3, 1)
} #l
for(j in 1:J){
a[j] ~ dnorm(1, 4)
b[j] ~ dnorm(0, 1)
} #j
} #model

WinBUGS model file (CCM, d(j) = 2)

#model file (CCM)
#2item ver
model{
for(i in 1:N){
for(j in 1:10){
z1[i, j] <- (1.7)*a[2*j-1]*(t[i]-b[2*j-1])
z2[i, j] <- (1.7)*a[2*j]*(t[i]-b[2*j])
dP[i, j] <- 1 + exp(z1[i, j]) + exp(z2[i, j]) + exp(z1[i, j]+z2[i, j]-g[j])
P[i, j, 1] <- 1/dP[i, j]
P[i, j, 2] <- exp(z1[i, j])/dP[i, j]
P[i, j, 3] <- exp(z2[i, j])/dP[i, j]
P[i, j, 4] <- exp(z1[i, j]+z2[i, j]-g[j])/dP[i, j]
} #j
t[i] ~ dnorm(0, 1)
} #i
for(l in 1:L){
a[l] ~ dnorm(1, 4)
b[l] ~ dnorm(0, 1)
} #l
for(j in 1:4){
g[j] ~ dnorm(-2, 1)
} #j
for(j in 5:10){
g[j] <- 0
} #j
for(i in 1:N){
for(j in 1:10){
U[i, j] <- V[i, j] + 1
U[i, j] ~ dcat(P[i, j, ])
} #j
} #i
} #model
#ok

WinBUGS model file (2PLM)

model{
for(i in 1:N){
for(j in 1:J){
P[i, j] <- 1/(1+exp(-1.7*a[j]*(t[i]-b[j])))
U[i, j] ~ dbern(P[i, j])
} #j
t[i] ~ dnorm(0, 1)
} #i
for(j in 1:J){
a[j] ~ dnorm(1, 4)
b[j] ~ dnorm(0, 1)
} #j
} #model

To run the MCMC estimation in WinBUGS, a data file and an inits file must be prepared in addition to the model files listed above; for details of their format, see the WinBUGS User Manual Version 1.4 (Spiegelhalter et al., 2003).
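As a minimal sketch of that format (not part of the original appendix; N, J, the entries of U, and the starting values below are arbitrary placeholders), a data file and a single-chain inits file for the 2PLM model above could look like the following. Both files use the S-Plus/R list notation read by WinBUGS, and the .Data vector of a structure() is filled row by row.

data file (placeholder values for N = 3 examinees and J = 2 items):
list(N = 3, J = 2,
     U = structure(.Data = c(1, 0,
                             0, 1,
                             1, 1),
                   .Dim = c(3, 2)))

inits file (starting values for one chain):
list(t = c(0, 0, 0), a = c(1, 1), b = c(0, 0))

The node names t, a, and b match the parameters of the 2PLM model file; nodes left uninitialized can be given starting values by WinBUGS itself via gen inits, which samples them from their priors.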