2014ESJ.key


http://www001.upp.so-net.ne.jp/ito-hi/stat/2014esj/

Statistical Software for State Space Models Commandeur et al. (2011) Journal of Statistical Software 41(1)

State Space Models in R Petris & Petrone (2011) Journal of Statistical Software 41(4)

y_t = F_t θ_t + v_t,   v_t ~ N(0, V_t)
θ_t = G_t θ_{t-1} + w_t,   w_t ~ N(0, W_t)
t = 1, ..., n,   θ_0 ~ N(m_0, C_0)
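The dlm code on the following slides fits the simplest special case of this form, the local level model; as a sketch of the correspondence (these are the constants that dlmModPoly(order = 1, dV, dW) sets up):

y_t = θ_t + v_t,   v_t ~ N(0, V)
θ_t = θ_{t-1} + w_t,   w_t ~ N(0, W)

that is, F_t = G_t = 1 with time-constant variances V and W.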

## Build function: local level model with observation variance theta[1]
## and system variance theta[2]
BuildLLM <- function(theta) {
  dlmModPoly(order = 1,
             dV = theta[1],
             dW = theta[2])
}

## Maximum likelihood estimation of the two variances
fit.llm <- dlmMLE(Nile, parm = c(100, 2),
                  build = BuildLLM,
                  lower = rep(1e-4, 2))

## Rebuild the model with the fitted parameters
model.llm <- BuildLLM(fit.llm$par)

## Kalman smoothing
smooth.llm <- dlmSmooth(Nile, model.llm)
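A minimal check, not on the original slide, assuming library(dlm) is attached and fit.llm / smooth.llm from above exist: overlay the smoothed level on the observations.

## Nile data with the smoothed level (dropFirst() removes the
## time-0 prior state that dlmSmooth() prepends)
plot(Nile, type = "o", col = "grey")
lines(dropFirst(smooth.llm$s), col = "red", lwd = 2)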

## Intervention covariate: 0 for the first 27 observations, 1 afterwards
x <- matrix(c(rep(0, 27),
              rep(1, length(Nile) - 27)),
            ncol = 1)

## Dynamic regression model on the intervention covariate
model.reg <- dlmModReg(x, dW = c(1, 0))
BuildReg <- function(theta) {
  V(model.reg) <- exp(theta[1])
  diag(W(model.reg))[1] <- exp(theta[2])
  return(model.reg)
}

## Fit and smooth the regression model
fit.reg <- dlmMLE(Nile,
                  parm = rep(0, 2),
                  build = BuildReg)
model.reg <- BuildReg(fit.reg$par)
smooth.reg <- dlmSmooth(Nile,
                        mod = model.reg)

y_t = Z_t α_t + ε_t,   ε_t ~ N(0, H_t)
α_{t+1} = T_t α_t + R_t η_t,   η_t ~ N(0, Q_t)
t = 1, ..., n,   α_1 ~ N(a_1, P_1)
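The Seatbelts example on the next slide uses a non-Gaussian observation distribution; with distribution = "poisson" the Gaussian observation equation above is replaced by (a sketch of the KFAS formulation, with offset u_t = 1 by default)

y_t ~ Poisson(u_t exp(Z_t α_t))

while the state equation stays the same.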

model.van <- SSModel(VanKilled ~ law +
                       SSMtrend(degree = 1,
                                Q = list(matrix(NA))) +
                       SSMseasonal(period = 12,
                                   sea.type = "dummy",
                                   Q = matrix(NA)),
                     data = Seatbelts,
                     distribution = "poisson")

fit.van <- fitSSM(inits = c(-4, -7, 2),
                  model = model.van,
                  method = "BFGS")

pred.van <- predict(fit.van$model,
                    states = 1:2)

set.seed(1234)
n.t <- 50                          # number of time steps
N.lat <- rep(50, n.t)              # latent (true) population size
p <- 0.7                           # detection probability
N.obs <- rbinom(n.t, N.lat, p)     # observed counts
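A quick look at the simulated data (not on the original slide), just to confirm what the JAGS model below will receive:

## Observed counts fluctuate around p * N.lat = 35
plot(N.obs, type = "o", ylim = c(0, 60),
     xlab = "time", ylab = "count")
lines(N.lat, lty = 2)   # constant latent abundance (50)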

var
  N,           # number of time steps
  y[N],        # observed counts
  y_hat[N],    # latent true counts
  lambda[N],   # log(y_hat)
  p,           # detection probability
  tau, sigma;  # precision and s.d. of the state process

model {
  ## Observation model
  for (t in 1:N) {
    y[t] ~ dbin(p, y_hat[t]);
    y_hat[t] <- trunc(exp(lambda[t]));
  }
  ## System (state) model: random walk on the log scale
  for (t in 2:N) {
    lambda[t] ~ dnorm(lambda[t - 1], tau);
  }
  ## Priors
  lambda[1] ~ dnorm(0, 1.0E-4);
  p ~ dbeta(2, 2);
  sigma ~ dunif(0, 100);
  tau <- 1 / (sigma * sigma);
}

library(rjags)

inits <- list()
inits[[1]] <- list(p = 0.9, sigma = 1,
                   lambda = rep(log(max(N.obs) + 1), n.t))
inits[[2]] <- list(p = 0.7, sigma = 3,
                   lambda = rep(log(max(N.obs) + 1), n.t))
inits[[3]] <- list(p = 0.8, sigma = 5,
                   lambda = rep(log(max(N.obs) + 1), n.t))

model <- jags.model("ks51.bug.txt",
                    data = list(N = n.t, y = N.obs),
                    inits = inits, n.chains = 3,
                    n.adapt = 100000)
samp <- coda.samples(model,
                     variable.names = c("y_hat", "sigma",
                                        "p"),
                     n.iter = 3000000, thin = 3000)
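A follow-up not on the original slide, assuming the coda package: check convergence of the three chains before using the samples.

## Potential scale reduction factors should be close to 1
gelman.diag(samp[, c("p", "sigma")])
summary(samp[, c("p", "sigma")])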

http://mc-stan.org/

data {
  int<lower=0> N;
  matrix[1, N] y;
}
transformed data {
  matrix[1, 1] F;
  matrix[1, 1] G;
  vector[1] m0;
  cov_matrix[1] C0;

  F[1, 1] <- 1;
  G[1, 1] <- 1;
  m0[1] <- 0;
  C0[1, 1] <- 1.0e+6;
}

parameters {
  real<lower=0> sigma[2];
}
transformed parameters {
  vector[1] V;
  cov_matrix[1] W;

  V[1] <- sigma[1] * sigma[1];
  W[1, 1] <- sigma[2] * sigma[2];
}

model {
  y ~ gaussian_dlm_obs(F, G, V, W, m0, C0);
  sigma ~ uniform(0, 1.0e+6);
}

library(rstan)

fit <- stan("kalman.stan",
            data = list(y = matrix(c(Nile),
                                   nrow = 1),
                        N = length(Nile)),
            pars = c("sigma"),
            chains = 3,
            iter = 1500, warmup = 500,
            thin = 1)

traceplot(fit, pars = "sigma", inc_warmup = FALSE)

> print(fit)
Inference for Stan model: kalman.
3 chains, each with iter=1500; warmup=500; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=3000.

           mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
sigma[1]  121.2     0.5 13.8   92.6  112.7  121.5  130.3  148.4   889    1
sigma[2]   45.5     0.6 17.6   18.3   32.7   43.2   55.7   85.2   833    1
lp__     -541.6     0.0  1.1 -544.6 -542.0 -541.3 -540.9 -540.6   904    1

Samples were drawn using NUTS(diag_e) at Sun Feb  9 06:06:42 2014.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).

## Posterior means of the standard deviations
sigma <- apply(extract(fit, "sigma")$sigma, 2, mean)

library(dlm)

## Kalman smoothing with the Stan estimates plugged into dlm
buildnile <- function(theta) {
  dlmModPoly(order = 1, dV = theta[1], dW = theta[2])
}
modnile <- buildnile(sigma^2)
smoothnile <- dlmSmooth(Nile, modnile)
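Since the same local level model was fitted earlier with dlmMLE(), a rough consistency check (not on the original slides) is to compare the two sets of variance estimates:

## Stan posterior means of the standard deviations, squared, versus
## the maximum likelihood estimates (dV, dW) from dlmMLE()
sigma^2
fit.llm$par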

data {
  int<lower=0> N;
  real y[N];
}
parameters {
  real theta[N];
  real<lower=0> sigma[2];
}

model {
  // Observation model
  for (t in 1:N) {
    y[t] ~ normal(theta[t], sigma[1]);
  }

  // System (state) model
  for (t in 2:N) {
    theta[t] ~ normal(theta[t - 1], sigma[2]);
  }

  // Priors
  theta[1] ~ normal(0, 1.0e+4);
  sigma ~ uniform(0, 1.0e+6);
}
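A sketch of how this non-marginalized version might be run (not on the original slides), assuming the model above is saved as, say, local_level.stan; the file name and pars are illustrative:

## Fit the explicit-state local level model to the Nile data
fit2 <- stan("local_level.stan",
             data = list(y = c(Nile), N = length(Nile)),
             pars = c("sigma", "theta"),
             chains = 3, iter = 1500, warmup = 500)
print(fit2, pars = "sigma")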