Duality in Bayesian prediction and its implication
Toshio Ohnishi$^{a)}$ and Takemi Yanagimoto$^{b)}$

a) Faculty of Economics, Kyushu University
b) Department of Industrial and Systems Engineering, Chuo University

\S 1. Introduction

A Bayesian model is a pair of a sampling density and a prior density, such as $p(x;\theta)\pi(\theta;c,\delta)$ with hyperparameters $(c,\delta)$ weighted by a density $\lambda(c,\delta)$, or $p(x;\theta,\tau)$ with a nuisance parameter $\tau$, prior $\pi(\theta\,|\,\tau)$ and weight $\lambda(\tau)$. Generally, let $\{p_{\xi}(x;\theta)\pi_{\xi}(\theta)\}$ be a family of Bayesian models indexed by $\xi\in\Xi$, together with a density $\lambda(\xi)$ over the index; this is the setting of Bayesian model averaging (Hoeting et al., 1999). We call $\lambda(\xi)$ the prior averaging density. Throughout, $E[f(y)\,|\,p(y)]$ denotes the expectation of $f(y)$ with respect to the density $p(y)$. The marginal likelihoods are
$$m_{\xi}(x)=E[p_{\xi}(x;\theta)\,|\,\pi_{\xi}(\theta)], \qquad m(x)=E[m_{\xi}(x)\,|\,\lambda(\xi)].$$
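These definitions can be checked on a small discrete example. The sketch below (all numbers hypothetical, not taken from the paper) computes the marginal likelihoods $m_{\xi}(x)$ and $m(x)$ for two Bernoulli models whose priors live on a common grid of $\theta$ values.

```python
import math

# Two models xi = 0, 1: same Bernoulli likelihood, different priors on a grid.
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
priors = {0: [0.4, 0.3, 0.2, 0.05, 0.05],   # pi_0(theta): favors small theta
          1: [0.05, 0.05, 0.2, 0.3, 0.4]}   # pi_1(theta): favors large theta
lam = {0: 0.5, 1: 0.5}                      # prior averaging density lambda(xi)

x = [1, 1, 0, 1]                            # observed Bernoulli data (hypothetical)

def lik(theta, data):
    """Likelihood p(x; theta) of i.i.d. Bernoulli data."""
    return math.prod(theta if obs else 1 - theta for obs in data)

# m_xi(x) = E[p(x; theta) | pi_xi(theta)]
m_xi = {k: sum(lik(t, w_t := w) * w for t, w in zip(thetas, priors[k])) if False
        else sum(lik(t, x) * w for t, w in zip(thetas, priors[k])) for k in lam}
# m(x) = E[m_xi(x) | lambda(xi)]
m = sum(lam[k] * m_xi[k] for k in lam)
print(m_xi, m)
```

Since the data contain three successes, the model whose prior favors large $\theta$ receives the larger marginal likelihood, and $m(x)$ is their $\lambda$-average.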
Given data $x$, the Bayes theorem updates the prior averaging density into
$$\lambda(\xi\,|\,x)=\frac{\lambda(\xi)\,m_{\xi}(x)}{m(x)}, \qquad (1.1)$$
which we call the posterior averaging density.

For a single Bayesian model $p(x;\theta)\pi(\theta)$, consider predicting a future observation $y$ distributed as $p(y;\theta)$ by a predictive density $q(y\,|\,x)$; the plug-in predictor $q(y\,|\,x)=p(y;\hat{\theta})$ is a familiar example. Two losses are available, $D(q(y\,|\,x), p(y;\theta))$ and $D(p(y;\theta), q(y\,|\,x))$, where
$$D(p(y), q(y)) := E\Bigl[\log\frac{p(y)}{q(y)} \,\Big|\, p(y)\Bigr]$$
is the Kullback-Leibler divergence; the former is the $e$-divergence loss and the latter the $m$-divergence loss (Amari & Nagaoka, 2000). Define the Pythagorean difference of three densities by
$$PD(p_{1},p_{2},p_{3}):=D(p_{1},p_{3})-D(p_{1},p_{2})-D(p_{2},p_{3}).$$
When $E[PD(p_{1},p_{2},p_{3})]=0$, the decomposition $E[D(p_{1},p_{3})]=E[D(p_{1},p_{2})]+E[D(p_{2},p_{3})]$ holds, an extended Pythagorean theorem. For the MLE $\hat{\theta}_{m}$ under an exponential family $p(x;\theta)$, the log-likelihood ratio equals the $e$-divergence,
$$\log\frac{p(x;\hat{\theta}_{m})}{p(x;\theta)}=D(p(y;\hat{\theta}_{m}), p(y;\theta)) \qquad (1.2)$$
(Kullback, 1959). For the Stein estimator $\hat{\theta}_{S}$ (Stein, 1981),
$$E\Bigl[\log\frac{p(x;\hat{\theta}_{S})}{p(x;\theta)}-D(p(y;\hat{\theta}_{S}), p(y;\theta)) \,\Big|\, p(x;\theta)\Bigr]=0. \qquad (1.3)$$
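On a finite support the Kullback-Leibler divergence and the Pythagorean difference can be evaluated directly. A minimal sketch, with hypothetical densities:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p, q) = E[log(p/q) | p] on a finite support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pd(p1, p2, p3):
    """Pythagorean difference PD(p1, p2, p3) = D(p1, p3) - D(p1, p2) - D(p2, p3)."""
    return kl(p1, p3) - kl(p1, p2) - kl(p2, p3)

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]
r = [0.3, 0.3, 0.4]
print(kl(p, q), kl(q, p), pd(p, q, r))
```

Note the asymmetry: $D(p,q)\neq D(q,p)$ in general, which is why the $e$- and $m$-divergence losses are genuinely different.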
The MLE and the Stein estimator are thus characterized by (1.2) and (1.3), both stated in terms of the $e$-divergence.

\S 2. The $e$-divergence loss

Consider the Bayesian prediction problem under the $e$-divergence loss,
$$\min_{q(y|x)} E[D(q(y\,|\,x), p_{\xi}(y;\theta)) \,|\, \pi_{\xi}(\theta\,|\,x)\lambda(\xi\,|\,x)], \qquad (2.1)$$
where $\pi_{\xi}(\theta\,|\,x)$ and $\lambda(\xi\,|\,x)$ in (1.1) satisfy
$$p_{\xi}(x;\theta)\pi_{\xi}(\theta)\lambda(\xi)=\pi_{\xi}(\theta\,|\,x)\lambda(\xi\,|\,x)m(x),$$
so that $\pi_{\xi}(\theta\,|\,x)\lambda(\xi\,|\,x)$ is the posterior density of $(\theta,\xi)$ given $x$. Corcuera & Giummole (1999) showed that the single-model Bayes problem $\min_{q(y|x)} E[D(q(y\,|\,x), p_{\xi}(y;\theta)) \,|\, \pi_{\xi}(\theta\,|\,x)]$ is solved by
$$q_{\xi}^{e}(y\,|\,x)\propto\exp\{E[\log p_{\xi}(y;\theta) \,|\, \pi_{\xi}(\theta\,|\,x)]\}. \qquad (2.2)$$
The Pythagorean-type relation
$$E[PD(q(y\,|\,x), q_{\xi}^{e}(y\,|\,x), p_{\xi}(y;\theta)) \,|\, \pi_{\xi}(\theta\,|\,x)]=0 \qquad (2.3)$$
holds (Yanagimoto & Ohnishi, 2009). Consequently, the Bayes problem (2.1) reduces to a problem of the form
$$\min_{q(y|x)} E[D(q(y\,|\,x), q_{\xi}^{e}(y\,|\,x)) \,|\, h(\xi)]. \qquad (2.4)$$
Here $h(\xi)$ is an arbitrary density on $\Xi$, which we call a canonical weight. With $h(\xi)=\lambda(\xi\,|\,x)$, (2.4) together with (2.3) recovers the original problem (2.1) in the form $\min_{q(y|x)} E[D(q(y\,|\,x), q_{\xi}^{e}(y\,|\,x)) \,|\, \lambda(\xi\,|\,x)]$; the posterior averaging density $\lambda(\xi\,|\,x)$ is thus a special case of the canonical weight $h(\xi)$.

Definition 2.1. (i) For $q_{\xi}^{e}(y\,|\,x)$ in (2.2), define
$$f^{e}(y\,|\,x;h) :=\exp\{E[\log q_{\xi}^{e}(y\,|\,x) \,|\, h(\xi)]-\psi_{x}(h)\}, \qquad (2.5)$$
where $\exp\{\psi_{x}(h)\}$ is the normalizing constant; $f^{e}(y\,|\,x;h)$ is the $e$-mixture of the $q_{\xi}^{e}(y\,|\,x)$ with canonical weight $h$. (ii) For a canonical weight $h$, define the mean weight
$$t_{x}(\xi;h) :=E[\log q_{\xi}^{e}(y\,|\,x) \,|\, f^{e}(y\,|\,x;h)]. \qquad (2.6)$$
The quantities $f^{e}(y\,|\,x;h)$, $\psi_{x}(h)$ and $t_{x}(\xi;h)$ are functionals of $h$. For a predictor $q(y\,|\,x)$ we call $\log q(x\,|\,x)$ the Bayesian log-likelihood; the Bayesian log-likelihood ratio turns out to be dual to the $e$-divergence.

Theorem 2.1. The $e$-mixture $f^{e}(y\,|\,x;h)$ in (2.5) satisfies the Pythagorean-type relation
$$E[PD(q(y\,|\,x), f^{e}(y\,|\,x;h), q_{\xi}^{e}(y\,|\,x)) \,|\, h(\xi)]=0,$$
so that it solves (2.4), and
$$E\Bigl[\log\frac{f^{e}(x\,|\,x;h)}{q_{\xi}^{e}(x\,|\,x)}-D(f^{e}(y\,|\,x;h), q_{\xi}^{e}(y\,|\,x)) \,\Big|\, h(\xi)\Bigr]=0. \qquad (2.7)$$
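Definition 2.1 can be illustrated numerically: on a finite support, the $e$-mixture is the normalized geometric mixture of the components $q_{\xi}^{e}(y\,|\,x)$, and $\psi_{x}(h)$ is its log normalizing constant. The component densities and the weight below are hypothetical.

```python
import math

# Hypothetical component predictive densities q^e_xi(y|x) on a 3-point support.
q_e = [[0.2, 0.5, 0.3],
       [0.6, 0.3, 0.1]]
h = [0.7, 0.3]          # canonical weight h(xi)

# Unnormalized e-mixture: exp(E[log q^e_xi(y|x) | h(xi)]).
g = [math.exp(sum(w * math.log(comp[y]) for w, comp in zip(h, q_e)))
     for y in range(3)]
psi = math.log(sum(g))          # psi_x(h): log normalizing constant in (2.5)
f_e = [gy / math.exp(psi) for gy in g]
print(f_e, psi)
```

By the weighted AM-GM inequality the unnormalized geometric mixture sums to at most one, so $\psi_{x}(h)\le 0$ here, with equality only when the components coincide.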
Equality (2.7) connects the Bayesian log-likelihood ratio with the minimized $e$-divergence in (2.4) through the functional $\psi_{x}(h)$. For a functional $F$ of the canonical weight $h$, the Gateaux derivative at $h_{1}$ in the direction $h_{2}-h_{1}$ is defined by
$$\delta_{g}F(h_{1};h_{2}-h_{1}) :=\lim_{\beta\rightarrow 0}\frac{F(h_{1}+\beta(h_{2}-h_{1}))-F(h_{1})}{\beta}.$$
Gateaux calculus with respect to the canonical weight $h$ is developed in Ohnishi & Yanagimoto (2013). The mean weight (2.6) appears as the Gateaux derivative of $\psi_{x}(h)$:
$$\delta_{g}\psi_{x}(h_{1};h_{2}-h_{1})=E[t_{x}(\xi;h_{1}) \,|\, h_{2}(\xi)-h_{1}(\xi)].$$
Write $H[p(y)]:=E[-\log p(y) \,|\, p(y)]$ for the Shannon entropy of $p(y)$.

Theorem 2.2. For $s(\xi)=t_{x}(\xi;h)$, the maximum Shannon entropy problem
$$\max H[q(y\,|\,x)] \quad \text{subject to} \quad E[\log q_{\xi}^{e}(y\,|\,x) \,|\, q(y\,|\,x)]=s(\xi)$$
is solved by the solution $f^{e}(y\,|\,x;h)$ of (2.4).

As an illustration, for two densities $p_{1}(y)$ and $p_{2}(y)$, consider the one-parameter exponential family connecting them,
$$p(y; \eta)=\exp\Bigl\{\eta\log\frac{p_{1}(y)}{p_{2}(y)}-\psi(\eta)\Bigr\}p_{2}(y),$$
with mean parameter $\mu=\psi'(\eta)$.
This $p(y;\eta)$ admits two dual characterizations: 1) it solves the maximum entropy problem
$$\max E\Bigl[-\frac{q(y)}{p_{2}(y)}\log\frac{q(y)}{p_{2}(y)} \,\Big|\, p_{2}(y)\Bigr] \quad \text{subject to} \quad E\Bigl[\log\frac{p_{1}(y)}{p_{2}(y)} \,\Big|\, q(y)\Bigr]=\mu;$$
2) it solves the weighted $e$-divergence problem
$$\min_{q(y)}\{\eta D(q(y), p_{1}(y))+(1-\eta)D(q(y), p_{2}(y))\}.$$

The Bayesian log-likelihood yields a further characterization.

Theorem 2.3. Let the canonical weight $h_{x}^{\dagger}(\xi)$ satisfy
$$\delta_{g}\log f^{e}(x\,|\,x;h_{x}^{\dagger};h-h_{x}^{\dagger})=0 \quad \text{for any } h. \qquad (2.8)$$
Then $f^{e}(y\,|\,x;h_{x}^{\dagger})$ satisfies
$$\log\frac{f^{e}(x\,|\,x;h_{x}^{\dagger})}{q_{\xi}^{e}(x\,|\,x)}=D(f^{e}(y\,|\,x;h_{x}^{\dagger}), q_{\xi}^{e}(y\,|\,x)) \quad \text{for any } \xi.$$

Set $h_{x}^{*}(\xi):=\lambda(\xi\,|\,x)$, the posterior averaging density in (1.1). By Theorem 2.1, $f^{e}(y\,|\,x;h_{x}^{*})$ solves the Bayes problem (2.1), while Theorem 2.3 characterizes $f^{e}(y\,|\,x;h_{x}^{\dagger})$ within the family $\mathcal{Q}^{e}$ of $e$-mixtures.

Theorem 2.4.
$$E\Bigl[\log\frac{f^{e}(x\,|\,x;h)}{q_{\xi}^{e}(x\,|\,x)}-D(f^{e}(y\,|\,x;h), q_{\xi}^{e}(y\,|\,x)) \,\Big|\, \lambda(\xi\,|\,x)m(x)\Bigr]=0. \qquad (2.9)$$

Condition (2.8) states that $h_{x}^{\dagger}(\xi)$ is a stationary point of the Bayesian log-likelihood $\log f^{e}(x\,|\,x;h)$ over $\mathcal{Q}^{e}$; the predictors $f^{e}(y\,|\,x;h_{x}^{*})$ and $f^{e}(y\,|\,x;h_{x}^{\dagger})$ thus form a dual pair, and Theorem 2.3 is an analogue of the MLE characterization (1.2).
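The second characterization is easy to verify numerically: on a finite support, the geometric path $p(y;\eta)\propto p_{1}(y)^{\eta}p_{2}(y)^{1-\eta}$ attains a smaller weighted $e$-divergence than other candidate densities. A sketch with hypothetical densities:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p, q) on a finite support."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

p1 = [0.1, 0.6, 0.3]
p2 = [0.5, 0.2, 0.3]
eta = 0.4

# Geometric path p(y; eta), proportional to p1^eta * p2^(1-eta), normalized.
g = [a**eta * b**(1 - eta) for a, b in zip(p1, p2)]
p_eta = [v / sum(g) for v in g]

def objective(q):
    """Weighted e-divergence: eta*D(q, p1) + (1-eta)*D(q, p2)."""
    return eta * kl(q, p1) + (1 - eta) * kl(q, p2)

print(p_eta, objective(p_eta))
```

The objective equals $D(q, p(\cdot;\eta))$ plus a constant in $q$, so the geometric path is the unique minimizer.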
Yanagimoto & Ohnishi (2011) study the saddlepoint condition related to (2.9). The functional $-\psi_{x}(h)$ in (2.4) is also stationary at a certain canonical weight $h$.

Theorem 2.5. Let the canonical weight $h_{x}^{c}(\xi)$ satisfy $\delta_{g}\psi_{x}(h_{x}^{c};h-h_{x}^{c})=0$ for any $h$. Then $f^{e}(y\,|\,x;h_{x}^{c})$ satisfies
$$D(f^{e}(y\,|\,x;h_{x}^{c}), q_{\xi}^{e}(y\,|\,x))=-\psi_{x}(h_{x}^{c}),$$
that is, $f^{e}(y\,|\,x;h_{x}^{c})$ is equidistant from the components $q_{\xi}^{e}(y\,|\,x)$, the common distance being $-\psi_{x}(h_{x}^{c})$.

\S 3. The $m$-divergence loss

Dually to the $e$-divergence, consider the Bayesian prediction problem under the $m$-divergence loss,
$$\min_{q(y|x)} E[D(p_{\xi}(y;\theta), q(y\,|\,x)) \,|\, \pi_{\xi}(\theta\,|\,x)\lambda(\xi\,|\,x)]. \qquad (3.1)$$
Here the Shannon entropy and the $m$-divergence play the roles that the Bayesian log-likelihood and the $e$-divergence played in \S 2. Aitchison (1975) showed that the single-model Bayes problem $\min_{q(y|x)} E[D(p_{\xi}(y;\theta), q(y\,|\,x)) \,|\, \pi_{\xi}(\theta\,|\,x)]$ is solved by the posterior mean
$$q_{\xi}^{m}(y\,|\,x) :=E[p_{\xi}(y;\theta) \,|\, \pi_{\xi}(\theta\,|\,x)]. \qquad (3.2)$$
The Pythagorean-type relation
$$E[PD(p_{\xi}(y;\theta), q_{\xi}^{m}(y\,|\,x), q(y\,|\,x)) \,|\, \pi_{\xi}(\theta\,|\,x)]=0 \qquad (3.3)$$
holds (Yanagimoto & Ohnishi, 2009). As in \S 2, consider the weighted problem
$$\min_{q(y|x)} E[D(q_{\xi}^{m}(y\,|\,x), q(y\,|\,x)) \,|\, h(\xi)], \qquad (3.4)$$
where $h(\xi)$ is a canonical weight. By (3.3), the choice $h(\xi)=\lambda(\xi\,|\,x)$ reduces the Bayes problem (3.1) to $\min_{q(y|x)} E[D(q_{\xi}^{m}(y\,|\,x), q(y\,|\,x)) \,|\, \lambda(\xi\,|\,x)]$; the posterior averaging density $\lambda(\xi\,|\,x)$ is again a special case of $h(\xi)$.

Definition 3.1. (i) For $q_{\xi}^{m}(y\,|\,x)$ in (3.2), define
$$f^{m}(y\,|\,x;h):=E[q_{\xi}^{m}(y\,|\,x) \,|\, h(\xi)]; \qquad (3.5)$$
$f^{m}(y\,|\,x;h)$ is the $m$-mixture of the $q_{\xi}^{m}(y\,|\,x)$ with canonical weight $h$. (ii) For a canonical weight $h$, define the entropy weight
$$t_{x}(\xi;h)=-\log f^{m}(x\,|\,x;h)-D(q_{\xi}^{m}(y\,|\,x), f^{m}(y\,|\,x;h)). \qquad (3.6)$$
In (3.4), the Shannon entropy and the $m$-divergence appear as dual quantities.

Theorem 3.1. (i) The $m$-mixture $f^{m}(y\,|\,x;h)$ in (3.5) satisfies the Pythagorean-type relation
$$E[PD(q_{\xi}^{m}(y\,|\,x), f^{m}(y\,|\,x;h), q(y\,|\,x)) \,|\, h(\xi)]=0. \qquad (3.7)$$
(ii) Hence $f^{m}(y\,|\,x;h)$ solves (3.4), and
$$E[H[f^{m}(y\,|\,x;h)]-H[q_{\xi}^{m}(y\,|\,x)]-D(q_{\xi}^{m}(y\,|\,x), f^{m}(y\,|\,x;h)) \,|\, h(\xi)]=0. \qquad (3.8)$$

Define the functional $\psi_{x}(h)$ by
$$-\psi_{x}(h) :=H[f^{m}(y\,|\,x;h)]-E[H[q_{\xi}^{m}(y\,|\,x)] \,|\, h(\xi)],$$
so that $-\psi_{x}(h)$ plays in (3.8) and (3.4) the role that $\psi_{x}(h)$ played in \S 2. Its Gateaux derivative yields the entropy weight:
$$\delta_{g}\psi_{x}(h_{1};h_{2}-h_{1})=E[t_{x}(\xi;h_{1}) \,|\, h_{2}(\xi)-h_{1}(\xi)].$$

Theorem 3.2. For $s(\xi)=t_{x}(\xi;h)$, the maximum Bayesian log-likelihood problem
$$\max_{q(y|x)} \log q(x\,|\,x) \quad \text{subject to} \quad -\log q(x\,|\,x)-D(q_{\xi}^{m}(y\,|\,x), q(y\,|\,x))=s(\xi)$$
is solved by the solution $f^{m}(y\,|\,x;h)$ of (3.4); the Shannon entropy enters through (3.8).

Theorem 3.3. Let the canonical weight $h_{x}^{\dagger}(\xi)$ satisfy
$$\delta_{g}H[f^{m}(y\,|\,x;h_{x}^{\dagger};h-h_{x}^{\dagger})]=0 \quad \text{for any } h. \qquad (3.9)$$
Then $f^{m}(y\,|\,x;h_{x}^{\dagger})$ satisfies
$$H[f^{m}(y\,|\,x;h_{x}^{\dagger})]-H[q_{\xi}^{m}(y\,|\,x)]=D(q_{\xi}^{m}(y\,|\,x), f^{m}(y\,|\,x;h_{x}^{\dagger})) \quad \text{for any } \xi.$$
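Because $f^{m}(y\,|\,x;h)$ in (3.5) is an ordinary arithmetic mixture, the Pythagorean relation (3.7) can be verified numerically in a few lines on a finite support. The component densities, weight, and test density below are all hypothetical.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p, q) on a finite support."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

def pd(p1, p2, p3):
    """Pythagorean difference PD(p1, p2, p3)."""
    return kl(p1, p3) - kl(p1, p2) - kl(p2, p3)

# Hypothetical component predictive densities q^m_xi(y|x) and canonical weight h.
q_m = [[0.2, 0.5, 0.3],
       [0.6, 0.3, 0.1]]
h = [0.7, 0.3]

# m-mixture (3.5): the arithmetic mixture of the components.
f_m = [sum(w * comp[y] for w, comp in zip(h, q_m)) for y in range(3)]

# Pythagorean relation (3.7): the h-weighted mean of PD(q^m_xi, f_m, q)
# vanishes for an arbitrary density q.
q = [0.25, 0.25, 0.5]
lhs = sum(w * pd(comp, f_m, q) for w, comp in zip(h, q_m))
print(f_m, lhs)
```

The check works for any density $q$, which is exactly the content of (3.7).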
With $h_{x}^{*}(\xi)=\lambda(\xi\,|\,x)$ as in \S 2, the counterpart of Theorem 2.4 holds in the family $\mathcal{Q}^{m}$ of $m$-mixtures.

Theorem 3.4.
$$E[H[f^{m}(y\,|\,x;h)]-H[q_{\xi}^{m}(y\,|\,x)]-D(q_{\xi}^{m}(y\,|\,x), f^{m}(y\,|\,x;h)) \,|\, \lambda(\xi\,|\,x)m(x)]=0.$$

Condition (3.9) states that $h_{x}^{\dagger}(\xi)$ is a stationary point of the Shannon entropy $H[f^{m}(y\,|\,x;h)]$ over $\mathcal{Q}^{m}$; the predictors $f^{m}(y\,|\,x;h_{x}^{*})$ and $f^{m}(y\,|\,x;h_{x}^{\dagger})$ thus form a dual pair. The functional $-\psi_{x}(h)$ in (3.4) is likewise stationary at a certain canonical weight $h$.

Theorem 3.5. Let the canonical weight $h_{x}^{c}(\xi)$ satisfy $\delta_{g}\psi_{x}(h_{x}^{c};h-h_{x}^{c})=0$ for any $h$. Then $f^{m}(y\,|\,x;h_{x}^{c})$ satisfies
$$D(q_{\xi}^{m}(y\,|\,x), f^{m}(y\,|\,x;h_{x}^{c}))=-\psi_{x}(h_{x}^{c}) \quad \text{for any } \xi.$$

\S 4. Implications

\S 4.1. Mean weight and entropy weight

By Definition 2.1 (ii), the mean weight (2.6) can be rewritten as
$$t_{x}(\xi;h)=-H[f^{e}(y\,|\,x;h)]-D(f^{e}(y\,|\,x;h), q_{\xi}^{e}(y\,|\,x)).$$
For the canonical weight $h_{x}^{\dagger}$ in (2.8), the mean weight takes a simple form; Theorem 2.3 yields the following.

Corollary 4.1. For the canonical weight $h_{x}^{\dagger}$, the mean weight satisfies
$$t_{x}(\xi;h_{x}^{\dagger})=\log q_{\xi}^{e}(x\,|\,x)-A_{x}[f^{e}(y\,|\,x;h_{x}^{\dagger})],$$
where $A_{x}[p(y)] :=\log p(x)+H[p(y)]$.

For a model $p_{\xi}(x;\theta)$ with MLE $\hat{\theta}_{m\xi}$,
$$\log q_{\xi}^{e}(x\,|\,x)=\log p_{\xi}(x;\hat{\theta}_{m\xi})-D(p_{\xi}(y;\hat{\theta}_{m\xi}), q_{\xi}^{e}(y\,|\,x)),$$
and therefore
$$t_{x}(\xi;h_{x}^{\dagger})=\log p_{\xi}(x;\hat{\theta}_{m\xi})-\{D(p_{\xi}(y;\hat{\theta}_{m\xi}), q_{\xi}^{e}(y\,|\,x))+A_{x}[f^{e}(y\,|\,x;h_{x}^{\dagger})]\}.$$
As in AIC (Akaike, 1973), the mean weight is a maximized log-likelihood minus a penalty term.

By Definition 3.1 (ii), the entropy weight (3.6) admits an analogous expression; Theorem 3.3 yields the following.

Corollary 4.2. For the canonical weight $h_{x}^{\dagger}$ in (3.9), the entropy weight satisfies
$$t_{x}(\xi;h_{x}^{\dagger})=H[q_{\xi}^{m}(y\,|\,x)]-A_{x}[f^{m}(y\,|\,x;h_{x}^{\dagger})].$$

For $p_{\xi}(y;\theta)$, let $\theta_{m\xi}$ be the parameter value for which
$$H[q_{\xi}^{m}(y\,|\,x)]=H[p_{\xi}(y;\theta_{m\xi})]-D(p_{\xi}(y;\theta_{m\xi}), q_{\xi}^{m}(y\,|\,x))$$
holds.
Then
$$t_{x}(\xi;h_{x}^{\dagger})=H[p_{\xi}(y;\theta_{m\xi})]-\{D(p_{\xi}(y;\theta_{m\xi}), q_{\xi}^{m}(y\,|\,x))+A_{x}[f^{m}(y\,|\,x;h_{x}^{\dagger})]\},$$
a Shannon entropy minus a penalty term, in parallel with Corollary 4.1.

\S 4.2. The $\alpha$-divergence

The $\alpha$-divergence is defined by
$$D_{\alpha}(p(y;\theta), q(y\,|\,x)):=E\Bigl[u_{\alpha}\Bigl(\frac{q(y\,|\,x)}{p(y;\theta)}\Bigr) \,\Big|\, p(y;\theta)\Bigr], \qquad u_{\alpha}(r):= \frac{4}{1-\alpha^{2}}\bigl(1-r^{\frac{1+\alpha}{2}}\bigr),$$
for $-1<\alpha<1$. The $\alpha$-divergence interpolates between the $e$-divergence and the $m$-divergence: with $u_{1}(r):=r\log r$ and $u_{-1}(r):=-\log r$, the $e$-divergence corresponds to $\alpha=+1$ and the $m$-divergence to $\alpha=-1$. The Bayes prediction problem under the $\alpha$-divergence loss is
$$\min_{q(y|x)} E[D_{\alpha}(p_{\xi}(y;\theta), q(y\,|\,x)) \,|\, \pi_{\xi}(\theta\,|\,x)\lambda(\xi\,|\,x)]. \qquad (4.1)$$
As in \S 2 and \S 3 (Yanagimoto & Ohnishi, 2009), it reduces to the weighted problem
$$\min_{q(y|x)} E[D_{\alpha}(q_{\xi}^{\alpha}(y\,|\,x), q(y\,|\,x)) \,|\, h(\xi)], \qquad (4.2)$$
where $q_{\xi}^{\alpha}(y\,|\,x)$ solves the single-model Bayes problem $\min_{q(y|x)} E[D_{\alpha}(p_{\xi}(y;\theta), q(y\,|\,x)) \,|\, \pi_{\xi}(\theta\,|\,x)]$:
$$q_{\xi}^{\alpha}(y\,|\,x)\propto\bigl(E[\{p_{\xi}(y;\theta)\}^{\frac{1-\alpha}{2}} \,|\, \pi_{\xi}(\theta\,|\,x)]\bigr)^{\frac{2}{1-\alpha}}$$
(Corcuera & Giummole, 1999). In (4.2), $h(\xi)$ is again a canonical weight.

Definition 4.1. (i) The $\alpha$-mixture of the $q_{\xi}^{\alpha}(y\,|\,x)$ with canonical weight $h$ is
$$f^{\alpha}(y\,|\,x;h):= \frac{1}{c_{x}(h)}\bigl(E[\{q_{\xi}^{\alpha}(y\,|\,x)\}^{\frac{1-\alpha}{2}} \,|\, h(\xi)]\bigr)^{\frac{2}{1-\alpha}}, \qquad (4.3)$$
where $c_{x}(h)$ is the normalizing constant. (ii) For a canonical weight $h$, define the divergence weight
$$t_{x}(\xi;h)=u_{\alpha}(f^{\alpha}(x\,|\,x;h))-D_{\alpha}(q_{\xi}^{\alpha}(y\,|\,x), f^{\alpha}(y\,|\,x;h)). \qquad (4.4)$$
The results of \S 2 and \S 3 extend as follows.

Theorem 4.1. The $\alpha$-mixture $f^{\alpha}(y\,|\,x;h)$ in (4.3) solves (4.2), and
$$E\Bigl[u_{-\alpha}\Bigl(\frac{q_{\xi}^{\alpha}(x\,|\,x)}{f^{\alpha}(x\,|\,x;h)}\Bigr)-D_{\alpha}(q_{\xi}^{\alpha}(y\,|\,x), f^{\alpha}(y\,|\,x;h)) \,\Big|\, h(\xi)\Bigr]=0. \qquad (4.5)$$

Equality (4.5) relates the minimized value of (4.2) to $u_{-\alpha}(c_{x}(h))$, which motivates defining
$$\psi_{x}(h):=-u_{-\alpha}(c_{x}(h)). \qquad (4.6)$$

Theorem 4.2. For $s(\xi)=t_{x}(\xi;h)$, the problem
$$\min_{q(y|x)} u_{\alpha}(q(x\,|\,x)) \quad \text{subject to} \quad -D_{\alpha}(q_{\xi}^{\alpha}(y\,|\,x), q(y\,|\,x))+u_{\alpha}(q(x\,|\,x))=s(\xi)$$
is solved by the solution $f^{\alpha}(y\,|\,x;h)$ of (4.2). Here $u_{\alpha}(r)$, being decreasing in $r$, generalizes the negative Bayesian log-likelihood $-\log q(x\,|\,x)$.
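The definition of $D_{\alpha}$ via $u_{\alpha}$ is straightforward to implement on a finite support, and the limits $\alpha\to\pm 1$ can be checked against the two Kullback-Leibler divergences. A sketch with hypothetical densities:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p, q) on a finite support."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

def d_alpha(p, q, alpha):
    """alpha-divergence D_alpha(p, q) = E[u_alpha(q/p) | p] for -1 < alpha < 1,
    with u_alpha(r) = 4/(1 - alpha^2) * (1 - r^((1 + alpha)/2))."""
    c = 4.0 / (1.0 - alpha**2)
    return sum(pi * c * (1.0 - (qi / pi)**((1 + alpha) / 2))
               for pi, qi in zip(p, q) if pi > 0)

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]
# Near the endpoints the alpha-divergence approaches the m- and e-divergences:
print(d_alpha(p, q, -0.999), kl(p, q))   # alpha -> -1: m-divergence D(p, q)
print(d_alpha(p, q, 0.999), kl(q, p))    # alpha -> +1: e-divergence D(q, p)
```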
The family $\mathcal{Q}^{\alpha}$ of $\alpha$-mixtures reduces to $\mathcal{Q}^{m}$ at $\alpha=-1$ and to $\mathcal{Q}^{e}$ at $\alpha=+1$. Since $u_{-\alpha}(1/r)$ is increasing in $r$, the first term of (4.5) leads to the canonical weight $h_{x}^{\dagger}(\xi)$ defined by
$$\delta_{g}\log f^{\alpha}(x\,|\,x;h_{x}^{\dagger};h-h_{x}^{\dagger})=0 \quad \text{for any } h, \qquad (4.7)$$
which plays the roles of (2.8) and (3.9) at $\alpha=+1$ and $\alpha=-1$, respectively.

Theorem 4.3. Under (4.7), $f^{\alpha}(y\,|\,x;h_{x}^{\dagger})$ satisfies
$$u_{-\alpha}\Bigl(\frac{q_{\xi}^{\alpha}(x\,|\,x)}{f^{\alpha}(x\,|\,x;h_{x}^{\dagger})}\Bigr)=D_{\alpha}(q_{\xi}^{\alpha}(y\,|\,x), f^{\alpha}(y\,|\,x;h_{x}^{\dagger})) \quad \text{for any } \xi. \qquad (4.8)$$

With $h_{x}^{*}(\xi)=\lambda(\xi\,|\,x)$, the counterpart of Theorems 2.4 and 3.4 holds in $\mathcal{Q}^{\alpha}$.

Theorem 4.4.
$$E\Bigl[u_{-\alpha}\Bigl(\frac{q_{\xi}^{\alpha}(x\,|\,x)}{f^{\alpha}(x\,|\,x;h)}\Bigr)-D_{\alpha}(q_{\xi}^{\alpha}(y\,|\,x), f^{\alpha}(y\,|\,x;h)) \,\Big|\, \lambda(\xi\,|\,x)m(x)\Bigr]=0.$$

The weight $h_{x}^{\dagger}$ in (4.7) generalizes the stationary point of the Bayesian log-likelihood $\log f^{\alpha}(x\,|\,x;h)$, and $f^{\alpha}(y\,|\,x;h_{x}^{*})$ and $f^{\alpha}(y\,|\,x;h_{x}^{\dagger})$ again form a dual pair. Finally, the functional $\psi_{x}(h)$ in (4.6) is stationary at a certain canonical weight $h$, the counterpart of the prior
averaging density results above.

Theorem 4.5. Let the canonical weight $h_{x}^{c}(\xi)$ satisfy $\delta_{g}\psi_{x}(h_{x}^{c};h-h_{x}^{c})=0$ for any $h$. Then $f^{\alpha}(y\,|\,x;h_{x}^{c})$ satisfies
$$D_{\alpha}(q_{\xi}^{\alpha}(y\,|\,x), f^{\alpha}(y\,|\,x;h_{x}^{c}))=-\psi_{x}(h_{x}^{c}) \quad \text{for any } \xi.$$

REFERENCES

Akaike, H. (1973). Information theory as an extension of the maximum likelihood principle. In B. N. Petrov and F. Csaki (editors), Second International Symposium on Information Theory. Akademiai Kiado, Budapest.

Aitchison, J. (1975). Goodness of prediction fit. Biometrika, 62.

Amari, S. and Nagaoka, H. (2000). Methods of Information Geometry. American Mathematical Society, Providence, Rhode Island.

Corcuera, J. M. and Giummole, F. (1999). A generalized Bayes rule for prediction. Scandinavian Journal of Statistics, 26.

Hoeting, J. A., Madigan, D., Raftery, A. E. and Volinsky, C. T. (1999). Bayesian model averaging: a tutorial. Statistical Science, 14.

Kullback, S. (1959). Information Theory and Statistics. Wiley, New York.

Ohnishi, T. and Yanagimoto, T. (2013). Twofold structure of duality in Bayesian model averaging. Journal of the Japan Statistical Society, to appear.

Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribution. Annals of Statistics, 9.

Yanagimoto, T. and Ohnishi, T. (2009). Bayesian prediction of a density function in terms of $e$-mixture. Journal of Statistical Planning and Inference, 139.

Yanagimoto, T. and Ohnishi, T. (2011). Saddlepoint condition on a predictor to reconfirm the need for the assumption of a prior distribution. Journal of Statistical Planning and Inference, 141.