
Support Vector Machines

Takio Kurita
Neuroscience Research Institute, National Institute of Advanced Industrial Science and Technology (AIST)

1 Introduction

The support vector machine (SVM) is a learning method for two-class classification problems. Its origin is the optimal separating hyperplane studied by Vapnik and his colleagues in the 1960s; combined with the soft-margin and kernel extensions described below, it has become one of the standard tools of pattern recognition.

2 Support Vector Machines

2.1 Problem setting

Let x = (x_1, ..., x_M)^T denote an M-dimensional feature vector. Pattern classification in general assigns x to one of K classes C_1, ..., C_K; the SVM treats the two-class case K = 2.

2.2 Linear discriminant function and the optimal separating hyperplane

Figure 1: a two-class linear discriminant viewed as a network with input x, weight vector w, and output y.

The two-class linear discriminant function is

  y = sign(w^T x − h),  (1)

where w is the weight vector, h is the threshold, and sign(u) is 1 for u > 0 and −1 otherwise. Suppose N training samples x_1, ..., x_N from the classes C_1 and C_2 are given, with teacher labels t_1, ..., t_N in {+1, −1}. If the training set is linearly separable, w and h can be rescaled so that

  t_i (w^T x_i − h) ≥ 1,  i = 1, ..., N.  (2)

Figure 2: the optimal separating hyperplane for two classes (labels 1 and −1) with its margin hyperplanes H1 and H2.

The hyperplanes H1: w^T x − h = 1 and H2: w^T x − h = −1 pass through the training samples closest to the decision boundary, and the distance between them (the margin) is 2/‖w‖. The maximal-margin classifier is therefore obtained by minimizing ‖w‖ over w and h subject to

  t_i (w^T x_i − h) ≥ 1,  (i = 1, ..., N).  (3)

Equivalently, we minimize

  L(w) = (1/2) ‖w‖²  (4)

subject to (3). Introducing Lagrange multipliers α_i (≥ 0), i = 1, ..., N, gives

  L(w, h, α) = (1/2) ‖w‖² − Σ_{i=1}^N α_i {t_i (w^T x_i − h) − 1}.  (5)

Setting the derivatives with respect to w and h to zero yields

  w = Σ_{i=1}^N α_i t_i x_i,  (6)
  Σ_{i=1}^N α_i t_i = 0.  (7)

Substituting (6) and (7) back into (5) reduces the problem to its dual: over multipliers satisfying

  Σ_{i=1}^N α_i t_i = 0,  (8)
  α_i ≥ 0,  i = 1, ..., N,  (9)

maximize

  L_D(α) = Σ_{i=1}^N α_i − (1/2) Σ_{i,j} α_i α_j t_i t_j x_i^T x_j.  (10)

At the optimum α*, complementarity holds: for each i either α_i* = 0 or the constraint on sample i is active. Samples with α_i* > 0 lie exactly on w^T x − h = 1 or w^T x − h = −1 and are called support vectors. Samples with α_i* = 0 do not affect the solution, so with S = {i | α_i* > 0} the optimal weight vector is

  w* = Σ_{i∈S} α_i* t_i x_i.  (11)

The optimal threshold h* follows from any support vector x_s (s ∈ S), which satisfies t_s (w*^T x_s − h*) = 1:

  h* = w*^T x_s − t_s.  (12)

The resulting classifier is

  y = sign(w*^T x − h*) = sign(Σ_{i∈S} α_i* t_i x_i^T x − h*).  (13)
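As an aside not in the original text, the dual (8)-(10) is a small quadratic program that can be handed to a general-purpose solver. The following is a minimal Python sketch using scipy's SLSQP method on a made-up four-point data set; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data: rows of X are the x_i, labels t_i in {+1, -1}.
X = np.array([[2.0, 2.0], [2.5, 1.5], [0.0, 0.0], [-0.5, 1.0]])
t = np.array([1.0, 1.0, -1.0, -1.0])
N = len(t)

# Q_ij = t_i t_j x_i^T x_j, the quadratic part of the dual objective (10).
Q = (t[:, None] * X) @ (t[:, None] * X).T

obj = lambda a: -np.sum(a) + 0.5 * a @ Q @ a     # minimize -L_D(alpha)
cons = {'type': 'eq', 'fun': lambda a: a @ t}    # constraint (8): sum_i alpha_i t_i = 0
bnds = [(0.0, None)] * N                         # constraint (9): alpha_i >= 0
res = minimize(obj, np.zeros(N), method='SLSQP', constraints=cons, bounds=bnds)

alpha = res.x
S = alpha > 1e-6                                 # support vectors: alpha_i* > 0
w = ((alpha * t)[:, None] * X).sum(axis=0)       # eq. (11)
s = int(np.argmax(S))                            # index of one support vector
h = w @ X[s] - t[s]                              # eq. (12)
print("w* =", w, " h* =", h, " support vectors:", np.where(S)[0])
```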

Training samples with α_i* = 0 thus play no role; only the support vectors (α_i* > 0) determine the classifier.

2.3 Soft margin

Figure 3: a soft-margin separating hyperplane for two classes (labels 1 and −1) with margin hyperplanes H1 and H2.

When the training set is not linearly separable, no w satisfies (3). The constraints are therefore relaxed by non-negative slack variables ξ_i (≥ 0), allowing sample i to deviate from its margin hyperplane H1 or H2 by the distance

  ξ_i / ‖w‖,  (14)

under the relaxed constraints

  ξ_i ≥ 0,  t_i (w^T x_i − h) ≥ 1 − ξ_i,  (i = 1, ..., N).  (15)

The objective now balances a large margin against the total constraint violation:

  L(w, ξ) = (1/2) ‖w‖² + γ Σ_{i=1}^N ξ_i.  (16)

The parameter γ controls the trade-off between the two terms. Introducing Lagrange multipliers α_i ≥ 0 and ν_i ≥ 0 gives

  L(w, h, ξ, α, ν) = (1/2) ‖w‖² + γ Σ_i ξ_i − Σ_i α_i {t_i (w^T x_i − h) − (1 − ξ_i)} − Σ_i ν_i ξ_i.  (17)

Setting the derivatives with respect to w, h, and ξ_i to zero yields

  w = Σ_{i=1}^N α_i t_i x_i,  (18)
  Σ_{i=1}^N α_i t_i = 0,  (19)
  α_i = γ − ν_i.  (20)

Substituting these back reduces the problem to its dual: over multipliers satisfying

  Σ_{i=1}^N α_i t_i = 0,  (21)
  0 ≤ α_i ≤ γ,  i = 1, ..., N,  (22)

maximize

  L_D(α) = Σ_{i=1}^N α_i − (1/2) Σ_{i,j} α_i α_j t_i t_j x_i^T x_j.  (23)

The optimal α_i* again describe where each sample lies relative to H1 and H2: samples with α_i* = 0 lie outside the margin, on the correct side of H1 or H2; samples with 0 < α_i* < γ lie exactly on H1 or H2; and samples with α_i* = γ have ξ_i > 0 and lie inside the margin or on the wrong side of the decision boundary. As before, only the support vectors (α_i* > 0) enter the solution.
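Continuing the numerical sketch from Section 2.2 above (again only an illustration): the soft-margin dual differs from the hard-margin one solely in the box constraint (22), so only the bounds passed to the solver change. The value of γ here is an arbitrary choice.

```python
gamma = 1.0                   # trade-off weight gamma from (16), chosen arbitrarily
bnds = [(0.0, gamma)] * N     # constraint (22): 0 <= alpha_i <= gamma
res = minimize(obj, np.zeros(N), method='SLSQP', constraints=cons, bounds=bnds)
```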

2.4 Kernel trick (non-linear extension)

A non-linear classifier is obtained by mapping the input x to a feature vector ϕ(x) and constructing the linear SVM in that feature space. Since ϕ enters the dual objective L_D only through inner products ϕ(x_1)^T ϕ(x_2), it suffices to have a kernel function

  ϕ(x_1)^T ϕ(x_2) = K(x_1, x_2)  (24)

that can be evaluated directly from x_1 and x_2 without computing ϕ(x_1) and ϕ(x_2) themselves; the possibly very high-dimensional feature space is then never constructed explicitly. Commonly used kernels K include the polynomial kernel

  K(x_1, x_2) = (1 + x_1^T x_2)^p,  (25)

the Gaussian kernel

  K(x_1, x_2) = exp(−‖x_1 − x_2‖² / (2σ²)),  (26)

and the sigmoid kernel

  K(x_1, x_2) = tanh(a x_1^T x_2 − b).  (27)

Replacing the inner products in (10) and (23) by the kernel, the dual objective becomes

  L_D(α) = Σ_i α_i − (1/2) Σ_{i,j} α_i α_j t_i t_j ϕ(x_i)^T ϕ(x_j)
         = Σ_i α_i − (1/2) Σ_{i,j} α_i α_j t_i t_j K(x_i, x_j),  (28)

and the decision function (13) becomes, in feature space and in kernel form,

  y = sign(w*^T ϕ(x) − h*) = sign(Σ_{i∈S} α_i* t_i ϕ(x_i)^T ϕ(x) − h*) = sign(Σ_{i∈S} α_i* t_i K(x_i, x) − h*).  (29)

With the Gaussian kernel (26), this classifier has the same form as a Radial Basis Function (RBF) network whose basis functions are centered at the support vectors.

Figure 4: the kernel SVM viewed as a network with input x, one kernel unit per support vector, and output y.
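The kernels (25)-(27) and the kernelized classifier (29) translate directly into code. A minimal numpy sketch follows; the function names and default parameter values are mine, not from the text.

```python
import numpy as np

def poly_kernel(x1, x2, p=2):
    # Polynomial kernel, eq. (25): (1 + x1^T x2)^p
    return (1.0 + x1 @ x2) ** p

def gauss_kernel(x1, x2, sigma=1.0):
    # Gaussian (RBF) kernel, eq. (26): exp(-||x1 - x2||^2 / (2 sigma^2))
    d = x1 - x2
    return np.exp(-(d @ d) / (2.0 * sigma ** 2))

def sigmoid_kernel(x1, x2, a=1.0, b=0.0):
    # Sigmoid kernel, eq. (27): tanh(a x1^T x2 - b)
    return np.tanh(a * (x1 @ x2) - b)

def classify(x, sv_x, sv_t, sv_alpha, h, kernel=gauss_kernel):
    # Kernelized decision function, eq. (29):
    #   y = sign( sum_{i in S} alpha_i* t_i K(x_i, x) - h* )
    u = sum(a * t * kernel(xi, x) for a, t, xi in zip(sv_alpha, sv_t, sv_x))
    return np.sign(u - h)
```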

2.5 Choosing the parameters

The soft-margin weight γ and kernel parameters such as the Gaussian width σ are not determined by the training algorithm itself; they are usually selected by estimating the generalization error, for example by cross validation (CV).

3 Perceptrons, regression, and the logit model

To place the SVM among classical methods, this section reviews the perceptron and its training by least squares and by maximum likelihood, the latter leading to logistic regression (the two-class case of the multinomial logit model).

3.1 The perceptron

Rosenblatt's perceptron computes from the input x = (x_1, ..., x_M)^T the output

  y = f(η),  η = w^T x − h = w̃^T x̃,  (30)

where w_i is the connection weight of the i-th input, h is the threshold, and the augmented vectors are w̃ = (h, w_1, ..., w_M)^T and x̃ = (−1, x_1, ..., x_M)^T. For the output function f, Rosenblatt used the threshold function

  f(η) = sign(η) = 1 if η ≥ 0, 0 otherwise.  (31)

Other common choices are the linear output

  f(η) = η  (32)

and the logistic output

  f(η) = exp(η) / (1 + exp(η)).  (33)
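For concreteness, the three output functions (31)-(33) in numpy; a small sketch, not part of the original.

```python
import numpy as np

def threshold(eta):
    # Eq. (31): 1 if eta >= 0, 0 otherwise.
    return np.where(eta >= 0.0, 1.0, 0.0)

def linear(eta):
    # Eq. (32): the identity output.
    return eta

def logistic(eta):
    # Eq. (33): exp(eta) / (1 + exp(eta)), evaluated in the
    # equivalent and numerically safer form 1 / (1 + exp(-eta)).
    return 1.0 / (1.0 + np.exp(-eta))
```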

Figure 5: output functions: (a) threshold, (b) linear, (c) logistic.

3.2 Learning as error minimization

Consider training the parameters (w, h) from N example pairs {(x_i, t_i) | i = 1, ..., N}, where x_i is an input vector and t_i the corresponding teacher signal.

With the linear output (32), a natural criterion is the empirical squared error

  ε²_emp = Σ_{i=1}^N (t_i − y_i)² = Σ_{i=1}^N ε²_emp(i).  (34)

Its derivatives with respect to the parameters are

  ∂ε²_emp/∂w_j = −Σ_{i=1}^N 2 (t_i − y_i) x_ij = −Σ_{i=1}^N 2 δ_i x_ij,  (35)
  ∂ε²_emp/∂h = −Σ_{i=1}^N 2 (t_i − y_i)(−1) = −Σ_{i=1}^N 2 δ_i (−1),  (36)

with δ_i = (t_i − y_i). Gradient descent therefore updates the parameters as

  w_j ← w_j + α (Σ_{i=1}^N δ_i x_ij),  (37)
  h ← h + α (Σ_{i=1}^N δ_i (−1)),  (38)

where α is the learning rate. This is the Widrow-Hoff learning rule; because it is driven by the errors δ_i = t_i − y_i, it is also called the delta rule.

3.3 Relation to multiple regression

Collect the augmented inputs into the N × (M + 1) matrix X = (x̃_1, ..., x̃_N)^T and the teacher signals into t = (t_1, ..., t_N)^T. The squared error becomes

  ε²_emp = Σ_{i=1}^N (t_i − y_i)² = ‖t − X w̃‖².  (39)

Setting its gradient to zero,

  ∂ε²_emp/∂w̃ = −2 X^T (t − X w̃) = 0,  (40)

gives, when X^T X is non-singular, the closed-form minimizer

  w̃* = (X^T X)^{−1} X^T t.  (41)

This is exactly multiple regression analysis with x as the explanatory variables and t as the criterion variable.
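Both the incremental updates (37)-(38) and the closed form (41) fit in a few lines. A minimal sketch, assuming the inputs are already augmented as x̃ = (−1, x_1, ..., x_M)^T from (30) so that the first component of w̃ acts as the threshold h:

```python
import numpy as np

def delta_rule(X_aug, t, alpha=0.01, epochs=100):
    # Widrow-Hoff / delta rule, eqs. (37)-(38), in batch form on the
    # augmented data matrix X_aug whose rows are (-1, x_1, ..., x_M).
    w = np.zeros(X_aug.shape[1])
    for _ in range(epochs):
        y = X_aug @ w                  # linear outputs, eq. (32)
        delta = t - y                  # delta_i = t_i - y_i
        w += alpha * (X_aug.T @ delta)
    return w

def least_squares(X_aug, t):
    # Closed-form minimizer (41): w~* = (X^T X)^{-1} X^T t, via a linear solve.
    return np.linalg.solve(X_aug.T @ X_aug, X_aug.T @ t)
```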

3.4 Generalization: variable selection and shrinkage

A regression model fitted as above can overfit; its generalization ability can be improved by selecting the variables that enter the model, or by shrinking the estimated coefficients (shrinkage or regularization methods, of which ridge regression is the classical example).

(1) Variable selection. Only a subset of the explanatory variables x is used. Forward stepwise selection starts from the empty model and adds one variable at a time, each time the variable that most improves the selection criterion; backward stepwise selection starts from the full model and removes one variable at a time. The criterion should reflect generalization rather than training error. One family of criteria is based on resampling, such as leave-one-out cross validation: the model is trained on N − 1 of the N samples and tested on the held-out sample, and this is repeated N times.
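A minimal leave-one-out sketch (illustrative only; `fit` and `predict` stand for whatever estimator is being assessed, e.g. the least-squares fit above):

```python
import numpy as np

def loo_cv_error(X, t, fit, predict):
    # Leave-one-out: N rounds, each training on N-1 samples and
    # testing on the held-out one; returns the mean squared test error.
    N = len(t)
    total = 0.0
    for i in range(N):
        keep = np.arange(N) != i
        model = fit(X[keep], t[keep])
        total += (predict(model, X[i]) - t[i]) ** 2
    return total / N
```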

Each of the N rounds thus trains on N − 1 samples and tests on the remaining one, and the average test error estimates the generalization error. Closely related resampling methods are the jackknife [9, 10] and the bootstrap [11, 12, 13]. Resampling methods are general but computationally expensive, since the model must be fitted many times.

A second family of criteria penalizes model complexity directly: Akaike's information criterion AIC [14, 15] and Rissanen's minimum description length MDL [16, 17]. For a model with J free parameters fitted by maximum likelihood,

  AIC = −2 (maximum log-likelihood) + 2J,  (42)
  MDL = −(maximum log-likelihood) + (J/2) log N.  (43)

Among the candidate models, the one minimizing AIC (or MDL) is selected.
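Both criteria are one-liners once the maximized log-likelihood is known; a small sketch of (42)-(43):

```python
import numpy as np

def aic(max_log_likelihood, J):
    # Eq. (42): AIC = -2 (maximum log-likelihood) + 2 J
    return -2.0 * max_log_likelihood + 2.0 * J

def mdl(max_log_likelihood, J, N):
    # Eq. (43): MDL = -(maximum log-likelihood) + (J / 2) log N
    return -max_log_likelihood + 0.5 * J * np.log(N)
```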

(2) Shrinkage (ridge regression). Instead of minimizing the squared error

  ε²_emp = Σ_{i=1}^N (t_i − y_i)² = Σ_{i=1}^N (t_i − (Σ_{j=1}^M w_j x_ij − h))²  (44)

alone, ridge regression adds the penalty term

  Σ_{j=1}^M w_j²  (45)

and minimizes

  Q(w, h) = Σ_{i=1}^N (t_i − (Σ_{j=1}^M w_j x_ij − h))² + λ Σ_{j=1}^M w_j².  (46)

The parameter λ controls the trade-off between the two terms; λ = 0 recovers ordinary least squares. Since Q(w, h) depends on h only through the first term, setting ∂Q/∂h = 0,

  ∂Q/∂h = 2N (t̄ − Σ_{j=1}^M w_j x̄_j + h) = 0,  (47)

gives

  h = Σ_{j=1}^M w_j x̄_j − t̄,  (48)

where t̄ = (1/N) Σ_{i=1}^N t_i and x̄_j = (1/N) Σ_{i=1}^N x_ij. Substituting (48) into Q(w, h),

  Q(w) = Σ_{i=1}^N {(t_i − t̄) − Σ_{j=1}^M w_j (x_ij − x̄_j)}² + λ Σ_{j=1}^M w_j²
       = (t̃ − X̃w)^T (t̃ − X̃w) + λ w^T w,  (49)

where t̃ and X̃ collect the centered values (t_i − t̄) and (x_ij − x̄_j). Setting the gradient to zero,

  ∂Q/∂w = −2 X̃^T t̃ + 2 (X̃^T X̃ + λI) w = 0,  (50)

gives

  w* = (X̃^T X̃ + λI)^{−1} X̃^T t̃.  (51)

Even when X̃^T X̃ is singular or nearly singular, adding λI makes the matrix invertible, so the ridge estimate is stable where ordinary least squares is not.
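The centering steps and the solution (51) in numpy; a minimal sketch, with `lam` standing for the regularization weight λ:

```python
import numpy as np

def ridge(X, t, lam):
    # Ridge regression: center the data as in (49), solve (51),
    # then recover the threshold h from (48).
    x_bar = X.mean(axis=0)
    t_bar = t.mean()
    Xc = X - x_bar                     # centered X~, entries x_ij - x_bar_j
    tc = t - t_bar                     # centered t~, entries t_i - t_bar
    M = X.shape[1]
    w = np.linalg.solve(Xc.T @ Xc + lam * np.eye(M), Xc.T @ tc)   # eq. (51)
    h = w @ x_bar - t_bar              # eq. (48)
    return w, h
```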

3.5 Logistic regression

Suppose now that the teacher signals are binary: training pairs {(x_i, u_i) | i = 1, ..., N} with u_i in {0, 1}. Model the probability y that u = 1 given x by the logistic output (33). The likelihood of the training set is

  L = Π_{i=1}^N y_i^{u_i} (1 − y_i)^{1−u_i},  (52)

and its logarithm is

  l = Σ_{i=1}^N {u_i log y_i + (1 − u_i) log(1 − y_i)}
    = Σ_{i=1}^N {u_i log(exp(η_i)/(1 + exp(η_i))) + (1 − u_i) log(1/(1 + exp(η_i)))}
    = Σ_{i=1}^N {u_i η_i − log(1 + exp(η_i))}.  (53)

The derivatives of l with respect to the parameters are

  ∂l/∂w_j = Σ_{i=1}^N (u_i − y_i) x_ij = Σ_{i=1}^N δ_i x_ij,  (54)

with δ_i = (u_i − y_i), and

  ∂l/∂h = Σ_{i=1}^N (u_i − y_i)(−1) = Σ_{i=1}^N δ_i (−1).  (55)

Maximizing l by gradient ascent gives the updates

  w_j ← w_j + α (Σ_{i=1}^N δ_i x_ij),  (56)
  h ← h + α (Σ_{i=1}^N δ_i (−1)),  (57)

which have exactly the form of the Widrow-Hoff learning rule, with y_i now the logistic output.
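The gradient-ascent updates (56)-(57) in batch form, again on augmented inputs; a minimal sketch:

```python
import numpy as np

def logistic_delta_rule(X_aug, u, alpha=0.1, epochs=1000):
    # Gradient ascent on the log-likelihood (53); rows of X_aug are
    # the augmented inputs (-1, x_1, ..., x_M).
    w = np.zeros(X_aug.shape[1])
    for _ in range(epochs):
        y = 1.0 / (1.0 + np.exp(-(X_aug @ w)))   # logistic outputs, eq. (33)
        delta = u - y                            # delta_i = u_i - y_i, eq. (54)
        w += alpha * (X_aug.T @ delta)           # batch form of (56)-(57)
    return w
```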

Convergence can be accelerated by Fisher's scoring method. For a density f(y, θ_1, ..., θ_M) with parameters θ_1, ..., θ_M, the Fisher information matrix F = [F_ij] has entries

  F_ij = −E[∂² log f(y, θ_1, ..., θ_M) / ∂θ_i ∂θ_j],  (58)

and the scoring method uses F in place of the negative Hessian in Newton's method. For the log-likelihood (53) the second derivatives are

  ∂²l / ∂w_k ∂w_j = −Σ_{i=1}^N ω_i x_ik x_ij,  (59)

where ω_i = y_i (1 − y_i).

Figure 6: the weight ω = y (1 − y) as a function of y (maximum 0.25 at y = 1/2).

In vector form, the gradient and Hessian of (53) are

  ∇l = Σ_{i=1}^N δ_i x̃_i = X^T δ,  (60)
  ∇²l = −Σ_{i=1}^N ω_i x̃_i x̃_i^T = −X^T W X,

where X^T = [x̃_1, ..., x̃_N], W = diag(ω_1, ..., ω_N), and δ = (δ_1, ..., δ_N)^T. Since this Hessian does not involve the observations u_i, the Fisher information for w̃ coincides with the negative Hessian:

  F = −E(∇²l) = X^T W X,  (61)

so for logistic regression Fisher's scoring method is the same as the Newton-Raphson method; it is the iteratively reweighted least squares algorithm of generalized linear models [18]. One scoring step updates the current estimate w̃ by an increment δw̃,

  w̃_new = w̃ + δw̃,  (62)

where δw̃ solves

  F δw̃ = ∇l.  (63)

Multiplying (62) by F,

  F w̃_new = F w̃ + F δw̃ = F w̃ + ∇l,  (64)

and the first term can be written

  F w̃ = X^T W X w̃ = X^T W η,  (65)

where η = (η_1, ..., η_N)^T collects the current linear outputs η_i = w̃^T x̃_i. Combining (63)-(65), one scoring iteration is

  w̃_new = F^{−1} (F w̃ + ∇l)
         = (X^T W X)^{−1} (X^T W η + X^T δ)
         = (X^T W X)^{−1} X^T W (η + W^{−1} δ),  (66)

with δ = (δ_1, ..., δ_N)^T: a weighted least squares regression of the working response η + W^{−1}δ on X, with weights W recomputed at every iteration. A convenient starting point is w̃ = 0: then W = (1/4)I, η = 0, and δ = u − (1/2)1, so (66) gives the initial estimate

  w̃_0 = 4 (X^T X)^{−1} X^T (u − (1/2)1).  (67)
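One possible rendering of the scoring iteration (66) with the starting value (67); a minimal sketch, in which a production implementation would also guard against ω_i underflowing to 0:

```python
import numpy as np

def fisher_scoring(X_aug, u, iters=20):
    # Starting value (67): w~_0 = 4 (X^T X)^{-1} X^T (u - 1/2).
    w = 4.0 * np.linalg.solve(X_aug.T @ X_aug, X_aug.T @ (u - 0.5))
    for _ in range(iters):
        eta = X_aug @ w
        y = 1.0 / (1.0 + np.exp(-eta))
        omega = y * (1.0 - y)               # diagonal of W
        z = eta + (u - y) / omega           # working response eta + W^{-1} delta
        XtW = X_aug.T * omega               # X^T W without building diag(omega)
        w = np.linalg.solve(XtW @ X_aug, XtW @ z)   # eq. (66)
    return w
```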

3.6 Regularization of logistic regression (weight decay)

As in linear regression, a logistic model with many inputs relative to the number of training samples can overfit, and its generalization performance can be assessed by resampling or by the information criteria of Section 3.4. Overfitting can also be suppressed directly by regularization.

(1) Weight decay. Add the squared penalty λ Σ_j w_j² to the negative log-likelihood:

  Q(w̃) = −l + λ Σ_{j=1}^M w_j²
        = Σ_{i=1}^N {log(1 + exp(η_i)) − u_i η_i} + λ Σ_{j=1}^M w_j².  (68)

The derivatives of Q(w̃) are

  ∂Q/∂w_j = −∂l/∂w_j + 2λ w_j = −Σ_{i=1}^N (u_i − y_i) x_ij + 2λ w_j,  (69)
  ∂Q/∂h = −∂l/∂h = −Σ_{i=1}^N (u_i − y_i)(−1),  (70)

so gradient descent with weight decay updates the parameters as

  w_j ← w_j + α (Σ_{i=1}^N (u_i − y_i) x_ij) − 2αλ w_j,  (71)
  h ← h + α (Σ_{i=1}^N (u_i − y_i)(−1)).  (72)

The extra term −2αλ w_j shrinks each weight w_j toward 0 at every step, which is why the method is called weight decay.

3.7 The SVM as regularized learning

The soft-margin SVM objective (16) can be rewritten, up to a rescaling of the coefficients, as

  L(w, ξ) = Σ_{i=1}^N ξ_i + λ Σ_{j=1}^M w_j²
          = Σ_{i=1}^N [1 − t_i η_i]_+ + λ Σ_{j=1}^M w_j²,  (73)

where [x]_+ denotes x for x > 0 and 0 otherwise. The hinge loss [1 − t_i η_i]_+ (Figure 7(a)) is 0 when t_i η_i ≥ 1, that is, when the sample lies on the correct side of its margin hyperplane H1 or H2, and grows linearly with the violation otherwise. The SVM is thus a regularized learning method whose error measure is the hinge loss. Replacing the hinge loss by the squared error gives

  Q = Σ_{i=1}^N (1 − t_i η_i)² + λ Σ_{j=1}^M w_j².  (74)

The loss (1 − x)² (Figure 7(b)) differs from the hinge loss in that it also penalizes samples with t_i η_i > 1, i.e. samples classified correctly with a margin larger than necessary.

Figure 7: loss functions as functions of the margin x = t_i η_i: (a) the hinge loss, (1 − x) for x < 1 and 0 otherwise; (b) the squared loss (1 − x)²; (c) the logistic loss log(1 + exp(−x)) (weight decay).

Logistic regression with weight decay can be put in the same form. Converting the teacher signals u_i in {0, 1} to t_i in {−1, 1}, the objective (68) becomes

  Q = Σ_{i=1}^N log{1 + exp(−t_i η_i)} + λ Σ_{j=1}^M w_j².  (75)

The logistic loss log{1 + exp(−t_i η_i)} (Figure 7(c)) is a smooth, monotonically decreasing function of the margin t_i η_i that behaves like the hinge loss of (73): it is close to linear for strongly violated margins and, unlike the squared loss (74), it does not heavily penalize samples with t_i η_i > 1. In this sense the soft-margin SVM and logistic regression with weight decay are closely related regularized classifiers.
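The three loss functions of Figure 7 side by side, as functions of the margin m = t_i η_i; a small illustrative sketch:

```python
import numpy as np

def hinge_loss(m):
    # Fig. 7(a), from eq. (73): [1 - m]_+
    return np.maximum(0.0, 1.0 - m)

def squared_loss(m):
    # Fig. 7(b), from eq. (74): (1 - m)^2
    return (1.0 - m) ** 2

def logistic_loss(m):
    # Fig. 7(c), from eq. (75): log(1 + exp(-m))
    return np.log1p(np.exp(-m))

m = np.linspace(-2.0, 3.0, 6)
for f in (hinge_loss, squared_loss, logistic_loss):
    print(f.__name__, f(m))
```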

References

[1] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, 1998.
[2] (in Japanese), No. 444, pp. 52-58, 2000.
[3] (in Japanese), Vol. 42, No. 7, 2001.
[4] B. Scholkopf, C. J. C. Burges, and A. J. Smola, Advances in Kernel Methods - Support Vector Learning, The MIT Press, 1999.
[5] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[6] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning - Data Mining, Inference, and Prediction, Springer-Verlag, 2001.
[7] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification (Second Edition), John Wiley & Sons, 2001.
[8] K. R. Muller, S. Mika, G. Ratsch, K. Tsuda, and B. Scholkopf, An introduction to kernel-based learning algorithms, IEEE Trans. on Neural Networks, Vol. 12, No. 2, 2001.
[9] Miller, R. G. (1974): The jackknife - a review, Biometrika, Vol. 61, No. 1, pp. 1-15.
[10] Stone, M. (1974): Cross-validatory choice and assessment of statistical predictions, Journal of the Royal Statistical Society, Vol. B36.
[11] Efron, B. (1979): Bootstrap methods: another look at the jackknife, The Annals of Statistics, Vol. 7, No. 1, pp. 1-26.
[12] Efron, B. (1983): Estimating the error rate of a prediction rule: improvements in cross-validation, Journal of the American Statistical Association, Vol. 78.
[13] Efron, B. (1985): The bootstrap method for assessing statistical accuracy, Behaviormetrika, Vol. 17, pp. 1-35.
[14] Akaike, H. (1974): A new look at the statistical model identification, IEEE Trans. on Automatic Control, Vol. AC-19, No. 6.
[15] (in Japanese), 1983.
[16] Rissanen, J. (1983): A universal prior for integers and estimation by minimum description length, The Annals of Statistics, Vol. 11, No. 2.
[17] Rissanen, J. (1986): Stochastic complexity and modeling, The Annals of Statistics, Vol. 14, No. 3.
[18] P. McCullagh and J. A. Nelder, Generalized Linear Models, Chapman and Hall.
