
1 ( ) SITA / 54

2 Shannon (1948), A mathematical theory of communication: 3. THE SERIES OF APPROXIMATIONS TO ENGLISH. To give a visual idea of how this series of processes approaches a language, typical sequences in the approximations to English have been constructed and are given below. In all cases we have assumed a 27-symbol alphabet, the 26 letters and a space.
1. Zero-order approximation (symbols independent and equiprobable). XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD.
2. First-order approximation (symbols independent but with frequencies of English text). OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL.
3. Second-order approximation (digram structure as in English). ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE.
4. Third-order approximation (trigram structure as in English). IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE.
2 / 54
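A minimal sketch of how such approximations can be generated: sample characters independently (zero/first order) or from conditional digram frequencies (second order). The 27-symbol alphabet is from the slide; the training string is an illustrative assumption, not Shannon's data.

```python
import random
from collections import Counter, defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz "   # 27 symbols: 26 letters + space
train = "the quick brown fox jumps over the lazy dog " * 50  # toy corpus (assumption)

def zero_order(n):
    # symbols independent and equiprobable
    return "".join(random.choice(ALPHABET) for _ in range(n))

def first_order(n):
    # symbols independent, with unigram frequencies of the training text
    chars, freqs = zip(*Counter(train).items())
    return "".join(random.choices(chars, weights=freqs, k=n))

def second_order(n):
    # digram structure: p(next char | previous char) estimated from the training text
    bigram = defaultdict(Counter)
    for a, b in zip(train, train[1:]):
        bigram[a][b] += 1
    out = [random.choice(train)]
    for _ in range(n - 1):
        chars, freqs = zip(*bigram[out[-1]].items())
        out.append(random.choices(chars, weights=freqs, k=1)[0])
    return "".join(out)

print(zero_order(40))
print(first_order(40))
print(second_order(40))
```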

3 Noisy Channel : A B : : SNS (Eisenstein+ EMNLP2013) 3 / 54

4 Google Neural (2016) 2016 ( ) 4 / 54

5 , 10km 50km :, :, :, 5 / 54

6 ( ) ( ) (? ),? 6 / 54

7 (2) (PPM, HMM, Lempel-Ziv, ) (PCFG,,,, ), 0/1 ( 256), 7 / 54

8 ( ) Context-Tree Weighting Method (Willems+ 1995) Markov ( +, NIPS 2007) (Csiszár 1998) ( & +, ISMIR 2016) 8 / 54

9 CTW Markov The Infinite Markov Model, D.Mochihashi, E.Sumita, NIPS 2007 ( ) 9 / 54

10 , [ 94] : given $x_1 \cdots x_{t-1}$, predict $x_t$: $p(x_t \mid x_1 \cdots x_{t-1})$ (1). PPM-II / PPMd: escape Kneser-Ney (Kneser & Ney 1995)! 10 / 54

11 Context Tree Weighting method (Willems+ 1995) [Figure: binary context tree (suffix tree) with node-wise predictive probabilities g(0), g(1)]
$$p(x_t \mid x_1 \cdots x_{t-1}) = \begin{cases} p(x_t \mid s) & (s \text{ is a leaf}) \\ \gamma\, p(x_t \mid s) + (1-\gamma)\, p(x_t \mid 0s)\, p(x_t \mid 1s) & (\text{otherwise}) \end{cases}$$
p(x_t | s): Krichevsky-Trofimov (KT) estimator (Be(1/2,1/2)):
$$p(x_t \mid s) = \frac{n(x_t) + 1/2}{n(x_t{=}0) + n(x_t{=}1) + 1} \qquad (2)$$
11 / 54
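A minimal sketch of the KT estimator and one step of the mixture above, following the slide's formula; the counts and the fixed mixing weight γ below are illustrative assumptions.

```python
def kt_prob(x, n0, n1):
    """Krichevsky-Trofimov estimator: p(x_t = x | counts at node s), Be(1/2,1/2) prior."""
    n = n1 if x == 1 else n0
    return (n + 0.5) / (n0 + n1 + 1.0)

def ctw_mix(p_node, p_0s, p_1s, gamma=0.5):
    """One mixing step as written on the slide: gamma*p(x|s) + (1-gamma)*p(x|0s)*p(x|1s)."""
    return gamma * p_node + (1.0 - gamma) * p_0s * p_1s

# Example: node s has seen 3 zeros and 1 one
print(kt_prob(1, n0=3, n1=1))              # (1 + 0.5) / (3 + 1 + 1) = 0.3
print(ctw_mix(kt_prob(1, 3, 1), 0.4, 0.6)) # mix with (illustrative) child predictions
```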

12 CTW
$$p(x_t \mid x_1 \cdots x_{t-1}) = \begin{cases} p(x_t \mid s) & (s \text{ is a leaf}) \\ \gamma\, p(x_t \mid s) + (1-\gamma)\, p(x_t \mid 0s)\, p(x_t \mid 1s) & (\text{otherwise}) \end{cases}$$
γ? (γ?)? (?) : ( 2000) ( + SITA1999) 12 / 54

13 n-gram model: p( ) = 0.2, p( ) = 0.7, p( ) = , [ ] p( ) = p( ) p( ) p( ), (n-1)-order Markov approximation:
$$p(w_1 \dots w_T) = \prod_{t=1}^{T} p(w_t \mid w_{t-1} w_{t-2} \cdots w_1) \qquad (3)$$
$$\approx \prod_{t=1}^{T} p(w_t \mid \underbrace{w_{t-1} \cdots w_{t-(n-1)}}_{n-1}) \qquad (4)$$
n-gram model = (n-1)-order Markov model. 13 / 54
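A minimal sketch of Eqs. (3)-(4): a sentence probability as a product of conditional probabilities over a truncated history (here n = 2, i.e. a bigram model). The toy probability table is an illustrative assumption, not estimated values.

```python
import math

# Toy conditional probabilities p(w_t | w_{t-1}) -- illustrative values only
bigram_prob = {
    ("<s>", "she"):   0.20,
    ("she", "will"):  0.30,
    ("will", "sing"): 0.10,
    ("sing", "</s>"): 0.40,
}

def sentence_logprob(words, n=2):
    """log p(w_1..w_T) under an (n-1)-order Markov model (here n=2, i.e. bigram)."""
    words = ["<s>"] + words + ["</s>"]
    logp = 0.0
    for t in range(1, len(words)):
        context = tuple(words[max(0, t - n + 1):t])   # truncated history of length n-1
        logp += math.log(bigram_prob[(context[-1], words[t])])
    return logp

print(sentence_logprob(["she", "will", "sing"]))
```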

14 n-gram parameters:
$$p(w_1 \dots w_T) \approx \prod_{t=1}^{T} p(w_t \mid \underbrace{w_{t-1} \cdots w_{t-(n-1)}}_{n-1}) \qquad (5)$$
vocabulary size V gives V^{n-1} possible contexts: V = 10000 → V^2 = 10^8 (trigram), V^3 = 10^12 (4-gram), ...; in practice n = 3..5. Google 5-gram data: gzipped 24GB, V = , V^2 = (trigram), V^3 = ( ) (4-gram), 14 / 54

15 n (2) n, (n 1) (n 1) 3, 4, 5,... # # GM, n? 15 / 54

16 n (3), 3, 5 the united states of america 1 ( ) [ 1/0?], DNA,, 16 / 54

17 n n n? n,, (1999), Stolcke (1998), Siu and Ostendorf (2000), Pereira et al. (1995) n = n n, MDL, KL, 17 / 54

18 n,? n, n n, (n 1).. 18 / 54

19 Bayesian n-gram model: Hierarchical Pitman-Yor Language Model (HPYLM) (Yee Whye Teh, 2006), whose predictive distribution recovers Kneser-Ney (K-N) smoothing; a hierarchical (HDP-like) Pitman-Yor process (= two-parameter Poisson-Dirichlet process PD(α,θ), Pitman and Yor 1997). Marc Yor (Université Paris VI, France), Jim Pitman (Dept. of Statistics, Berkeley) 19 / 54
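A minimal sketch of the hierarchical Pitman-Yor predictive probability that makes the Kneser-Ney connection concrete: counts at a context are discounted by d, and the leftover mass backs off to the parent (one-word-shorter) context. The counts, d, θ, and parent probability below are illustrative assumptions.

```python
def py_predict(w, counts, tables, d, theta, parent_prob):
    """p(w | context) in a Pitman-Yor restaurant: customer counts, table counts,
    discount d, strength theta, and a parent (back-off) probability."""
    c = sum(counts.values())   # total customers at this context
    t = sum(tables.values())   # total tables at this context
    p_here = max(counts.get(w, 0) - d * tables.get(w, 0), 0.0) / (theta + c)
    p_backoff = (theta + d * t) / (theta + c) * parent_prob
    return p_here + p_backoff

# context "she will": "sing" seen twice at one table; parent gives p(sing | will) = 0.05
print(py_predict("sing", counts={"sing": 2}, tables={"sing": 1},
                 d=0.5, theta=1.0, parent_prob=0.05))
```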

20 HPYLM (1): n-gram contexts and their (n-1)-gram back-off contexts are arranged in a suffix tree. [Figure: suffix tree with root ε at depth 0; depth-1 nodes sing, like, will, and, model; depth-2 nodes she, he, it, bread, language, go, like, butter, is] Example: sing after "she will" — p(sing | she will) is computed at the depth-2 node reached from ε via will, then she. 20 / 54

21 HPYLM (2) [Figure: the same suffix tree (depth 0: ε; depth 1: sing, like, will, and, model; depth 2: she, he, it, bread, language, sing, go, like, butter, is)] p(sing | she will) vs. p(like | she will)? "like" ( ) he will like like, 21 / 54

22 HPYLM [Figure: the same suffix tree] Problem: HPYLM fixes the n-gram order — is depth 2 always necessary? After "will like" a depth-1 context may suffice, while a phrase like "the united states of america" needs a much longer one. 22 / 54

23 Variable-order Pitman-Yor Language Model: each node i of the suffix tree has a stopping probability q_i (passing probability 1-q_i); descending through nodes i, j, k contributes (1-q_i)(1-q_j)(1-q_k)...
$$q_i \sim \mathrm{Be}(\alpha, \beta) \qquad (6)$$
The probability of using n-gram order n given history h:
$$p(n \mid h) = q_n \prod_{i=0}^{n-1} (1 - q_i). \qquad (7)$$
23 / 54
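A minimal sketch of Eqs. (6)-(7): each depth i has a stopping probability q_i drawn from Be(α, β), and the n-gram order stops at depth n after passing depths 0..n-1. The values of α, β and the truncation depth are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, max_depth = 1.0, 1.0, 8
q = rng.beta(alpha, beta, size=max_depth)      # q_i ~ Be(alpha, beta)   ... (6)

def order_prior(q):
    """p(n | h) = q_n * prod_{i<n} (1 - q_i)   ... (7)"""
    p = np.empty(len(q))
    pass_prob = 1.0
    for n, qn in enumerate(q):
        p[n] = qn * pass_prob
        pass_prob *= (1.0 - qn)
    return p

p_n = order_prior(q)
print(p_n, p_n.sum())   # sums to 1 - prod(1 - q_i); the remainder goes to deeper orders
```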

24 VPYLM (2) [Figure: suffix tree with a stopping probability q_i at each node (e.g. 0.95 at ε); node labels include is, of, will, language, order, states, infinite, united, the] 24 / 54

25 Inference of VPYLM: how to estimate the q_i on the suffix tree? VPYLM: for the word sequence w = w_1 w_2 w_3 ... w_T, latent n-gram orders n = n_1 n_2 n_3 ... n_T and seating arrangements s,
$$p(\mathbf{w}) = \sum_{\mathbf{n}} \sum_{\mathbf{s}} p(\mathbf{w}, \mathbf{n}, \mathbf{s}) \qquad (8)$$
s: seating arrangements; infer by Gibbs sampling of n. 25 / 54

26 Inference of VPYLM (2) Gibbs sampling (MCMC): for each word w_t, resample its n-gram order n_t from
$$n_t \sim p(n_t \mid \mathbf{w}, \mathbf{n}_{-t}, \mathbf{s}_{-t}) \qquad (9)$$
over n_t = 0, 1, 2, ..., where
$$p(n_t \mid \mathbf{w}, \mathbf{n}_{-t}, \mathbf{s}_{-t}) \propto \underbrace{p(w_t \mid n_t, \mathbf{w}_{-t}, \mathbf{n}_{-t}, \mathbf{s}_{-t})}_{\text{(a)}}\; \underbrace{p(n_t \mid \mathbf{w}_{-t}, \mathbf{n}_{-t}, \mathbf{s}_{-t})}_{\text{(b)}} \qquad (10)$$
(a): the HPYLM predictive probability of w_t at depth n_t; (b): ? 26 / 54

27 Inference of VPYLM (3) [Figure: suffix tree for the context w_{t-3} w_{t-2} w_{t-1} of w_t, with stop/pass counts (a,b) and pass probabilities at each depth: ε (a,b)=(100,900), pass (900+β)/(1000+α+β); w_{t-1} (a,b)=(10,70), pass (70+β)/(80+α+β); w_{t-2} (a,b)=(30,20), pass (20+β)/(50+α+β); w_{t-3} (a,b)=(5,0), pass β/(5+α+β)] With a_i stops and b_i passes recorded at depth i, q_i is integrated out:
$$p(n_t = n \mid \mathbf{w}_{-t}, \mathbf{n}_{-t}, \mathbf{s}_{-t}) = q_n \prod_{i=0}^{n-1}(1-q_i) = \frac{a_n + \alpha}{a_n + b_n + \alpha + \beta} \prod_{i=0}^{n-1} \frac{b_i + \beta}{a_i + b_i + \alpha + \beta}.$$
27 / 54
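A minimal sketch of the collapsed form on this slide: with stop counts a_i and pass counts b_i at each depth, the order distribution is a product of Beta posterior means. The counts (100,900), (10,70), (30,20), (5,0) are the slide's example; α and β are illustrative assumptions.

```python
def order_posterior(counts, alpha=1.0, beta=1.0):
    """p(n_t = n | ...) with q_i integrated out:
       stop at depth n:  (a_n + alpha) / (a_n + b_n + alpha + beta)
       pass depth i < n: (b_i + beta)  / (a_i + b_i + alpha + beta)"""
    probs = []
    pass_prob = 1.0
    for a, b in counts:
        stop = (a + alpha) / (a + b + alpha + beta)
        probs.append(pass_prob * stop)
        pass_prob *= (b + beta) / (a + b + alpha + beta)
    return probs

# (a_i, b_i) from the slide's suffix-tree example, depths 0..3
print(order_posterior([(100, 900), (10, 70), (30, 20), (5, 0)]))
```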

28 Marginalizing over n: since the order n is latent, sum it out when predicting:
$$p(w \mid h) = \sum_{n=0}^{\infty} p(w, n \mid h) \qquad (11)$$
$$= \sum_{n=0}^{\infty} p(w \mid n, h)\, p(n \mid h). \qquad (12)$$
28 / 54

29 Recursive computation:
$$p(w \mid h, n^+) = q_n \underbrace{p(w \mid h, n)}_{n} + (1 - q_n) \underbrace{p(w \mid h, (n+1)^+)}_{(n+1)^+}$$
$$p(w \mid h) = p(w \mid h, 0^+), \qquad q_n \sim \mathrm{Be}(\alpha, \beta).$$
A stick-breaking representation; compare with CTW:
$$p(x_t \mid x_1 \cdots x_{t-1}) = \begin{cases} p(x_t \mid s) & (s \text{ is a leaf}) \\ \gamma\, p(x_t \mid s) + (1-\gamma)\, p(x_t \mid 0s)\, p(x_t \mid 1s) & (\text{otherwise}) \end{cases}$$
29 / 54
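A minimal sketch of the stick-breaking recursion above: p(w | h, n+) mixes the depth-n prediction with the rest of the stick, and p(w | h) = p(w | h, 0+). The per-depth predictions and the q_n values below are illustrative assumptions, with the recursion truncated at the deepest level.

```python
def predictive(p_at_depth, q):
    """p(w | h) = p(w | h, 0+), where
       p(w | h, n+) = q_n * p(w | h, n) + (1 - q_n) * p(w | h, (n+1)+)."""
    p = p_at_depth[-1]   # truncation: the deepest level takes the remaining stick
    for pn, qn in zip(reversed(p_at_depth[:-1]), reversed(q[:len(p_at_depth) - 1])):
        p = qn * pn + (1.0 - qn) * p
    return p

# p(w | h, n) for depths n = 0..3 and stop probabilities q_n (illustrative values)
p_at_depth = [0.01, 0.03, 0.12, 0.25]
q = [0.5, 0.4, 0.6]
print(predictive(p_at_depth, q))
```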

30 :, NAB (North American Business News) Wall Street Journal 10M, 1 Chen and Goodman (1996), Goodman (2001) =26,497 : 2000, 10M (52 ), 1 =32,783 n max = 3,5,7,8,,n=7 (Goodman 2001) 30 / 54

31 (a) NAB ( ) n SRILM HPYLM VPYLM Nodes(H) Nodes(V) ,417K 1,344K ,699K 7,466K N/A N/A 10,182K N/A N/A 10,434K ,837K (b) ( ) n SRILM HPYLM VPYLM Nodes(H) Nodes(V) ,341K 1,243K ,140K 6,705K N/A N/A 9,134K N/A N/A 9,490K ,396K 31 / 54

32 VPYLM... (VPYLM, n = 5) 5-gram LM,, : (0.6560), (0.7953), (0.8424), / 54

33 Musical Typicality: How Many Similar Songs Exist?, T.Nakano, D.Mochihashi, K.Yoshii, M.Goto, ISMIR 2016 ( ) 33 / 54

34 ,,,, ( )? ( )? 34 / 54

35 ? ([Nakano+ 2015]) θ max x x p(x θ)? 35 / 54

36 (2) [Figure: sequences arranged by type (fraction of 1s, axis from 0 to 1, with 1/3 and 2/3 marked); previous approach ranks by generative probability (high); proposed approach ranks songs a, b, c by typicality (high)] Distribution {2/3, 1/3} over {0,1}. 36 / 54

37 (3) [Figure: same as the previous slide; previous approach ranks by generative probability, proposed approach ranks songs a, b, c by typicality] (Typical sequence)! 37 / 54

38 Type (Csiszár 1998): for a sequence x = x_1 x_2 ... x_n (x_i ∈ X), its type P(x) is the empirical distribution of symbols:
$$P(\mathbf{x}) = \left\{ \frac{1}{n} N(x \mid \mathbf{x}) \right\}_{x \in \mathcal{X}}$$
N(x | x): the number of times symbol x occurs in x. Example: x = 12243 gives
$$P(\mathbf{x}) = \left\{ \frac{1}{5}, \frac{2}{5}, \frac{1}{5}, \frac{1}{5} \right\}$$
38 / 54
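A minimal sketch of the type (empirical distribution), reproducing the slide's example x = 12243.

```python
from collections import Counter

def type_of(x):
    """Empirical distribution P_x(a) = N(a | x) / n over the symbols occurring in x."""
    n = len(x)
    return {a: c / n for a, c in Counter(x).items()}

print(type_of("12243"))   # {'1': 0.2, '2': 0.4, '4': 0.2, '3': 0.2}, i.e. {1/5, 2/5, 1/5, 1/5}
```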

39 (2) x = , x = , under an i.i.d. source Q: 1. What is the probability under Q of sequences of type P? 2. How many sequences have type P? 39 / 54

40 (3) Theorem 1: for an i.i.d. source Q and a sequence x of type P,
$$Q^n(\mathbf{x}) = \exp\!\left[-n\left(H(P) + D(P\,\|\,Q)\right)\right] \qquad (13)$$
Proof.
$$Q^n(\mathbf{x}) = \prod_{i=1}^{n} Q(x_i) = \prod_{x \in \mathcal{X}} Q(x)^{N(x \mid \mathbf{x})} = \prod_{x} Q(x)^{nP(x)} = \exp\!\left[\sum_{x} nP(x)\log Q(x)\right]$$
$$= \exp\!\left[n\left(\sum_{x} P(x)\log Q(x)\right)\right] = \exp\!\left[-n\left(H(P) + D(P\,\|\,Q)\right)\right].$$
40 / 54
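A minimal numerical check of Eq. (13): the probability of a particular sequence of type P under an i.i.d. Q equals exp[-n(H(P) + D(P||Q))]. The sequence, P, and Q below are illustrative assumptions.

```python
import numpy as np

x = "aabab"                                   # a sequence over {a, b} (illustrative)
n = len(x)
symbols = ["a", "b"]
P = np.array([x.count(s) / n for s in symbols])   # type of x: [0.6, 0.4]
Q = np.array([0.7, 0.3])                          # i.i.d. source (illustrative)

H = -np.sum(P * np.log(P))                        # entropy of the type
D = np.sum(P * np.log(P / Q))                     # KL divergence D(P || Q)

prob_direct = np.prod([Q[symbols.index(s)] for s in x])   # Q^n(x) computed directly
prob_formula = np.exp(-n * (H + D))                        # right-hand side of (13)
print(prob_direct, prob_formula)                           # the two values coincide
```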

41 (4) Theorem 2: for the set T^n(P) of length-n sequences of type P,
$$\frac{1}{(n+1)^{|\mathcal{X}|}} \exp\{nH(P)\} \;\le\; |T^n(P)| \;\le\; \exp\{nH(P)\} \qquad (14)$$
Proof: 41 / 54

42 (5) Combining Theorems 1 and 2: the probability that the i.i.d. source Q emits a sequence x = x_1 x_2 ... x_n of type P is
$$Q^n(T^n(P)) \doteq \exp(-nD(P\,\|\,Q)) \qquad (15)$$
where $a_n \doteq b_n$ iff $\lim_{n\to\infty} (1/n)\log(a_n/b_n) = 0$.
Proof.
$$Q^n(T^n(P)) = \sum_{\mathbf{x} \in T^n(P)} Q^n(\mathbf{x}) = |T^n(P)|\,\exp\!\big(-n(H(P)+D(P\,\|\,Q))\big)$$
$$\doteq \exp(nH(P))\,\exp\!\big(-n(H(P)+D(P\,\|\,Q))\big) = \exp(-nD(P\,\|\,Q)).$$
42 / 54

43 From $Q^n(T^n(P)) \doteq \exp(-nD(P\,\|\,Q))$ (the asymptotic equipartition property, AEP), taking the per-symbol (n-normalized) quantity,
$$\mathrm{Typicality}(P \mid Q) = \exp(-D(P\,\|\,Q)) \qquad (16)$$
is defined as the typicality of P under the model Q; the $\exp(-n\,\cdot)$ dependence on length is removed. 43 / 54
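A minimal sketch of Eq. (16): typicality as exp(-D(P||Q)), used to rank several empirical distributions against one model Q, as in the song a/b/c illustration. The distributions below are illustrative assumptions.

```python
import numpy as np

def typicality(P, Q):
    """Typicality(P | Q) = exp(-D(P || Q)); P, Q are distributions over the same alphabet."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    mask = P > 0                                   # 0 * log 0 = 0 by convention
    return float(np.exp(-np.sum(P[mask] * np.log(P[mask] / Q[mask]))))

Q = [0.5, 0.3, 0.2]                      # model (illustrative)
for name, P in [("song a", [0.5, 0.3, 0.2]),
                ("song b", [0.6, 0.2, 0.2]),
                ("song c", [0.1, 0.1, 0.8])]:
    print(name, typicality(P, Q))        # closer to Q -> typicality closer to 1
```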

44 , (LDA; ) / 54

45 (2) MFCC, K θ ( ) θ 45 / 54

46 (3) Θ = {θ_1, θ_2, ..., θ_M} (θ_i : ), θ? Θ: θ ~ Dir(α); α estimated from Θ by MCMC 46 / 54

47 (4) :, x 10-3 θ Dir(α) Dir(α) ᾱ, θ X, 100 X = {0,1},, / 54

48 (5) When the model Q is drawn from Dir(α), average the typicality over θ:
$$\mathrm{Typicality}(P \mid \Theta) = \Big\langle \exp(-D(P\,\|\,\theta)) \Big\rangle_{\theta \sim \mathrm{Dir}(\alpha)} \qquad (17)$$
$$= \exp\Big(-\sum_k p_k \log p_k\Big)\, \Big\langle \exp\Big(\sum_{k=1}^{K} p_k \log \theta_k\Big) \Big\rangle_{\theta \sim \mathrm{Dir}(\alpha)} \qquad (18)$$
$$= \exp(H(P))\, \Big\langle \prod_{k=1}^{K} \theta_k^{\,p_k} \Big\rangle_{\theta \sim \mathrm{Dir}(\alpha)} \qquad (19)$$
$$= \frac{\exp(H(P))}{\sum_k \alpha_k} \prod_{k=1}^{K} \frac{\Gamma(\alpha_k + p_k)}{\Gamma(\alpha_k)} \qquad (20)$$
48 / 54
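A minimal sketch of Eq. (20), assuming SciPy is available for gammaln; a Monte Carlo average over Dirichlet draws is included as a check. The distribution p and the hyperparameters α below are illustrative assumptions.

```python
import numpy as np
from scipy.special import gammaln

def typicality_dirichlet(p, alpha):
    """Closed form of Eq. (20): <exp(-D(p || theta))> under theta ~ Dir(alpha)."""
    p, alpha = np.asarray(p, float), np.asarray(alpha, float)
    H = -np.sum(np.where(p > 0, p * np.log(p), 0.0))          # entropy of p
    log_val = H - np.log(alpha.sum()) + np.sum(gammaln(alpha + p) - gammaln(alpha))
    return np.exp(log_val)

p = np.array([0.5, 0.3, 0.2])        # empirical (type) distribution, illustrative
alpha = np.array([2.0, 1.0, 1.0])    # Dirichlet hyperparameters, illustrative

rng = np.random.default_rng(0)
thetas = rng.dirichlet(alpha, size=100000)
mc = np.mean(np.exp(-np.sum(p * np.log(p / thetas), axis=1)))  # Monte Carlo check

print(typicality_dirichlet(p, alpha), mc)
```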

49 JPOP MDB: ,278 RWC MDB ( 100 ) GMM LDA 100 (X = 100) : 50, 50 1:49 49:1 : : 49 / 54

50 ( ) [Figure: results for T1+M1, T2+M2 (previous method), T3+M2, T3+M3, T4+M3 (proposed method), plotted against the mixing ratio (male:female) from 1:49 to 49:1] :, 50 :, (1:49 49:1) ( ), 50 / 54

51 ( ) [Figure: the same comparison of T1+M1, T2+M2 (previous method), T3+M2, T3+M3, T4+M3 (proposed method) against the mixing ratio (male:female) from 1:49 to 49:1] values in [0,1], 51 / 54

52 Twitter, : :??,?, 52 / 54

53 , : CTW + :,,, 53 / 54

54 [1],. [ 11]. 13., [2] Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling. In Proceedings of ICASSP, volume 1, pages , [3] F.M.J. Willems, Y.M. Shtarkov, and T.J. Tjalkens. The Context-Tree Weighting Method: Basic Properties. IEEE Trans. on Information Theory, 41: , [4] Daichi Mochihashi and Eiichiro Sumita. The Infinite Markov Model. In Advances in Neural Information Processing Systems 20 (NIPS 2007), pages , [5],. Pitman-Yor n-gram NL-178, pages 63 70, [6] Tomoyasu Nakano, Daichi Mochihashi, Kazuyoshi Yoshii, and Masataka Goto. Musical Typicality: How Many Similar Songs Exist? In ISMIR 2016, pages , / 54
