
Natural Language Processing Series 1

Preface

This book is the first volume of the Natural Language Processing Series: an introduction to the machine learning techniques used in natural language processing. For readers who wish to go further, Foundations of Statistical NLP (MIT Press, 1999) is recommended as a companion reference.

June 2010

Contents

1.  1.1 / 1.2 (1.2.1-1.2.4) / 1.3 (1.3.1-1.3.4) / 1.4 (1.4.1, 1.4.2) / 1.5 (1.5.1 i.i.d., 1.5.2, 1.5.3) / 1.6 (1.6.1-1.6.5) / 1.7
2.  2.1 / 2.2 n-grams (2.2.1, 2.2.2) / 2.3 (2.3.1, 2.3.2) / 2.4 (2.4.1-2.4.3) / 2.5 (2.5.1, 2.5.2) / 2.6 / 2.7
3.  3.1 / 3.2 / 3.3 k-means / 3.4 / 3.5 EM algorithm / 3.6 / 3.7
4.  4.1 / 4.2 (4.2.1, 4.2.2) / 4.3 (4.3.1, 4.3.2 SVM, 4.3.3 SVM, 4.3.4, 4.3.5) / 4.4 / 4.5 (4.5.1, 4.5.2) / 4.6 (4.6.1, 4.6.2) / 4.7
5.  5.1 / 5.2 (5.2.1 HMM, 5.2.2, 5.2.3 HMM) / 5.3 / 5.4 (5.4.1, 5.4.2) / 5.5 / 5.6
6.  6.1 / 6.2 (6.2.1, 6.2.2) / 6.3 (6.3.1-6.3.6) / 6.4 / 6.5
A.  A.1 / A.2 logsumexp / A.3 KKT condition / A.4
1. Introduction

Natural language processing includes tasks such as word segmentation, part-of-speech tagging, and syntactic parsing. Throughout this chapter, text classification serves as the running example: each individual text to be classified is called an instance, and a collection of texts is called a corpus.

1.1 Notation

Scalars are written in italic (x, y, z), and vectors in bold: x = (x_1, x_2, ..., x_V)^T. The i-th instance in a collection is written with a parenthesized superscript: x^(1), x^(2), x^(3), ...

n(w, d), also written n_{w,d}: the number of occurrences of word w in document d
n(w, s), n_{w,s}: the number of occurrences of w in sentence s
n(w, c), n_{w,c}: the number of occurrences of w in class c
N(w, c), N_{w,c}: the number of documents of class c that contain w
N(c), N_c: the number of documents of class c
δ(w, d), δ_{w,d}: 1 if w occurs in document d, 0 otherwise
δ(w, s), δ_{w,s}: likewise for sentence s
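The counting notation above can be made concrete with a short sketch. This is not from the book; the helper names `n` and `delta` are hypothetical, chosen to mirror the symbols n(w, d) and δ(w, d).

```python
from collections import Counter

def n(w, d):
    """n(w, d): number of occurrences of word w in document d."""
    return Counter(d)[w]

def delta(w, d):
    """delta(w, d): 1 if word w occurs in document d, else 0."""
    return 1 if w in d else 0

# A toy "document" represented as a list of word tokens.
d = ["the", "cat", "sat", "on", "the", "mat"]
print(n("the", d), delta("the", d), delta("dog", d))  # → 2 1 0
```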

1.2 Optimization problems

Many of the methods in this book reduce to solving an optimization problem, stated either as a maximization problem or a minimization problem; we abbreviate "maximize" as max. and "minimize" as min.

Example 1.1. For a constant a, consider

  max. x1 x2   s.t. x1 + x2 - a = 0.

From the constraint, x2 = -x1 + a, so x1 x2 = x1(-x1 + a) = -x1^2 + a x1, which attains its maximum at x1 = a/2, and hence x2 = a/2. The function x1 x2 being maximized is the objective function, and (x1, x2) = (a/2, a/2) is the optimal solution.

In general, an optimization problem is written

  max. f(x)        (1.1)
  s.t. g(x) >= 0   (1.2)
       h(x) = 0.   (1.3)

Here f(x) is the objective function, g(x) >= 0 is an inequality constraint, and h(x) = 0 is an equality constraint; "s.t." stands for "subject to". A point satisfying all the constraints is a feasible solution, and the set of all feasible solutions is the feasible region. A minimization problem min. f(x) can always be rewritten as the maximization problem max. -f(x). In Example 1.1 the solution x1 = a/2, x2 = a/2 was obtained in closed form; such a problem is said to be analytically solvable. Problems that cannot be solved analytically must be solved numerically; an important class for which efficient methods exist is the convex programming problem, treated in Sections 1.2.1 and 1.2.2.
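The toy problem of Example 1.1 can also be solved numerically. The sketch below is not from the book: it substitutes out the equality constraint and runs plain gradient ascent on the resulting one-variable objective f(x1) = -x1^2 + a*x1; the function name and step-size choice are illustrative.

```python
def solve_toy_problem(a, lr=0.1, steps=200):
    """Maximize x1*x2 s.t. x1 + x2 - a = 0 by substituting x2 = a - x1
    and running gradient ascent on f(x1) = -x1**2 + a*x1 (f'(x1) = -2*x1 + a)."""
    x1 = 0.0  # arbitrary starting point
    for _ in range(steps):
        x1 += lr * (-2.0 * x1 + a)  # step along the gradient of f
    x2 = a - x1  # recover x2 from the equality constraint
    return x1, x2

x1, x2 = solve_toy_problem(a=6.0)
print(x1, x2)  # both converge to the closed-form optimum a/2 = 3.0
```

The update contracts toward the fixed point a/2 at rate |1 - 2*lr|, so any lr in (0, 1) converges for this quadratic.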

Numerical solution methods are discussed in Sections 1.2.3 and 1.2.4.

1.2.1 Convex sets

(Fig. 1.1, panels (a) and (b): examples contrasting a convex set with a non-convex set.)

A set A ⊆ R^d is a convex set if, for any x^(1), x^(2) ∈ A and any t ∈ [0, 1],

  t x^(1) + (1 - t) x^(2) ∈ A.

As t ranges over [0, 1], the point t x^(1) + (1 - t) x^(2) traces the line segment connecting x^(1) and x^(2) (see A.1).

Example 1.2. The hyperplane A = {x | m·x + b = 0, x ∈ R^d} is a convex set. For x^(1), x^(2) ∈ A we have m·x^(1) + b = 0 and m·x^(2) + b = 0, so for any t ∈ [0, 1],

  m·(t x^(1) + (1 - t) x^(2)) + b = t m·x^(1) + (1 - t) m·x^(2) + b
                                  = t (m·x^(1) + b) + (1 - t)(m·x^(2) + b) = 0,

hence t x^(1) + (1 - t) x^(2) ∈ A.
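The hyperplane example can be checked numerically as well. This is a minimal sketch, not from the book; the vector m, offset b, and helper names are made up for illustration.

```python
def dot(u, v):
    """Inner product of two vectors given as lists."""
    return sum(ui * vi for ui, vi in zip(u, v))

def on_hyperplane(x, m, b, tol=1e-9):
    """True if x lies on the hyperplane {x | m.x + b = 0} (up to tol)."""
    return abs(dot(m, x) + b) < tol

m, b = [1.0, 2.0, -1.0], 3.0
x1 = [0.0, 0.0, 3.0]   # m.x1 + b = -3 + 3 = 0
x2 = [1.0, -2.0, 0.0]  # m.x2 + b = 1 - 4 + 3 = 0
assert on_hyperplane(x1, m, b) and on_hyperplane(x2, m, b)

# Every convex combination t*x1 + (1-t)*x2 should stay on the hyperplane.
for i in range(11):
    t = i / 10.0
    combo = [t * a + (1 - t) * c for a, c in zip(x1, x2)]
    assert on_hyperplane(combo, m, b)
print("all convex combinations stayed on the hyperplane")
```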
Index

A: accuracy 166; agglomerative clustering 78; analytically solvable 5; argument 180; arithmetic mean 25; attribute 64; attribute value 64
B: Baum-Welch algorithm 160; Bayes' theorem 28; Bayesian inference 58; belief propagation 161; Bernoulli distribution 31; bigram 62; binary classification problem 165; binary vector 65; binary-class dataset 165; binomial distribution 33; bottom-up clustering 79
C: categorization 100; category 100; centroid 81; character n-gram 63; chunking 159; class 100; class label 100; classification 100; classification accuracy 166; classification rule 100; classifier 100; closed-form 5; cluster 78; clustering 78; complete data 90; concave 7; conditional entropy 51; conditional probability 27; conditional probability distribution 27; conditional random fields 153; conditionally independent 30; context vector 72; context window 72; context window size 73; contingency table 167; continuous random variable 37; continuous variable 37; convex function 7; convex programming problem 5; convex set 6; corpus 2; CRF 153; cross-validation 164
D: data sparseness problem 71; dendrogram 79; dependent 30; development data 164; dimension 180; direction vector 182; Dirichlet distribution 39; discrete random variable 22; dual problem 19; dummy word 63
E: eleven point average precision 169; EM algorithm 87; entropy 49; equality constraint 5; event 21; event space 21; Expectation-Maximization algorithm 87; expected value 23
F: F-measure 168; feasible region 5; feasible solution 5; feature 64; feature function 132; feature selection 138; feature value 64; first-order convexity condition 10; forward-backward algorithm 157; frequency vector 65; function 180; functional distance 126
G: Gaussian distribution 38; Gaussian mixture 85; gradient ascent method 13; gradient descent method 13; gradient method 13
H: Hessian 11; hidden Markov model 148; HMM 148
I: i.i.d. 42; incomplete data 90; independent 30; independently, identically distributed 42; inequality constraint 5; information gain 141; inner product 180; instance 2; IOB2 tag 159
J: Jensen-Shannon divergence 54; joint probability 27; JS divergence 54
K: k-means 82; Karush-Kuhn-Tucker condition 184; kernel function 128; kernel method 128; KL divergence 52; Kullback-Leibler divergence 52
L: label 100; labeled data 100; Lagrange multiplier 14; Lagrangian 14; language model 76; latent variable 90; learning 78; learning data 78; learning rate 13; lemmatization 68; likelihood 42; log-likelihood 42; log-linear model 132
M: macro average 172; MAP estimation 46; margin 119; margin maximization 119; marginal probability 29; maximization problem 4; maximum a posteriori estimation 46; maximum entropy model 132; maximum likelihood estimation 43; mean 23; mean vector 25; micro average 172; minimization problem 4; morphological analysis 70; multi-class classification problem 165; multi-class dataset 165; multi-label dataset 165; multinomial distribution 35; multinomial model 110; multivariate Bernoulli distribution 32; multivariate Bernoulli model 102; mutual information 57
N: n-gram 62; naive Bayes classifier 101; negative class 118; negative example 118; negative instance 118; negative semi-definite 11; Newton's method 13; normal distribution 38; normal vector 183; null hypothesis 176; numerical method 12
O: objective function 4; observed variable 90; one-versus-rest method 126; optimal solution 4; optimization problem 4
P: p-value 176; pairwise method 127; part-of-speech tagging 1; partial differentiation 180; PLSA 93; PLSI 93; PMI 56; pointwise mutual information 56; Poisson distribution 36; polynomial kernel 129; Porter's stemmer 68; positive class 118; positive example 118; positive instance 118; positive semi-definite 11; posterior distribution 46; posterior probability 85; precision 167; primal problem 19; prior distribution 46; probabilistic latent semantic analysis 93; probabilistic latent semantic indexing 93; probability density function 37; probability distribution 22; probability function 22; probability mass function 22; product model 98
Q: Q-function 88; quadratic programming problem 122; quasi-Newton method 137
R: radial basis function kernel 130; random variable 21; RBF kernel 130; recall 167; recall-precision curve 167; recall/precision break-even point 169; regularization 134; rule-based method 100
S: saddle point 18; sample mean 25; sample space 31; sample variance 26; scalar 180; scalar function 180; second-order convexity condition 10; semi-supervised learning 144; separating plane 119; sequence 147; sequential labeling 147; sequential minimal optimization 123; sign test 177; significance level 176; significant 176; single-label dataset 165; SMO 123; smoothing 110; sparse 71; spectral clustering 96; statistical test 175; statistically significant 176; stemming 68; stochastic gradient method 137; stopword 68; string kernel 129; supervised learning 101; Support Vector Machine 117; SVM 117; syntactic parsing 1
T: t-test 177; test data 164; test instance 164; text classification 2; the method of Lagrange multipliers 15; token 62; training 78; training data 78; training instance 78; tree kernel 129; trigram 62; type 62
U: unigram 62; unlabeled data 101; unobserved variable 90; unsupervised learning 101
V: value 180; variance 24; vector 180; vector function 180; Viterbi algorithm 150
W: Wilcoxon's signed rank sum test 177; word n-gram 63; word segmentation 1; word sense disambiguation 70; word token 62; word type 62

Introduction to Machine Learning for Natural Language Processing
© Hiroya Takamura 2010
First edition, first printing: August 5, 2010
CORONA PUBLISHING CO., LTD., Tokyo, Japan
ISBN 978-4-339-02751-8
Printed in Japan