An Efficient Algorithm for Variable-Order Linear-Chain CRFs



Abstract

An Efficient Algorithm for Variable-Order Linear-Chain CRFs

Hiroshi MANABE

Conditional Random Fields (CRFs) are among the most popular models for sequence labeling. By applying the maximum entropy model to the whole sequence, a CRF can describe the relationship between the observations and the labels consistently. The Linear-Chain CRF is the most prevalent and practical variant of CRFs; it restricts the model to linear-chain graphs and usually makes a first-order Markov assumption, which allows the model expectations of the feature functions to be computed efficiently with the forward-backward algorithm. Extending the model to a higher-order Markov model, however, has been problematic because of the exponential computational complexity. Ye et al. (2009) presented an algorithm that computes the model expectations in polynomial time, but because its computational complexity is still high, that algorithm is practical only when the higher-order features are sparse. This paper presents a novel algorithm that computes the model expectations of the feature functions more efficiently. When applied to English POS tagging, the resulting model yields a higher precision than the traditional first-order Markov model.

Contents

1 Introduction
2 Existing Models: Hidden Markov Model (HMM), Maximum Entropy Markov Model (MEMM), Linear-Chain Conditional Random Fields (Linear-Chain CRF)
3 Training and Decoding of the Linear-Chain CRF
4 Variable-Order Linear-Chain CRF; related work: Ye (2009), Qian (2009)
5 The Proposed Algorithm
6 Experiments
7 Conclusion
References


1 Introduction

Sequence labeling has been addressed with models such as the Hidden Markov Model (HMM), the Support Vector Machine (SVM), the Maximum Entropy Markov Model (MEMM) [1], and Conditional Random Fields (CRFs) [2]. Among these, the Linear-Chain CRF improves on the MEMM and underlies widely used part-of-speech taggers such as the Stanford Tagger [3]. This thesis extends the Linear-Chain CRF to variable-order label features, building on the higher-order CRF algorithm of Ye et al. [4].

2 Existing Models

Throughout, x denotes the observation sequence and y the label sequence, with x_t and y_t their elements at position t; T is the sequence length and L the number of labels. Positions 0 and T + 1 carry the special labels l_BOS and l_EOS. Generative models define the joint probability P(x, y), while discriminative models define the conditional probability P(y | x). This chapter reviews the Hidden Markov Model (HMM), the Maximum Entropy Markov Model (MEMM), and the Linear-Chain Conditional Random Fields (Linear-Chain CRF).

2.1 Hidden Markov Model (HMM)

The HMM is a generative model that factorizes the joint probability as

P(x, y) = Π_t P(y_t | y_{t-1}) P(x_t | y_t)  (1)

As for the Linear-Chain CRF, marginal probabilities under an HMM are computed with the forward-backward algorithm, and the best label sequence with the Viterbi algorithm.
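To make (1) concrete, here is a minimal Python sketch that evaluates the HMM joint probability in log space; the table names log_trans and log_emit are illustrative, not from the thesis.

def hmm_log_joint(xs, ys, log_trans, log_emit, bos="BOS"):
    # log P(x, y) = sum_t [log P(y_t | y_{t-1}) + log P(x_t | y_t)]  (eq. 1)
    # log_trans[(prev, cur)] and log_emit[(label, obs)] are assumed
    # log-probability tables (hypothetical names).
    total = 0.0
    prev = bos
    for x_t, y_t in zip(xs, ys):
        total += log_trans[(prev, y_t)] + log_emit[(y_t, x_t)]
        prev = y_t
    return total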

2.2 Maximum Entropy Markov Model (MEMM)

Like the CRF, the Maximum Entropy Markov Model (MEMM) models the conditional probability P(y | x) directly. At each position t it conditions the label y_t on the previous label and the observations,

P(y_t | y_{t-1}, x, t)  (2)

P(y_t | y_{t-1}, x, t) = (1 / Z(x, t, y_{t-1})) exp( Σ_i λ_i f_i(x, t, y_{t-1}, y_t) )  (3)

where Z(x, t, y_{t-1}) is the local normalizer that makes the distribution sum to 1, the f_i are feature functions, and the λ_i (-∞ < λ_i < +∞) are their weights. The conditional probability of the whole sequence is the product of the local distributions:

P(y | x) = Π_t P(y_t | y_{t-1}, x, t)  (4)

The weights λ_i can be trained with L-BFGS [5]. Because each step of the MEMM is normalized locally, it compares unfavorably with the HMM in some respects, a weakness that the Linear-Chain CRF of the next section addresses.
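A minimal sketch of the locally normalized distribution (3); weights and the per-label feature enumerator feats are hypothetical names, not from the thesis.

import math

def memm_local_dist(weights, feats, labels):
    # P(y_t | y_{t-1}, x, t) = exp(sum_i lambda_i f_i) / Z(x, t, y_{t-1})  (eq. 3)
    # feats(y) yields the active feature ids for candidate label y, with
    # y_{t-1}, x, t held fixed; weights maps feature id -> lambda_i.
    scores = {y: math.exp(sum(weights.get(f, 0.0) for f in feats(y)))
              for y in labels}
    z = sum(scores.values())  # the local normalizer Z(x, t, y_{t-1})
    return {y: s / z for y, s in scores.items()}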

2.3 Linear-Chain Conditional Random Fields (Linear-Chain CRF)

Conditional Random Fields (CRFs) address the shortcomings of the MEMM by normalizing over the whole sequence. The Linear-Chain CRF is the variant of the CRF restricted to linear-chain graphs. It models the conditional probability as

P(y | x) = (1 / Z(x)) exp( Σ_t Σ_i λ_i f_i(x, t, y_{t-1}, y_t) )  (5)

where Z(x) is the partition function that normalizes the distribution. Each feature function f_i may refer to the label pair (y_{t-1}, y_t); features that depend only on y_t are called unigram features and those that depend on both labels bigram features. The weights λ_i (-∞ < λ_i < +∞) are again trained with L-BFGS [5].
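For contrast with the MEMM, a sketch of the globally normalized score inside (5); feature_fn is a hypothetical enumerator of the active feature ids, and exponentiating the result and dividing by Z(x) would give P(y | x).

def crf_log_score(weights, feature_fn, x, y):
    # sum_t sum_i lambda_i f_i(x, t, y_{t-1}, y_t)  (the exponent of eq. 5);
    # y is assumed to run over positions 0..T+1 with l_BOS and l_EOS attached.
    score = 0.0
    for t in range(1, len(y)):
        score += sum(weights.get(f, 0.0) for f in feature_fn(x, t, y[t - 1], y[t]))
    return score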

3 Training and Decoding of the Linear-Chain CRF

This chapter describes how the Linear-Chain CRF is trained and how it is decoded.

3.1 Training

Given training data T, the weights are estimated by maximizing the L2-regularized conditional log likelihood

L_T = Σ_{(x,ỹ)∈T} log P(ỹ | x) - Σ_i λ_i² / (2σ_reg²)  (6)

where σ_reg controls the strength of the regularization of the λ_i. Writing Ẽ[f_i] for the empirical count of feature f_i in T and E[f_i] for its model expectation, the gradient is

∂L_T / ∂λ_i = Ẽ[f_i] - E[f_i] - λ_i / σ_reg²  (7)

and training drives (7) toward 0 for every i. The model expectations are obtained with the forward-backward algorithm. For each position t and label y, define

α(y, t) := Σ_{y_{0:t-1}} exp( Σ_{t'=1}^{t-1} Σ_i λ_i f_i(x, t', y_{t'-1}, y_{t'}) + Σ_i λ_i f_i(x, t, y_{t-1}, y) )  (8)

β(y, t) := Σ_{y_{t+1:T+1}} exp( Σ_{t'=t+2}^{T+1} Σ_i λ_i f_i(x, t', y_{t'-1}, y_{t'}) + Σ_i λ_i f_i(x, t+1, y, y_{t+1}) )  (9)

with the boundary conditions α(l_BOS, 0) = 1 and β(l_EOS, T + 1) = 1.
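Once the two expectations are available, the regularized gradient (7) is a one-liner per feature; a sketch with illustrative container names:

def gradient(emp_counts, model_counts, lam, sigma_reg):
    # dL_T / dlambda_i = E~[f_i] - E[f_i] - lambda_i / sigma_reg^2  (eq. 7)
    # emp_counts / model_counts map feature id -> count; lam maps id -> lambda_i.
    return {i: emp_counts.get(i, 0.0) - model_counts.get(i, 0.0)
               - lam_i / sigma_reg ** 2
            for i, lam_i in lam.items()}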

α(y, t) and β(y, t) can be computed recursively from α(y, t-1) and β(y, t+1):

α(y, t) = Σ_{y'} α(y', t-1) exp( Σ_i λ_i f_i(x, t, y', y) )  (10)

β(y, t) = Σ_{y'} β(y', t+1) exp( Σ_i λ_i f_i(x, t+1, y, y') )  (11)

so all the α and β values are obtained in O(L²T) time by Algorithm 1 and Algorithm 2.

Algorithm 1 Forward-backward: forward part
  α(l_BOS, 0) ← 1
  for t = 1 to T + 1 do
    for all y_t do
      α(y_t, t) ← Σ_{y'} α(y', t-1) exp( Σ_i λ_i f_i(x, t, y', y_t) )

Algorithm 2 Forward-backward: backward part
  β(l_EOS, T + 1) ← 1
  for t = T downto 0 do
    for all y_t do
      β(y_t, t) ← Σ_{y'} β(y', t+1) exp( Σ_i λ_i f_i(x, t+1, y_t, y') )

From α and β, the probability that positions t-1 and t carry the labels y' and y is

P(y_{t-1} = y', y_t = y) = (1/Z(x)) α(y', t-1) exp( Σ_i λ_i f_i(x, t, y', y) ) β(y, t)  (12)

where Z(x) is the partition function.
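A direct transcription of Algorithms 1 and 2 (and the marginal (12)) into Python, working in probability space for brevity (log space is safer numerically); log_potential(t, prev, cur) stands for Σ_i λ_i f_i(x, t, prev, cur) and is assumed given.

import math

def forward_backward(T, labels, log_potential, BOS="BOS", EOS="EOS"):
    # Positions run 0..T+1 with l_BOS at 0 and l_EOS at T+1.
    def cands(t):
        if t == 0: return [BOS]
        if t == T + 1: return [EOS]
        return labels

    alpha = {(BOS, 0): 1.0}                       # Algorithm 1
    for t in range(1, T + 2):
        for y in cands(t):
            alpha[(y, t)] = sum(alpha[(yp, t - 1)] * math.exp(log_potential(t, yp, y))
                                for yp in cands(t - 1))
    beta = {(EOS, T + 1): 1.0}                    # Algorithm 2
    for t in range(T, -1, -1):
        for y in cands(t):
            beta[(y, t)] = sum(beta[(yn, t + 1)] * math.exp(log_potential(t + 1, y, yn))
                               for yn in cands(t + 1))
    Z = alpha[(EOS, T + 1)]                       # = beta[(BOS, 0)], eq. (13)

    def marginal(t, yp, y):                       # eq. (12)
        return alpha[(yp, t - 1)] * math.exp(log_potential(t, yp, y)) * beta[(y, t)] / Z

    return alpha, beta, Z, marginal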

The partition function is obtained as a by-product of the forward-backward computation:

Z(x) = α(l_EOS, T + 1) = β(l_BOS, 0)  (13)

3.2 Decoding

Decoding finds, for an input x, the label sequence ŷ with the highest conditional probability; this is done with the Viterbi algorithm:

ŷ = argmax_y (1/Z(x)) exp( Σ_t Σ_i λ_i f_i(x, t, y_{t-1}, y_t) )
  = argmax_y exp( Σ_t Σ_i λ_i f_i(x, t, y_{t-1}, y_t) )  (14)

Define p(y_t, t) and q(y_t, t) by

p(y_t, t) := max_{y_{t-1}} p(y_{t-1}, t-1) exp( Σ_i λ_i f_i(x, t, y_{t-1}, y_t) )  (15)

q(y_t, t) := argmax_{y_{t-1}} p(y_{t-1}, t-1) exp( Σ_i λ_i f_i(x, t, y_{t-1}, y_t) )  (16)

p holds the score of the best partial labeling ending in y_t at position t, and q the back pointer that attains it. The tables are filled by Algorithm 3, and Algorithm 4 follows the back pointers q to recover ŷ. The whole procedure takes O(L²T) time.

Algorithm 3 Viterbi algorithm (1)
  p(l_BOS, 0) ← 1
  for t = 1 to T + 1 do
    for all y_t do
      p(y_t, t) ← max_{y_{t-1}} ( p(y_{t-1}, t-1) exp( Σ_i λ_i f_i(x, t, y_{t-1}, y_t) ) )
      q(y_t, t) ← argmax_{y_{t-1}} ( p(y_{t-1}, t-1) exp( Σ_i λ_i f_i(x, t, y_{t-1}, y_t) ) )

Algorithm 4 Viterbi algorithm (2)
  for t = T to 1 do
    y_t ← q(y_{t+1}, t + 1)
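The corresponding sketch of Algorithms 3 and 4, reusing the conventions of the forward-backward sketch above (log_potential is the same assumed function).

def viterbi(T, labels, log_potential, BOS="BOS", EOS="EOS"):
    def cands(t):
        if t == 0: return [BOS]
        if t == T + 1: return [EOS]
        return labels

    p = {(BOS, 0): 0.0}   # best log scores (eq. 15), kept in log space
    q = {}                # back pointers (eq. 16)
    for t in range(1, T + 2):
        for y in cands(t):
            best = max(cands(t - 1),
                       key=lambda yp: p[(yp, t - 1)] + log_potential(t, yp, y))
            p[(y, t)] = p[(best, t - 1)] + log_potential(t, best, y)
            q[(y, t)] = best
    ys, y = [], EOS       # trace back from l_EOS (Algorithm 4)
    for t in range(T + 1, 0, -1):
        y = q[(y, t)]
        ys.append(y)
    ys.reverse()
    return ys[1:]         # drop l_BOS, keep y_1..y_T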

4 Variable-Order Linear-Chain CRF

The first-order Linear-Chain CRF restricts label patterns to length at most 2. An order-n model allows features over n + 1 consecutive labels, but the naive extension multiplies the state space: each position must distinguish all L^n label histories. With the label set {a, b, c} there are already 9 histories of length 2, and a pattern such as acb is distinct from cba, cbb, and cbc; the forward-backward computation over the extended lattice handles L^{n+1} transitions per position and takes O(L^{n+1} T) time, which quickly becomes impractical as n grows.

4.1 Variable-order features

Consider the label set {a, b, c} with the order a > b > c and the nine bigram patterns aa, ab, ac, ba, bb, bc, ca, cb, cc. Not all of them need to be modeled at the bigram level: if, say, only ca and cc are kept as bigram patterns, the remaining contexts are covered by the unigram patterns a, b, and c, to which a string falls back when no longer pattern matches (cb, for instance, falls back to b, while ca and cc keep their own parameters). In a preliminary comparison run with CRFSuite [6], the two settings reached accuracies of 96.06% and 96.00%. A Linear-Chain CRF whose features carry label patterns of varying lengths is called a variable-order Linear-Chain CRF. Algorithms for CRFs with higher-order features have been given by Ye et al. [4] and, under the name Sparse Higher Order CRFs (SHO-CRFs), by Qian et al. [7].

4.2 Related work

Ye (2009). Ye et al. [4] presented an algorithm that computes the model expectations of the higher-order features of a Linear-Chain CRF in polynomial time, with time complexity O(M^3 K^4 X) as against O(MKX) in the favorable case. Because the complexity is still high, the method is practical only when the higher-order features are sparse.

Qian (2009). Qian et al. proposed Sparse Higher Order CRFs (SHO-CRFs) [7], which also exploit feature sparsity; their computation manipulates products of partial partition functions of the form Z_{s:t} = Z_s Z_{s+1} ··· Z_t.

5 The Proposed Algorithm

This chapter presents the proposed training and decoding algorithms for the variable-order Linear-Chain CRF.

5.1 Notation

Let x and y be the observation and label sequences, both of length T, and let Y = {l_1, ..., l_L} be the label set. Positions 0 and T + 1 carry the special labels l_BOS and l_EOS. The candidate set at position t is

Y_t = {l_BOS}  if t = 0
      {l_EOS}  if t = T + 1
      Y        otherwise  (17)

For a label string z over positions n, ..., m we require |z| = m - n + 1 and z_t ∈ Y_{n+t-1} for all t with 1 ≤ t ≤ m - n + 1, and write z ∈ Y_{n:m}. For example, with L = 3 and T = 2 we have Y_0 = {l_BOS}, Y_1 = Y_2 = {l_1, l_2, l_3}, Y_3 = {l_EOS}, and

{z | z ∈ Y_{0:3}} = {l_BOS l_1 l_1 l_EOS, l_BOS l_1 l_2 l_EOS, l_BOS l_1 l_3 l_EOS, l_BOS l_2 l_1 l_EOS, l_BOS l_2 l_2 l_EOS, l_BOS l_2 l_3 l_EOS, l_BOS l_3 l_1 l_EOS, l_BOS l_3 l_2 l_EOS, l_BOS l_3 l_3 l_EOS}

For a string z, z_{i:j} denotes the substring from its i-th through its j-th element; if j < i, z_{i:j} is the empty string.

We write ϵ for the empty string. For strings z_1 and z_2, z_1 + z_2 denotes their concatenation: z_3 = z_1 + z_2 means that z_3 consists of z_1 followed by z_2. We write z_1 ≤_s z_2 when z_1 is a suffix of z_2 (allowing z_1 = z_2); in particular ϵ ≤_s z for every z. We write z_1 <_s z_2 when z_1 is a strict suffix of z_2, i.e. z_1 ≤_s z_2 and z_1 ≠ z_2. The suffix relation satisfies

z_1 <_s z_2 and z_1 ≠ ϵ imply (z_1)_{1:|z_1|-1} <_s (z_2)_{1:|z_2|-1}  (18)

(removing the last element of both strings preserves the relation), and any two suffixes of a common string are comparable:

z_1 <_s z_3 and z_2 <_s z_3 imply z_1 <_s z_2 or z_2 ≤_s z_1  (19)

For a string z_1 and a set of strings S, s(z_1, S) denotes the longest strict suffix of z_1 contained in S:

s(z_1, S) = z_2 if and only if z_2 ∈ S and z_2 <_s z_1 and (∀z ∈ S) z <_s z_1 implies z ≤_s z_2  (20)

Similarly, S(z_1, S) denotes the longest suffix of z_1 in S, where z_1 itself is allowed:

S(z_1, S) = z_2 if and only if z_2 ∈ S and z_2 ≤_s z_1 and (∀z ∈ S) z ≤_s z_1 implies z ≤_s z_2  (21)

In particular, z ∈ S implies S(z, S) = z.
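These definitions translate directly into code; a sketch with label patterns represented as tuples (s(·,·) is (20) and S(·,·) is (21)).

def is_suffix(z1, z2):
    # z1 <=_s z2: z1 is a suffix of z2 (tuples of labels)
    return len(z1) <= len(z2) and z2[len(z2) - len(z1):] == z1

def longest_proper_suffix(z, S):
    # s(z, S) of eq. (20); S is assumed to contain the empty tuple,
    # so the lookup always succeeds.
    best = ()
    for cand in S:
        if cand != z and is_suffix(cand, z) and len(cand) > len(best):
            best = cand
    return best

def longest_suffix(z, S):
    # S(z, S) of eq. (21): like s(z, S), but z itself is allowed.
    return z if z in S else longest_proper_suffix(z, S)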

Let f_1, ..., f_m be the feature functions. Each f_i factors into an observation predicate b_i(x, t) and a label predicate L_i(y, t):

f_i(x, y, t) = b_i(x, t) L_i(y, t)  (22)

The label predicate L_i is determined by a label pattern z^i with |z^i| ≥ 1:

L_i(y, t) = 1 if y_{t-|z^i|+1:t} = z^i, and 0 otherwise  (23)

For example, if z^i = l_1 l_2, then L_i(y, t) = 1 exactly when y_{t-1} = l_1 and y_t = l_2. We call z^i the label pattern of f_i. The model expectation of f_i is

E[f_i] = Σ_{(x,ỹ)∈T} Σ_y P(y | x) Σ_{t=|z^i|-1}^{T+1} f_i(x, y, t)  (24)

where T is the training set. Computing (24) efficiently is the goal of the rest of this chapter.
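A sketch of the factored feature value (22)-(23), with labels stored in a 0-based list and a hypothetical observation predicate b:

def pattern_matches(z, y, t):
    # L_i(y, t) of eq. (23): 1 iff y_{t-|z|+1:t} equals the pattern z
    if len(z) > t + 1:
        return 0
    return 1 if tuple(y[t - len(z) + 1: t + 1]) == tuple(z) else 0

def feature_value(b, z, x, y, t):
    # f_i(x, y, t) = b_i(x, t) * L_i(y, t)  (eq. 22)
    return b(x, t) * pattern_matches(z, y, t)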

The weights λ_i are again trained with L-BFGS [5].

5.2 Computing the Model Expectations

Figure 1 shows a running example with T = 3 and positions t = 0 through T + 1, where positions 0 and T + 1 carry l_BOS and l_EOS. The label set is Y = {X, Y, Z}, with l_BOS = B and l_EOS = E. The observation predicates (the b_i) active at each token are written a0-a4, and an entry such as a3:[YZ]:2.80 denotes the feature f_i whose observation predicate is a3 and whose label pattern (the L_i part) is YZ, with the weight displayed as exp(λ_i) = 2.80; all weights in the figure are shown exponentiated.

5.2.1 Pattern sets

For each position t, define the pattern set P_t by

P_t = Y_t ∪ {ϵ} ∪ ⋃_{t'=t}^{T+1} { z^k_{1:|z^k|-(t'-t)} | b_k(x, t') = 1, |z^k| > t' - t }  (25)

that is, P_t contains the empty string ϵ, the labels available at t, and, for every feature firing at a position t' ≥ t, the prefix of its label pattern that ends at position t.

Every pattern of P_t becomes a node of the lattice at position t, and for each node the quantities exp(w), exp(W), σ, and α, γ, β, δ (all defined below) are maintained; Figure 1 shows the exponentiated values. Because of the factorization (22) into b_i and L_i, the features relevant at position t are exactly those whose observation predicate fires there, bucketed by label pattern. For example, at t = 3 the active observation predicates are a0 and a3, contributing the twelve features a0:[X]:0.10, a0:[Y]:0.20, a0:[Z]:0.30, a0:[XY]:0.70, a0:[XYZ]:0.80, a0:[YX]:0.90, a0:[YZ]:1.00, a0:[ZY]:1.10, a3:[YYY]:2.50, a3:[Y]:2.60, a3:[Z]:2.70, and a3:[YZ]:2.80, whose label patterns yield the nine patterns X, Y, Z, YX, XY, YYY, ZY, YZ, XYZ; together with ϵ, P_3 has ten elements. The node YZ at t = 3 carries the two features a0:[YZ]:1.00 and a3:[YZ]:2.80. By (25) a pattern can also belong to P_t without any feature at t having exactly that pattern: since a3:[YYY] fires at t = 3, its prefix YY, which ends at position 2, belongs to P_2 even though no feature at t = 2 has label pattern YY.

By (25),

z ∈ P_t, z ≠ ϵ, t > 0 imply z_{1:|z|-1} ∈ P_{t-1}  (26)

i.e. removing the last label of a pattern of P_t yields a pattern of P_{t-1}. The pattern sets summarize histories completely: for any label string z ending at position t and any continuation z' ∈ Y_{t+1:T+1},

Σ_{t'=t}^{T+1} Σ_i f_i(x, z + z', t') = Σ_{t'=t}^{T+1} Σ_i f_i(x, S(z, P_t) + z', t')  (27)

where f_i(x, z + z', t') abbreviates the value of f_i on any label sequence ending with z + z' (well defined whenever the pattern z^i fits inside the available suffix). To see (27), consider a feature f_i with b_i(x, t') = 1 that could match at a position t' ≥ t, and let z'' be the part of its label pattern that ends at position t. By (25) we have z'' ∈ P_t, and since S(z, P_t) is the longest suffix of z contained in P_t, (21) gives z'' ≤_s z if and only if z'' ≤_s S(z, P_t); hence exactly the same features fire on both sides.

Figure 1: A worked example. The input is (BOS) "This" "is" "good" (EOS); the active observation predicates are a0, a1, a2 on "This", a0, a1 on "is", a0, a3 on "good", and a0, a4 on (EOS). The feature set, with each weight shown as exp(λ_i), is: a0:[X]:0.10, a0:[Y]:0.20, a0:[Z]:0.30, a0:[E]:0.40, a0:[BX]:0.50, a0:[BXY]:0.60, a0:[XY]:0.70, a0:[XYZ]:0.80, a0:[YX]:0.90, a0:[YZ]:1.00, a0:[ZY]:1.10, a0:[XE]:1.20, a0:[YE]:1.30, a0:[ZE]:1.40, a0:[ZYE]:1.50, a0:[YZE]:1.60, a1:[X]:1.70, a1:[Y]:1.80, a1:[Z]:1.90, a1:[BX]:2.00, a1:[XY]:2.10, a2:[X]:2.20, a2:[Y]:2.30, a2:[Z]:2.40, a3:[YYY]:2.50, a3:[Y]:2.60, a3:[Z]:2.70, a3:[YZ]:2.80, a4:[ZE]:2.90. Every lattice node is a pattern of P_t, annotated with its values of exp(w), exp(W), σ, α, γ, β, and δ.

5.2.2 Pattern weights

For a pattern z_p ∈ P_t, define

w(z_p, t) := Σ_{i: z^i = z_p, b_i(x,t)=1} λ_i  (28)

the summed weight of the features whose label pattern is exactly z_p and whose observation predicate fires at t. Figure 1 displays the exponentiated values: the pattern XY at t = 2 carries the features a0:[XY]:0.70 and a1:[XY]:2.10, so exp(w) = 0.70 × 2.10 = 1.47. Define further

W(z_p, t) := Σ_{i: z^i ≤_s z_p, b_i(x,t)=1} λ_i  (29)

the summed weight over all features whose pattern is a suffix of z_p. Comparing (29) with (28),

W(z_p, t) = Σ_{z^i ∈ P_t, z^i ≤_s z_p} w(z^i, t)  (30)

For example, at t = 3 the pattern YX has exp(w) = 0.90 and its suffix X has exp(w) = 0.10, so exp(W) = 0.90 × 0.10 = 0.09. The W values need not be recomputed from scratch: for z_p ≠ ϵ,

W(z_p, t) = W(s(z_p, P_t), t) + w(z_p, t)

since by (20) every pattern of P_t that is a strict suffix of z_p is also a suffix of s(z_p, P_t).
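The recurrence W(z_p, t) = W(s(z_p, P_t), t) + w(z_p, t) can be applied to all patterns of one position in a single shortest-first pass; a sketch reusing the suffix helpers above, under the assumption (marked in the code) that no feature has an empty label pattern:

def pattern_weights(P_t, w):
    # W(z, t) = W(s(z, P_t), t) + w(z, t)  (cf. eqs. 29-30)
    # w maps pattern -> summed lambda of the features with exactly that
    # pattern firing at t (eq. 28); shortest-first order is a valid
    # topological order because s(z, P_t) is strictly shorter than z.
    W = {(): 0.0}  # assumes no feature carries the empty pattern
    for z in sorted(P_t, key=len):
        if z == ():
            continue
        W[z] = W[longest_proper_suffix(z, P_t)] + w.get(z, 0.0)
    return W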

5.2.3 Forward quantities

For z_p ∈ P_t with z_p ≠ ϵ and t ≥ 1, define

α(z_p, t) := Σ_{z ∈ Y_{0:t}, z_p ≤_s z} exp( Σ_{t'=1}^{t-1} Σ_{i: f_i(x,z,t')=1} λ_i ) - Σ_{z ∈ Y_{0:t}, ∃z'∈P_t: s(z',P_t)=z_p, z' ≤_s z} exp( Σ_{t'=1}^{t-1} Σ_{i: f_i(x,z,t')=1} λ_i )  (31)

α(z_p, t) accumulates, over the label strings z ∈ Y_{0:t} whose longest suffix in P_t is exactly z_p, the exponentiated total weight of the features firing up to position t-1; the weights at position t itself are not yet included. We set α(ϵ, t) = 0 for t ≥ 1, since every string ends with at least a unigram pattern. For example, in Figure 1 the pattern X at t = 1 has α = 0: every string ending in X also ends in BX, and s(BX, P_1) = X, so the whole mass belongs to the node BX. At t = 2, α(X, 2) = 1.74 covers the strings ending in XX or ZX, but not those ending in YX, which has a node of its own.

For z_p ∈ P_t, define further

γ(z_p, t) := Σ_{z ∈ Y_{0:t}, z_p ≤_s z} exp( Σ_{t'=1}^{t} Σ_{i: f_i(x,z,t')=1} λ_i )  (32)

γ(z_p, t) accumulates the exponentiated weight, now including position t, over all strings ending with z_p, not only those matching it exactly. The boundary values are γ(ϵ, 0) = γ(l_BOS, 0) = 1. In Figure 1, γ(X, 2) = 0.42.

α and γ satisfy the recurrences

α(z_p, t) = γ(z_{p,1:|z_p|-1}, t-1) - Σ_{z ∈ P_t, s(z,P_t)=z_p} γ(z_{1:|z|-1}, t-1)  (33)

γ(z_p, t) = Σ_{z ∈ P_t, z_p ≤_s z} α(z, t) exp(W(z, t))  (34)

In (33), removing the last label of a string counted by α(z_p, t) leaves a string ending with z_{p,1:|z_p|-1} at position t-1, and by (26) that prefix belongs to P_{t-1}, so γ(z_{p,1:|z_p|-1}, t-1) is available; the subtracted terms remove the strings whose longest match at t is some longer pattern z with s(z, P_t) = z_p. In (34), the strings ending with z_p are partitioned by their exact longest match z ∈ P_t, and each such string receives the position-t weight exp(W(z, t)), since exactly the features whose patterns are suffixes of z fire on it. In Figure 1 at t = 2, γ(YX, 2) = α(YX, 2) exp(W(YX, 2)) = 0.83 × 0.15 ≈ 0.13, and γ(X, 2) = α(X, 2) exp(W(X, 2)) + γ(YX, 2) = 1.74 × 0.17 + 0.13 ≈ 0.42; the strings ending with X split into those whose longest match is X itself and those ending with YX.

For z_p ∈ P_t, define the backward quantity

β(z_p, t) := Σ_{z ∈ Y_{t+1:T+1}} exp( Σ_{t'=t+1}^{T+1} Σ_{i: f_i(x, z_p+z, t')=1} λ_i )  (35)

the total exponentiated weight of the features firing at positions t+1 through T+1, over all continuations z, given that the string so far ends with z_p; here f_i(x, z_p+z, t') is the shorthand introduced for (27). The boundary value is β(z, T+1) = 1.

For z_p ∈ P_t with t ≤ T, define

δ(z_p, t) := Σ_{z ∈ Y_{t+1:T+1}} ( exp( Σ_{t'=t+1}^{T+1} Σ_{i: f_i(x, z_p+z, t')=1} λ_i ) - exp( Σ_{t'=t+1}^{T+1} Σ_{i: f_i(x, s(z_p,P_t)+z, t')=1} λ_i ) )  (36)

the amount by which the backward mass of z_p exceeds that of its fallback pattern s(z_p, P_t), summed over all continuations from t+1 to T+1; the boundary value is δ(ϵ, T+1) = 1. β and δ satisfy

δ(z_p, t) = Σ_{z ∈ P_{t+1}, z_{1:|z|-1} = z_p} ( β(z, t+1) exp(W(z, t+1)) - β(s(z, P_{t+1}), t+1) exp(W(s(z, P_{t+1}), t+1)) )  (37)

β(z_p, t) = Σ_{z ∈ P_t, z ≤_s z_p} δ(z, t)  (38)

To verify (37), expand (36) over the label z' at position t+1 and the remaining continuation (39), and rewrite the feature sums through the longest-suffix operator (40); the key fact is

s(z_p + z', P_{t+1}) ≤_s s(z_p, P_t) + z'  (41)

By (20), s(z_p + z', P_{t+1}) <_s z_p + z' and s(z_p, P_t) + z' <_s z_p + z', so by (19) the two strings are comparable. If s(z_p, P_t) + z' <_s s(z_p + z', P_{t+1}) held, then z = s(z_p + z', P_{t+1}) would satisfy z ∈ P_{t+1} and z ≠ ϵ, hence z_{1:|z|-1} ∈ P_t by (26), and by (18) s(z_p, P_t) <_s z_{1:|z|-1} <_s z_p, contradicting the maximality in (20). This proves (41). Furthermore, for the patterns lying between the two sides, (20) applied at t+1 gives

S(s(z_p, P_t) + z', P_{t+1}) = s(z_p + z', P_{t+1})  (42)

and whenever z_p + z' ∉ P_{t+1}, (21) gives

S(z_p + z', P_{t+1}) = s(z_p + z', P_{t+1})  (43)

Substituting (42) and (43) into (40), all the terms with z_p + z' ∉ P_{t+1} cancel, and collecting the remaining terms with (35) and the definition (29) of W yields (44), which is exactly the recurrence (37). For example, at t = 3 in Figure 1, δ(X, 3) = β(XE, 4) exp(W(XE, 4)) - β(E, 4) exp(W(E, 4)) = 1.00 × 0.48 - 1.00 × 0.40 = 0.08, and β(X, 3) = δ(X, 3) + δ(ϵ, 3) = 0.08 + 0.40 = 0.48.

Finally, for z_p ∈ P_t define

θ(z_p, t) := α(z_p, t) exp(W(z_p, t)) β(z_p, t)  (45)

σ(z_p, t) := Σ_{z ∈ P_t, z_p ≤_s z} θ(z, t)  (46)

θ(z_p, t) is the total unnormalized weight, over complete label sequences from position 0 to T+1, of the sequences whose longest pattern match at position t is exactly z_p; σ(z_p, t) is the total unnormalized weight of the sequences that end with z_p at position t. In Figure 1, θ(X, 2) = α exp(W) β = 1.74 × 0.17 × 1.55 ≈ 0.46 covers the sequences whose labels at positions 1-2 end in XX or ZX (ending in X but not in YX), and σ(X, 2) = θ(X, 2) + θ(YX, 2) ≈ 0.66 covers all sequences whose label at position 2 is X. The σ values are accumulated from the longer patterns toward the shorter ones: in Algorithm 6 below, σ(z_p, t) is folded into σ(s(z_p, P_t), t) once all extensions of z_p have been folded into σ(z_p, t). The partition function comes from γ,

Z(x) = γ(ϵ, T + 1)  (47)

and for any z_p ∈ P_t,

P(z_p ≤_s y_{1:t} | x) = σ(z_p, t) / Z(x)  (48)

which is exactly the marginal needed for the feature expectations. For t = 1, ..., T + 1, the pattern sets P_t and the feature buckets F_{z,t} (the features with label pattern z whose observation predicate fires at t) are built by Algorithm 5. By construction every z ≠ ϵ in P_t has z_{1:|z|-1} ∈ P_{t-1}, which is property (26); within each position the patterns are kept sorted so that z_1 precedes z_2 whenever z_1 <_s z_2, and the fallback pattern s(z, P_t) of each node is available.

Algorithm 5 Make paths
  for t = 1 to T + 1 do
    F ← {f_k | b_k(x, t) = 1}
    for all f_i ∈ F do
      F_{z^i,t} ← F_{z^i,t} ∪ {f_i}
      for j = 0 to |z^i| do
        P_{t-j} ← P_{t-j} ∪ {z^i_{1:|z^i|-j}}

Within each P_t the patterns are maintained both in ascending and in descending suffix order. Algorithm 6 then computes all the quantities of the preceding subsections in one sweep. The weights w(z, t) start at 0 and are filled from the feature buckets; the α and γ values are propagated forward with (33) and (34), where for a pattern z to which no longer pattern falls back, (33) degenerates to α(z, t) = γ(z_{1:|z|-1}, t-1); the β and δ values are propagated backward with (37) and (38). Apart from the exp evaluations, the total cost is

O( Σ_{t=1}^{T+1} Σ_{z_p ∈ P_t} |F_{z_p,t}| + Σ_{t=1}^{T+1} |P_t| )  (49)

i.e. linear in the number of active features and lattice nodes.
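A sketch of Algorithm 5 with patterns as tuples; active_features(t) is a hypothetical iterator over the pairs (feature id, label pattern) with b_k(x, t) = 1:

from collections import defaultdict

def make_paths(T, active_features):
    # Builds the pattern sets P_t and the feature buckets F[z, t].
    # Every prefix z_{1:|z|-j} of a pattern firing at t is added to P_{t-j},
    # which guarantees property (26).
    P = defaultdict(set)
    F = defaultdict(list)
    for t in range(1, T + 2):
        P[t].add(())  # epsilon is always present
        for fid, z in active_features(t):
            F[(z, t)].append(fid)
            for j in range(len(z) + 1):
                P[t - j].add(z[: len(z) - j])
    # note: (25) also puts every single label of Y_t into P_t; here we
    # assume unigram features cover them, as in the Figure 1 example.
    return P, F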

Algorithm 6 Sum-difference
  for t = 1 to T + 1 do
    for all z (≠ ϵ) ∈ P_t in ascending order do
      for all f_i ∈ F_{z,t} do
        w(z, t) ← w(z, t) + λ_i
      W(z, t) ← W(s(z, P_t), t) + w(z, t)
  γ(ϵ, 0) ← 1, γ(l_BOS, 0) ← 1
  for t = 1 to T + 1 do
    for all z (≠ ϵ) ∈ P_t in descending order do
      α(s(z, P_t), t) ← α(s(z, P_t), t) - γ(z_{1:|z|-1}, t-1)
      α(z, t) ← α(z, t) + γ(z_{1:|z|-1}, t-1)
    α(ϵ, t) ← 0
    for all z (≠ ϵ) ∈ P_t in descending order do
      γ(z, t) ← γ(z, t) + α(z, t) exp(W(z, t))
      γ(s(z, P_t), t) ← γ(s(z, P_t), t) + γ(z, t)
  Z(x) ← γ(ϵ, T + 1)
  δ(ϵ, T + 1) ← 1
  for t = T + 1 downto 1 do
    β(ϵ, t) ← δ(ϵ, t)
    for all z (≠ ϵ) ∈ P_t in ascending order do
      β(z, t) ← β(s(z, P_t), t) + δ(z, t)
    for all z (≠ ϵ) ∈ P_t do
      δ(z_{1:|z|-1}, t-1) ← δ(z_{1:|z|-1}, t-1) + β(z, t) exp(W(z, t)) - β(s(z, P_t), t) exp(W(s(z, P_t), t))
  for t = 1 to T + 1 do
    for all z (≠ ϵ) ∈ P_t in descending order do
      θ(z, t) ← α(z, t) exp(W(z, t)) β(z, t)
      σ(z, t) ← σ(z, t) + θ(z, t)
      σ(s(z, P_t), t) ← σ(s(z, P_t), t) + σ(z, t)
      for all f_i ∈ F_{z,t} do
        E[f_i] ← E[f_i] + σ(z, t) / Z(x)

(In Algorithm 6, every quantity, including α and β, is implicitly initialized to 0.)

5.3 Decoding

Decoding for the variable-order Linear-Chain CRF again searches for the best label sequence:

ŷ = argmax_y (1/Z(x)) exp( Σ_t Σ_i λ_i f_i(x, y, t) )
  = argmax_y exp( Σ_t Σ_i λ_i f_i(x, y, t) )  (50)

Two decoders are presented. The forward decoder runs in O( Σ_{t=1}^{T+1} Σ_{i: b_i(x,t)=1} 1 + L Σ_{t=1}^{T+1} Σ_{z_p ∈ P_t} 1 ) time, the backward decoder in O( Σ_{t=1}^{T+1} Σ_{i: b_i(x,t)=1} 1 + log L Σ_{t=1}^{T+1} Σ_{z_p ∈ P_t} 1 ) time. Both first compute the aggregate weights W with Algorithm 7.

5.3.1 Forward decoding

For z_p ∈ P_t define

p(z_p, t) := max_{z ∈ P_{t-1}: z_{p,1:|z_p|-1} ≤_s z, ¬∃z'∈P_t (z_p <_s z' and z'_{1:|z'|-1} ≤_s z)} p(z, t-1) exp(W(z_p, t))  (51)

q(z_p, t) := argmax_{z ∈ P_{t-1}: z_{p,1:|z_p|-1} ≤_s z, ¬∃z'∈P_t (z_p <_s z' and z'_{1:|z'|-1} ≤_s z)} p(z, t-1)  (52)

The maximization ranges over exactly those states z at position t-1 from which appending the last label of z_p leads to the state z_p rather than to a longer pattern.

Algorithm 7 Calculate weights
  for t = 1 to T + 1 do
    for all z (≠ ϵ) ∈ P_t in ascending order do
      for all f_i ∈ F_{z,t} do
        w(z, t) ← w(z, t) + λ_i
      W(z, t) ← W(s(z, P_t), t) + w(z, t)

p(z_p, t) is the score of the best partial labeling up to position t whose state, i.e. the longest pattern matched, at t is z_p, and q(z_p, t) records the state at t-1 through which it is reached. After Algorithm 5 and Algorithm 7 have been run, Algorithm 8 fills p and q in a forward sweep, maintaining running maxima r and back pointers u over the previous position; Algorithm 9 then follows the pointers q backward to output ŷ. The inner work of Algorithm 8 is O(L) per node, which accounts for the factor L in the running time above.

Algorithm 8 Decode-forward(1)
  p(ϵ, 0) ← 1, p(l_BOS, 0) ← 1
  for t = 1 to T + 1 do
    for all z (≠ ϵ) ∈ P_t in descending order do
      if this is the first iteration or the last label in z has changed then
        z' ← the first element of P_{t-1} in descending order
        for all z'' ∈ P_{t-1} do
          r(z'', t-1) ← p(z'', t-1)
          u(z'', t-1) ← z''
      end if
      while z' ≠ z_{1:|z|-1} do
        if r(z', t-1) > r(s(z', P_{t-1}), t-1) then
          r(s(z', P_{t-1}), t-1) ← r(z', t-1)
          u(s(z', P_{t-1}), t-1) ← u(z', t-1)
        end if
        z' ← the next element of P_{t-1} in descending order
      end while
      p(z, t) ← r(z_{1:|z|-1}, t-1) exp(W(z, t))
      q(z, t) ← u(z_{1:|z|-1}, t-1)

Algorithm 9 Decode-forward(2)
  z ← argmax_{z ∈ P_{T+1}} p(z, T + 1)
  for t = T + 1 downto 1 do
    y_t ← z_{|z|}
    z ← q(z, t)
  y_0 ← l_BOS

5.3.2 Backward decoding

Alternatively, decoding can proceed backward. For z_p ∈ P_t define

p(z_p, t) := max_{z ∈ P_{t+1}: z_{1:|z|-1} ≤_s z_p, ¬∃z'∈P_{t+1} (z <_s z' and z'_{1:|z'|-1} ≤_s z_p)} p(z, t+1) exp(W(z_p, t))  (53)

q(z_p, t) := argmax_{z ∈ P_{t+1}: z_{1:|z|-1} ≤_s z_p, ¬∃z'∈P_{t+1} (z <_s z' and z'_{1:|z'|-1} ≤_s z_p)} p(z, t+1)  (54)

p(z_p, t) is the best score obtainable from position t onward when the state at t is z_p, and q(z_p, t) the successor state at t+1 that attains it. Algorithm 10 computes p and q with auxiliary tables r, u, v, l and a heap H supporting Heap-insert, Heap-delete, and Heap-max; Heap-insert and Heap-delete cost O(log L) and Heap-max O(1). For each t, define

Q_t = { z_{1:|z|-1} | z ∈ P_t }  (55)

the set of patterns of P_t with the last label removed. For a pattern z_1 and a label l, we write z_2 = z_1 + l for the pattern obtained by appending l. After Algorithm 5 and Algorithm 7 have been run, the backward pass is carried out by Algorithm 10, and Algorithm 11 follows the pointers q to output ŷ.

Algorithm 10 Decode-backward(1)
  v(ϵ, T + 1) ← 1
  for t = T + 1 downto 1 do
    for all z (≠ ϵ) ∈ P_t in ascending order do
      if v(z, t) = 0 then
        v(z, t) ← v(s(z, P_t), t)
        q(z, t) ← q(s(z, P_t), t)
      end if
      p(z, t) ← v(z, t) exp(W(z, t))
    H ← empty heap
    for all z ∈ P_t with |z| = 1 do
      Heap-insert(H, z_1, p(z, t))
      u(z_1) ← p(z, t)
    z' ← ϵ
    for all z_p (≠ ϵ) ∈ Q_t in ascending order do
      while z' ≠ s(z_p, Q_t) do
        for all z ∈ P_t with z_{1:|z|-1} = z' do
          Heap-delete(H, z_{|z|})
          Heap-insert(H, z_{|z|}, r(z))
        z' ← s(z', Q_t)
      end while
      for all z ∈ P_t with z_{1:|z|-1} = z_p do
        r(z) ← u(z_{|z|})
        u(z_{|z|}) ← p(z, t)
        Heap-delete(H, z_{|z|})
        Heap-insert(H, z_{|z|}, p(z, t))
      l ← Heap-max(H)
      q(z_p, t-1) ← S(z_p + l, P_t)
      v(z_p, t-1) ← u(l)
      z' ← z_p

Algorithm 11 Decode-backward(2)
  if v(l_BOS, 0) = 0 then
    q(l_BOS, 0) ← q(ϵ, 0)
  end if
  z ← l_BOS
  for t = 0 to T do
    y_t ← z_{|z|}
    z ← q(z, t)
  y_{T+1} ← l_EOS

Whereas the forward decoder spends O(L) per node, every node here is handled with a constant number of heap operations of cost O(log L), which yields the log L factor in the running time; the sets Q_t allow the heap to be shared among all patterns with a common prefix.

6 Experiments

The proposed model was evaluated on English POS tagging using the Wall Street Journal portion of Penn Treebank 3.0, with the same data split as the Stanford Tagger [3]. The feature templates are listed in Table 1: a first-order Linear-Chain CRF serves as the baseline, and the variable-order models add label patterns of order 2, 3, and higher. The experiments ran on a machine with 128 GiB of memory. The results are compared with CRFSuite [6] and with the Stanford Tagger [3], which is based on an MEMM [1].

Table 1: Feature templates.

Order 1: y_i; (y_i, x_i); (y_i, x_{i-1}); (y_i, x_{i+1}); (y_i, x_{i-1}, x_i); (y_i, x_i, x_{i+1}); (y_i, x_{i-2}, x_{i-1}); (y_i, x_{i-2}, x_{i-1}, x_i); (y_i, x_{i-3}, x_{i-2}, x_{i-1}); and unigram-label templates over word-internal properties (up to length 10).
Order 2: (y_i, y_{i-1}); (y_i, y_{i-1}, x_i); (y_i, y_{i-1}, x_{i-1}).
Order 3: (y_i, y_{i-1}, y_{i-2}, x_{i-1}); (y_i, y_{i-1}, y_{i-2}, x_i, x_{i-1}); (y_i, y_{i-1}, y_{i-2}, x_{i-1}, x_{i-2}).
Order 4: (y_i, y_{i-1}, y_{i-2}, y_{i-3}, x_{i-1}, x_{i-2}); (y_i, y_{i-1}, y_{i-2}, y_{i-3}, x_i, x_{i-1}, x_{i-2}); (y_i, y_{i-1}, y_{i-2}, y_{i-3}, x_{i-1}, x_{i-2}, x_{i-3}).
Order 5: (y_i, y_{i-1}, y_{i-2}, y_{i-3}, y_{i-4}, x_{i-1}, x_{i-2}, x_{i-3}).

Table 2: Accuracy (%) on WSJ POS tagging. The compared systems are the baseline (a first-order Linear-Chain CRF), a first-order CRF under the proposed implementation, the variable-order models, and the Stanford Tagger [3].

7 Conclusion

This thesis presented an efficient training and decoding algorithm for the variable-order Linear-Chain CRF, improving on the algorithm of Ye et al. [4]. Applied to English POS tagging, the variable-order model outperformed the first-order Linear-Chain CRF baseline and is competitive with the Stanford Tagger [3].


References

[1] McCallum, A., Freitag, D. and Pereira, F. C. N.: Maximum Entropy Markov Models for Information Extraction and Segmentation, Proceedings of the Seventeenth International Conference on Machine Learning (ICML '00), San Francisco, CA, USA, Morgan Kaufmann Publishers Inc. (2000).
[2] Lafferty, J., McCallum, A. and Pereira, F. C. N.: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, Proceedings of ICML 2001 (2001).
[3] Toutanova, K., Klein, D., Manning, C. D. and Singer, Y.: Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network, Proceedings of HLT-NAACL 2003 (2003).
[4] Ye, N., Lee, W. S., Chieu, H. L. and Wu, D.: Conditional Random Fields with High-Order Features for Sequence Labeling, Advances in Neural Information Processing Systems 22 (Bengio, Y., Schuurmans, D., Lafferty, J., Williams, C. K. I. and Culotta, A. (eds.)) (2009).
[5] Liu, D. and Nocedal, J.: On the Limited Memory BFGS Method for Large Scale Optimization, Mathematical Programming B, Vol. 45(3) (1989).
[6] Okazaki, N.: CRFsuite: a fast implementation of Conditional Random Fields (CRFs) (2007).
[7] Qian, X., Jiang, X., Zhang, Q., Huang, X. and Wu, L.: Sparse Higher Order Conditional Random Fields for Improved Sequence Labeling, ICML, p. 107 (2009).
