Subspace Methods Through Worked Examples: An Invitation to Pattern Recognition (例題ではじめる部分空間法 - パターン認識へのいざない -)


1 Subspace Methods Through Worked Examples: An Invitation to Pattern Recognition

2 Sample programs (MATLAB/Octave) for this tutorial can be downloaded from s-hotta/rsj2012.

3 What is pattern recognition? [1]: associating an unknown pattern (whose class is unknown) with a class (a concept, e.g. the digit "9"), making use of training patterns (whose classes are known).

4 Structure of a recognition system [1]: an unknown pattern passes through a preprocessing unit and a feature extraction unit, and a classification unit matches it against a classification dictionary to produce the output (e.g. the class "5").

5 (Figure: patterns plotted as points in a two-dimensional feature space with axes $x_1$ and $x_2$.)

6 Bayes decision rule: assign $x$ to class $\omega_k$ when $\max_j P(\omega_j \mid x) = P(\omega_k \mid x)$, where
$P(\omega_j \mid x) = \dfrac{P(\omega_j)\,p(x \mid \omega_j)}{\sum_j P(\omega_j)\,p(x \mid \omega_j)}$
$P(\omega_j)$: prior probability of class $\omega_j$
$P(\omega_j \mid x)$: posterior probability of $\omega_j$ given $x$
$p(x \mid \omega_j)$: class-conditional probability density of $x$ under $\omega_j$
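
A minimal numeric sketch of this rule in Octave/MATLAB; the priors and likelihood values below are made-up illustrative numbers, not from the tutorial:

    % Two classes: priors P(w_j) and class-conditional densities p(x|w_j)
    % evaluated at one observed x (both assumed for illustration).
    prior = [0.7 0.3];
    lik   = [0.2 0.5];
    post  = prior .* lik / sum(prior .* lik);  % posterior P(w_j|x) by Bayes' rule
    [pmax, k] = max(post);                     % Bayes decision: assign x to class k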

7 Subspace methods [2, 3, 4].

8 Learning: compress the training samples of each class into a subspace (entropy minimization); e.g. classes "face", "cat", "motorbike". Recognition: approximate the unknown sample by a combination $c_{j1} + c_{j2} + \cdots + c_{jr}$ of each class's basis and adopt the class that approximates it best; recognition result: "face" (a face cannot be represented by the motorbike subspace).

9 Learning criterion: minimize the entropy $-\sum_{i=1}^{d} p(u_i)\log p(u_i)$. Recognition criterion: the weighted projection $\sum_{i=1}^{r} w_i (u_i^{\top} x)^2$. (Figure: $x$, the basis vectors $u_1, u_2$, and the approximation $\tilde{x} = U_j U_j^{\top} x$.)

10 History: the theory of character recognition by pattern matching developed by Taizo Iijima (1925–) and colleagues (cf. [3]) was put to practical use in the OCR machine ASPET/71.

11 What is a subspace? (1/3) [5]: a linear subspace always contains the origin 0; "sub" denotes a space contained in a larger one (as opposed to "super").

12 What is a subspace? (2/3)

13 What is a subspace? (3/3): if a set is not a linear subspace, the sum, difference, or scalar multiple of points lying on the same line need not stay on that line; such a translated subspace is called an affine subspace (also: linear manifold, linear variety).

14 A $d$-dimensional column vector: $x = (x_1, \ldots, x_d)^{\top}$, i.e. $x^{\top} = (x_1, \ldots, x_d)$.
Inner product with itself: $x^{\top}x = \sum_{i=1}^{d} x_i^2 = \|x\|^2$. Norm of $x$: $\|x\| = \sqrt{x^{\top}x}$.
Angle between $x$ and $y$: $\cos\theta = \dfrac{x^{\top}y}{\|x\|\,\|y\|}$. For unit vectors $x$ and $y$: $\|x - y\|^2 = 2(1 - \cos\theta)$.
(Example involving $n + 1$ points on the sphere of radius 1, $n \geq 3$.)

15 Orthonormal basis: for $r \leq d$, let $u_1, u_2, \ldots, u_r$ be an orthonormal basis of an $r$-dimensional subspace of $\mathbb{R}^d$, collected as $U = (u_1\ u_2\ \cdots\ u_r)$ with $U^{\top}U = I$. The coefficient vector of $x$ is $c = U^{\top}x = (u_1^{\top}x\ \ u_2^{\top}x\ \cdots\ u_r^{\top}x)^{\top} \in \mathbb{R}^r$, and the projection of $x \in \mathbb{R}^d$ onto the subspace is $\tilde{x} = Uc = UU^{\top}x$; $UU^{\top}$ is the projection matrix onto the subspace spanned by $U$.
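
A short Octave/MATLAB sketch of these relations on random data (orth supplies an orthonormal basis; all names are illustrative):

    d = 5; r = 2;
    U = orth(randn(d, r));    % orthonormal basis: U'*U = eye(r)
    x = randn(d, 1);
    c = U' * x;               % coefficient vector c = U'x in R^r
    xt = U * c;               % projection xt = U*U'*x in R^d
    disp(U' * (x - xt))       % ~ zeros(r,1): the residual is orthogonal to the subspace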

16 Differentiation with respect to a vector: write
$x = (x_1, \ldots, x_d)^{\top}$ (1)
For $f = x^{\top}x = \sum_{i=1}^{d} x_i^2$, differentiating with respect to each component $x_i$ gives
$\dfrac{\partial f}{\partial x} = \left(\dfrac{\partial f}{\partial x_1}, \ldots, \dfrac{\partial f}{\partial x_d}\right)^{\top} = (2x_1, \ldots, 2x_d)^{\top} = 2x$ (2)
For a $d \times d$ symmetric matrix $A$ and $f = x^{\top}Ax$: $\dfrac{\partial f}{\partial x} = 2Ax$ (3)
For $f = y^{\top}Ax$: $\dfrac{\partial f}{\partial x} = A^{\top}y$

17 Notation: $\max f(x)$ denotes the maximum value of $f$, while $\mathrm{argmax}_x f(x)$ denotes the $x$ that attains it (similarly $\mathrm{argmin}_x f(x)$ for minima). Constraints are written with "subject to", abbreviated "s.t.": e.g. $\max f(x)$ s.t. $\|x\| = 1$ means maximizing $f(x)$ over unit vectors $x$.

18 The method of Lagrange multipliers [6] (also familiar from SVMs): to maximize $f(x)$ subject to $g(x) = 0$, introduce a multiplier $\lambda$ and the Lagrangian $L = f(x) - \lambda g(x)$, then solve $\dfrac{\partial L}{\partial x} = \dfrac{\partial f}{\partial x} - \lambda\dfrac{\partial g}{\partial x} = 0$ and $\dfrac{\partial L}{\partial \lambda} = 0$: $d + 1$ equations in the $d + 1$ unknowns $x_1, \ldots, x_d, \lambda$.

19 Example (eigenvalue problem): for a symmetric matrix $A$, maximize $u^{\top}Au$ over unit vectors $u$:
$\max u^{\top}Au$ s.t. $u^{\top}u - 1 = 0$
With multiplier $\lambda$, $L = u^{\top}Au - \lambda(u^{\top}u - 1)$; setting $\partial L/\partial u = 0$ and $\partial L/\partial \lambda = 0$ gives $\partial L/\partial u = 2Au - 2\lambda u = 0$, i.e. $Au = \lambda u$: the stationary points are the eigenvectors $u$ of $A$ (with $u^{\top}u = 1$), and the maximum is attained at the eigenvector of the largest eigenvalue (see Appendix 2).
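
This can be checked numerically; a sketch with an arbitrary symmetric matrix:

    A = randn(4); A = (A + A') / 2;   % a symmetric matrix
    [V, L] = eig(A);                  % columns of V: eigenvectors, diag(L): eigenvalues
    [lmax, i] = max(diag(L));
    u = V(:, i);                      % unit eigenvector of the largest eigenvalue
    disp(u' * A * u - lmax)           % ~ 0: max of u'Au over unit u equals lambda_max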

20 The subspace method [2, 4] (see also [7], [8]).

21 Classification by the subspace method: given $n$ training samples $x_i \in \mathbb{R}^d$ ($i = 1, \ldots, n$) from $C$ classes, learn a subspace basis $U_j$ for each class $j$. Classify an input $x$ into class $k$ when $\max_{j=1,\ldots,C} \{\|U_j^{\top}x\|\} = \|U_k^{\top}x\|$, i.e. $x \rightarrow$ class $k$.
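
A sketch of this decision rule, reusing the C(j).U convention that the tutorial's programs adopt (slide 44); the bases here are random stand-ins for learned class subspaces:

    d = 10; r = 3; nclass = 3;
    for j = 1:nclass
      C(j).U = orth(randn(d, r));   % stand-in for a learned class basis U_j
    end
    x = randn(d, 1);
    s = zeros(1, nclass);
    for j = 1:nclass
      s(j) = norm(C(j).U' * x);     % ||U_j' x||
    end
    [smax, k] = max(s);             % classify x into class k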

22 Distance from $x$ to the subspace of class $j$: $d_j^2 = \|x\|^2 - \|U_j^{\top}x\|^2$, so maximizing $\|U_j^{\top}x\|^2$ is equivalent to minimizing the distance to the class subspace. (Figure: the subspace method, measuring distance to a subspace through the origin 0, contrasted with the minimum distance method.)

23 Relation to $\cos\theta$ (1/3): let $U$ be a $d \times r$ orthonormal basis and $\|x\| = 1$. Find the unit vector $Uc$ in the subspace closest in angle to $x$:
$\max_c x^{\top}Uc = \cos\theta$ s.t. $c^{\top}c - 1 = 0$ (1)
Note that $\|Uc\| = \sqrt{(Uc)^{\top}(Uc)} = \sqrt{c^{\top}Ic} = 1$. (Figure: $x$, the basis vectors $u_1, u_2$, and $Uc$.)

24 Relation to $\cos\theta$ (2/3): from (1), $L = x^{\top}Uc - \dfrac{\lambda}{2}(c^{\top}c - 1)$; $\partial L/\partial c = U^{\top}x - \lambda c = 0$, hence
$c = \dfrac{1}{\lambda}U^{\top}x$ (2)
Substituting into $c^{\top}c = 1$: $\dfrac{1}{\lambda^2}(U^{\top}x)^{\top}(U^{\top}x) = \dfrac{1}{\lambda^2}\|U^{\top}x\|^2 = 1$, so $\lambda = \|U^{\top}x\|$. Putting $\lambda$ back into (2): $c = \dfrac{U^{\top}x}{\|U^{\top}x\|}$.

25 Relation to $\cos\theta$ (3/3): with $c = (U^{\top}x)/\|U^{\top}x\|$,
$\cos\theta = x^{\top}Uc = \dfrac{x^{\top}UU^{\top}x}{\|U^{\top}x\|} = \dfrac{\|U^{\top}x\|^2}{\|U^{\top}x\|} = \|U^{\top}x\| = \lambda$
The coefficient vector $(U^{\top}x)/\|U^{\top}x\|$ lives in $\mathbb{R}^r$, while the corresponding point $(UU^{\top}x)/\|U^{\top}x\| = Uc$ lives in $\mathbb{R}^d$; so $\|U^{\top}x\|$ is exactly $\cos\theta$ between $x$ and the subspace. (Figure: $x$, $u_1, u_2$, and $Uc$.)
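
The identity $\cos\theta = \|U^{\top}x\|$ is easy to confirm numerically; a sketch:

    d = 6; r = 2;
    U = orth(randn(d, r));
    x = randn(d, 1); x = x / norm(x);   % unit-length input
    c = U' * x / norm(U' * x);          % optimal coefficients from (2)
    disp(x' * (U * c) - norm(U' * x))   % ~ 0: cos(theta) equals ||U'x||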

26 How to obtain $U$ (1/2)

27 How to obtain $U$ (2/2): given $n$ training samples $x_1, \ldots, x_n$, the squared residual of $x_i$ after projection onto $u_1$ is $d_i = \|x_i\|^2 - (u_1^{\top}x_i)^2$, so minimizing the total residual is equivalent to maximizing $\sum_i (u_1^{\top}x_i)^2$:
$\max \sum_{i=1}^{n} (u_1^{\top}x_i)^2 = u_1^{\top}\left(\sum_{i=1}^{n} x_i x_i^{\top}\right)u_1$ s.t. $u_1^{\top}u_1 - 1 = 0$
With the autocorrelation matrix $D = \sum_{i=1}^{n} x_i x_i^{\top}$, $u_1$ is the eigenvector of $D$ with the largest eigenvalue $\lambda_1$; taking the eigenvectors $u_1, \ldots, u_r$ of the $r$ largest eigenvalues $\lambda_1 \geq \cdots \geq \lambda_r$ gives $U_j = (u_1\ \cdots\ u_r)$.
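
A sketch of this learning step for one class, with random samples standing in for real training data:

    d = 10; n = 50; r = 3;
    X = randn(d, n);                        % training samples as columns
    D = X * X';                             % autocorrelation matrix (d x d)
    [V, L] = eig(D);
    [lam, idx] = sort(diag(L), 'descend');  % eigenvalues, largest first
    U = V(:, idx(1:r));                     % basis of the class subspace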

28 When $d \gg n$: arrange the samples as columns of $X = (x_1\ \cdots\ x_n) \in \mathbb{R}^{d \times n}$. Then $D = XX^{\top}$ is $d \times d$, but $N = X^{\top}X$ is only $n \times n$; $U$ can be obtained from the smaller ($n \times n$) eigenproblem.

29 $N = X^{\top}X \in \mathbb{R}^{n \times n}$ has the same $r$ nonzero eigenvalues $\lambda_1, \ldots, \lambda_r$ as $D = XX^{\top}$. If $v_1, \ldots, v_r \in \mathbb{R}^n$ are the corresponding eigenvectors of $N$, the $d$-dimensional eigenvectors of $D$ are $u_i = \pm\dfrac{Xv_i}{\sqrt{\lambda_i}}$ [9] (see the sample program EVD.m).
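
A sketch of the same computation through the smaller n-by-n matrix; the tutorial's EVD.m presumably packages this, but the code below is an independent sketch of the trick from [9]:

    d = 1000; n = 20; r = 3;
    X = randn(d, n);
    N = X' * X;                             % n x n instead of d x d
    [V, L] = eig(N);
    [lam, idx] = sort(diag(L), 'descend');
    U = zeros(d, r);
    for i = 1:r
      U(:, i) = X * V(:, idx(i)) / sqrt(lam(i));  % u_i = X v_i / sqrt(lambda_i)
    end
    disp(norm(U' * U - eye(r)))             % ~ 0: the u_i are orthonormal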

30 Relation to the Bayes decision rule [10, 11]: $P(\omega_j \mid x) = \dfrac{P(\omega_j)\,p(x \mid \omega_j)}{\sum_j P(\omega_j)\,p(x \mid \omega_j)}$

31 Model class $\omega_j$ by a Gaussian density
$N(x \mid \mu_j, \Sigma_j) = \dfrac{1}{(2\pi)^{d/2}\,|\Sigma_j|^{1/2}} \exp\left\{-\dfrac{1}{2}(x - \mu_j)^{\top}\Sigma_j^{-1}(x - \mu_j)\right\}$
$\mu_j$: $d \times 1$ mean vector; $\Sigma_j$: $d \times d$ covariance matrix; both estimated from the training samples $x_i$.

32

33 Assume $\hat{\mu}_j = 0$ and $\hat{\Sigma}_j = R_j$, the autocorrelation matrix $R_j = \dfrac{1}{n_j}\sum_{i=1}^{n_j} x_i x_i^{\top}$. Then:
$N(x \mid \hat{\mu}_j, \hat{\Sigma}_j) = \dfrac{1}{(2\pi)^{d/2}\,|R_j|^{1/2}} \exp\left(-\dfrac{1}{2}x^{\top}R_j^{-1}x\right)$

34 Taking the negative log of $N(x \mid \hat{\mu}_j, \hat{\Sigma}_j) = \dfrac{1}{(2\pi)^{d/2}\,|R_j|^{1/2}} \exp\left(-\dfrac{1}{2}x^{\top}R_j^{-1}x\right)$ (up to constants) gives the discriminant
$g_j(x) = \|\Lambda_j^{-1/2}U_j^{\top}x\|^2 + \sum_{i=1}^{d} \ln\lambda_{ji}$
where $R_j = U_j\Lambda_j U_j^{\top}$ is the eigendecomposition of $R_j$.
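
A sketch of evaluating $g_j(x)$ from the eigendecomposition (random stand-in data; with small eigenvalues this is numerically fragile, which is what the following slides address):

    d = 5; n = 100;
    X = randn(d, n);
    R = X * X' / n;                   % autocorrelation matrix R_j
    [U, L] = eig(R);                  % R = U * L * U'
    lam = diag(L);
    x = randn(d, 1);
    g = norm(diag(1 ./ sqrt(lam)) * (U' * x))^2 + sum(log(lam));  % g_j(x)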

35 $g_j(x) = \|\Lambda_j^{-1/2}U_j^{\top}x\|^2 + \sum_{i=1}^{d} \ln\lambda_{ji}$: the small eigenvalues $\lambda_{ji}$ estimated from a finite set of training samples $x_i$ are unreliable, making this discriminant unstable in practice.

36 Replace the eigenvalues by decreasing weights $w_1 \geq w_2 \geq \cdots \geq w_d > 0$:
$g_j(x) = \|W^{-1/2}U_j^{\top}x\|^2 = \sum_{i=1}^{d} \dfrac{1}{w_i}(u_{ji}^{\top}x)^2$
(Figure: the projections $(u_{ji}^{\top}x)^2$ of $x$ onto each basis vector $u_{ji}$.)

37 With $1/w_1 \leq 1/w_2 \leq \cdots \leq 1/w_d$ (i.e. $w_1 \geq w_2 \geq \cdots \geq w_d > 0$), keep only the $r$ leading terms with weights $w_i$ and use the similarity
$S_j(x) = \sum_{i=1}^{r} w_i (u_{ji}^{\top}x)^2 = \|W^{1/2}U_j^{\top}x\|^2$
classifying $x$ into the class with the largest $S_j(x)$.

38 Weighting schemes:
CLAFIC (equal weights): $w_i = 1$ ($i = 1, \ldots, r$)
Multiple similarity method: $w_i = \lambda_{ji}/\lambda_{j1}$
Linear weights (proposed): $w_i = r - i + 1$ ($i = 1, \ldots, r$)
(Figure: weight value vs. dimension index up to the dimensionality $r$, for CLAFIC, the proposed linear weights, and the multiple similarity method.)

39 Expanding the linear-weight similarity:
$S_j(x) = \sum_{i=1}^{r} (r - i + 1)(u_{ji}^{\top}x)^2 = r(u_{j1}^{\top}x)^2 + (r-1)(u_{j2}^{\top}x)^2 + \cdots + (u_{jr}^{\top}x)^2$
$= \sum_{i=1}^{r}(u_{ji}^{\top}x)^2 + \sum_{i=1}^{r-1}(u_{ji}^{\top}x)^2 + \cdots + (u_{j1}^{\top}x)^2$
i.e. the sum of CLAFIC similarities over nested subspaces of dimensions $r, r-1, \ldots, 1$.
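
A sketch of the linear-weight similarity; a random orthonormal basis stands in for a learned $U_j$ (columns ordered by decreasing eigenvalue), and setting w to all ones recovers CLAFIC:

    d = 10; r = 4;
    Uj = orth(randn(d, r));          % stand-in class basis
    x  = randn(d, 1);
    w  = (r:-1:1)';                  % linear weights w_i = r - i + 1
    S  = sum(w .* (Uj' * x).^2);     % S_j(x) = sum_i w_i (u_ji' x)^2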

40 Example 1: handwritten digit recognition on the USPS dataset, using the sample programs makedata.m and WSC.m.

41 makedata.m: converts the USPS data, whose pixel values lie in $[-1, +1]$ (distributed in a pair-wise organization), to the range $[0, +1]$, and saves the ten digit classes 0–9 as usps.mat.

42 WSC.m: running WSC.m displays Figure 1 and, for the ten classes, Figure 2; the variable imgnum selects which test image is shown, and the class subspaces $U_j$ use $r = 13$ dimensions by default (both $r$ and imgnum can be changed in the script).

43 Confusion matrix: row $i$ = true class, column $j$ = assigned class; entry $(i, j)$ counts the samples of class $i$ classified into class $j$; the "total" row and column give per-class and overall counts. (Numeric entries not recoverable.)

44 Correspondence between the mathematical symbols and the program variables:
$C$ (number of classes): nclass
$d$ (dimensionality): d
$n$ (number of samples): ndata
training sample $x_i$: trai(:,ii); its class label: trai_label(ii)
test sample $x_i$: test(:,ii); its class label: test_label(ii)
class subspace basis $U_j$: C(j).U
inner product $x^{\top}y$: x'*y
norm $\|x\| = \sqrt{x^{\top}x}$: norm(x)

45 (Figure 1: output of WSC.m.)

46 (Figure 2: a test sample alongside its projections $U_j(U_j^{\top}x)$ onto the subspaces of classes 0–9.)

47 Choosing $r$ by the cumulative contribution ratio $\bar{\lambda} = \sum_{i=1}^{r}\lambda_i \Big/ \sum_{j=1}^{\mathrm{rank}(D)}\lambda_j$ (e.g. $\bar{\lambda} = 0.95$). (Figure: test/validation accuracy [%] vs. the dimensionality $r$ of each subspace.)
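
A sketch of picking $r$ from the spectrum; a random autocorrelation matrix stands in for the real $D$, and 0.95 is the threshold used above:

    X = randn(20, 100);
    lam = sort(eig(X * X'), 'descend');   % eigenvalue spectrum of D
    ratio = cumsum(lam) / sum(lam);       % cumulative contribution ratio
    r = find(ratio >= 0.95, 1)            % smallest r reaching 95%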

48 (Figure: accuracy [%] (85–95 range shown) vs. dimensionality $r$, for the linear weights, CLAFIC, and the multiple similarity method.)

49 Comparison setup: CPU 1.86 GHz, 2 GB RAM, 32-bit Windows, MATLAB (R14). Methods compared: the weighted subspace classifier ($r = 13$), CLAFIC, and SVM (one-against-all, RBF kernel).

50 Results table: accuracy (%), time (s), and memory (KB) for the weighted subspace classifier ($r = 13$), CLAFIC ($\bar{\lambda} = 0.95$), and SVM. (Numeric values not recoverable.)

51 Observing a single pattern alone may not reveal what it is (e.g. one view of a dog); instead, use multiple patterns originating from the same class. Two approaches: the compound subspace method (CSM) and the mutual subspace method (MSM).

52 Compound Subspace Method (CSM) [11]: starting from $P(\omega \mid X) = \dfrac{P(\omega)\,p(X \mid \omega)}{\sum_{\omega} P(\omega)\,p(X \mid \omega)}$, the decision reduces to a triple sum of weighted projections
$\sum_{k=1}^{n}\sum_{f=1}^{T}\sum_{i=1}^{r_f} {}^{f}w_i\,({}^{f}u_i^{\top}\,{}^{f}x_k)^2$
over patterns $k$ and feature types $f$; related to Multiple Kernel Learning.

53 The compound Bayesian decision problem [12]:
$P(\omega \mid X) = \dfrac{P(\omega)\,p(X \mid \omega)}{\sum_{\omega}P(\omega)\,p(X \mid \omega)}$
where $X = (x_1\ x_2\ \cdots\ x_n)$ collects $n$ patterns and the context $\omega = (\omega(1), \ldots, \omega(n))$ assigns class $\omega(i)$ to pattern $x_i$ (e.g. the sequence flounder, flatfish, flatfish, flounder: ひらめ, カレイ, カレイ, ひらめ); $\omega$ ranges over $c^n$ possibilities.

54 (Compound Bayesian decision problem, continued): since $\omega$ can take $c^n$ values, evaluating $p(X \mid \omega)$ for every context is impractical; the CSM handles the special case in which all $x_i$ (across the $T$ feature types) come from a single class.

55 CSM: assume the $n$ patterns $x_1, \ldots, x_n$ of the set $X$ are i.i.d. samples from one class:
$P(\omega_j \mid X) = \dfrac{P(\omega_j)\prod_{i=1}^{n} p(x_i \mid \omega_j)}{\sum_{j=1}^{c} P(\omega_j)\prod_{i=1}^{n} p(x_i \mid \omega_j)}$
which leads to the set similarity
$S_j(X) = \sum_{i=1}^{n}\sum_{l=1}^{r} w_l (u_{jl}^{\top}x_i)^2 = \sum_{i=1}^{n} \|W^{1/2}U_j^{\top}x_i\|^2 = \sum_{i=1}^{n} S_j(x_i)$
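
A sketch of the set similarity as a plain sum of per-pattern similarities (random stand-ins; Uj and w as on the earlier slides):

    d = 10; r = 3; n = 5;
    Uj = orth(randn(d, r));
    w  = (r:-1:1)';                  % any weight scheme from slide 38
    Xs = randn(d, n);                % a set of n patterns from one object
    Sj = 0;
    for i = 1:n
      Sj = Sj + sum(w .* (Uj' * Xs(:, i)).^2);  % S_j(X) = sum_i S_j(x_i)
    end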

56 CSM for a single pattern $x$ observed through $T$ feature types (the $f$-th feature denoted ${}^{f}x$):
$P(\omega_j \mid F) = \dfrac{P(\omega_j)\prod_{f=1}^{T} p_f({}^{f}x \mid \omega_j)}{\sum_{j=1}^{c} P(\omega_j)\prod_{f=1}^{T} p_f({}^{f}x \mid \omega_j)}$
$F_j(F) \overset{\mathrm{def}}{=} \sum_{f=1}^{T}\sum_{l=1}^{r_f} {}^{f}w_l\,({}^{f}u_{jl}^{\top}\,{}^{f}x)^2 = \sum_{f=1}^{T} \|{}^{f}W^{1/2}\,{}^{f}U_j^{\top}\,{}^{f}x\|^2 = \sum_{f=1}^{T} {}^{f}S_j({}^{f}x)$

57 CSM for $n$ patterns and $T$ feature types:
$P(\omega_j \mid X) = \dfrac{P(\omega_j)\prod_{f=1}^{T}\prod_{i=1}^{n} p_f({}^{f}x_i \mid \omega_j)}{\sum_{j=1}^{c} P(\omega_j)\prod_{f=1}^{T}\prod_{i=1}^{n} p_f({}^{f}x_i \mid \omega_j)}$
$C_j(X) \overset{\mathrm{def}}{=} \sum_{f=1}^{T}\sum_{i=1}^{n} {}^{f}S_j({}^{f}x_i)$

58 The mutual subspace method (MSM) [13]: represent the input pattern set itself by a subspace and measure the similarity between the input subspace and each class subspace by $\cos\theta$ of the angle between them.

59 Let $V \in \mathbb{R}^{d \times r_d}$ ($r_d < d$) and $U \in \mathbb{R}^{d \times r_i}$ ($r_i < d$) be orthonormal bases of the two subspaces, and consider unit vectors $Vb$ and $Uc$:
$\max_{b,c} (Vb)^{\top}(Uc) = \cos\theta$ s.t. $b^{\top}b - 1 = 0$, $c^{\top}c - 1 = 0$ (3)

60 Solution (1/2): from (3), $L = (Vb)^{\top}(Uc) - \dfrac{\lambda}{2}(b^{\top}b - 1) - \dfrac{\mu}{2}(c^{\top}c - 1)$. Setting $\partial L/\partial b = 0$ and $\partial L/\partial c = 0$:
$V^{\top}Uc = \lambda b$ (4)
$U^{\top}Vb = \mu c$ (5)
Substituting (5) into (4) and (4) into (5):
$V^{\top}UU^{\top}Vb = \lambda\mu b$, $U^{\top}VV^{\top}Uc = \lambda\mu c$ (6)

61 Solution (2/2): multiplying (4) by $b^{\top}$ and (5) by $c^{\top}$ shows $\lambda = \mu$:
$b^{\top}V^{\top}Uc = \lambda b^{\top}b = \lambda$ (7)
$c^{\top}U^{\top}Vb = \mu c^{\top}c = \mu$ (8)
Both equal $(Vb)^{\top}(Uc) = \cos\theta$. By (6), $\lambda\mu = \cos^2\theta$ is the largest eigenvalue of $V^{\top}UU^{\top}V$ (equivalently of $U^{\top}VV^{\top}U$), so $\cos\theta$ is obtained from that eigenvalue.
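
A sketch of the MSM similarity between two subspaces; the largest singular value of $V^{\top}U$ is the square root of the largest eigenvalue of $V^{\top}UU^{\top}V$, i.e. $\cos\theta$:

    d = 20; rd = 4; ri = 3;
    V = orth(randn(d, rd));          % basis of one subspace
    U = orth(randn(d, ri));          % basis of the other
    costheta = max(svd(V' * U))      % cos(theta) of the canonical angle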

62 Example 2: object recognition from image sets on the ETH-80 dataset [14], where each object is photographed from multiple viewpoints (about 30 images per set).

63 (Figure: the set of view images dog…png of the dog1 object in Example 2.)

64 (Figure: one example view image, dog…png.)

65 make_data.m: converts each color image into a gray-scale image, computing the intensity Y from the R, G, B channels.

66 (Division of the image sets into training (dictionary) and test sets.)

67 The data (doubutsu.mat): each image IMG is vectorized as x = IMG(:), and reshape(x, [3 3]) restores a 3×3 example. IMG(:): stacks the columns of a matrix into one column vector; reshape: the inverse operation.
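
The vectorization round trip in a short sketch:

    IMG  = magic(3);            % any 3x3 matrix
    x    = IMG(:);              % stack the columns into a 9x1 vector
    IMG2 = reshape(x, [3 3]);   % restores the original matrix
    isequal(IMG, IMG2)          % true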

68 Variables in CSM.m and MSM.m:
ri: dimensionality of the input (test-set) subspaces; rd: dimensionality of the dictionary (class) subspaces
nclass, ntrai, ntest: numbers of classes, training sets, and test sets
trai(ii).X / test(ii).X: the images of the ii-th training/test set
EVD: eigendecomposition routine (EVD.m)
trai(ii).U / test(ii).U: subspace basis of the ii-th training/test set
trai(ii).label / test(ii).label: class label of the ii-th set
CONF: confusion matrix

69 Environment: CPU 2.8 GHz, 3.5 GB RAM, 32-bit Windows, MATLAB (R14). Results (accuracy (%), time (s)) for WSC ($r_d = 20$), MSM ($r_i = 5$, $r_d = 6$), and CSM ($r_d = 20$). (Numeric values not recoverable.)

70 Further topics: the choice of $r$; tangent distance; k-subspace clustering; (fuzzy) k-varieties clustering.

71 Further topics (continued): the local subspace classifier (illustrated for $d = 2$); sample programs are in the subspace/ directory.

72 References I
[1] (Japanese reference; bibliographic details lost in transcription.)
[2] S. Watanabe, P.F. Lambert, C.A. Kulikowski, J.L. Buxton, and R. Walker, "Evaluation and selection of variables in pattern recognition," Comp. & Info. Sciences, vol. 2 (Julius Tou, ed.), New York: Academic Press.
[3] T. Iijima, H. Genchi, and K. Mori, "A theory of character recognition by pattern matching method," Proc. of 1st Int'l Joint Conf. on Pattern Recognition.
[4] E. Oja, Subspace Methods of Pattern Recognition, Research Studies Press, 1983.
[5] (Japanese reference; bibliographic details lost in transcription.)
[6] (Japanese reference on optimization; bibliographic details lost in transcription.)
[7] S. Watanabe, Knowing and Guessing: A Quantitative Study of Inference and Information, John Wiley & Sons, New York, 1969.

73 References II
[8] (Japanese reference; bibliographic details lost in transcription.)
[9] (Japanese reference; bibliographic details lost in transcription.)
[10] (Japanese technical report), IEICE Technical Report PRMU2010, vol. 110, no. 296, Nov. 2010.
[11] (Japanese technical report), IEICE Technical Report PRMU2010, vol. 110, no. 330, Dec. 2010.
[12] J.F. Hannan and H. Robbins, "Asymptotic solutions of the compound decision problem for two completely specified distributions," Annals of Mathematical Statistics, vol. 26, no. 1.
[13] (Japanese journal paper), IEICE Trans. (D), vol. J68-D, no. 3.
[14] B. Leibe and B. Schiele, "Analyzing appearance and contour based methods for object categorization," Proc. of CVPR.

74 Appendix 1 (Figure 1): contours $f(x) = \mathrm{const.}$ of the objective and the constraint curve $g(x) = 0$; at a constrained extremum the contour of $f$ is tangent to $g(x) = 0$, so the gradients are parallel: $\nabla f = \lambda\,\nabla g$.

75 Appendix 2 (Figure 2): $Au = \lambda u$ ($u \neq 0$): $u$ is an eigenvector of $A$ and $\lambda$ its eigenvalue. Example: for $A = \begin{pmatrix} 1/5 & 1/4 \\ 1/4 & 1/5 \end{pmatrix}$, the level sets of the quadratic form $x^{\top}Ax$ are ellipses whose principal axes point along the eigenvectors of $A$.
