1 2007
2 Graph Cuts t-link Interactive Graph Cuts 4.7% Mean Shift Segmentation
4 Contents: 5 SIFT, Bag of Keypoints; A Mean Shift 51 (A.1 Kernel Density Estimation, A.2 Density Gradient Estimation); B Scale-Invariant Feature Transform 55 (B.1.1 LoG, B.1.2 Difference-of-Gaussian, B.1.3 σ, B.1.4 k, B.1.5 DoG)
5 B.5 SIFT; C 69
7 List of Figures: s-t cut; Lazy Snapping (from [1]); Grabcut (from [2]); n-link, t-link, λ; Bag of Keypoints; Affine Invariant keypoint and SIFT, (a) Affine Invariant keypoint, (b), (c) (from [3]); GMM; Mean-Shift; SIFT
8 A.1 Eq. (A.13); A.2 Mean Shift (from [4]); B.1 LoG; B.2 DoG; B.3 σ; B.4 DoG for s = 2; B.6 DoG; B.12 SIFT
9 List of Tables: error rates [%]; processing time [s]; classification rates [%] (from [5]); [%] (α = 0.1)
11 1 Snake [6], Level Sets [7], Graph Cuts [1, 2, 8, 9, 10, 11] Snake Level Sets Graph Cuts Graph Cuts Boykov Interactive Graph Cuts [9, 10] Interactive Graph Cuts minimum cut/maximum flow algorithm Interactive Graph Cuts Lazy Snapping [1] GrabCut [2] Graph Cuts n-link Graph Cuts t-link 1
12 Mean Shift SIFT 2
13 2 2.1 Boykov minimum cut/maximum flow algorithms [8] minimum cut max-flow algorithm minimum cut/maximum flow algorithms image restoration [12, 13, 14, 15], stereo and motion [16, 17, 18, 19], image segmentation [9, 20, 21], multi-camera reconstruction [22] P p L p depth 2.1 E(L)

E(L) = \sum_{p \in P} D_p(L_p) + \sum_{(p,q) \in N} V_{(p,q)}(L_p, L_q)    (2.1)

3
14 2.1: N, D_p(L_p), V_{(p,q)}(L_p, L_q), E. The interaction term V is commonly one of two models, the Potts Interaction Energy Model and the Linear Interaction Energy Model. Potts Interaction Energy Model:

E(I) = \sum_{p \in P} |I_p - I_p^*| + \sum_{(p,q) \in N} K_{(p,q)} \cdot T(I_p \neq I_q)    (2.2)

where I = {I_p | p \in P} and I^* = {I_p^* | p \in P} are the two images (input and restored), K_{(p,q)} is the penalty between p and q, and T(I_p \neq I_q) takes 1 when I_p \neq I_q and 0 otherwise. Minimizing the Potts energy for more than 2 labels by max flow is NP-hard. 4
15 Linear Interaction Energy Model:

E(I) = \sum_{p \in P} |I_p - I_p^*| + \sum_{(p,q) \in N} A_{(p,q)} |I_p - I_q|    (2.3)

In place of the Potts model's K_{(p,q)}, the coefficient A_{(p,q)} weights the absolute difference between I_p and I_q. Potts Energy 2.2 E min-cut/max-flow algorithm Graph Cuts Algorithm: the energy E is minimized with a min-cut/max-flow algorithm. 2.2: a graph G = (V, E) consists of a node set V and an edge set E, with a source s \in V and a sink t \in V; n-links connect neighboring pixel nodes, and t-links connect each pixel node to s or t. 5
16 t-link n-link: the t-link and n-link costs are set from the terms of Eq. (2.1), V_{p,q} and D_p (Fig. 2.3(a)). 2.3: an s-t cut partitions the nodes into S containing s and T = V - S containing t; the segmentation is given by the minimum s-t cut. 2.4: s-t cut. Maximum-flow algorithms include the Ford-Fulkerson method [23] and the Push-Relabel method [24]. Ford-Fulkerson method 6
17 0 s t Push-Relabel Method s Potts Interaction Energy Model

E(L) = \lambda R(L) + B(L)    (2.4)

R(L) = \sum_{p \in P} R_p(L_p)    (2.5)

B(L) = \sum_{\{p,q\} \in N} B_{\{p,q\}} \delta(L_p, L_q)    (2.6)

\delta(L_p, L_q) = 1 if L_p \neq L_q, 0 otherwise    (2.7)

R(L) is the region term, B(L) the boundary term, and \lambda weights R(L) in E(L). The edge weights (costs) are:

  edge     weight (cost)            for
  {p,q}    B_{\{p,q\}}              {p,q} \in N
  {p,S}    \lambda R_p("bkg")       p \in P, p \notin O \cup B
           K                        p \in O
           0                        p \in B
  {p,T}    \lambda R_p("obj")       p \in P, p \notin O \cup B
           0                        p \in O
           K                        p \in B

R_p("obj") = -\ln \Pr(I_p | O)    (2.8)

R_p("bkg") = -\ln \Pr(I_p | B)    (2.9)

7
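The energy of Eqs. (2.4)-(2.7) can be evaluated directly for a given labeling. A minimal Python sketch (the pixel layout, probabilities, and λ below are illustrative values, not data from the thesis):

```python
import math

def segmentation_energy(labels, pr_obj, pr_bkg, boundary, lam):
    """E(L) = lambda * R(L) + B(L), Eqs. (2.4)-(2.7).

    labels   : dict pixel -> 'obj' or 'bkg'
    pr_obj   : dict pixel -> Pr(I_p | O), used in R_p('obj') (Eq. 2.8)
    pr_bkg   : dict pixel -> Pr(I_p | B), used in R_p('bkg') (Eq. 2.9)
    boundary : dict (p, q) -> B_{p,q} for each neighbor pair in N
    lam      : weight of the region term R(L)
    """
    # Region term R(L): sum of negative log-likelihoods of the assigned labels
    R = sum(-math.log(pr_obj[p] if lab == 'obj' else pr_bkg[p])
            for p, lab in labels.items())
    # Boundary term B(L): B_{p,q} is paid only where neighboring labels differ
    B = sum(w for (p, q), w in boundary.items() if labels[p] != labels[q])
    return lam * R + B

labels = {0: 'obj', 1: 'obj', 2: 'bkg'}
pr_obj = {0: 0.9, 1: 0.8, 2: 0.2}
pr_bkg = {0: 0.1, 1: 0.2, 2: 0.8}
boundary = {(0, 1): 2.0, (1, 2): 2.0}
E = segmentation_energy(labels, pr_obj, pr_bkg, boundary, lam=0.005)
```

In a real graph-cut solver this energy is not evaluated per labeling but minimized globally by the min-cut; the sketch only makes the two terms concrete.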
18 B_{\{p,q\}} \propto \exp\left( -\frac{(I_p - I_q)^2}{2\sigma^2} \right) \cdot \frac{1}{dist(p, q)}    (2.10)

K = 1 + \max_{p \in P} \sum_{q:\{p,q\} \in N} B_{\{p,q\}}    (2.11)

O and B are the sets of seed pixels marked as object and background; \Pr(I_p | O) and \Pr(I_p | B) are estimated from the seeds and used for the t-link costs, and dist(p, q) is the distance between pixels p and q. min-cut/max-flow algorithm Li Lazy Snapping [1] Lazy Snapping 2.5 Rother GrabCut [2] GrabCut 2.5: Lazy Snapping (from [1]) GMM Graph Cuts 2.6 8
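Equations (2.10) and (2.11) can be sketched as follows (the pixel intensities and σ are made-up values for illustration):

```python
import math

def n_link_weight(Ip, Iq, sigma, dist=1.0):
    """B_{p,q} of Eq. (2.10): high between similar intensities,
    low across strong intensity edges, attenuated by pixel distance."""
    return math.exp(-(Ip - Iq) ** 2 / (2.0 * sigma ** 2)) / dist

def seed_constant_K(n_links):
    """K of Eq. (2.11): 1 + the largest total n-link weight incident to any
    pixel, so that cutting a seed's t-link is never the cheapest option.
    n_links : dict (p, q) -> B_{p,q}"""
    incident = {}
    for (p, q), w in n_links.items():
        incident[p] = incident.get(p, 0.0) + w
        incident[q] = incident.get(q, 0.0) + w
    return 1.0 + max(incident.values())

# A 3-pixel chain with an intensity edge between pixels 1 and 2
I = {0: 100.0, 1: 102.0, 2: 180.0}
links = {(0, 1): n_link_weight(I[0], I[1], sigma=10.0),
         (1, 2): n_link_weight(I[1], I[2], sigma=10.0)}
K = seed_constant_K(links)
```

The weight across the intensity edge (pixels 1 and 2) is near zero, which is exactly what lets the min-cut pass between them cheaply.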
19 Fig. 2.6: Segmentation example by GrabCut (from [2]). Problems with the conventional method: in the conventional Interactive Graph Cuts [9], [20], the edge cost of a t-link for which no seed is given is either computed using the likelihood obtained from the color distribution of the seeds, or set to 0. When the likelihood is used and the t-links grow large relative to the n-links, the influence of the color distribution strengthens and sporadic false detections can increase. Suppressing such false detections therefore requires strengthening the influence of the n-links through λ. When the n-link influence is strong, however, the result depends heavily on the edge information in the image. Consequently, as shown in Fig. 2.7, when an image contains complex edges, Interactive Graph Cuts has difficulty segmenting across local edges. Fig. 2.7: Graph-cut segmentation on an image containing complex edges. 9
21 3 Graph Cuts : seed σ l σ σ Graph Cuts Graph Cuts GMM(Gaussian Mixture Model) GMM t-link Graph Cuts σ<1 σ =0 σ = α σ Graph 11
22 Cut 0 < α < 1
Step 1. seed
Step 2. σ
Step 3.
Step 4. Graph Cuts
Step 5.
Step 6. if σ < 1 then σ = 0, else σ = α σ (0 < α < 1)
Step : 12
23 3.2 Given the input image I and a Gaussian G(σ), the smoothed image L(σ) is

L(σ) = G(σ) * I    (3.1)

σ σ I 1 σ 1 2 L_1(σ) I I_2 I_2 σ = 1 L_2(σ) L_1(σ) L_2(σ)

L_1(2σ) ≈ L_2(σ)    (3.2)

σ 1 2 σ σ : 13
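The cascade (semigroup) property of Gaussian smoothing that underlies Eqs. (3.1)-(3.2) can be checked numerically in one dimension; the signal and bandwidths below are arbitrary test values, not data from the thesis:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Sampled, normalized 1-D Gaussian, truncated at about 4 sigma."""
    r = int(4 * np.ceil(sigma)) + 1
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(I, sigma):
    """L(sigma) = G(sigma) * I, Eq. (3.1), for a 1-D signal."""
    return np.convolve(I, gaussian_kernel(sigma), mode='same')

rng = np.random.default_rng(0)
I = rng.random(256)
# Two sigma-blurs compose into a single sqrt(2)*sigma blur
twice = smooth(smooth(I, 1.0), 1.0)
once = smooth(I, np.sqrt(2.0))
```

Away from the boundaries the two results agree closely, which is why repeated smoothing steps can be traded for a single blur at a larger scale.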
24 3.3 σ Graph Cuts Graph Cuts t-link (2.8), (2.9) 1 Graph Cuts t-link

R_p("obj") = -\ln \Pr(O | I_p)    (3.3)

R_p("bkg") = -\ln \Pr(B | I_p)    (3.4)

The posteriors \Pr(O | I_p) and \Pr(B | I_p) follow from Bayes' rule, Eqs. (3.5) and (3.6):

\Pr(O | I_p) = \frac{\Pr(O) \Pr(I_p | O)}{\Pr(I_p)}    (3.5)

\Pr(B | I_p) = \frac{\Pr(B) \Pr(I_p | B)}{\Pr(I_p)}    (3.6)

\Pr(I_p) \Pr(O) \Pr(B) \Pr(I_p | O), \Pr(I_p | B) \Pr(O), \Pr(B) t-link : \Pr(I_p | O) and \Pr(I_p | B) are modeled with a GMM (Gaussian Mixture Model) [25] over the 3-dimensional RGB color space. 14
25 \Pr(I_p) = p(I_p | \mu, \Sigma) = \sum_{i=1}^{K} \alpha_i\, p_i(I_p | \mu_i, \Sigma_i)    (3.7)

p_i(I_p | \mu_i, \Sigma_i) = \frac{1}{(2\pi)^{3/2} |\Sigma_i|^{1/2}} \exp\left( -\frac{1}{2} (I_p - \mu_i)^T \Sigma_i^{-1} (I_p - \mu_i) \right)    (3.8)

GMM EM [26] GMM GMM (3.7) I_p \Pr(I_p | O) \Pr(I_p | B) Graph Cuts \Pr(O), \Pr(B) d d_obj d_bkg

\Pr(O) = d_obj if d_obj \geq d_bkg; 1 - d_bkg if d_obj < d_bkg    (3.9)

\Pr(B) = 1 - \Pr(O)    (3.10)

3.4 GMM \Pr(I_p | O), \Pr(I_p | B) \Pr(O), \Pr(B) (3.3) \Pr(O | I_p) (3.4) \Pr(B | I_p): \Pr(O | I_p) of (3.3) gives the {p, T} t-link cost and \Pr(B | I_p) of (3.4) the {p, S} t-link cost 3.2 Graph Cuts 15
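Equations (3.3)-(3.7) combine as below. This is a small sketch with hypothetical one-component "mixtures" and isotropic covariances (Σ = v·I), which simplifies the Gaussian of Eq. (3.8); the colors, variances, and prior are invented for illustration:

```python
import math

def iso_gauss3(x, mu, var):
    """3-D Gaussian of Eq. (3.8), restricted to isotropic covariance var * I."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, mu))
    return math.exp(-d2 / (2.0 * var)) / ((2.0 * math.pi * var) ** 1.5)

def posterior_obj(Ip, gmm_obj, gmm_bkg, prior_obj):
    """Pr(O | I_p) via Bayes' rule, Eqs. (3.5)-(3.7).
    Each GMM is a list of (alpha_i, mu_i, var_i) components."""
    like_obj = sum(a * iso_gauss3(Ip, mu, v) for a, mu, v in gmm_obj)
    like_bkg = sum(a * iso_gauss3(Ip, mu, v) for a, mu, v in gmm_bkg)
    num = prior_obj * like_obj
    # Pr(I_p) expands over both models, normalizing the posterior
    return num / (num + (1.0 - prior_obj) * like_bkg)

gmm_obj = [(1.0, (200.0, 60.0, 60.0), 400.0)]   # reddish object colors
gmm_bkg = [(1.0, (60.0, 60.0, 200.0), 400.0)]   # bluish background colors
p_red = posterior_obj((190.0, 70.0, 70.0), gmm_obj, gmm_bkg, prior_obj=0.5)
p_blue = posterior_obj((70.0, 70.0, 190.0), gmm_obj, gmm_bkg, prior_obj=0.5)
```

The negative logarithms of these posteriors become the t-link costs of Eqs. (3.3)-(3.4).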
26 (b) Interactive Graph Cuts [9] 3.5(c) n-link σ = 0 n-link 3.5(c) n-link t-link \Pr(O), \Pr(B) \Pr(I_p | O), \Pr(I_p | B) 1 3.5(d) 3.5(e) \Pr(O), \Pr(B) \Pr(I_p | O), \Pr(I_p | B) t-link GrabCut 1 50 seed Interactive Graph Cuts [9] GrabCut [2]. With ground truth O = {O_1, O_2, ..., O_p, ..., O_P} and B = {B_1, B_2, ..., B_p, ..., B_P} and segmentation result L = {L_1, L_2, ..., L_p, ..., L_P}, the error rates are

over seg. = \frac{\sum_{p \in P} \delta(L_p, B_p)}{|P|}    (3.11)

under seg. = \frac{\sum_{p \in P} \delta(L_p, O_p)}{|P|}    (3.12)

Graph Cuts λ Interactive Graph Cuts λ = 0.005 seed t-link Grabcut λ = i3l/segmentation/grabcut.htm 16
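One plausible reading of Eqs. (3.11)-(3.12), used below with a toy ground truth: over-segmentation counts ground-truth background pixels labeled object, under-segmentation counts ground-truth object pixels labeled background, both normalized by the pixel count. The labels here are invented test data:

```python
def seg_error_rates(result, gt):
    """Over/under-segmentation rates in the spirit of Eqs. (3.11)-(3.12).

    result : list of labels ('obj' / 'bkg') produced by the segmentation
    gt     : ground-truth labels of the same length
    """
    P = len(result)
    # background wrongly taken as object
    over = sum(1 for r, g in zip(result, gt) if g == 'bkg' and r == 'obj') / P
    # object wrongly taken as background
    under = sum(1 for r, g in zip(result, gt) if g == 'obj' and r == 'bkg') / P
    return over, under

gt     = ['obj', 'obj', 'obj', 'bkg', 'bkg', 'bkg', 'bkg', 'bkg']
result = ['obj', 'obj', 'bkg', 'obj', 'bkg', 'bkg', 'bkg', 'bkg']
over, under = seg_error_rates(result, gt)
```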
27 3.5: n-link t-link % 2% 2% % t-link 17
28 3.1: [%] Interactive GrabCut[2] Graph Cuts[9] over seg under seg total (26 ) (24 ) 3.2: [%] Interactive GrabCut[2] Graph Cuts[9] over seg under seg total over seg under seg total (err) (3.11) over segmentation (3.12) under segmentation λ (2.1) t-link n-link λ Interactive Graph Cuts Interactive Graph Cuts [9] λ λ =0.005 n-link Interactive Graph Cuts λ t-link Interactive Graph Cuts 18
29 Fig. 3.6: Segmentation examples and false-detection rates. The conventional method produces many falsely detected regions, while the proposed method extracts the object region stably. In the conventional method only color information is used for the t-links, so background regions whose colors resemble the object's are falsely detected. In the proposed method, the result of the previous Graph Cuts in the iteration yields t-links that capture the rough shapes of the object and background regions, so sporadic false detections are suppressed even when λ is made large. The proposed method therefore gives stable segmentation results as λ is varied. Processing time: the processing time of the conventional and proposed methods is measured on three image sizes, 150x113, 300x225, and 600x450, on a PC with an Intel(R) Xeon 2.66 GHz (x8) and 4.0 GB of memory. Table 3.3 lists the processing time and the number of iterations for each method. Because of its iterative processing, the proposed method's processing time increases substantially; possible remedies are reducing the number of iterations, or speeding up the graph cuts by building the graph from superpixels as in Lazy Snapping [1]. 19
30 3.7: λ 3.3: [s] Interactive Graph Cuts [9] GrabCut [2] 150x113 (4 ) 9.81 (8 ) 300x225 (3 ) (10 ) 600x450 (12 ) (12 ) % 4.79% λ 20
31 4 4.1 Interactive Graph Cuts [9] n-link t-link Graph Cuts Mean Shift Segmentation [4]: each sample x_i = {x_i^s, x_i^t, x_i^r} (with components in the s, t, and r domains) is assigned a convergence point z_i and a label L_i by iterating the sequence {y_j}_{j=1,2,...}:

y_{j+1} = \frac{\sum_{i=1}^{n} x_i\, g(\| (x - x_i) / h \|^2)}{\sum_{i=1}^{n} g(\| (x - x_i) / h \|^2)}    (4.1)

g(x) = \frac{C}{h_s^2 h_t h_r^p}\, k(\| x^s / h_s \|^2)\, k(\| x^t / h_t \|^2)\, k(\| x^r / h_r \|^2)    (4.2)

21
32 4.1: h_s, h_t, h_r are the kernel bandwidths, C is the normalization constant, and k(x) the kernel profile. Mean Shift Segmentation:
1. Run mean shift for each point and store its convergence point z_i = y_{i,c}
2. Group the convergence points {z_i} into clusters {C_p}_{p=1,...,m}
3. Assign labels L_i = {p | z_i \in C_p}
4. Eliminate regions with fewer than M pixels
Mean Shift Segmentation h_s
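Steps 1-3 above can be sketched in one dimension with the update of Eq. (4.1); the sample points and bandwidth are arbitrary test values:

```python
import numpy as np

def mean_shift_converge(points, h, iters=100):
    """Step 1: iterate the mean-shift update of Eq. (4.1) with a Gaussian
    profile g until (approximate) convergence; returns z_i for each x_i."""
    z = points.astype(float).copy()
    for _ in range(iters):
        for i in range(len(z)):
            g = np.exp(-((z[i] - points) / h) ** 2)
            z[i] = np.sum(g * points) / np.sum(g)
    return z

pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
z = mean_shift_converge(pts, h=0.5)
# Steps 2-3: convergence points that (almost) coincide share one cluster label
labels = np.unique(np.round(z, 2), return_inverse=True)[1]
```

Points drawn from the two groups converge to two separate modes, giving two labels; a pixel-count filter (step 4) would then drop clusters smaller than M.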
33 4.2: 4.3: 4.3 seed 10 ( ) Mean Shift ( ) : [%] over segmentation under segmentation total % Mean Shift Segmentation 23
34 4.4: % Mean Shift Segmentation 24
35 5 SIFT SIFT (Scale-Invariant Feature Transform) 5.1 Bag of Keypoints Bag of Keypoints [3] Bag of Keypoints Bag of Keypoints Bag of Words Bag of Words Bag of Keypoints (keypoint, visual word, visual term) Bag of Keypoints 5.1 Bag of Keypoints Bag of Keypoints 25
36 5.1: Bag of Keypoints Bag of Keypoints SIFT SIFT SIFT 26
37 SIFT SIFT SIFT 2 Interest point detector SIFT Quelhas [27] SIFT [28] DoG Csurka [3] Affine Invariant keypoint [29] SIFT : Affine Invariant keypoint SIFT (a)affine Invariant keypoint (b) (c) 8 ( [3] ) Regular grid SIFT Fei-Fei [5] SIFT 13 DoG 5.1: [%] ( [5] ) Descriptor Grid Random DoG 11x11 pixel N/A 128-dim SIFT Bag of Keypoints 27
38 SVM Naive Bayes pLSA LDA pLSA LDA Bag-of-Words Naive Bayes: given the words w = {w_1, w_2, ..., w_n} of a document and a class c, Naive Bayes selects

c^* = \arg\max_c p(c | w) \propto p(c)\, p(w | c) = p(c) \prod_n p(w_n | c)    (5.1)

pLSA: pLSA (Probabilistic Latent Semantic Analysis) [30] relates the two observed variables, documents d \in D = {d_1, ..., d_N} and words w \in W = {w_1, ..., w_M}, through latent topics z \in Z = {z_1, ..., z_K}:

p(d, w) = p(d) \sum_{z \in Z} p(w | z)\, p(z | d)    (5.2)

p(d, w) = \sum_{z \in Z} p(z)\, p(d | z)\, p(w | z)    (5.3)

LDA pLSA Sivic [31] pLSA Translation and Scale invariant pLSA (TSI-pLSA) 28
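Equation (5.1) in log space, with hypothetical visual-word likelihoods (the vocabulary and probabilities are invented for illustration):

```python
import math

def naive_bayes(words, priors, word_probs):
    """c = arg max_c p(c) * prod_n p(w_n | c), Eq. (5.1), in log space."""
    scores = {c: math.log(p) + sum(math.log(word_probs[c][w]) for w in words)
              for c, p in priors.items()}
    return max(scores, key=scores.get)

priors = {'car': 0.5, 'face': 0.5}
word_probs = {'car':  {'wheel': 0.60, 'eye': 0.10, 'window': 0.30},
              'face': {'wheel': 0.05, 'eye': 0.70, 'window': 0.25}}
pred_car = naive_bayes(['wheel', 'window', 'wheel'], priors, word_probs)
pred_face = naive_bayes(['eye', 'eye', 'window'], priors, word_probs)
```

Working in log space avoids underflow when the product of Eq. (5.1) runs over many visual words.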
39 5.3: 5.2 SIFT [32, 33]: from the position (u, v) and intensity I of each pixel, the feature x_i = {u_i, v_i, I_i}^T is formed, and the mixture parameters Φ = {α_j, φ_j = (µ_j, Σ_j)}_{j=1}^{c} of x in (5.4) are estimated with a deterministic annealing EM (DAEM) algorithm [34]:

Φ_ML = \arg\max_{Φ} \sum_{j=1}^{c} (α_j\, p_j(x | µ_j, Σ_j))^{β}

p(x | µ_j, Σ_j) = \frac{1}{\sqrt{(2π)^3 |Σ_j|}} \exp\left\{ -\frac{1}{2} (x - µ_j)^T Σ_j^{-1} (x - µ_j) \right\}    (5.4)

p_j(x | µ_j, Σ_j) is the Gaussian with mean µ_j and covariance Σ_j, φ_j = {µ_j, Σ_j}; β is the temperature parameter of DAEM (β = 1 gives the ordinary EM), and the mixing weights satisfy α_j > 0 and \sum_{j=1}^{c} α_j = 1. Φ_ML 3 2 (u, v) 29
40 5.4: GMM. With the estimated Φ_ML, each feature x is assigned to the mixture component of maximum likelihood:

C_i = \arg\max_i p_i(x | φ_i)    (5.5)

5.4(c) Mean-Shift [4] 5.5 x_i = {u_i, v_i, I_i}^T 3 30
41 5.5: Mean-Shift SIFT descriptor [28] SIFT Descriptor SIFT descriptor: for the smoothed image L(x, y), the gradient orientation θ(x, y) and magnitude m(x, y) are

m(x, y) = \sqrt{f_x(x, y)^2 + f_y(x, y)^2}    (5.6)

\theta(x, y) = \tan^{-1}\left( \frac{f_y(x, y)}{f_x(x, y)} \right)    (5.7)

f_x(x, y) = L(x + 1, y) - L(x - 1, y)    (5.8)

f_y(x, y) = L(x, y + 1) - L(x, y - 1)    (5.9)

From m and θ, a weighted orientation histogram is built:

w(x, y) = G(x, y, \sigma) \cdot m(x, y)    (5.10)

h_\theta = \sum_x \sum_y w(x, y)\, \delta[\theta, \theta(x, y)]    (5.11)

where the sums run over x and y, G(x, y, σ) is a Gaussian weight, and θ indexes the orientation bins. 31
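Equations (5.6)-(5.11) can be sketched directly; for simplicity this uses uniform weights instead of the Gaussian of Eq. (5.10), and a synthetic image with a single vertical step edge:

```python
import math

def gradient_orientation_histogram(L, bins=8):
    """Gradient magnitude/orientation (Eqs. 5.6-5.9) accumulated into an
    orientation histogram as in Eq. (5.11), with uniform weights."""
    h = [0.0] * bins
    H, W = len(L), len(L[0])
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            fx = L[y][x + 1] - L[y][x - 1]        # Eq. (5.8)
            fy = L[y + 1][x] - L[y - 1][x]        # Eq. (5.9)
            m = math.hypot(fx, fy)                # Eq. (5.6)
            theta = math.atan2(fy, fx) % (2 * math.pi)   # Eq. (5.7)
            h[int(theta / (2 * math.pi) * bins) % bins] += m
    return h

# A vertical step edge: all gradients point in the +x direction (theta = 0)
L = [[0, 0, 0, 1, 1, 1] for _ in range(6)]
hist = gradient_orientation_histogram(L)
```

All gradient energy lands in the bin for θ = 0, which is the behavior the orientation histogram is meant to capture.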
42 5.6: 5.7: SIFT 4x4 8 4x SIFT SIFT SIFT LBG SIFT
43 5.8: 5.3 k-nn :
44 5.10: N = {n_1, ..., n_4}^T, E = {e_1, ..., e_6}^T, T = {n_1, ..., n_4, e_1, ..., e_6}^T X

cost(T, X) = \sum_{i=1}^{4} |n_i^t - n_i^x| + \sum_{j=1}^{6} |e_j^t - e_j^x|    (5.12)

T X T X T X

Cost(T, X) = \min_i \{ cost(T, X_i) \}    (5.13)

The local cost Cost_l and global cost Cost_g are blended with the weight α:

Cost = \alpha\, Cost_l + (1 - \alpha)\, Cost_g    (5.14)

(0 \leq \alpha \leq 1) kNN 34
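Equations (5.12)-(5.14) can be sketched as follows; the feature values and α are hypothetical, not numbers from the thesis:

```python
def cost(t_nodes, t_edges, x_nodes, x_edges):
    """cost(T, X) of Eq. (5.12): L1 distance over 4 node and 6 edge features."""
    return (sum(abs(a - b) for a, b in zip(t_nodes, x_nodes))
            + sum(abs(a - b) for a, b in zip(t_edges, x_edges)))

def matching_cost(T, models):
    """Cost(T, X) of Eq. (5.13): the smallest cost over stored models X_i."""
    return min(cost(T[0], T[1], X[0], X[1]) for X in models)

def blended_cost(cost_l, cost_g, alpha):
    """Eq. (5.14): Cost = alpha * Cost_l + (1 - alpha) * Cost_g."""
    return alpha * cost_l + (1.0 - alpha) * cost_g

# Hypothetical structures: 4 node features and 6 edge features each
T = ([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
models = [([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 2.0, 2.0, 3.0, 4.0]),
          ([0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0])]
best = matching_cost(T, models)
total = blended_cost(best, 2.0, alpha=0.1)
```

With a small α, the global cost dominates the blend, matching the α = 0.1 setting reported in the experiments.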
45 (SH) (HG) (BK) (VH) : α % 5.3 α =0.1 (α =0) (a)(b) (e)(f) (c)(d) (g)(h)
46 5.2: [%] α SH HG Class BK VH : (α =0.1) out SH HG BK VH correct rate[%] SH HG in BK VH Caltech database bok1 [3] Bag of keypoints SIFT bok2 [35] proposed method GMM 1 Datasets/Caltech256/ 36
47 bok1 17.6% bok2 5.6% 45 bok1 bok GMM 5.16 GMM bok2 bok2 5.5 SIFT 3.2% Bag of keypoints 17.6% 37
48 5.12: 38
49 5.13: 5.14: 39
50 5.15: 5.16: 40
51 % λ 4 Mean Shift Segmentation 4.23% 5 3.2% Bag of keypoints 17.6% Mean Shift Segmentation 41
55 [1] Y. Li, J. Sun, C.-K. Tang and H.-Y. Shum: Lazy snapping, ACM Trans. Graph., 23, 3, pp (2004). [2] C. Rother, V. Kolmogorov and A. Blake: grabcut : interactive foreground extraction using iterated graph cuts, ACM Trans. Graph., 23, 3, pp (2004). [3] C. Dance, J. Willamowski, L. Fan, C. Bray and G. Csurka: Visual categorization with bags of keypoints, ECCV International Workshop on Statistical Learning in Computer Vision (2004). [4] D. Comaniciu and P. Meer: Mean shift: A robust approach toward feature space analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 5, pp (2002). [5] F.-F. Li and P. Perona: A bayesian hierarchical model for learning natural scene categories, CVPR 05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 05) - Volume 2, Washington, DC, USA, IEEE Computer Society, pp (2005). [6] A. W. Michael Kass and D. Terzopoulos: Snakes: Active contour models, Int. J. Computer Vision, 1, 4, pp (1988). [7] M. Sussman, P. Smereka and S. Osher: A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys., 114, 1, pp (1994). [8] Y. Boykov and V. Kolmogorov: An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision, IEEE Transactions on Pattern Analysis and Machine Intelligence, 26, 9, pp (2004). [9] Y. Boykov and M.-P. Jolly: Interactive graph cuts for optimal boundary & region segmentation of objects in n-d images, ICCV2001, 01, p. 105 (2001). [10] Y. Boykov and G. Funka-Lea: Graph cuts and efficient n-d image segmentation, Int. J. Comput. Vision, 70, 2, pp (2006). 45
56 [11] ( ),. CVIM ( ), 31, pp (2007). [12] D. Greig, B. Porteous and A. Seheult: Exact maximum a posteriori estimation for binary images, J. Royal Statistical Soc., Series B, 51, 2, pp (1989). [13] Y. Boykov, O. Veksler and R. Zabih: Fast approximate energy minimization via graph cuts, Proc. IEEE Trans. Pattern Analysis and Machine Intelligence, 23, 11, pp (2001). [14] Y. Boykov, O. Veksler and R. Zabih: Markov random fields with efficient approximations, Technical Report TR (1997). [15] H. Ishikawa and D. Geiger: Segmentation by grouping junctions, CVPR 98: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, IEEE Computer Society, p. 125 (1998). [16] H. Ishikawa and D. Geiger: Occlusions, discontinuities, and epipolar lines in stereo, ECCV 98: Proceedings of the 5th European Conference on Computer Vision-Volume I, London, UK, Springer-Verlag, pp (1998). [17] J. Kim, V. Kolmogorov and R. Zabih: Visual correspondence using energy minimization and mutual information, ICCV 03: Proceedings of the Ninth IEEE International Conference on Computer Vision, Washington, DC, USA, IEEE Computer Society, p (2003). [18] V. Kolmogorov and R. Zabih: Computing visual correspondence with occlusions via graph cuts, ICCV, pp (2001). [19] M. H. Lin and C. Tomasi: Surfaces with occlusions from layered stereo, cvpr, 01, p. 710 (2003). [20] Y. Boykov and G. Funka-Lea: Graph cuts and efficient n-d image segmentation, Int. J. Comput. Vision, 70, 2, pp (2006). [21] Y. Boykov and V. Kolmogorov: Computing geodesics and minimal surfaces via graph cuts, ICCV 03: Proceedings of the Ninth IEEE International Conference on Computer Vision, Washington, DC, USA, IEEE Computer Society, p. 26 (2003). [22] V. Kolmogorov and R. Zabih: Multi-camera scene reconstruction via graph cuts, ECCV 02: Proceedings of the 7th European Conference on Computer Vision-Part III, London, UK, Springer-Verlag, pp (2002). 46
57 [23] L. Ford and D. Fulkerson: Flows in Networks (1962). [24] A. V. Goldberg and R. E. Tarjan: A new approach to the maximum flow problem, STOC 86: Proceedings of the eighteenth annual ACM symposium on Theory of computing, New York, NY, USA, ACM, pp (1986). [25] C. Stauffer and W. E. L. Grimson: Adaptive background mixture models for realtime tracking, Proceedings of the IEEE Computer Science Conference on Computer Vision and Pattern Recognition (CVPR-99), Los Alamitos, IEEE, pp (1999). [26] A. P. Dempster, N. M. Laird and D. B. Rubin: Maximum likelihood from incomplete data via the EM algorithm, Journal of the Royal Statistical Society. Series B (Methodological), 39, 1, pp (1977). [27] P. Quelhas, F. Monay, J.-M. Odobez, D. Gatica-Perez, T. Tuytelaars and L. V. Gool: Modeling scenes with local descriptors and latent aspects, ICCV 05: Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV 05) Volume 1, Washington, DC, USA, IEEE Computer Society, pp (2005). 47
58 [35] S. Lazebnik, C. Schmid and J. Ponce: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories, CVPR 06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, IEEE Computer Society, pp (2006). [36] J. J. Koenderink: The structure of images, Proc. of Biological Cybernetics, 50, pp (1984). [37] T. Lindeberg: Scale-space theory: A basic tool for analysing structures at different scales, J. of Applied Statistics, 21(2), pp (1994). [38] D. G. Lowe: Object recognition from local scale-invariant features, Proc. of the International Conference on Computer Vision ICCV, Corfu, pp (1999). 48
59 [1],,.,, Vol 22, 2008( ) [1] S. Shimizu, T. Nagahashi, and H. Fujiyoshi. Robust and Accurate Detection of Object Orientation and ID without Color Segmentation, Proc. on ROBOCUP2005 SYMPOSIUM, [2] Tomoyuki Nagahashi, Hironobu Fujiyoshi, and Takeo Kanade. Object Type Classification Using Structure-based Feature Representation, MVA2007 IAPR Conference on Machine Vision Applications, pp , May, [3] Tomoyuki Nagahashi, Hironobu Fujiyoshi, and Takeo Kanade. Image Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing, Asian Conference on Computer Vision 2007, Part II, LNCS 4844, pp , [1],,. ID, 21 SIG-Challenge, pp , May, [2],,., CVIM 154, pp ,
60 [3],,. SIFT, SC-07-8, pp39-44, Jan [4],,., 10 (MIRU2007), pp , Jul, [1],,.,, O-455, Sep, [1] MIRU2007 [2]
61 A Mean Shift A.1 Kernel Density Estimation: given n d-dimensional samples {x_i}_{i=1,...,n}, the kernel (Parzen) density estimate is

\hat{f}_{h,K}(x) = \frac{1}{n h^d} \sum_{i=1}^{n} K\left( \frac{x - x_i}{h} \right)    (A.1)

In (A.1), K is the kernel, x the point of evaluation, and h the bandwidth. A radially symmetric kernel can be written with a profile k(x):

K(x) = c_{k,d}\, k(\| x \|^2)    (A.2)

where c_{k,d} is the normalization constant of K(x). Normal kernel:

k_N(x) = \exp\left( -\frac{1}{2} x \right), \quad x \geq 0    (A.3)

K_N(x) = (2\pi)^{-d/2} \exp\left( -\frac{1}{2} \| x \|^2 \right)    (A.4)

Epanechnikov kernel:

k_E(x) = 1 - x if 0 \leq x \leq 1; 0 if x > 1    (A.5)

K_E(x) = \frac{1}{2} c_d^{-1} (d + 2)(1 - \| x \|^2) if \| x \| \leq 1; 0 otherwise    (A.6)

51
62 Substituting (A.2) into (A.1),

\hat{f}_{h,K}(x) = \frac{c_{k,d}}{n h^d} \sum_{i=1}^{n} k\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)    (A.7)

A.2 Density Gradient Estimation: the gradient of the density estimate \hat{f}(x) is taken as the estimate \hat{\nabla} f(x) of the density gradient. Differentiating (A.7),

\hat{\nabla} f_{h,K}(x) = \frac{2 c_{k,d}}{n h^{d+2}} \sum_{i=1}^{n} (x - x_i)\, k'\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)    (A.8)

Define a new profile from the derivative k'(x):

g(x) = -k'(x)    (A.9)

G(x) = c_{g,d}\, g(\| x \|^2)    (A.10)

where c_{g,d} is the corresponding normalization constant. Substituting (A.9) into (A.8),

\hat{\nabla} f_{h,K}(x) = \frac{2 c_{k,d}}{n h^{d+2}} \sum_{i=1}^{n} (x_i - x)\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)
 = \frac{2 c_{k,d}}{n h^{d+2}} \left[ \sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right) \right] \left[ \frac{\sum_{i=1}^{n} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)} - x \right]    (A.11)

By analogy with (A.7), the density estimate with kernel G at x is

\hat{f}_{h,G}(x) = \frac{c_{g,d}}{n h^d} \sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)    (A.12)

and the Mean Shift Vector m_{h,G}(x) is

m_{h,G}(x) = \frac{\sum_{i=1}^{n} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)} - x    (A.13)

52
63 Substituting (A.12) and (A.13) into (A.11),

\hat{\nabla} f_{h,K}(x) = \hat{f}_{h,G}(x) \frac{2 c_{k,d}}{h^2 c_{g,d}} m_{h,G}(x)    (A.14)

m_{h,G}(x) = \frac{1}{2} h^2 c\, \frac{\hat{\nabla} f_{h,K}(x)}{\hat{f}_{h,G}(x)}    (A.15)

with the constant c = c_{g,d} / c_{k,d}. By (A.15), the Mean Shift Vector m_{h,G}(x) computed with kernel G is proportional to the normalized density gradient estimated with kernel K, so it points toward the direction of maximum increase of the density. Repeatedly applying the Mean Shift Vector of (A.13) (Fig. A.1) generates the sequence {y_j}_{j=1,2,...}:

y_{j+1} = \frac{\sum_{i=1}^{n} x_i\, g\left( \left\| \frac{y_j - x_i}{h} \right\|^2 \right)}{\sum_{i=1}^{n} g\left( \left\| \frac{y_j - x_i}{h} \right\|^2 \right)}    (A.16)

Each step moves y_j by the Mean Shift Vector to y_{j+1}; Fig. A.2 illustrates the Mean Shift procedure. 53
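The uphill property of Eqs. (A.13)-(A.15) can be checked numerically: a single mean-shift step from x increases the kernel density estimate. The sample set and bandwidth below are arbitrary:

```python
import math

def kde(x, samples, h):
    """1-D kernel density estimate of Eq. (A.1) with the normal kernel."""
    s = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in samples)
    return s / (len(samples) * h * math.sqrt(2.0 * math.pi))

def mean_shift_vector(x, samples, h):
    """m_{h,G}(x) of Eq. (A.13), with the Gaussian profile g."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in samples]
    return sum(wi * xi for wi, xi in zip(w, samples)) / sum(w) - x

samples = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
x = 2.0
m = mean_shift_vector(x, samples, h=0.5)
```

Starting to the right of all the data, the vector is negative (toward the samples) and the density at x + m is higher than at x, as Eq. (A.15) predicts.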
64 A.2: Mean Shift (from [4]) 54
65 B Scale-Invariant Feature Transform SIFT [28] SIFT (detection) (description) 2 1. detection 2. 3. description DoG B.1 1 DoG B.1.1 LoG Koenderink [36] Lindeberg [37] Lindeberg Scale-normalized Laplacian-of-Gaussian (LoG) LoG σ (B.1) LoG (B.1) 55
66 B.1: LoG

LoG = f(\sigma) = \frac{x^2 + y^2 - 2\sigma^2}{2\pi\sigma^6} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)    (B.1)

where σ is the scale and (x, y) the position. Instead of the LoG, Lowe [38] uses the Difference-of-Gaussian (DoG), which approximates the LoG through the heat-diffusion relation for the Gaussian G:

\frac{\partial G}{\partial \sigma} = \sigma \nabla^2 G    (B.2)

\frac{\partial G}{\partial \sigma} \approx \frac{G(x, y, k\sigma) - G(x, y, \sigma)}{k\sigma - \sigma}    (B.3)

From (B.2) and (B.3),

\sigma \nabla^2 G = \frac{\partial G}{\partial \sigma} \approx \frac{G(x, y, k\sigma) - G(x, y, \sigma)}{k\sigma - \sigma}    (B.4)

G(x, y, k\sigma) - G(x, y, \sigma) \approx (k - 1) \sigma^2 \nabla^2 G    (B.5)

so the DoG approximates the scale-normalized LoG σ²∇²G up to the constant factor (k - 1). SIFT uses the DoG. 56
67 B.1.2 Difference-of-Gaussian: the input image I(u, v) smoothed with the Gaussian G(x, y, σ) gives

L(u, v, \sigma) = G(x, y, \sigma) * I(u, v)    (B.6)

G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)    (B.7)

The DoG image D(u, v, σ) is the difference of two smoothed images whose scales differ by the factor k:

D(u, v, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * I(u, v) = L(u, v, k\sigma) - L(u, v, \sigma)    (B.8)

σ_0 k B.2 DoG σ SIFT σ B.2: DoG B.1.3 σ σ B.3 σ_0 L_1(σ_0) σ_0 k kσ_0 57
68 L_1(kσ_0) σ 1 1 2σ_0 L_1(2σ_0): when the scale reaches 2σ_0, the image L_1(2σ_0) is resampled to half size (1/2) to become the first image L_2(σ_0) of the next octave:

L_1(2σ_0) \Rightarrow L_2(σ_0)    (B.9)

σ B.3: σ 58
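The DoG of Eq. (B.8) can be sketched on a 1-D signal; the blob width, σ, and k below are illustrative values. A blob whose size matches the scale produces a DoG extremum at its center:

```python
import numpy as np

def gaussian_blur(I, sigma):
    """1-D Gaussian smoothing, L(sigma) = G(sigma) * I (Eq. B.6)."""
    r = int(4 * sigma) + 1
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return np.convolve(I, g / g.sum(), mode='same')

def dog(I, sigma, k):
    """D(sigma) = L(k * sigma) - L(sigma), Eq. (B.8), for a 1-D signal."""
    return gaussian_blur(I, k * sigma) - gaussian_blur(I, sigma)

I = np.zeros(101)
I[48:53] = 1.0                       # a bright blob centered at index 50
D = dog(I, sigma=2.0, k=np.sqrt(2.0))
```

The strongest (negative) DoG response sits at the blob center, which is what the extremum search over space and scale in B.1.5 detects.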
69 B.1.4 k: to divide one octave, from σ_0 to 2σ_0, into s scale steps, the factor between adjacent scales is k = 2^{1/s}. Fig. B.4 shows the DoG images for s = 2, i.e. k = 2^{1/2} = \sqrt{2}. Since the extremum search over 3 adjacent DoG images covers s intervals per octave, s + 2 DoG images and therefore s + 3 smoothed images are generated. In [28], s = 3 and σ_0 = ... are used. B.4: s = 2 DoG 59
70 B.1.5 DoG extrema: keypoint candidates are the extrema of the DoG images over both space and scale σ. Each sample of a DoG image is compared against its 26 neighbors, the 8 surrounding samples in the same image and the 9 samples in each of the two adjacent DoG images (Fig. B.5), and is kept only when it is an extremum among them. σ DoG DoG DoG B.6 DoG B.6(a) 2 (b) DoG B.6(a) DoG σ_1 B.6(b) σ_2, σ_2 = 2σ_1, 2 DoG σ_2 SIFT σ B.5: B.2 Keypoint localization: the candidates obtained in B.1 from the DoG still include low-contrast points (low contrast) and edge responses, which are removed as follows. 60
71 B.6: DoG. Edge responses are eliminated with the 2x2 Hessian of the DoG image:

H = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{bmatrix}    (B.10)

Let α and β (α > β) be the eigenvalues of H (the principal curvatures). Then

Tr(H) = D_{xx} + D_{yy} = \alpha + \beta    (B.11)

Det(H) = D_{xx} D_{yy} - (D_{xy})^2 = \alpha\beta    (B.12)

With the ratio γ of the two curvatures defined by α = γβ,

\frac{Tr(H)^2}{Det(H)} = \frac{(\alpha + \beta)^2}{\alpha\beta} = \frac{(\gamma\beta + \beta)^2}{\gamma\beta^2} = \frac{(\gamma + 1)^2}{\gamma}    (B.13)

61
72 B.7: α β. A keypoint is kept when

\frac{Tr(H)^2}{Det(H)} < \frac{(\gamma_{th} + 1)^2}{\gamma_{th}}    (B.14)

where γ_th in (B.14) is the allowed ratio of principal curvatures; [28] uses γ_th = ... (Fig. B.7(a), B.7(b)). Sub-pixel estimation: around a candidate x = (x, y, σ)^T, the DoG function D(x) is approximated by the second-order Taylor expansion

D(\mathbf{x}) = D + \frac{\partial D}{\partial \mathbf{x}}^T \mathbf{x} + \frac{1}{2} \mathbf{x}^T \frac{\partial^2 D}{\partial \mathbf{x}^2} \mathbf{x}    (B.15)

Setting the derivative of (B.15) with respect to x to 0,

\frac{\partial D}{\partial \mathbf{x}} + \frac{\partial^2 D}{\partial \mathbf{x}^2} \hat{\mathbf{x}} = 0    (B.16)

62
73 Solving (B.16) for the offset \hat{x}:

\hat{\mathbf{x}} = -\left( \frac{\partial^2 D}{\partial \mathbf{x}^2} \right)^{-1} \frac{\partial D}{\partial \mathbf{x}}    (B.17)

Written out for x = (x, y, σ)^T, (B.16) is the 3x3 linear system

\begin{bmatrix} D_{xx} & D_{xy} & D_{x\sigma} \\ D_{xy} & D_{yy} & D_{y\sigma} \\ D_{x\sigma} & D_{y\sigma} & D_{\sigma\sigma} \end{bmatrix} \begin{bmatrix} x \\ y \\ \sigma \end{bmatrix} = -\begin{bmatrix} D_x \\ D_y \\ D_\sigma \end{bmatrix}    (B.18)

whose solution is

\begin{bmatrix} x \\ y \\ \sigma \end{bmatrix} = -\begin{bmatrix} D_{xx} & D_{xy} & D_{x\sigma} \\ D_{xy} & D_{yy} & D_{y\sigma} \\ D_{x\sigma} & D_{y\sigma} & D_{\sigma\sigma} \end{bmatrix}^{-1} \begin{bmatrix} D_x \\ D_y \\ D_\sigma \end{bmatrix}    (B.19)

(B.19) gives the sub-pixel, sub-scale estimate \hat{x} = (x, y, σ). B.2.3 Low-contrast rejection: substituting (B.17),

\hat{\mathbf{x}} = -\left( \frac{\partial^2 D}{\partial \mathbf{x}^2} \right)^{-1} \frac{\partial D}{\partial \mathbf{x}}    (B.20)

into (B.15) gives the DoG value at the refined position:

D(\hat{\mathbf{x}}) = D + \frac{1}{2} \frac{\partial D}{\partial \mathbf{x}}^T \hat{\mathbf{x}}    (B.21)

Candidates whose DoG value |D(\hat{x})| is small are discarded as low contrast; [28] uses the threshold 0.03 (Fig. B.7(c)). 63
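The edge-response test of Eqs. (B.10)-(B.14) reduces to a ratio check on the Hessian entries. A minimal sketch (γ_th = 10 here is an assumed value, since the thesis elides it):

```python
def is_edge_response(Dxx, Dyy, Dxy, gamma_th=10.0):
    """Reject a keypoint when Tr(H)^2 / Det(H) violates Eq. (B.14)."""
    tr = Dxx + Dyy                      # Eq. (B.11)
    det = Dxx * Dyy - Dxy * Dxy         # Eq. (B.12)
    if det <= 0.0:
        return True                     # principal curvatures of opposite sign
    return tr * tr / det >= (gamma_th + 1.0) ** 2 / gamma_th

corner_like = is_edge_response(5.0, 4.0, 0.0)   # similar curvatures: kept
edge_like = is_edge_response(50.0, 0.5, 0.0)    # one dominant curvature: rejected
```

The test never needs the eigenvalues themselves; trace and determinant are enough, which is why Eqs. (B.11)-(B.13) are introduced.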
74 B.3 Orientation assignment: for the smoothed image L(u, v), the gradient magnitude m(u, v) and orientation θ(u, v) are

m(u, v) = \sqrt{f_u(u, v)^2 + f_v(u, v)^2}    (B.22)

\theta(u, v) = \tan^{-1} \frac{f_v(u, v)}{f_u(u, v)}    (B.23)

f_u(u, v) = L(u + 1, v) - L(u - 1, v), \quad f_v(u, v) = L(u, v + 1) - L(u, v - 1)    (B.24)

From m(x, y) and θ(x, y), a weighted orientation histogram h (Fig. B.8) is built:

h_\theta = \sum_x \sum_y w(x, y)\, \delta[\theta, \theta(x, y)]    (B.25)

w(x, y) = G(x, y, \sigma)\, m(x, y)    (B.26)

The histogram h_θ has 36 bins; w(x, y) weights the gradient magnitude m(x, y) at (x, y) by the Gaussian G(x, y, σ), and δ is the Kronecker delta comparing the bin θ with the quantized θ(x, y). Peaks within 80% of the maximum are taken as keypoint orientations (Fig. B.8). 1 B B.4 SIFT descriptor 128 B.10 64
75 B.8: (Fig. B.10) the region around the keypoint is divided into 4 x 4 = 16 blocks; for each block an 8-bin orientation histogram (45 degrees per bin) is built (Fig. B.11), giving 4 x 4 x 8 = 128 dimensions. 65
76 B.9: 2 B.10: B.5 SIFT SIFT SIFT JPEG 5 1 B.12 SIFT (128 ) B.12 B.12(b), (c), (d), (e) JPEG ( ) B.12(f) Mikolajczyk 66
77 B.11: SIFT [29] 67
78 B.12: SIFT 68
79 C [1],,.,, Vol 22, 2008( ) [1] S. Shimizu, T. Nagahashi, and H. Fujiyoshi. Robust and Accurate Detection of Object Orientation and ID without Color Segmentation, Proc. on ROBOCUP2005 SYMPOSIUM, [2] Tomoyuki Nagahashi, Hironobu Fujiyoshi, and Takeo Kanade. Object Type Classification Using Structure-based Feature Representation, MVA2007 IAPR Conference on Machine Vision Applications, pp , May, [3] Tomoyuki Nagahashi, Hironobu Fujiyoshi, and Takeo Kanade. Image Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing, Asian Conference on Computer Vision 2007, Part II, LNCS 4844, pp , [1],,. ID, 21 SIG-Challenge, pp , May, [2],,., CVIM 154, pp , [3],,. SIFT, SC-07-8, pp39-44, Jan
80 [4],,., 10 (MIRU2007), pp , Jul, [1],,.,, O-455, Sep,
More informationIPSJ SIG Technical Report GPS LAN GPS LAN GPS LAN Location Identification by sphere image and hybrid sensing Takayuki Katahira, 1 Yoshio Iwai 1
1 1 1 GPS LAN GPS LAN GPS LAN Location Identification by sphere image and hybrid sensing Takayuki Katahira, 1 Yoshio Iwai 1 and Hiroshi Ishiguro 1 Self-location is very informative for wearable systems.
More information22_04.dvi
Vol. 1 No. 2 32 40 (July 2008) 1, 2 1 Speaker Segmentation Using Audiovisual Correlation Yuyu Liu 1, 2 and Yoichi Sato 1 Audiovisual correlation has been used successfully for audio source localization.
More information3 2 2 (1) (2) (3) (4) 4 4 AdaBoost 2. [11] Onishi&Yoda [8] Iwashita&Stoica [5] 4 [3] 3. 3 (1) (2) (3)
(MIRU2012) 2012 8 820-8502 680-4 E-mail: {d kouno,shimada,endo}@pluto.ai.kyutech.ac.jp (1) (2) (3) (4) 4 AdaBoost 1. Kanade [6] CLAFIC [12] EigenFace [10] 1 1 2 1 [7] 3 2 2 (1) (2) (3) (4) 4 4 AdaBoost
More information一般社団法人電子情報通信学会 THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGIN
一般社団法人電子情報通信学会 THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS 信学技報 IEICE Technical Report PRMU2017-36,SP2017-12(2017-06)
More informationGoogle Goggles [1] Google Goggles Android iphone web Google Goggles Lee [2] Lee iphone () [3] [4] [5] [6] [7] [8] [9] [10] :
THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS TECHNICAL REPORT OF IEICE.,, 182-8585 1-5-1 E-mail: {maruya-t,akiyama-m}@mm.inf.uec.ac.jp, yanai@cs.uec.ac.jp SURF Bag-of-Features
More information% 2 3 [1] Semantic Texton Forests STFs [1] ( ) STFs STFs ColorSelf-Simlarity CSS [2] ii
2012 3 A Graduation Thesis of College of Engineering, Chubu University High Accurate Semantic Segmentation Using Re-labeling Besed on Color Self Similarity Yuko KAKIMI 2400 90% 2 3 [1] Semantic Texton
More informationIPSJ SIG Technical Report Vol.2013-CVIM-187 No /5/30 1,a) 1,b), 1,,,,,,, (DNN),,,, 2 (CNN),, 1.,,,,,,,,,,,,,,,,,, [1], [6], [7], [12], [13]., [
,a),b),,,,,,,, (DNN),,,, (CNN),,.,,,,,,,,,,,,,,,,,, [], [6], [7], [], [3]., [8], [0], [7],,,, Tohoku University a) omokawa@vision.is.tohoku.ac.jp b) okatani@vision.is.tohoku.ac.jp, [3],, (DNN), DNN, [3],
More information1 Kinect for Windows M = [X Y Z] T M = [X Y Z ] T f (u,v) w 3.2 [11] [7] u = f X +u Z 0 δ u (X,Y,Z ) (5) v = f Y Z +v 0 δ v (X,Y,Z ) (6) w = Z +
3 3D 1,a) 1 1 Kinect (X, Y) 3D 3D 1. 2010 Microsoft Kinect for Windows SDK( (Kinect) SDK ) 3D [1], [2] [3] [4] [5] [10] 30fps [10] 3 Kinect 3 Kinect Kinect for Windows SDK 3 Microsoft 3 Kinect for Windows
More information11) 13) 11),12) 13) Y c Z c Image plane Y m iy O m Z m Marker coordinate system T, d X m f O c X c Camera coordinate system 1 Coordinates and problem
1 1 1 Posture Esimation by Using 2-D Fourier Transform Yuya Ono, 1 Yoshio Iwai 1 and Hiroshi Ishiguro 1 Recently, research fields of augmented reality and robot navigation are actively investigated. Estimating
More informationpaper.dvi
23 Study on character extraction from a picture using a gradient-based feature 1120227 2012 3 1 Google Street View Google Street View SIFT 3 SIFT 3 y -80 80-50 30 SIFT i Abstract Study on character extraction
More information3: 2: 2. 2 Semi-supervised learning Semi-supervised learning [5,6] Semi-supervised learning Self-training [13] [14] Self-training Self-training Semi-s
THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS TECHNICAL REPORT OF IEICE. 599-8531 1-1 E-mail: tsukada@m.cs.osakafu-u.ac.jp, {masa,kise}@cs.osakafu-u.ac.jp Semi-supervised learning
More information258 5) GPS 1 GPS 6) GPS DP 7) 8) 10) GPS GPS 2 3 4 5 2. 2.1 3 1) GPS Global Positioning System
Vol. 52 No. 1 257 268 (Jan. 2011) 1 2, 1 1 measurement. In this paper, a dynamic road map making system is proposed. The proposition system uses probe-cars which has an in-vehicle camera and a GPS receiver.
More information三石貴志.indd
流通科学大学論集 - 経済 情報 政策編 - 第 21 巻第 1 号,23-33(2012) SIRMs SIRMs Fuzzy fuzzyapproximate approximatereasoning reasoningusing using Lukasiewicz Łukasiewicz logical Logical operations Operations Takashi Mitsuishi
More informationIPSJ SIG Technical Report Taubin Ellipse Fitting by Hyperaccurate Least Squares Yuuki Iwamoto, 1 Prasanna Rangarajan 2 and Kenichi Kanatani
1 2 1 2 Taubin Ellipse Fitting by Hyperaccurate Least Squares Yuuki Iwamoto, 1 Prasanna Rangarajan 2 and Kenichi Kanatani 1 This paper presents a new method for fitting an ellipse to a point sequence extracted
More informationMicrosoft PowerPoint - cvim_harada pptx
1 2 Flickr reaches 6 billion photos on 1 Aug, 2011. http://www.flickr.com/photos/eon60/6000000000/ 3 4 http://www.dpchallenge.com/image.php?image_id=997702 5 6 http://www.image-net.org/challenges/lsvrc/2011/pascal_ilsvrc_2011.pptx
More information& 3 3 ' ' (., (Pixel), (Light Intensity) (Random Variable). (Joint Probability). V., V = {,,, V }. i x i x = (x, x,, x V ) T. x i i (State Variable),
.... Deeping and Expansion of Large-Scale Random Fields and Probabilistic Image Processing Kazuyuki Tanaka The mathematical frameworks of probabilistic image processing are formulated by means of Markov
More informationuntitled
(Robot Vision) Vision ( (computer) Machine VisionComputer Vision ( ) ( ) ( ) ( ) ( ) 1 DTV 2 DTV D 3 ( ( ( ( ( DTV D 4 () 5 A B C D E F G H I A B C D E F G H I I = A + D + G - C - F - I J = A + B + C -
More informationSobel Canny i
21 Edge Feature for Monochrome Image Retrieval 1100311 2010 3 1 3 3 2 2 7 200 Sobel Canny i Abstract Edge Feature for Monochrome Image Retrieval Naoto Suzue Content based image retrieval (CBIR) has been
More informationNo. 3 Oct The person to the left of the stool carried the traffic-cone towards the trash-can. α α β α α β α α β α Track2 Track3 Track1 Track0 1
ACL2013 TACL 1 ACL2013 Grounded Language Learning from Video Described with Sentences (Yu and Siskind 2013) TACL Transactions of the Association for Computational Linguistics What Makes Writing Great?
More informationx, y x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = 15 xy (x y) (x + y) xy (x y) (x y) ( x 2 + xy + y 2) = 15 (x y)
x, y x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = 15 1 1977 x 3 y xy 3 x 2 y + xy 2 x 3 + y 3 = 15 xy (x y) (x + y) xy (x y) (x y) ( x 2 + xy + y 2) = 15 (x y) ( x 2 y + xy 2 x 2 2xy y 2) = 15 (x y) (x + y) (xy
More information2. 30 Visual Words TF-IDF Lowe [4] Scale-Invarient Feature Transform (SIFT) Bay [1] Speeded Up Robust Features (SURF) SIFT 128 SURF 64 Visual Words Ni
DEIM Forum 2012 B5-3 606 8510 E-mail: {zhao,ohshima,tanaka}@dl.kuis.kyoto-u.ac.jp Web, 1. Web Web TinEye 1 Google 1 http://www.tineye.com/ 1 2. 3. 4. 5. 6. 2. 30 Visual Words TF-IDF Lowe [4] Scale-Invarient
More information2_05.dvi
Vol. 52 No. 2 901 909 (Feb. 2011) Gradient-Domain Image Editing is a useful technique to do various-type image editing, for example, Poisson Image Editing which can do seamless image composition. This
More informationFig. 1 Left: Example of a target image and lines. Solid lines mean foreground. Dotted lines mean background. Right: Example of an output mask i
Vol. 50 No. 12 3233 3249 (Dec. 2009) 1, 1 2, 2 1, 2 3 3 Seeded Region Growing Seeded Region Growing Seeded Region Growing Seeded Region Growing Proposal and Evaluation of Fast Image Cutout Based on Improved
More information情報処理学会研究報告 IPSJ SIG Technical Report Vol.2013-CVIM-186 No /3/15 EMD 1,a) SIFT. SIFT Bag-of-keypoints. SIFT SIFT.. Earth Mover s Distance
EMD 1,a) 1 1 1 SIFT. SIFT Bag-of-keypoints. SIFT SIFT.. Earth Mover s Distance (EMD), Bag-of-keypoints,. Bag-of-keypoints, SIFT, EMD, A method of similar image retrieval system using EMD and SIFT Hoshiga
More informationx T = (x 1,, x M ) x T x M K C 1,, C K 22 x w y 1: 2 2
Takio Kurita Neurosceince Research Institute, National Institute of Advanced Indastrial Science and Technology takio-kurita@aistgojp (Support Vector Machine, SVM) 1 (Support Vector Machine, SVM) ( ) 2
More information14 2 5
14 2 5 i ii Surface Reconstruction from Point Cloud of Human Body in Arbitrary Postures Isao MORO Abstract We propose a method for surface reconstruction from point cloud of human body in arbitrary postures.
More information211 kotaro@math.titech.ac.jp 1 R *1 n n R n *2 R n = {(x 1,..., x n ) x 1,..., x n R}. R R 2 R 3 R n R n R n D D R n *3 ) (x 1,..., x n ) f(x 1,..., x n ) f D *4 n 2 n = 1 ( ) 1 f D R n f : D R 1.1. (x,
More informationohpmain.dvi
fujisawa@ism.ac.jp 1 Contents 1. 2. 3. 4. γ- 2 1. 3 10 5.6, 5.7, 5.4, 5.5, 5.8, 5.5, 5.3, 5.6, 5.4, 5.2. 5.5 5.6 +5.7 +5.4 +5.5 +5.8 +5.5 +5.3 +5.6 +5.4 +5.2 =5.5. 10 outlier 5 5.6, 5.7, 5.4, 5.5, 5.8,
More informationIPSJ SIG Technical Report Vol.2017-MUS-116 No /8/24 MachineDancing: 1,a) 1,b) 3 MachineDancing MachineDancing MachineDancing 1 MachineDan
MachineDancing: 1,a) 1,b) 3 MachineDancing 2 1. 3 MachineDancing MachineDancing 1 MachineDancing MachineDancing [1] 1 305 0058 1-1-1 a) s.fukayama@aist.go.jp b) m.goto@aist.go.jp 1 MachineDancing 3 CG
More informationxx/xx Vol. Jxx A No. xx 1 Fig. 1 PAL(Panoramic Annular Lens) PAL(Panoramic Annular Lens) PAL (2) PAL PAL 2 PAL 3 2 PAL 1 PAL 3 PAL PAL 2. 1 PAL
PAL On the Precision of 3D Measurement by Stereo PAL Images Hiroyuki HASE,HirofumiKAWAI,FrankEKPAR, Masaaki YONEDA,andJien KATO PAL 3 PAL Panoramic Annular Lens 1985 Greguss PAL 1 PAL PAL 2 3 2 PAL DP
More informationComputer Security Symposium October ,a) 1,b) Microsoft Kinect Kinect, Takafumi Mori 1,a) Hiroaki Kikuchi 1,b) [1] 1 Meiji U
Computer Security Symposium 017 3-5 October 017 1,a) 1,b) Microsoft Kinect Kinect, Takafumi Mori 1,a) Hiroaki Kikuchi 1,b) 1. 017 5 [1] 1 Meiji University Graduate School of Advanced Mathematical Science
More information2011de.dvi
211 ( 4 2 1. 3 1.1............................... 3 1.2 1- -......................... 13 1.3 2-1 -................... 19 1.4 3- -......................... 29 2. 37 2.1................................ 37
More information画像工学入門
セグメンテーション 講義内容 閾値法,k-mean 法 領域拡張法 SNAK 法 P タイル法 モード法 P タイル法 画像内で対象物の占める面積 (P パーセント ) があらかじめわかっているとき, 濃度ヒストグラムを作成し, 濃度値の累積分布が全体の P パーセントとなる濃度値を見つけ, この値を閾値とする. モード法 画像の輝度ヒストグラムを調べ その分布のモード ( 頻値輝度 ) 間の谷をしきい値とする
More informationA Graduation Thesis of College of Engineering, Chubu University Pose Estimation by Regression Analysis with Depth Information Yoshiki Agata
2011 3 A Graduation Thesis of College of Engineering, Chubu University Pose Estimation by Regression Analysis with Depth Information Yoshiki Agata CG [2] [3][4] 3 3 [1] HOG HOG TOF(Time Of Flight) iii
More information2 ECCV2008,2010,2012 ECCV % % % ECCV % % % ECCV % % % Ligh
ECCV2012 1 2 1 3 2012 10 8 11 ECCV2012 1. ECCV2012 European Conference on Computer Vision (ECCV) 12 2012 10 8 11 4 General Chairs Roberto Cipolla (University of Cambridge, UK), Carlo Colombo (University
More informationSpin Image [3] 3D Shape Context [4] Spin Image 2 3D Shape Context Shape Index[5] Local Surface Patch[6] DAI [7], [8] [9], [10] Reference Frame SHO[11]
3-D 1,a) 1 1,b) 3 3 3 1% Spin Image 51.6% 93.8% 9 PCL Point Cloud Library Correspondence Grouping 13.5% 10 3 Extraction of 3-D Feature Point for Effect in Object Recognition based on Local Shape Distinctiveness
More information2014/3 Vol. J97 D No. 3 Recognition-based segmentation [7] 1 DP 1 Conditional random field; CRF [8] [10] CRF / OCR 2 2 2 2 OCR 2 2 2 2. 2 2 2 [11], [1
2, a) Scene Character Extraction by an Optimal Two-Dimensional Segmentation Hiroaki TAKEBE, a) and Seiichi UCHIDA / 2 2 2 2 2 2 1. FUJITSU LABORATORIES LTD., 4 1 1 Kamikodanaka, Nakahara-ku, Kawasaki-shi,
More information一般画像認識のための単語概念の視覚性の分析
Bag-of-keypoints による カテゴリー認識 第 14 回画像センシングシンポジウム (SSII2008) 2008 年 6 月 13 日 電気通信大学 柳井啓司 情報工学科 2 アウトライン 1. イントロダクション 2. Bag-of-keypoints アプローチ その具体的な方法の詳細 3. Bag-of-keypoints アプローチの拡張 位置情報, 色情報の利用 4. 確率的言語モデルの画像への適用
More informationIS2-06 第21回画像センシングシンポジウム 横浜 2015年6月 画像をスーパーピクセルに変換する手法として SLIC[5] を用いる Achanta らによって提案された SLIC 2.2 グラフマッチング は K-means をベースにした手法で 単純な K-means に いる SPIN
Cosegmentation E-mail: {tamanaha, nakayama}@nlab.ci.i.u-tokyo.ac.jp Abstract Cosegmentation Cosegmentation Cosegmentation 1 Never Ending Image Learner[1] Google Cosegmentation Cosegmentation Rother [2]
More information(MIRU2010) NTT Graphic Processor Unit GPU graphi
(MIRU2010) 2010 7 889 2192 1-1 905 2171 905 NTT 243 0124 3-1 E-mail: ac094608@edu.okinawa-ct.ac.jp, akisato@ieee.org Graphic Processor Unit GPU graphic processor unit CUDA Fully automatic extraction of
More information1 filename=mathformula tex 1 ax 2 + bx + c = 0, x = b ± b 2 4ac, (1.1) 2a x 1 + x 2 = b a, x 1x 2 = c a, (1.2) ax 2 + 2b x + c = 0, x = b ± b 2
filename=mathformula58.tex ax + bx + c =, x = b ± b 4ac, (.) a x + x = b a, x x = c a, (.) ax + b x + c =, x = b ± b ac. a (.3). sin(a ± B) = sin A cos B ± cos A sin B, (.) cos(a ± B) = cos A cos B sin
More informationIPSJ-CVIM
1 1 2 1 Estimation of Shielding Object Distribution in Scattering Media by Analyzing Light Transport Shosei Moriguchi, 1 Yasuhiro Mukaigawa, 1 Yasuyuki Matsushita 2 and Yasushi Yagi 1 In this paper, we
More informationSICE東北支部研究集会資料(2017年)
307 (2017.2.27) 307-8 Deep Convolutional Neural Network X Detecting Masses in Mammograms Based on Transfer Learning of A Deep Convolutional Neural Network Shintaro Suzuki, Xiaoyong Zhang, Noriyasu Homma,
More information[6] DoN DoN DDoN(Donuts DoN) DoN 4(2) DoN DDoN 3.2 RDoN(Ring DoN) 4(1) DoN 4(3) DoN RDoN 2 DoN 2.2 DoN PCA DoN DoN 2 DoN PCA 0 DoN 3. DoN
3 1,a) 1,b) 3D 3 3 Difference of Normals (DoN)[1] DoN, 1. 2010 Kinect[2] 3D 3 [3] 3 [4] 3 [5] 3 [6] [7] [1] [8] [9] [10] Difference of Normals (DoN) 48 8 [1] [6] DoN DoN 1 National Defense Academy a) em53035@nda.ac.jp
More information2003 : ( ) :80226561 1 1 1.1............................ 1 1.2......................... 1 1.3........................ 1 1.4......................... 4 2 5 2.1......................... 5 2.2........................
More information:EM,,. 4 EM. EM Finch, (AIC)., ( ), ( ), Web,,.,., [1].,. 2010,,,, 5 [2]., 16,000.,..,,. (,, )..,,. (socio-dynamics) [3, 4]. Weidlich Haag.
:EM,,. 4 EM. EM Finch, (AIC)., ( ), ( ),. 1. 1990. Web,,.,., [1].,. 2010,,,, 5 [2]., 16,000.,..,,. (,, )..,,. (socio-dynamics) [3, 4]. Weidlich Haag. [5]. 606-8501,, TEL:075-753-5515, FAX:075-753-4919,
More information,,, 2 ( ), $[2, 4]$, $[21, 25]$, $V$,, 31, 2, $V$, $V$ $V$, 2, (b) $-$,,, (1) : (2) : (3) : $r$ $R$ $r/r$, (4) : 3
1084 1999 124-134 124 3 1 (SUGIHARA Kokichi),,,,, 1, [5, 11, 12, 13], (2, 3 ), -,,,, 2 [5], 3,, 3, 2 2, -, 3,, 1,, 3 2,,, 3 $R$ ( ), $R$ $R$ $V$, $V$ $R$,,,, 3 2 125 1 3,,, 2 ( ), $[2, 4]$, $[21, 25]$,
More informationHaiku Generation Based on Motif Images Using Deep Learning Koki Yoneda 1 Soichiro Yokoyama 2 Tomohisa Yamashita 2 Hidenori Kawamura Scho
Haiku Generation Based on Motif Images Using Deep Learning 1 2 2 2 Koki Yoneda 1 Soichiro Yokoyama 2 Tomohisa Yamashita 2 Hidenori Kawamura 2 1 1 School of Engineering Hokkaido University 2 2 Graduate
More informationVol.58 No (Sep. 2017) 1 2,a) 3 1,b) , A EM A Latent Class Model to Analyze the Relationship Between Companies Appeal Poi
1 2,a) 3 1,b) 2017 1 17, 2017 6 6 A EM A Latent Class Model to Analyze the Relationship Between Companies Appeal Points and Students Reasons for Application Teppei Sakamoto 1 Haruka Yamashita 2,a) Tairiku
More information熊本県数学問題正解
00 y O x Typed by L A TEX ε ( ) (00 ) 5 4 4 ( ) http://www.ocn.ne.jp/ oboetene/plan/. ( ) (009 ) ( ).. http://www.ocn.ne.jp/ oboetene/plan/eng.html 8 i i..................................... ( )0... (
More information2008 : 80725872 1 2 2 3 2.1.......................................... 3 2.2....................................... 3 2.3......................................... 4 2.4 ()..................................
More information1 Table 1: Identification by color of voxel Voxel Mode of expression Nothing Other 1 Orange 2 Blue 3 Yellow 4 SSL Humanoid SSL-Vision 3 3 [, 21] 8 325
社団法人人工知能学会 Japanese Society for Artificial Intelligence 人工知能学会研究会資料 JSAI Technical Report SIG-Challenge-B3 (5/5) RoboCup SSL Humanoid A Proposal and its Application of Color Voxel Server for RoboCup SSL
More information() n C + n C + n C + + n C n n (3) n C + n C + n C 4 + n C + n C 3 + n C 5 + (5) (6 ) n C + nc + 3 nc n nc n (7 ) n C + nc + 3 nc n nc n (
3 n nc k+ k + 3 () n C r n C n r nc r C r + C r ( r n ) () n C + n C + n C + + n C n n (3) n C + n C + n C 4 + n C + n C 3 + n C 5 + (4) n C n n C + n C + n C + + n C n (5) k k n C k n C k (6) n C + nc
More informationIPSJ SIG Technical Report Vol.2014-MBL-70 No.49 Vol.2014-UBI-41 No /3/15 2,a) 2,b) 2,c) 2,d),e) WiFi WiFi WiFi 1. SNS GPS Twitter Facebook Twit
2,a) 2,b) 2,c) 2,d),e) WiFi WiFi WiFi 1. SNS GPS Twitter Facebook Twitter Ustream 1 Graduate School of Information Science and Technology, Osaka University, Japan 2 Cybermedia Center, Osaka University,
More informationh(n) x(n) s(n) S (ω) = H(ω)X(ω) (5 1) H(ω) H(ω) = F[h(n)] (5 2) F X(ω) x(n) X(ω) = F[x(n)] (5 3) S (ω) s(n) S (ω) = F[s(n)] (5
1 -- 5 5 2011 2 1940 N. Wiener FFT 5-1 5-2 Norbert Wiener 1894 1912 MIT c 2011 1/(12) 1 -- 5 -- 5 5--1 2008 3 h(n) x(n) s(n) S (ω) = H(ω)X(ω) (5 1) H(ω) H(ω) = F[h(n)] (5 2) F X(ω) x(n) X(ω) = F[x(n)]
More information1 I
1 I 3 1 1.1 R x, y R x + y R x y R x, y, z, a, b R (1.1) (x + y) + z = x + (y + z) (1.2) x + y = y + x (1.3) 0 R : 0 + x = x x R (1.4) x R, 1 ( x) R : x + ( x) = 0 (1.5) (x y) z = x (y z) (1.6) x y =
More informationI, II 1, A = A 4 : 6 = max{ A, } A A 10 10%
1 2006.4.17. A 3-312 tel: 092-726-4774, e-mail: hara@math.kyushu-u.ac.jp, http://www.math.kyushu-u.ac.jp/ hara/lectures/lectures-j.html Office hours: B A I ɛ-δ ɛ-δ 1. 2. A 1. 1. 2. 3. 4. 5. 2. ɛ-δ 1. ɛ-n
More informationIPSJ SIG Technical Report Vol.2013-CVIM-187 No /5/30 CT CT CT CT CT,,, Research on Automatic Liver Region Detection from Multi-Slice
CT 1 1 1 1 1 CT CT CT CT,,, Research on Automatic Liver Region Detection from Multi-Slice Abdominal CT Images Abstract: In the diagnosis of liver affection of cirrhosis and hepatocellular carcinoma,etc,
More information微分積分 サンプルページ この本の定価 判型などは, 以下の URL からご覧いただけます. このサンプルページの内容は, 初版 1 刷発行時のものです.
微分積分 サンプルページ この本の定価 判型などは, 以下の URL からご覧いただけます. ttp://www.morikita.co.jp/books/mid/00571 このサンプルページの内容は, 初版 1 刷発行時のものです. i ii 014 10 iii [note] 1 3 iv 4 5 3 6 4 x 0 sin x x 1 5 6 z = f(x, y) 1 y = f(x)
More information,,.,.,,.,.,.,.,,.,..,,,, i
22 A person recognition using color information 1110372 2011 2 13 ,,.,.,,.,.,.,.,,.,..,,,, i Abstract A person recognition using color information Tatsumo HOJI Recently, for the purpose of collection of
More information(3.6 ) (4.6 ) 2. [3], [6], [12] [7] [2], [5], [11] [14] [9] [8] [10] (1) Voodoo 3 : 3 Voodoo[1] 3 ( 3D ) (2) : Voodoo 3D (3) : 3D (Welc
1,a) 1,b) Obstacle Detection from Monocular On-Vehicle Camera in units of Delaunay Triangles Abstract: An algorithm to detect obstacles by using a monocular on-vehicle video camera is developed. Since
More informationCoding theorems for correlated sources with cooperative information
グラフコストの逐次更新を用いた映像顕著領域の自動抽出 2009 年 5 月 28 日 福地賢宮里洸司 (2) 木村昭悟 (1) 高木茂 (2) 大和淳司 (1) (1) 日本電信電話 ( 株 )NTT) コミュニケーション科学基礎研究所メディア情報研究部メディア認識研究グループ (2) 国立沖縄工業高等専門学校情報通信システム工学科 背景 ヒトはどのようにして もの を認識する能力を獲得するのか?
More information