Walking Person Recognition by Matching Video Fragments

Masashi Nishiyama, Mayumi Yuasa, Tomokazu Wakasugi, Tomoyuki Shibata, Osamu Yamaguchi
Corporate Research and Development Center, TOSHIBA Corporation
[email protected]

Abstract
[The abstract text did not survive transcription; only its citation and figure markers remain: [1], [2], FacePass [3], FacePassenger [4], and Figs. 1(a), 1(b).]
[Continued from page 2] In that approach, to register various appearances of the faces of multiple walking persons, detection and tracking tasks are dynamically assigned to each camera to generate sets of face images. Three-dimensional tracking is used to associate the per-person face images obtained from each camera. Performing 3-D tracking accurately requires strict camera calibration, and if a camera position shifts for any reason during operation, tracking breaks down and recognition performance degrades.

Fig. 5: Flow of staged matching.
Fig. 6: Generation of fragmented video sequences within a camera.
Fig. 7: Comparison of fragmented video sequences by the mutual subspace method.

3 Staged matching

We describe a method that associates face images in stages, without camera calibration, and generates a video sequence for each person, so that multiple walking persons can be identified by a video-based recognition method using multiple cameras.

3.1 Framework of staged matching

First, face images are associated within each camera to generate fragmented video sequences. A fragmented sequence is defined by Eq. (1):

  X_l = { x_i | M1(x_i) = l, i = 1, ..., N }    (1)

where x is a single face image, M1 is a function that returns a label for a face image, l is the label attached to a fragmented sequence, and N is the number of acquired face images. Function M1 is described in Section 3.2. Next, fragmented sequences are associated across cameras to generate the merged sequence X~ used for identification. X~ is defined by Eq. (2):

  X~_k = { X_j | M2(X_j) = k, j = 1, ..., M }    (2)

where M2 is a function that returns a label for a fragmented sequence, k is the label attached to a merged sequence, and M is the number of acquired fragmented sequences. Function M2 is described in Section 3.3. Fig. 5 shows how sequences are matched in stages when two persons walk under three cameras.

In a running system, face images are acquired one after another as time passes. To generate fragmented sequences in each camera, the label of each newly acquired face image x is determined by M1, and x is added to the fragmented sequence X that has the same label. A fragmented sequence X to which no new face image has been added for a fixed time T1 is judged to belong to a person who has passed by, and the process moves on to matching fragmented sequences across cameras. The label of X is determined by function M2, and fragmented sequences X, X′ with the same label are merged. A fragmented sequence for which a fixed time T2 has elapsed is regarded as having finished matching and becomes the merged sequence X~, which is then used for identification.

3.2 Labeling for generating fragmented sequences

As shown in Fig. 6, a face image x acquired by a camera is associated by function M1 with the fragmented sequences accumulated in the same camera. For this association, the similarity S of Eq. (3) is computed between x and the latest face image x′ ∈ X of each fragmented sequence:

  S = S_simple / (1 + α(t − t′))    (3)

where S_simple is the simple similarity between x and x′, α is a constant, and t, t′ are the times at which x and x′ were acquired. The simple similarity is defined as S_simple = cos²θ, where θ is the angle between the vectors obtained by raster-scanning the two face images. Function M1 returns the label of the fragmented sequence whose similarity is the highest among those exceeding a threshold S1. If all computed similarities fall below S1, it is judged that a new person has appeared and a new label is returned. A new label is also returned when no fragmented sequence has yet been accumulated.
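The per-camera labeling of Section 3.2 is a simple online procedure. The sketch below is a minimal illustration of Eqs. (1) and (3), not the authors' implementation; the parameter values `alpha` and `s1` and the dictionary representation of fragmented sequences are assumptions:

```python
import numpy as np

def simple_similarity(x, x_prev):
    """S_simple = cos^2(theta) between raster-scanned face images."""
    a = x.ravel() / np.linalg.norm(x.ravel())
    b = x_prev.ravel() / np.linalg.norm(x_prev.ravel())
    return float(np.dot(a, b) ** 2)

def decayed_similarity(x, t, x_prev, t_prev, alpha=0.1):
    """Eq. (3): S = S_simple / (1 + alpha * (t - t_prev))."""
    return simple_similarity(x, x_prev) / (1.0 + alpha * (t - t_prev))

def label_m1(x, t, fragments, s1=0.8, alpha=0.1):
    """Return the label of the best-matching fragmented sequence, or a new
    label when every similarity is below threshold s1 (a new person).
    `fragments` maps label -> (latest_image, latest_time)."""
    best_label, best_s = None, s1
    for label, (x_prev, t_prev) in fragments.items():
        s = decayed_similarity(x, t, x_prev, t_prev, alpha)
        if s > best_s:
            best_label, best_s = label, s
    if best_label is None:
        best_label = max(fragments, default=-1) + 1  # new person appeared
    fragments[best_label] = (x, t)                   # keep latest x' in X
    return best_label
```

The time-decay term in Eq. (3) makes a stale sequence less attractive even when the appearance match is good, which is what pushes long-inactive sequences toward the "person has passed" decision after T1.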
3.3 Matching fragmented sequences across cameras

Function M2 determines the label of a fragmented sequence by computing a similarity S between fragmented sequences with the Orthogonal Mutual Subspace Method (OMSM) [15], as in Fig. 7. When the OMSM similarity between two sequences exceeds a threshold S2 they receive the same label; otherwise a new label is assigned.

3.4 Identification

Identification is performed with the merged sequence X~ using the subspace method [16]. In OMSM, subspaces are first transformed by an orthogonalization matrix O. The similarity S between two subspaces P and Q is defined by the canonical angle θ:

  S = cos²θ    (4)

S takes its maximum value 1 when θ = 0. cos²θ is obtained as the largest eigenvalue λ of the eigenvalue problem

  R a = λ a    (5)
  R = (r_mn)  (m, n = 1, ..., D_P)    (6)
  r_mn = Σ_{l=1}^{D_Q} (ψ_m, φ_l)(φ_l, ψ_n)    (7)

where ψ_m and φ_l are the m-th and l-th basis vectors of subspaces P and Q, (ψ_m, φ_l) denotes their inner product, and D_P, D_Q are the dimensions of P and Q (D_P ≤ D_Q).

4 Experiments

4.1 [Body not recovered; the surviving markers refer to Fig. 8(a)–(f) and to conditions (i), (ii), (iii) with citations [17], [18].]

4.2 Face detection: a face image x is detected using Joint Haar-like features with AdaBoost [19]; [20], [21].

4.3 [Body not recovered; cites [20].]
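Eqs. (4)–(7) reduce to a small eigenvalue problem: with orthonormal bases stacked as columns, R = C Cᵀ where C holds the inner products (ψ_m, φ_l), and its largest eigenvalue is cos²θ of the smallest canonical angle. A minimal sketch of the plain mutual subspace similarity (without OMSM's orthogonalization step):

```python
import numpy as np

def msm_similarity(P, Q):
    """Mutual subspace similarity S = cos^2(theta), Eqs. (4)-(7).

    P: (d, D_P) matrix whose columns are an orthonormal basis of subspace P.
    Q: (d, D_Q) matrix whose columns are an orthonormal basis of subspace Q.
    """
    C = P.T @ Q                   # C[m, l] = (psi_m, phi_l)
    R = C @ C.T                   # r_mn = sum_l (psi_m, phi_l)(phi_l, psi_n)
    lam = np.linalg.eigvalsh(R)   # eigenvalues of R a = lambda a, ascending
    return float(lam[-1])         # largest eigenvalue = cos^2(theta)
```

Equivalently, S is the squared largest singular value of PᵀQ; identical subspaces give S = 1, i.e. θ = 0. OMSM [15] applies an orthogonalizing transform to the reference subspaces before this computation to sharpen the separation between different persons.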
Evaluation conditions: (i) without occlusion, (ii) with occlusion. [Most of this page did not survive transcription; the recoverable pieces follow.]

The threshold η is updated as in Eq. (8) [20]:

  η ← η + β(P1 − P2)    (8)

where η is the threshold, β is a constant, and P1, P2 are quantities whose definitions were lost.

4.4 [Body not recovered; cites [20].]

Surviving dataset markers: face images normalized per [17], [18]; 1024; condition (i) 76 sequences and condition (ii) 59 sequences; threshold S1; counts A 19, B 5, C 4 under (i) and A 7, B 8, C 11 under (ii); Section 5.2.
Tables 1 and 2 report CMR(%) and EER(%) for each camera (C1, C2, C3) and for all cameras combined (All), under conditions (i) and (ii); the numeric entries did not survive transcription.

Two metrics are used:
1. CMR (Correct Match Rate): the rate at which sequences are matched to the correct person.
2. EER (Equal Error Rate): the error rate at the operating point where FAR equals FRR, with

  FAR (False Acceptance Rate) = (falsely accepted impostor attempts) / (total impostor attempts)    (9)
  FRR (False Rejection Rate) = (falsely rejected genuine attempts) / (total genuine attempts)    (10)

[The surrounding discussion of Section 5.1, comparing All against the individual cameras C1, C2, C3 under conditions (i) and (ii), did not survive transcription.]
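The metrics of Tables 1 and 2 can be computed from genuine and impostor similarity scores. The sketch below uses the conventional threshold-sweep form of FAR/FRR/EER; since the paper's exact wording for Eqs. (9) and (10) was lost, this conventional form is an assumption:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR (Eq. 9): fraction of impostor scores accepted (score >= threshold).
    FRR (Eq. 10): fraction of genuine scores rejected (score < threshold)."""
    far = float(np.mean(np.asarray(impostor) >= threshold))
    frr = float(np.mean(np.asarray(genuine) < threshold))
    return far, frr

def equal_error_rate(genuine, impostor):
    """EER: error rate at the threshold where FAR and FRR are closest."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    rates = [far_frr(genuine, impostor, th) for th in thresholds]
    far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2.0
```

Sweeping the threshold over all observed scores is sufficient because FAR and FRR only change at score values that actually occur.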
[This page consisted mainly of result plots; only the figure content is recoverable.]
Fig. 17: CMR(%) versus the number of individuals M, for (i) without matching fragmented sequences, (ii) with matching fragmented sequences, and an ideal variant of (ii).
Fig. 18: EER(%), together with a False Matching Rate(%) plot, versus the number of individuals M, for the same settings.
The evaluation uses cameras C1, C2, C3 combined (All), with up to M = 349 enrolled individuals and threshold S2.
5 Conclusion

[Most of the conclusion text was lost.] The surviving figures indicate that matching fragmented sequences improved the correct match rate from 89.9% to 94.2% and reduced the equal error rate from 8.3% to 4.2%.

References
(Entries whose Japanese authors and titles were stripped during transcription retain only their surviving bibliographic fields.)
[1] IEICE Trans. D-II, Vol. J80-D-II, No. 8, 1997.
[2] IEICE Trans. D-II, Vol. J88-D-II, No. 8.
[3] FacePass, Vol. 56, No. 7, 2002.
[4] FacePassenger, FIT2005, I-010, pp. 27-28.
[5] M. Jones, J. Thornton, et al., 10th.
[6] Z. Yang, H. Ai, B. Wu, S. Lao, and L. Cai, "Face Pose Estimation and its Application in Video Shot Selection," Proc. International Conference on Pattern Recognition, 2004.
[7] R. Chellappa, V. Kruger, and S. Zhou, "Probabilistic Recognition of Human Faces from Video," Proc. IEEE International Conference on Image Processing, Vol. I.
[8] K. S. Huang and M. M. Trivedi, "Streaming Face Recognition using Multicamera Video Arrays," Proc. International Conference on Pattern Recognition, 2002.
[9] IPSJ Trans., Vol. 43, No. SIG 4 (CVIM 4).
[10] IEICE Trans. D-II, Vol. J84-D-II, No. 8.
[11] 8th.
[12] IEICE Trans. D-II, Vol. J84-D-II, No. 3.
[13] J. G. Wang, R. Venkateswarlu, and E. T. Lim, "Face tracking and recognition from stereo sequence," Proc. 4th International Conference on Audio- and Video-based Biometric Person Authentication.
[14] IEICE Technical Report, PRMU.
[15] IPSJ SIG Technical Report, 2005-CVIM-151 (3).
[16] E. Oja, Subspace Methods of Pattern Recognition, Research Studies Press, England, 1983.
[17] T. Kozakaya and O. Yamaguchi, "Face Recognition by Projection-based 3D Normalization and Shading Subspace Orthogonalization," Proc. 7th International Conference on Automatic Face and Gesture Recognition.
[18] M. Nishiyama and O. Yamaguchi, "Face Recognition Using the Classified Appearance-based Quotient Image," Proc. 7th International Conference on Automatic Face and Gesture Recognition.
[19] T. Mita, T. Kaneko, and O. Hori, "Joint Haar-like Features for Face Detection," Proc. Tenth IEEE International Conference on Computer Vision, 2005.
[20] IEICE Trans. D-II, Vol. J80-D-II, No. 8, Aug. 1997.
[21] 6th (SI2005), 2005.