IPSJ SIG Technical Report

Multi-Person Tracking for a Mobile Robot using Overlapping Silhouette Templates

Junji Satake and Jun Miura (Toyohashi University of Technology)

This paper describes a stereo-based person tracking method for a person following robot. Many previous works on person tracking use laser range finders, which can provide very accurate range measurements. Stereo-based systems have also been popular, but most of them are not used for controlling a real robot. In this paper, we propose an accurate, stable tracking method using overlapping silhouette templates which consider how persons overlap in the image. Experimental results show the effectiveness of the proposed method. In addition, we examine the possibility of extending the method to multi-camera cases.

1. Introduction

Person detection and tracking is a key capability for a robot that follows a person. Vision-based pedestrian detection and tracking have been studied extensively 1)-4). For mobile robots, many systems rely on laser range finders 5),6) or combine them with omnidirectional cameras 7),8), while others use stereo vision 9)-11); Ess et al. 12),13) combined depth and appearance for mobile scene analysis. For tracking multiple, mutually occluding targets 14)-16), MacCormick and Blake 14) proposed the probabilistic exclusion principle, Khan et al. 15) introduced an MCMC-based particle filter for interacting targets, and Tweed and Calway 16) tracked many objects using subordinated Condensation. This paper proposes a tracking method that explicitly models how two persons overlap in the image.

(c) 2010 Information Processing Society of Japan

2. Person Detection and Tracking Using Depth Templates

2.1 Depth templates
Following our previous work 17), a person is detected in the depth image using person-shaped depth templates (Fig. 1), prepared for three typical orientations (Left, Front, Right) and scaled for a person standing about 2 [m] from the camera.

Fig. 1 Depth templates

2.2 Detection and tracking
A template T of size H x W is matched against the depth image I_D at position (x, y) by the mean squared difference

    d = \frac{1}{HW} \sum_{p} \sum_{q} \left[ T(p, q) - I_D(x + p, y + q) \right]^2    (1)

Detection examples are shown in Fig. 2; the detection succeeds in about 90% of the cases.

Fig. 2 Detection examples using depth templates

Each detected person is tracked with a particle filter. The state at time t consists of the 3-D position (X_t, Y_t, Z_t) and the velocity on the X-Y (floor) plane:

    x_t = \left[ X_t \; Y_t \; Z_t \; \dot{X}_t \; \dot{Y}_t \right]^T    (2)

The state evolves with a constant-velocity model

    x_{t+1} = F_t x_t + w_t    (3)

where w_t is the system noise. Each particle is evaluated through the template dissimilarity d of Eq. (1) with the Gaussian likelihood

    L = \frac{1}{\sqrt{2\pi}\sigma} \exp\left( -\frac{d^2}{2\sigma^2} \right)    (4)

2.3 Tracking with overlapping silhouette templates
Consider the case in which two persons A and B overlap in the image, so that one partially occludes the other and the individual templates become unreliable.
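The matching dissimilarity of Eq. (1) and the particle likelihood of Eq. (4) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the array layout and the value of sigma are assumptions:

```python
import numpy as np

def template_dissimilarity(template, depth_image, x, y):
    """Mean squared difference between an H x W depth template T and
    the depth-image patch whose top-left corner is at (x, y), Eq. (1)."""
    H, W = template.shape
    patch = depth_image[y:y + H, x:x + W]
    return np.mean((template - patch) ** 2)

def likelihood(d, sigma=1.0):
    """Gaussian likelihood of a particle given dissimilarity d, Eq. (4).
    sigma is a tuning parameter (value here is illustrative)."""
    return np.exp(-d ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
```

In a particle filter, `template_dissimilarity` would be evaluated at the image position predicted by each particle, and `likelihood` would give that particle's weight before normalization.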

Fig. 3 Definition of coordinate systems
Fig. 4 Procedure of tracking using an overlapping silhouette template

When persons A and B overlap, their states x_t^A and x_t^B are combined into a single joint state

    x_t^{AB} = \begin{bmatrix} x_t^A \\ x_t^B \end{bmatrix}    (5)

If each person is tracked individually with N particles (N = 100 in our implementation), a naive joint filter would need N x N particles; instead we use N_{AB} = 25 x 25 joint particles. The likelihood L_{AB} of a joint particle is the product of the individual likelihoods, L_{AB} = L_A L_B, evaluated with an overlapping silhouette template that takes into account which person is in front (Fig. 4). The joint state evolves with the block-diagonal model

    x_{t+1}^{AB} = \begin{bmatrix} F_t & 0 \\ 0 & F_t \end{bmatrix} x_t^{AB} + \begin{bmatrix} w_t \\ w_t \end{bmatrix}    (6)

Fig. 5 Comparison of tracking results with overlapping silhouette templates (drawn by white frame) and with individual templates (drawn by red or blue frame)

When the overlap ends, the joint state x_t^{AB} is split back into x_t^A and x_t^B, the N_AB joint particles are reduced to N individual particles per person, and tracking continues as in 2.1.
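The joint-state update of Eqs. (5)-(6) can be sketched as follows: joint particles are formed by pairing the particles of persons A and B, each half is propagated with the same constant-velocity model F_t, and each joint particle is weighted by the product likelihood L_AB = L_A * L_B. The time step, noise scale, and likelihood functions below are placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_F(dt):
    """Constant-velocity transition F_t for one person's state
    x = [X, Y, Z, Xdot, Ydot]^T, Eq. (3)."""
    F = np.eye(5)
    F[0, 3] = dt  # X advances by Xdot * dt
    F[1, 4] = dt  # Y advances by Ydot * dt
    return F

def joint_predict(particles_ab, dt, noise_std=0.05):
    """Propagate joint particles x^AB = [x^A; x^B] with the
    block-diagonal transition of Eq. (6) plus Gaussian noise."""
    F = make_F(dt)
    F_ab = np.block([[F, np.zeros((5, 5))],
                     [np.zeros((5, 5)), F]])
    noise = rng.normal(0.0, noise_std, particles_ab.shape)
    return particles_ab @ F_ab.T + noise

def joint_weights(particles_ab, like_a, like_b):
    """Weight each joint particle by L_AB = L_A * L_B, normalized."""
    w = np.array([like_a(p[:5]) * like_b(p[5:]) for p in particles_ab])
    return w / w.sum()
```

With 25 particles per person, the Cartesian pairing gives N_AB = 25 x 25 = 625 joint particles, which the paper notes is far fewer than the N x N = 100 x 100 a naive joint filter would require.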

Fig. 5(a) and (b) compare the two trackers during an overlap: with individual templates the tracker loses the occluded person, while the overlapping silhouette templates keep both persons tracked.

3. Extension to Multiple Cameras

To handle the case in which a person leaves the field of view of the robot camera, we examine combining the robot camera with fixed cameras 18). Fig. 6 shows the configuration, in which each camera is connected to its own PC, and Fig. 7 shows the two integration schemes.

3.1 Integration of estimated positions
In the scheme of Fig. 7(a), each camera runs an independent tracker holding N particles, and the integration module merges only the estimated person positions (Fig. 6(c)).

3.2 Integration at the particle level
In the scheme of Fig. 7(b), each tracker holds N particles, the 3-D position of every particle is evaluated in each camera image, and the integration module combines the evaluation results into the particle weights.

Fig. 6 Tracking using multiple cameras
Fig. 7 Integration of tracking with multiple cameras: (a) each tracker keeps N particles and the integration module merges the estimated person positions; (b) each particle's 3-D position is evaluated in every camera and the integration module combines the evaluation results
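The particle-level integration of Fig. 7(b) can be sketched as follows: every camera scores the 3-D position of every particle in its own image, and an integration module combines the per-camera scores into normalized particle weights. Multiplicative combination and the toy camera models are assumptions of this sketch, not the paper's specification:

```python
import numpy as np

def integrate_particle_scores(particles_xyz, cameras):
    """Fig. 7(b)-style integration: each camera evaluates every
    particle's 3-D position; the integration module multiplies the
    per-camera scores and normalizes them into particle weights.

    cameras: list of callables, each mapping an (N, 3) array of 3-D
    positions to an (N,) array of likelihoods (one per camera/PC)."""
    scores = np.ones(len(particles_xyz))
    for evaluate in cameras:
        scores *= evaluate(particles_xyz)   # combine evaluation results
    return scores / scores.sum()            # normalized weights
```

Under the scheme of Fig. 7(a), by contrast, each camera would run a complete tracker and only the final position estimates would be merged, trading accuracy during occlusions for lower communication cost.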

Unlike the scheme of 3.1, the particle-level scheme requires every particle to be evaluated in every image, but one PC per camera suffices; it is examined in 4.2.

4. Experiments

4.1 Tracking and following with a single stereo camera
We use a MobileRobots PeopleBot equipped with a Point Grey Research Bumblebee2 stereo camera (XGA, 20 fps, f = 2.5 [mm], baseline 12 [cm]) and a PC (Core2Duo, 3.06 [GHz]). Images reduced to 512 x 384 are processed at about 10 [fps]. Fig. 8 shows a tracking result with N = 100 and N_AB = 25 x 25; both persons are tracked correctly through the overlaps around frames #160-#202 (person B occluded) and #310-#324 (person A occluded).

Table 1 Comparison of tracking results concerning the number of particles
(a) using only individual templates
  number of particles   success rate   positional error   processing time
  N = 100               73.3 [%]       9.56 [pixel]       248 [ms]
  N = 200               75.0 [%]       9.95 [pixel]       383 [ms]
(b) using overlapping silhouette templates
  number of particles   success rate   positional error   processing time
  N_AB = 15 x 15        86.7 [%]       7.09 [pixel]       226 [ms]
  N_AB = 20 x 20        90.0 [%]       6.74 [pixel]       314 [ms]
  N_AB = 25 x 25        93.3 [%]       6.64 [pixel]       436 [ms]
  N_AB = 30 x 30        90.0 [%]       6.53 [pixel]       568 [ms]
  N_AB = 100 x 100      91.7 [%]       6.43 [pixel]       4993 [ms]

Table 1 compares the tracking performance for different numbers of particles: the overlapping silhouette templates clearly improve both the success rate and the positional error, and N_AB around 20 x 20 to 25 x 25 balances accuracy and processing time. Fig. 9 shows a result of following a specific person with N = 100 and N_AB = 20 x 20; the robot keeps following person A even while person B passes nearby.

4.2 Tracking with multiple cameras
We use the robot camera and two ceiling cameras, each connected to its own PC; the PCs communicate via TCP/IP.

The result of the multi-camera tracking described in 3.2 is shown in Fig. 10.

5. Conclusion

We have proposed a stereo-based multi-person tracking method using overlapping silhouette templates, which explicitly consider how two persons overlap in the image, and confirmed its accuracy and stability in tracking and person-following experiments. We have also examined an extension to multiple cameras.

References
1) P. Viola, M.J. Jones, and D. Snow: Detecting pedestrians using patterns of motion and appearance, Int. J. Computer Vision, Vol.63, No.2, pp.153-161 (2005).
2) N. Dalal and B. Triggs: Histograms of oriented gradients for human detection, IEEE Conf. Computer Vision and Pattern Recognition, pp.886-893 (2005).
3) B. Han, S.W. Joo, and L.S. Davis: Probabilistic fusion tracking using mixture kernel-based Bayesian filtering, Int. Conf. Computer Vision (2007).
4) S. Munder, C. Schnorr, and D.M. Gavrila: Pedestrian detection and tracking using a mixture of view-based shape-texture models, IEEE Trans. Intelligent Transportation Systems, Vol.9, No.2, pp.333-343 (2008).
5) D. Schulz, W. Burgard, D. Fox, and A.B. Cremers: People tracking with a mobile robot using sample-based joint probabilistic data association filters, Int. J. Robotics Research, Vol.22, No.2, pp.99-116 (2003).
6) N. Bellotto and H. Hu: Multisensor data fusion for joint people tracking and identification with a service robot, IEEE Int. Conf. Robotics and Biomimetics, pp.1494-1499 (2007).
7) H. Koyasu, J. Miura, and Y. Shirai: Realtime omnidirectional stereo for obstacle detection and tracking in dynamic environments, IEEE/RSJ Int. Conf. Intelligent Robots and Systems, pp.31-36 (2001).
8) M. Kobilarov, G. Sukhatme, J. Hyams, and P. Batavia: People tracking and following with mobile robot using omnidirectional camera and a laser, IEEE Int. Conf. Robotics and Automation, pp.557-562 (2006).
9) D. Beymer and K. Konolige: Real-time tracking of multiple people using continuous detection, Int. Conf. Computer Vision (1999).
10) A. Howard, L.H. Matthies, A. Huertas, M. Bajracharya, and A. Rankin: Detecting pedestrians with stereo vision: safe operation of autonomous ground vehicles in dynamic environments, Int. Symp. Robotics Research (2007).
11) D. Calisi, L. Iocchi, and R. Leone: Person following through appearance models and stereo vision using a mobile robot, VISAPP 2007 Workshop on Robot Vision, pp.46-56 (2007).
12) A. Ess, B. Leibe, and L. Van Gool: Depth and appearance for mobile scene analysis, Int. Conf. Computer Vision (2007).
13) A. Ess, B. Leibe, K. Schindler, and L. Van Gool: A mobile vision system for robust multi-person tracking, IEEE Conf. Computer Vision and Pattern Recognition (2008).
14) J. MacCormick and A. Blake: A probabilistic exclusion principle for tracking multiple objects, Int. Conf. Computer Vision, pp.572-578 (1999).
15) Z. Khan, T. Balch, and F. Dellaert: An MCMC-based particle filter for tracking multiple interacting targets, European Conf. Computer Vision, pp.279-290 (2004).
16) D. Tweed and A. Calway: Tracking many objects using subordinated Condensation, British Machine Vision Conf., pp.283-292 (2002).
17) J. Satake and J. Miura: Robust stereo-based person detection and tracking for a person following robot, IEEE ICRA 2009 Workshop on People Detection and Tracking (2009).
18) SIR/MCMC (in Japanese), Vol.28, No.1, pp.65-76 (2008).

Fig. 8 Experimental result of tracking using off-line images
Fig. 9 Result on a person following control

Fig. 10 Tracking result using multiple cameras: (a) robot camera, (b) ceiling camera 1, (c) ceiling camera 2, (d) integration result