Vol. 1, No. 2, pp. 41-49 (July 2008)
© 2008 Information Processing Society of Japan

Person-independent Monocular Tracking of Face and Facial Actions

Yusuke Sugano†1 and Yoichi Sato†1

This paper presents a monocular method of tracking faces and facial actions using a multilinear face model that treats interpersonal and intrapersonal shape variations separately. We created this method by integrating two different frameworks: particle filter-based tracking for time-dependent facial action and pose estimation, and incremental bundle adjustment for person-dependent shape estimation. This combination, together with multilinear face models, is the key to tracking faces and facial actions of arbitrary people in real time with no pre-learned individual face models. Experiments using real video sequences demonstrate the effectiveness of our method.

†1 Institute of Industrial Science, The University of Tokyo

1. Introduction

Tracking the 3D pose of a face and its facial actions from video is a fundamental technique for applications such as human-computer interaction (HCI) and intelligent transport systems (ITS), and a number of monocular 3D tracking methods have been proposed 5),8),9),17),18). Bregler et al. 1) recover non-rigid 3D shape from an image stream by extending Tomasi-Kanade factorization, and model-based methods 18) estimate the pose and facial action parameters of a 3D face model from tracked 2D features. Other approaches 6),16) rely on the 2D Active Appearance Model (AAM); Gross et al. 6) report that generic AAMs trained on many people fit an unseen person considerably worse than person-specific models, which illustrates the difficulty of tracking arbitrary people without pre-learned individual models.

Zhu et al. 16) achieve real-time non-rigid shape recovery with active appearance models for augmented reality, and Dornaika and Davoine 4) track the face and facial actions with an appearance-based 3D model. Vlasic et al. 13) propose a multilinear face model that represents identity and expression variations in separate modes, and DeCarlo and Metaxas 2) adjust the shape parameters of a deformable face model using model-based optical flow residuals.

Fig. 1 System overview.

Figure 1 shows an overview of the proposed method. It alternates two steps: an estimation step, which estimates the head pose and facial action parameters of every frame with a particle filter, and a modeling step, which estimates the person-dependent shape parameter by incremental bundle adjustment. Because the multilinear face model separates interpersonal shape variation from intrapersonal (action) variation, the shape parameter estimated in the modeling step gradually adapts the model to the tracked person while the estimation step keeps running in real time.

The rest of this paper is organized as follows. Section 2 describes the multilinear face model, Section 3 the modeling step based on incremental bundle adjustment, and Section 4 the estimation step based on a particle filter. Section 5 reports experimental results, and Section 6 concludes the paper.

2. Multilinear Face Model

The face is represented by K 3D feature points (K = 10 in this work), whose stacked coordinates form a shape vector M ∈ R^{3K}. Facial shape varies for two reasons: interpersonal differences (shape) and intrapersonal deformation caused by facial actions (action); Fig. 2 shows an example of facial deformation. To treat the two factors separately, shape vectors collected from S people performing A facial actions are stacked into a data tensor T ∈ R^{3K×S×A} whose three modes correspond to feature points, shape, and action (Fig. 3).

Fig. 2 Example of facial deformation.
Fig. 3 Data tensor.

Following Vasilescu and Terzopoulos 11),12), T is decomposed by N-mode SVD (Singular Value Decomposition):

T = C ×_{feature} U_{feature} ×_{shape} U_{shape} ×_{action} U_{action}   (1)
  = M ×_{shape} U_{shape} ×_{action} U_{action},   (2)

where C ∈ R^{3K×S×A} is the core tensor, each U_i is the mode matrix obtained from the SVD of the mode-i unfolding of T, and M = C ×_{feature} U_{feature} in Eq. (2) is the multilinear face model. Equivalently, M is obtained directly from T with the shape and action mode matrices Ǔ_{shape} and Ǔ_{action} (possibly truncated to fewer columns):

M = T ×_{shape} Ǔ_{shape}^T ×_{action} Ǔ_{action}^T.   (3)
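As a concrete illustration of Eqs. (1)-(3), the following numpy sketch builds a data tensor of hypothetical size, computes the mode matrices by N-mode SVD, and derives the multilinear model; it is a minimal sketch under assumed array shapes (K = 10, S = 26, A = 10 are placeholders and the random tensor stands in for measured feature points), not the paper's implementation, and it omits the mean subtraction used later in Eq. (4).

import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Mode-n product T x_n M for a matrix M with shape (J, T.shape[mode])."""
    Tm = np.moveaxis(T, mode, 0)            # bring the target mode to the front
    out = np.tensordot(M, Tm, axes=([1], [0]))
    return np.moveaxis(out, 0, mode)

def n_mode_svd(T, ranks):
    """Mode matrices (leading left singular vectors of each unfolding)."""
    return [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
            for n, r in enumerate(ranks)]

# Hypothetical sizes: K feature points, S subjects, A actions.
K, S, A = 10, 26, 10
T = np.random.rand(3 * K, S, A)             # stand-in for measured shape vectors

U_feat, U_shape, U_action = n_mode_svd(T, ranks=(3 * K, S, A))
# Multilinear model (Eq. (3)): project out the shape and action modes.
M = mode_multiply(mode_multiply(T, U_shape.T, 1), U_action.T, 2)

# Synthesize a face for a given pair of shape and action parameters;
# using the training rows reconstructs subject 0 performing action 0.
s, a = U_shape[0], U_action[0]
face = mode_multiply(mode_multiply(M, s[None, :], 1), a[None, :], 2).ravel()
print(face.shape)                            # (3K,)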

The data tensor T is built from the feature point coordinates of S people, each captured while performing A = 10 facial actions 18). Writing s ∈ R^S for the shape (person) parameter and a ∈ R^A for the action parameter, the face shape of a particular person performing a particular action is synthesized as

M(a, s) = M̄ + M̃ ×_{shape} s^T ×_{action} a^T,   (4)

where M̄ is the mean shape and M̃ is the multilinear model computed as in Eq. (3) from the mean-subtracted data tensor. The rows of the mode matrices in Eq. (3), Ǔ_{shape} = (š_1, ..., š_S)^T and Ǔ_{action} = (ǎ_1, ..., ǎ_A)^T, give the shape parameters of the S training subjects and the action parameters of the A training actions; their means s̄, ā and standard deviations σ_s, σ_a serve as priors on s and a in the following sections.

3. Modeling Step: Incremental Bundle Adjustment

The modeling step estimates the person-dependent shape parameter s from the frames observed so far by incremental bundle adjustment, in the spirit of video-based face modeling 14), keyframe-based 3D tracking 10), and non-rigid structure from motion 3). Fig. 4 shows the flow of this step.

Fig. 4 Flow of incremental bundle adjustment.

3.1 Cost Function

Let p_i be the 6-DOF head pose of keyframe i, a_i its action parameter, and s the shape parameter shared by all keyframes of the same person. The model M_i(a_i, s) given by Eq. (4) is projected onto the image plane as

m_i = P(p_i, M_i(a_i, s)),   (5)

where P is the camera projection function and m_i ∈ R^{2K} stacks the 2D coordinates of the K projected feature points. With the observed feature points m̂_i of each keyframe, the reprojection error over the keyframe set f_t maintained at time t is

F_t = Σ_{i ∈ f_t} || m̂_i − m_i(p_i, a_i, s) ||².   (6)
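To make Eqs. (4)-(6) concrete, here is a small sketch of the synthesis and reprojection-error computation. The pinhole projection with placeholder intrinsics (f, c), the axis-angle pose parameterization, and the keyframe dictionary fields are all assumptions for illustration; the paper only specifies the projection abstractly as P(p_i, M_i(a_i, s)). M_tilde is assumed stored as a (3K, S, A) array like M in the previous sketch.

import numpy as np
from scipy.spatial.transform import Rotation

def synthesize_face(M_mean, M_tilde, s, a):
    """Eq. (4): mean shape plus multilinear deviation; returns (K, 3) points."""
    # M_tilde has shape (3K, S, A); contract the shape and action modes.
    dev = np.einsum('ksa,s,a->k', M_tilde, s, a)
    return (M_mean + dev).reshape(-1, 3)

def project(points, rvec, tvec, f, c):
    """Pinhole projection of 3-D points given pose (rvec, tvec)."""
    Xc = Rotation.from_rotvec(rvec).apply(points) + tvec      # camera coordinates
    return f * Xc[:, :2] / Xc[:, 2:3] + c                     # (K, 2) pixels

def reprojection_cost(keyframes, s, M_mean, M_tilde, f=600.0, c=(320.0, 240.0)):
    """Eq. (6): sum of squared reprojection errors over the keyframe set."""
    F = 0.0
    for kf in keyframes:   # each kf: dict with 'rvec', 'tvec', 'a', 'm_obs' (K, 2)
        pts = synthesize_face(M_mean, M_tilde, s, kf['a'])
        m = project(pts, kf['rvec'], kf['tvec'], f, np.asarray(c))
        F += np.sum((kf['m_obs'] - m) ** 2)
    return F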

At each time t, the current frame is added to the keyframe set f_t together with the pose p_t and action a_t estimated in the estimation step (Section 4); following the incremental bundle adjustment of Zhang and Shan 15), only the most recent n keyframes are retained so that the cost of minimizing Eq. (6) stays bounded.

3.2 Constrained Optimization

Eq. (6) is minimized under box constraints on the parameters, using the Levenberg-Marquardt (LM) algorithm 7):

min_{{p_i}, {a_i}, s} F_t,  subject to  p_i ∈ C_{p_i},  a_i ∈ C_{a_i},  s ∈ C_s.   (7)

The pose of each keyframe is allowed to move only within a range λ_p around the estimate p̂_i obtained in the estimation step,

C_{p_i} = { p_i | p̂_i − λ_p ≤ p_i ≤ p̂_i + λ_p },   (8)

and C_{a_i} is defined analogously. The shape parameter is constrained to within two standard deviations of the training mean, which covers roughly 95% of the training population:

C_s = { s | s̄ − 2σ_s ≤ s ≤ s̄ + 2σ_s }.   (9)

The shape estimate used by the estimation step is the running average s_{(t)} of the bundle adjustment results obtained up to time t:

s_{(t)} = ((t − 1)/t) s_{(t−1)} + (1/t) s.   (10)
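A rough way to realize the box-constrained minimization of Eqs. (7)-(9) is sketched below. It reuses synthesize_face and project from the previous sketch, packs all keyframe poses and actions and the shared shape into one parameter vector (so the poses and actions now come from that vector rather than the keyframe dicts), and relies on SciPy's trust-region-reflective least_squares solver rather than the constrained Levenberg-Marquardt (levmar) library 7) cited in the paper; the bound widths lam_p and lam_a are placeholder values. The running average of Eq. (10) is included at the end.

import numpy as np
from scipy.optimize import least_squares

def pack(poses, actions, s):
    """Flatten per-keyframe poses (n, 6), actions (n, A), and the shared s (S,)."""
    return np.concatenate([poses.ravel(), actions.ravel(), s])

def unpack(x, n, A, S):
    poses = x[:6 * n].reshape(n, 6)
    actions = x[6 * n:6 * n + n * A].reshape(n, A)
    return poses, actions, x[-S:]

def residuals(x, keyframes, M_mean, M_tilde, n, A, S):
    """Stacked reprojection residuals of Eq. (6) over all keyframes."""
    poses, actions, s = unpack(x, n, A, S)
    res = []
    for kf, p, a in zip(keyframes, poses, actions):
        pts = synthesize_face(M_mean, M_tilde, s, a)                  # Eq. (4)
        m = project(pts, p[:3], p[3:], f=600.0, c=np.array([320.0, 240.0]))
        res.append((kf['m_obs'] - m).ravel())
    return np.concatenate(res)

def refine_shape(keyframes, p0, a0, s0, s_mean, s_std, M_mean, M_tilde,
                 lam_p=0.1, lam_a=0.5):
    """Box-constrained minimization of F_t (Eqs. (7)-(9))."""
    n, A, S = p0.shape[0], a0.shape[1], s0.shape[0]
    lo = pack(p0 - lam_p, a0 - lam_a, s_mean - 2.0 * s_std)           # Eqs. (8), (9)
    hi = pack(p0 + lam_p, a0 + lam_a, s_mean + 2.0 * s_std)
    x0 = np.clip(pack(p0, a0, s0), lo, hi)
    sol = least_squares(residuals, x0, bounds=(lo, hi),
                        args=(keyframes, M_mean, M_tilde, n, A, S))
    return unpack(sol.x, n, A, S)

def update_shape_average(s_avg, s_new, t):
    """Running average of Eq. (10)."""
    return ((t - 1.0) / t) * s_avg + (1.0 / t) * s_new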

4. Estimation Step

The estimation step processes every frame and consists of two sub-steps: a pose estimation step, which estimates the head pose and action parameters with a particle filter, and a feature-point recalculation step, which updates the 2D feature point positions used as observations.

4.1 Pose Estimation Step

At time t, the pose p_t and action a_t are estimated while the shape is fixed to the running average s_{(t−1)} of Eq. (10). Substituting s_{(t−1)} into Eq. (4) yields the person-adapted model

M_t(a_t) = M̄ + M̃_t ×_{action} a_t^T,  where M̃_t = M̃ ×_{shape} s_{(t−1)}^T.   (11)

The state to be estimated is the (6 + A)-dimensional vector x_t = (p_t^T, a_t^T)^T, represented by a set of N weighted particles {(u_t^(i); π_t^(i))} (i = 1, ..., N), where each u_t^(i) is a (6 + A)-dimensional state hypothesis and π_t^(i) its weight. Each particle is generated by resampling a state u_{t−1} from the previous set {(u_{t−1}^(i); π_{t−1}^(i))} according to the weights and propagating it as

u_t^(i) = u_{t−1} + τ v_{t−1} + ω,   (12)

where v_{t−1} is the velocity of the state estimated at t−1 (the velocity of the action components is set to 0), τ is a coefficient on the velocity term, and ω is zero-mean Gaussian noise; following 18), the standard deviation of the noise added to the action components is set to κσ_a with κ = 0.2. Each propagated particle is weighted as

π_t^(i) ∝ exp( −N(u_t^(i))² / (2σ²) ) exp( −(1/2) Σ_{b=1}^{A} ( (a_{t,b}^(i) − ā_b) / ς_b )² ),   (13)

where N(u_t^(i)) is the image matching error of the K feature points, computed by projecting the model under the state hypothesis u_t^(i) and comparing the image with the stored feature templates, normalized over the K points (σ = 1.0), and a_{t,b}^(i) is the b-th action component of u_t^(i), with ā_b and ς_b determined from the training statistics ā and σ_a. The second factor penalizes action parameters far from the training distribution. The state estimate x_t is computed from the weighted particle set {(u_t^(i); π_t^(i))}; the initial state x_0 and the initial feature points are obtained with a face detector (OKAO).

Fig. 5 Finding true feature points.

4.2 Feature-point Recalculation Step

The observed feature points m̂_t used in the cost function (Eq. (6)) are recalculated in the second sub-step (Fig. 5). Following the data-driven tracking of Gokturk et al. 5), each m̂_t is found by minimizing the energy

E_t = ρ || Î_t − Î_{t−1} ||² + || Î_t − Î_1 ||² + ε || m̂_t − m_t ||²,   (14)

where Î_t denotes the image intensities sampled in regions of interest (16 × 16 pixels) around the K feature points m̂_t, Î_{t−1} and Î_1 are the corresponding intensities in the previous and the first frame, and m_t is the 2D position obtained by projecting the model with the state x_t estimated in Section 4.1 via Eq. (5). The coefficients ρ and ε balance the appearance and position terms (ε = 4000).
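Going back to the pose estimation step of Section 4.1, the sketch below illustrates one propagation-and-weighting cycle corresponding to Eqs. (12) and (13). It is an assumption-laden illustration rather than the paper's implementation: match_error stands in for the template-matching term N(u), the velocity and noise vectors are supplied by the caller, and the weighted mean is used as the state estimate x_t.

import numpy as np

rng = np.random.default_rng(0)

def propagate_and_weight(particles, weights, velocity, noise_std,
                         a_mean, a_spread, match_error, sigma=1.0):
    """One particle-filter step following Eqs. (12) and (13).

    particles   : (N, 6 + A) previous states u_{t-1}
    weights     : (N,) normalized weights pi_{t-1}
    velocity    : (6 + A,) state velocity (action part typically zero)
    noise_std   : (6 + A,) std of the Gaussian noise omega
    match_error : callable u -> normalized image matching error N(u)
    """
    N, dim = particles.shape
    # Resample according to the previous weights, then propagate (Eq. (12)).
    idx = rng.choice(N, size=N, p=weights)
    u = particles[idx] + velocity + rng.normal(0.0, noise_std, size=(N, dim))
    # Weight: image likelihood times a Gaussian prior on the action (Eq. (13)).
    pi = np.empty(N)
    for i in range(N):
        lik = np.exp(-match_error(u[i]) ** 2 / (2.0 * sigma ** 2))
        prior = np.exp(-0.5 * np.sum(((u[i, 6:] - a_mean) / a_spread) ** 2))
        pi[i] = lik * prior
    pi /= pi.sum()
    x_t = (pi[:, None] * u).sum(axis=0)   # weighted-mean state estimate
    return u, pi, x_t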

5. Experiments

We evaluated the proposed method on real video sequences and compared it with particle filter-based estimation using a generic PCA face model 18). The multilinear face model and the generic PCA model were built from the same training data. The system ran on a PC with an Intel Core 2 Duo E6700 CPU and 3.0 GB of memory (Windows XP), and the test sequences were 640 × 480 pixel videos captured with IEEE1394 cameras. The feature templates were 16 × 16 pixels, the number of particles was N = 1000, and the keyframe set held n = 7 frames. With these settings the modeling step (LM optimization) took about 90 ms, and the overall tracking ran at about 32 ms per frame.

Table 1 Comparison of estimation errors.

                                                 Translation [mm]        Rotation [deg.]
                                                 x      y      z         roll   pitch   yaw
  Particle filter-based estimation   Mean        6.14   4.71   51.32     0.34   6.54    3.34
  using the generic PCA model        Std. Dev.   4.88   4.09   38.29     0.29   4.71    2.73
  Our method using the               Mean        3.26   4.37   20.18     0.41   3.12    2.33
  multilinear model                  Std. Dev.   2.62   2.83   11.18     0.27   2.49    1.98

Fig. 6 shows example result images, and Table 1 summarizes the estimation errors; compared with the particle filter using the generic PCA model, the proposed method substantially reduces the estimation errors, most notably the depth-directional (z) translation error.

Fig. 6 Result images: the right column shows actual estimation results of our method using the multilinear model, and the center column shows results of the generic model-based method. The left column shows these results rendered from a different viewpoint.

6. Conclusion

We presented a person-independent monocular method for tracking faces and facial actions that combines a multilinear face model with particle filter-based tracking for pose and action estimation and incremental bundle adjustment for person-dependent shape estimation. Experiments with real video sequences confirmed the effectiveness of the method.

Fig. 7 Estimation results: x, y, and z are the horizontal, vertical, and depth-directional translation, and roll, pitch, and yaw are the rotation around the z, y, and x axes, respectively. The bottom graph shows the facial shape estimation error in the model coordinate system.

References

1) Bregler, C., Hertzmann, A. and Biermann, H.: Recovering non-rigid 3D shape from image streams, Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, Vol.2, pp.690-696 (2000).
2) DeCarlo, D. and Metaxas, D.: Adjusting shape parameters using model-based optical flow residuals, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol.24, No.6, pp.814-823 (2002).
3) Del Bue, A., Smeraldi, F., Agapito, L. and Mary, Q.: Non-rigid structure from motion using non-parametric tracking and non-linear optimization, Proc. IEEE Workshop on Articulated and Non-Rigid Motion, Vol.1 (2004).
4) Dornaika, F. and Davoine, F.: On appearance based face and facial action tracking, IEEE Trans. Circuits and Systems for Video Technology, Vol.16, No.9, pp.1107-1124 (2006).
5) Gokturk, S.B., Bouguet, J.Y. and Grzeszczuk, R.: A data-driven model for monocular face tracking, Proc. IEEE Int. Conf. Computer Vision, Vol.2, pp.701-708 (2001).
6) Gross, R., Matthews, I. and Baker, S.: Generic vs. person specific active appearance models, Image and Vision Computing, Vol.23, No.11, pp.1080-1093 (2005).

7) Lourakis, M.I.A.: levmar: Levenberg-Marquardt nonlinear least squares algorithms in C/C++ (2004). http://www.ics.forth.gr/~lourakis/levmar/
8) Matthews, I. and Baker, S.: Active appearance models revisited, Int. J. Computer Vision, Vol.60, No.2, pp.135-164 (2004).
9) Munoz, E., Buenaposada, J.M. and Baumela, L.: Efficient model-based 3D tracking of deformable objects, Proc. IEEE Int. Conf. Computer Vision, pp.877-882 (2005).
10) Vacchetti, L., Lepetit, V. and Fua, P.: Stable real-time 3D tracking using online and offline information, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol.26, No.10, pp.1380-1384 (2004).
11) Vasilescu, M.A.O. and Terzopoulos, D.: Multilinear analysis of image ensembles: TensorFaces, Proc. European Conf. on Computer Vision, pp.447-460 (2002).
12) Vasilescu, M.A.O. and Terzopoulos, D.: Multilinear image analysis for facial recognition, Proc. Int. Conf. Pattern Recognition (ICPR '02), Vol.2, pp.511-514 (2002).
13) Vlasic, D., Brand, M., Pfister, H. and Popovic, J.: Face transfer with multilinear models, ACM Trans. Graphics (Proc. ACM SIGGRAPH 2005), Vol.24, No.3, pp.426-433 (2005).
14) Xin, L., Wang, Q., Tao, J., Tang, X., Tan, T. and Shum, H.: Automatic 3D face modeling from video, Proc. IEEE Int. Conf. Computer Vision, Vol.2, pp.1193-1199 (2005).
15) Zhang, Z. and Shan, Y.: Incremental motion estimation through modified bundle adjustment, Proc. IEEE Int. Conf. Image Processing, Vol.2, pp.343-346 (2003).
16) Zhu, J., Hoi, S.C.H. and Lyu, M.R.: Real-time non-rigid shape recovery via active appearance models for augmented reality, Proc. 9th European Conf. Computer Vision, pp.186-197 (2006).
17) Zhu, Z. and Ji, Q.: Robust real-time face pose and facial expression recovery, Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, pp.681-688 (2006).
18) Vol.47, No.SIG10 (CVIM15), pp.185-194 (2006).

(Received September 21, 2007)
(Accepted March 10, 2008)