

1. Introduction

In 2010, Microsoft released the Kinect sensor, and the accompanying Kinect for Windows SDK made consumer-grade 3-D measurement widely accessible. Kinect depth data have since been applied to 3-D body scanning [1], [2], real-time dense surface reconstruction [3], human detection [4], and human motion analysis [5]; the sensor delivers depth images at 30 fps [10].

Several studies have addressed calibration of the Kinect. Zhang and Zhang calibrated the relationship between the Kinect RGB and depth sensors [6]. Khoshelham and Elberink analyzed the accuracy and resolution of Microsoft Kinect for Windows depth data [7]. Smisek et al. [8] and Herrera C. et al. [9] jointly calibrated the RGB and depth cameras, including distortion correction. Karan calibrated a depth-measurement model for Kinect-type 3-D vision sensors [10].

2. Depth Measurement with the Kinect

Fig. 1 shows the Kinect sensor, and Fig. 2 the geometry of its depth measurement: from the origin O, with baseline b between the projector and the camera, the depth of an object point Q is obtained by triangulation against a reference plane at distance Z0.

1 The University of Shiga Prefecture
†1 Presently with Graduate School of The University of Shiga Prefecture
a) oo23kgouhara@ec.usp.ac.jp

© 2015 Information Processing Society of Japan
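The triangulation relation of Fig. 2 can be made concrete numerically. Below is a minimal sketch of the disparity-to-depth model of Khoshelham and Elberink [7]; the values chosen for Z0, f, and b are illustrative, not the true device constants:

```python
# Depth from disparity, following the model of [7]:
#   Z_k = Z_0 / (1 + (Z_0 / (f * b)) * d)
# All constants below are illustrative placeholders.

def depth_from_disparity(d, z0=1000.0, f=580.0, b=75.0):
    """Distance Z_k [mm] of an object given disparity d [pixels]
    measured against a reference plane at z0 [mm]."""
    return z0 / (1.0 + (z0 / (f * b)) * d)

# Zero disparity means the object lies on the reference plane:
print(depth_from_disparity(0.0))           # -> 1000.0
# Positive disparity: the object is closer than the reference plane.
print(depth_from_disparity(5.0) < 1000.0)  # -> True
```

Note the nonlinearity in d: equal disparity steps correspond to larger depth steps as the object moves away, which is why depth resolution degrades with distance.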

Fig. 1  Kinect for Windows

3. Calibration Model

3.1 Coordinate systems
A point M = [X Y Z]^T in the reference (world) frame is written M' = [X' Y' Z']^T in the depth-camera frame; f denotes the focal length, (u, v) the image coordinates, and w the depth value reported by the sensor.

3.2 Measurement model
Following [11] and [7], a camera-frame point maps to the image and to the depth reading as

    u = f X'/Z' + u0 + δu(X', Y', Z')    (5)
    v = f Y'/Z' + v0 + δv(X', Y', Z')    (6)
    w = Z' + δz(u, v) + Z0               (7)

where (u0, v0) is the principal point, δu and δv are lens-distortion terms, δz(u, v) is a per-pixel depth-distortion term, and Z0 is a depth offset.

The depth itself comes from triangulation against a reference plane [7]. With reference-plane distance Z0, object distance Zk, baseline b, and measured disparity d, similar triangles in the plane QOP give

    D/b = (Z0 − Zk)/Z0    (1)
    d/f = D/Zk            (2)

and eliminating D,

    Zk = Z0 / (1 + (Z0 / (f b)) d).    (3)

The lateral coordinate then follows from the pinhole relation

    Xk = (Zk / f) xk    (4)

(and similarly for Yk). The two frames are related by the rigid transform M' = R M + t, with t = [tx ty tz]^T and R composed of rotations by θx, θy, θz about the coordinate axes:

    Rx(θx) = [[1, 0, 0], [0, cos θx, −sin θx], [0, sin θx, cos θx]]
    Ry(θy) = [[cos θy, 0, sin θy], [0, 1, 0], [−sin θy, 0, cos θy]]
    Rz(θz) = [[cos θz, −sin θz, 0], [sin θz, cos θz, 0], [0, 0, 1]]

For each calibration point k, the prediction (ûk, v̂k, ŵk) from Eqs. (5)–(7) is compared with the measurement (uk, vk, wk), and the total error

    e = Σ_{k=1}^{n} ( |uk − ûk|² + |vk − v̂k|² + |wk − ŵk|² )    (8)

is minimized to estimate the parameters f, u0, v0, δu, δv, δz, Z0, θx, θy, θz, and t.
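The forward measurement model of Eqs. (5)–(7) can be sketched as follows. The parameter values are placeholders, and the radial-distortion terms are evaluated at the ideal (distortion-free, centered) image coordinates, which is an assumption of this sketch:

```python
# Forward model of Eqs. (5)-(7): camera-frame point (X', Y', Z') ->
# image coordinates (u, v) and raw depth reading w.  The distortion
# follows the single-coefficient radial model delta_u = u * k1 * r^2;
# all parameter values are illustrative placeholders.

def project(Xc, Yc, Zc, f=580.0, u0=319.0, v0=239.0, k1=0.0,
            z0=16.0, dz=0.0):
    r2 = (Xc / Zc) ** 2 + (Yc / Zc) ** 2   # squared radial distance
    u_ideal = f * Xc / Zc
    v_ideal = f * Yc / Zc
    u = u_ideal + u0 + u_ideal * k1 * r2   # Eq. (5)
    v = v_ideal + v0 + v_ideal * k1 * r2   # Eq. (6)
    w = Zc + dz + z0                       # Eq. (7)
    return u, v, w

# An on-axis point lands at the principal point, and its depth
# reading carries the offset z0:
print(project(0.0, 0.0, 1000.0))   # -> (319.0, 239.0, 1016.0)
```

Calibration, in this view, is the inverse problem: given many (u, v, w) observations of known 3-D points, recover f, (u0, v0), k1, z0, and the per-pixel dz by minimizing the residual of Eq. (8).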

Fig. 3  Measured plane data (3-D view)
Fig. 4  (a): raw measurement of a plane; (b): (a) after correction; (c): after correcting θx and θy; (d): (c) after additionally correcting δz

3.3 Estimation of δz
The depth distortion δz(u, v) is estimated from a measurement of a plane placed front-parallel to the sensor at a known distance Z_known. Once the tilt angles θx and θy have been accounted for, Eq. (7) gives, per pixel,

    δz(u, v) = (w − Z0) − Z_known.

Fig. 4(a) shows the raw depth w over the plane; after subtracting δz the measured surface becomes flat (Fig. 4(b)).

3.4 Estimation of θx and θy
The angles θx and θy appear as a residual tilt of the measured plane in the 3-D data. They are estimated from the slope of w across the plane (Fig. 4(c)); after the correction the tilt is removed (Fig. 4(d)).
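The per-pixel estimation of δz can be sketched on synthetic data. The wall distance, the value of Z0, and the ripple pattern below are assumptions made purely for illustration:

```python
# Estimate the depth-distortion map delta_z(u, v) from a capture of a
# flat wall at a known distance, via  delta_z = (w - Z0) - Z_known.
# Wall distance, Z0, and the synthetic ripple are assumptions.

Z_KNOWN = 1000.0   # wall distance [mm], assumed known
Z0 = 16.0          # depth offset [mm], assumed already estimated

def make_synthetic_depth(width=8, height=6):
    """Fake raw depth frame: true wall depth plus offset plus a ripple
    standing in for the sensor's per-pixel distortion."""
    return [[Z_KNOWN + Z0 + 0.5 * ((u + v) % 3) for u in range(width)]
            for v in range(height)]

def delta_z_map(w_frame):
    """Per-pixel delta_z obtained by subtracting the known geometry."""
    return [[(w - Z0) - Z_KNOWN for w in row] for row in w_frame]

dz = delta_z_map(make_synthetic_depth())
print(dz[0][:3])   # -> [0.0, 0.5, 1.0]
```

In practice the map would be averaged over many frames to suppress sensor noise before being stored as a lookup table indexed by (u, v).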

3.5 Estimation of f and k1 from point pairs
Consider two target points (X1, Y1, Z1) and (X2, Y2, Z2) with measurements (u1, v1, w1) and (u2, v2, w2). For a pair (Xa, Ya, Za), (Xb, Yb, Zb) lying at the same camera-frame depth (Z'a = Z'b), Eqs. (5) and (6) give differences in which the principal point (u0, v0) cancels:

    ua − ub = f (X'a − X'b) / Z'a    (9)
    va − vb = f (Y'a − Y'b) / Z'a    (10)

Writing p = X'/Z' and q = Y'/Z', the lens distortion is modeled radially with a single coefficient k1:

    δu = u k1 r²    (11)
    δv = v k1 r²    (12)

with r² = (X'/Z')² + (Y'/Z')². Substituting (11) and (12) into the differences yields

    u2 − u1 = f (X2 − X1)/Z + (u2 r2² − u1 r1²) k1    (13)
    v2 − v1 = f (Y2 − Y1)/Z + (v2 r2² − v1 r1²) k1    (14)

which can be solved for the two unknowns f and k1 from pairs of points with known spacing.

3.6 Handling the unknown pose
The rotation R and translation t between target and camera are not known in advance. With the camera facing the target, t is taken as t = [0 0 tz]^T; under [u' v' w']^T = R^{-1} [u v w]^T and M' = R^{-1}(M − t), a pair of points with Z'a = Z'b keeps its coordinate differences Xa − Xb and Ya − Yb, and tz drops out of the difference equations. The depth w is first corrected by δz and Z0 (Secs. 3.2 and 3.3).

3.7 Reconstruction
Given the calibrated model, a measurement (u, v, w) is converted back to the world point M = [X Y Z]^T, with w corrected by δz and Z0.

4. Experiments

4.1 Calibration target
A planar target of 1200 mm × 900 mm [12] was used, with markers spaced at 100 mm and 50 mm (Figs. 6, 7); the 3-D positions of the markers on the target are known.
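With two point pairs, Eqs. (13) and (14) reduce to a 2 × 2 linear system in (f, k1). A sketch on synthetic measurements follows; the marker geometry and the ground-truth parameter values are assumptions for illustration:

```python
# Eq. (13): for two markers at equal depth Z with known spacing dX,
# the measured coordinate difference satisfies
#   du = f * (dX / Z) + (u2*r2^2 - u1*r1^2) * k1.
# Two such pairs give a 2x2 linear system in (f, k1), solved here by
# Cramer's rule.  All numbers below are synthetic assumptions.

def solve_f_k1(pairs):
    """pairs: list of (du, dX_over_Z, radial_term); uses the first two."""
    (b1, a11, a12), (b2, a21, a22) = pairs[:2]
    det = a11 * a22 - a12 * a21
    f = (b1 * a22 - b2 * a12) / det
    k1 = (a11 * b2 - a21 * b1) / det
    return f, k1

def measurement(dX_over_Z, radial_term, f=580.0, k1=1e-7):
    """Synthesize a 'measured' difference from ground-truth f and k1."""
    return f * dX_over_Z + radial_term * k1

pairs = [(measurement(0.10, 2.0e6), 0.10, 2.0e6),
         (measurement(0.05, 5.0e6), 0.05, 5.0e6)]
f, k1 = solve_f_k1(pairs)
print(round(f, 3), round(k1 * 1e7, 3))   # -> 580.0 1.0
```

With more than two pairs, the same system is overdetermined and would be solved by least squares instead of Cramer's rule.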

Fig. 6  Calibration target
Fig. 7  Marker layout on the target
Fig. 8  Estimated δz. (a): δz map; (b): corrected depth

4.2 Estimation of δz and Z0
The Kinect was placed facing a plane at 1000 mm and δz was estimated as in Sec. 3.3; without correction the plane was measured at 945 mm (Fig. 8). From the corrected measurement at 1000 mm, the depth offset was determined as Z0 = 16 mm. The principal point was fixed at the image center, (ŭ0, v̆0) = (319, 239).

4.3 Estimation of the remaining parameters
The target was placed at 1000 mm with θz = 0, so that t = [0 0 1000]^T. θx was estimated from the pixel columns {(u, v) | u = 319, 10 ≤ v ≤ 59} and {(u, v) | u = 319, 420 ≤ v ≤ 469}, and θy from the pixel rows {(u, v) | 10 ≤ u ≤ 59, v = 239} and {(u, v) | 580 ≤ u ≤ 629, v = 239}. The focal length f was then obtained via Eq. (9) from the marker pairs 31-32, 32-33, 33-40, 40-41, 41-49, and 49-50, using the depth values w of the 48 markers transformed by R^{-1}.
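The minimization of Eq. (8) can be illustrated in a deliberately reduced form, fitting only f by a coarse scan over synthetic, distortion-free observations; the actual calibration optimizes all parameters jointly:

```python
# One-parameter illustration of minimizing the Eq. (8) residual
#   e = sum(|u_k - u_hat_k|^2 + ...).
# Synthetic observations are generated with a "true" focal length;
# a coarse scan over candidate f values recovers it.  The point
# layout and TRUE_F are assumptions for illustration.

TRUE_F = 593.0   # illustrative value

points = [(x, y, 1000.0) for x in (-200.0, 0.0, 150.0)
                         for y in (-100.0, 50.0)]

def predict_u(X, Z, f):
    """Distortion-free Eq. (5) with the principal point dropped."""
    return f * X / Z

observed = [predict_u(X, Z, TRUE_F) for X, _, Z in points]

def error(f):
    """Eq. (8)-style sum of squared u-residuals for candidate f."""
    return sum((u - predict_u(X, Z, f)) ** 2
               for u, (X, _, Z) in zip(observed, points))

# Scan f from 580.0 to 609.9 in 0.1 steps and keep the minimizer.
best_f = min((f / 10.0 for f in range(5800, 6100)), key=error)
print(best_f)   # -> 593.0
```

A real solver would descend on all parameters at once (e.g. Gauss–Newton or Levenberg–Marquardt) rather than scanning, but the objective being minimized is the same residual.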

Fig. 9  (a): measured target; (b): after θx, θy correction; (c): after δz and Z0 correction

Table 1  Estimated parameters.

    parameter    reference    (i)          (ii)
    θx [deg]     -0.1249      5.453e-2     4.547e-2
    θy [deg]     6.720e-2     2.858e-2     1.156e-2
    θz [deg]     0            8.213e-3     0.3507
    tx [mm]      0            129.0        132.0
    ty [mm]      0            125.7        160.4
    tz [mm]      1000         960.5        669.6
    k1           4.690e-7     0            0.8425
    f [mm]       593.0        562.4        520.7
    u0 [pixel]   319          415.3        275
    v0 [pixel]   239          315.1        330
    Z0 [mm]      16           16.83        16

The distortion coefficient k1 was estimated from Eq. (13) using the 48 markers, with the pairs 10-11, 11-18, 18-63, and 63-71. Table 1 lists the parameters obtained under the two conditions (i) and (ii).

4.4 Estimation from all markers
Using the markers' world coordinates (X, Y, Z) and measurements (u, v, w), f and k1 were also estimated by minimizing Eq. (8).

5. Comparison with the SDK
The marker-to-marker distances reconstructed under conditions (i) and (ii) were compared with those obtained from the SDK (Table 2). The mean errors over the 20 distances were 13.6 % for the SDK, 2.65 % for condition (i), and 5.78 % for condition (ii).

6. Discussion
Compared with the accuracy analysis of Khoshelham and Elberink [7], the focal lengths estimated under conditions (i) and (ii) differ from the reference value; of the two, condition (i) gave the smaller errors against the SDK (Table 2).

Table 2  Distances between markers: ground truth vs. the SDK and the calibrated model under conditions (i) and (ii).

    pair    true [mm]  SDK [mm]  err [%]  (i) [mm]  err [%]  (ii) [mm]  err [%]
    14-15   100        115.5     15.5     97.94     2.06     106.3      6.29
    15-20   608.3      702.9     15.6     586.1     3.64     633.7      4.18
    20-21   100        116.6     16.6     97.55     2.45     103.8      3.83
    21-22   100        117.6     17.6     96.79     3.21     106.1      6.14
    22-23   100        110.1     10.1     93.60     6.40     102.3      2.32
    23-24   100        118.1     18.1     97.64     2.36     104.8      4.85
    24-26   200        227.5     13.7     195.1     2.43     210.8      5.42
    26-30   608.3      692.3     13.8     586.7     3.54     635.3      4.44
    30-34   400        452.1     13.0     385.2     3.70     417.5      4.39
    34-39   412.3      468.7     13.7     398.3     3.40     431.8      4.72
    39-42   400        454.8     13.7     386.9     3.28     418.2      4.54
    42-47   412.3      479.7     16.4     400.7     2.81     433.8      5.21
    47-51   400        463.0     15.8     388.5     2.87     421.1      5.27
    51-55   608.3      672.7     10.6     603.4     0.799    650.3      6.90
    55-57   200        214.8     7.38     202.8     1.39     216.7      8.34
    57-58   100        109.0     8.98     100.6     0.591    109.1      9.09
    58-59   100        116.0     16.0     100.6     0.616    107.5      7.51
    59-60   100        110.5     10.5     95.51     4.49     105.0      4.99
    60-61   100        114.6     14.6     98.78     1.22     107.5      7.53
    61-66   608.3      670.0     10.1     618.8     1.74     667.5      9.72

7. Conclusion
A calibration method for the Kinect depth camera for 3-D measurement was presented and evaluated against the SDK; under condition (i) the mean distance error was reduced from 13.6 % to 2.65 %.

References
[1] Alexander Weiss et al., "Home 3D Body Scans from Noisy Image and Range Data," IEEE International Conference on Computer Vision 2011, pp. 1951-1958, Nov. 6-13, 2011.
[2] Jing Tong et al., "Scanning 3D Full Human Bodies Using Kinects," IEEE Trans. on Visualization and Computer Graphics, 18(4), pp. 643-650, April 2012.
[3] Richard A. Newcombe et al., "KinectFusion: Real-Time Dense Surface Mapping and Tracking," 10th IEEE International Symposium on Mixed and Augmented Reality 2011, pp. 127-136, Oct. 26-29, 2011.
[4] Lu Xia et al., "Human Detection Using Depth Information by Kinect," International Workshop on Human Activity Understanding from 3D Data 2011, pp. 15-22, Jun. 24, 2011.
[5] Lulu Chen et al., "A survey of human motion analysis using depth imagery," Pattern Recognition Letters, 34(15), pp. 1995-2006, Nov. 2013.
[6] Cha Zhang and Zhengyou Zhang, "Calibration between depth and color sensors for commodity depth cameras," IEEE International Conference on Multimedia and Expo (ICME) 2011, pp. 1-6, July 11-15, 2011.
[7] Kourosh Khoshelham and Sander Oude Elberink, "Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications," Sensors, 12, pp. 1437-1454, 2012.
[8] Jan Smisek et al., "3D with Kinect," Consumer Depth Cameras for Computer Vision, Advances in Computer Vision and Pattern Recognition, Springer, pp. 3-25, 2013.
[9] Daniel Herrera C. et al., "Joint Depth and Color Camera Calibration with Distortion Correction," IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(10), pp. 2058-2064, Oct. 2012.
[10] Branko Karan, "Calibration of Depth Measurement Model for Kinect-Type 3D Vision Sensors," Poster Proceedings of the 21st International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, pp. 61-64, June 24-27, 2013.
[11] Zhengyou Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," Proceedings of the Seventh IEEE International Conference on Computer Vision 1999, Vol. 1, pp. 666-673, Sep. 20, 1999.
[12] R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation, 3(4), pp. 323-344, August 1987.