
Computer Security Symposium 2017
23-25 October 2017

Takafumi Mori 1,a)   Hiroaki Kikuchi 1,b)

1 Meiji University, Graduate School of Advanced Mathematical Sciences
a) cs1705@meiji.ac.jp
b) kikn@meiji.ac.jp

Abstract: This paper studies person authentication based on gait, using skeleton data captured with a Microsoft Kinect v2 sensor. The equal error rates (EER) obtained with the proposed features range from 0.25 to 0.50.

1. Introduction

Gait is a biometric that can be captured at a distance without explicit user cooperation, and gait recognition has advanced considerably in recent years [1].

2. Biometric Authentication

A biometric system matches a presented sample either 1:1 against a single claimed identity (verification) or 1:n against all enrolled users (identification).

3. Related Work

Han and Bhanu (2006) proposed the Gait Energy Image (GEI) [3], which averages a person's silhouettes over a gait cycle. Shiraga et al. proposed GEINet [1], a convolutional neural network (CNN) that recognizes gait from a GEI. Andersson and Araujo (2015) identified persons using anthropometric and gait features measured with a Kinect sensor [4], and Igual et al. (2013) used depth cameras for robust gait-based gender classification [5].

4. Proposed Method

4.2.1 Kinect v2

The Kinect v2 is a Microsoft NUI (Natural User Interface) sensor [8]. It delivers a 1920x1080 RGB stream and a 512x424 depth stream, both at 30 fps, and the Kinect SDK [7] tracks up to six bodies with 25 joints each at distances of 0.5-4.5 m (Figure 1).

4.2.2 Measurement Environment

The Kinect v2 was mounted at a height of about 1 m, observing a walking course of a few metres; the tracked joints follow the Kinect v2 joint model [6].

4.3 Dataset

4.3.1 Walking sequences were recorded from 2017/08/05 to 2017/08/17. Ten subjects, identified U1 to U10, took part; subject heights were around 165-170 cm. Each recorded sequence is assigned an ID.

4.3.2 Each subject walked sequences along five courses labelled A to E.

Figure 4: the 24 skeleton segments S1-S24, defined between adjacent joints (the joint pairs are listed in Table 5).

4.3.3 Feature Extraction

4.3.3.1 Segment-length features. For every frame, the Euclidean distance between two adjacent joints is computed, yielding the 24 features S1-S24 (Table 5, Figure 5).

Table 5: Segment-length features S1-S24 (mean, SD, median, max; values as printed in the original).

ID    Segment                    Mean    SD     Median  Max
S1    FootL-AnkleL               11.76   .      11.58   17.71
S2    FootR-AnkleR               11.88   .08    11.86   17.05
S3    AnkleL-KneeL               37.60   3.57   38.00   45.6
S4    AnkleR-KneeR               37.46   3.60   37.86   45.33
S5    KneeL-HipL                 35.77   3.76   35.63   4.77
S6    KneeR-HipR                 35.6    3.7    35.76   51.38
S7    HipL-SpineBase             8.1     1.44   8.5     11.66
S8    HipR-SpineBase             7.71    1.56   8.15    10.76
S9    SpineBase-SpineMid         31.10   1.01   31.05   3.50
S10   SpineMid-SpineShoulder     .80     0.68   .76     8.10
S11   SpineShoulder-ShoulderL    17.4    1.15   17.     1.61
S12   SpineShoulder-ShoulderR    17.58   1.43   17.6    .46
S13   ShoulderL-ElbowL           5.65    .0     5.65    31.1
S14   ShoulderR-ElbowR           5.57    .3     5.41    31.60
S15   ElbowL-WristL              3.      .18    3.43    31.37
S16   ElbowR-WristR              4.8     .46    3.66    3.33
S17   WristL-HandL               7.5     1.71   7.6     1.56
S18   WristR-HandR               7.71    1.7    7.77    15.08
S19   HandL-HandTipL             7.01    1.3    7.37    11.50
S20   HandR-HandTipR             7.08    1.88   7.40    15.67
S21   WristL-ThumbL              .18     .54    .01     1.8
S22   WristR-ThumbR              .43     .45    .3      18.73
S23   SpineShoulder-Neck         7.46    0.1    7.45    .10
S24   Neck-Head                  14.68   1.0    14.70   18.5

Figures 5 and 6: the distance features D1-D6 and the angle features A1-A6 on the skeleton.

4.3.3.2 Distance features D1-D6 (Table 6).

4.3.3.3 Angle features A1-A6 (Table 7).

4.3.5 For each subject i and sequence k, every feature f in {S_1, ..., S_24, D_1, ..., D_6, A_1, ..., A_6} is summarized by the three statistics µ(f_{i,k}), median(f_{i,k}) and max(f_{i,k}).
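The per-frame segment features and the three statistics of Section 4.3.5 can be sketched as follows. This is a minimal sketch, not the paper's implementation: the joint trajectories and names below are illustrative.

```python
import numpy as np

def segment_length(p, q):
    """Per-frame Euclidean distance between two joint trajectories.
    p, q: arrays of shape (n_frames, 3) holding X, Y, Z coordinates."""
    return np.linalg.norm(p - q, axis=1)

def feature_stats(values):
    """The three statistics used per feature: mean, median, max."""
    return {"mean": float(np.mean(values)),
            "median": float(np.median(values)),
            "max": float(np.max(values))}

# Toy example: an ankle and a knee joint tracked over 4 frames.
ankle = np.array([[0.0, 0.1, 2.0], [0.0, 0.1, 2.1], [0.0, 0.1, 2.2], [0.0, 0.1, 2.3]])
knee  = np.array([[0.0, 0.5, 2.0], [0.0, 0.5, 2.1], [0.0, 0.5, 2.2], [0.0, 0.5, 2.3]])
s3 = segment_length(ankle, knee)   # an S3-style feature: AnkleL-KneeL
print(feature_stats(s3))           # mean/median/max all approximately 0.4
```

The same `feature_stats` helper applies unchanged to the distance features D and angle features A, since all three feature families are per-frame scalar series.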

Table 6: Distance features D1-D6 (mean, SD, median, max; values as printed in the original).

ID    Feature              Mean    SD     Median  Max
D1    FootL-FootR          3.40    14.61  3.10    74.51
D2    AnkleL-AnkleR        3.37    1.75   33.     73.4
D3    KneeL-KneeR          3.1     4.6    3.46    37.7
D4    FootL-HandTipL       71.1    .6     70.51   106.0
D5    FootR-HandTipR       6.18    11.0   67.48   11.61
D6    HandTipL-HandTipR    47.0    .6     46.46   3.01

Table 7: Angle features A1-A6 (mean, SD, median, max; values as printed in the original).

ID    Feature                Mean    SD     Median  Max
A1    ShoulderL-HandTipL     13.41   6.73   13.00   40.16
A2    ShoulderR-HandTipR     1.      7.     1.33    64.0
A3    HipL-AnkleL            18.5    10.46  16.87   45.4
A4    HipR-AnkleR            18.6    10.6   16.6    43.44
A5    HipL-KneeL             13.76   7.68   1.17    38.60
A6    HipR-KneeR             13.83   7.1    1.03    41.30

Given a threshold θ, two sequences i and j are judged to come from the same person by

  same(i, j) = T, if |µ(f_{i,k}) − µ(f_{j,k})| ≤ θ
               F, otherwise.

When two features f and g are combined, the Euclidean distance in the feature space is thresholded instead:

  same(i, j) = T, if sqrt( (µ(f_{i,k}) − µ(f_{j,k}))^2 + (µ(g_{i,k}) − µ(g_{j,k}))^2 ) ≤ θ
               F, otherwise.

Figure 7: trajectories of the Head, HandTipLeft, HandTipRight, AnkleLeft and AnkleRight joints in the X-Y plane.

4.4 Analysis

4.4.3 Figure 8 plots the per-frame distance D5 of one sequence together with its moving average.

4.4.4 The distribution of µ(D5) across subjects and sequences is examined next.
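The same(i, j) threshold rule can be sketched as below. This is a hedged sketch of the decision step only: the threshold θ = 5.0 and the sample statistics are illustrative assumptions, not the paper's choices.

```python
import math

def same_one_feature(mu_i, mu_j, theta):
    # same(i, j) = T iff |mu(f_i) - mu(f_j)| <= theta
    return abs(mu_i - mu_j) <= theta

def same_two_features(fi, fj, gi, gj, theta):
    # Combined rule: Euclidean distance in the 2-D space (mu(f), mu(g))
    return math.hypot(fi - fj, gi - gj) <= theta

# Illustrative mu(D5) values for two recordings of the same person and one
# recording of a different person (theta chosen by hand here).
print(same_one_feature(65.4, 63.0, theta=5.0))   # True: accepted as the same user
print(same_one_feature(65.4, 76.3, theta=5.0))   # False: rejected as a different user
```

In practice θ is swept over a range of values to trace out the FAR/FRR trade-off described in the next section.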

8 ID µ(d 5 ) median(d 5 ) max(d 5 ) U 1 1 65.4 63. 77.0 U 1 6.8 63.0 70. U 1 76.3 73.1 100.4 U 75.6 75.3 104.5 U 5 1 6.7 61.6 68.5 U 5 60.8 60.8 71. Value 50 60 70 80 0 Value 60 70 80 0 mean median max U 6 D 5 U7 U5 U1 U8 U U10 U U6 U3 U4 UserID 10 µ(d 5 ) 11 (FAR) (FRR) (ROC) 1 µ(s 6 ),median(d 1 ), max(a ) 3 3 median(s 6 ) Density 0.00 0.05 0.10 0.15 0.0 FAR 0.0 0. 0.4 0.6 0.8 1.0 1 Self Others 0 10 0 30 40 Distance 11 µ(d 5 ) ) 0.0 0. 0.4 0.6 0.8 1.0 FRR S6 mean D1 median A max µ(s 6 ),median(d 1 ), max(a ) ROC EER Top10 EER EER µ(d 5 ) 0. max(d 5 ) 0. µ(s 6 ) 0.30 median(d 5 ) 0.30 µ(s 5 ) 0.31 median(a 4 ) 0.31 median(s5) 0.31 median(s 6 ) 0.31 µ(d 4 ) 0.3 µ(a 4 ) 0.3 5. 5.1 EER 10-77 -

Figure 13: EER as a function of the number of combined features.
Figure 14: "self" and "others" density over the distance computed with the max statistic.
Figure 15: MDS embedding of the angle features.

5.2 Tables 10 and 11 list the best feature combinations by EER; Figure 13 plots the EER against the number of combined features, and Figure 14 shows how well the max statistic separates self from others.

5.3 The angle features A1-A6 were embedded with multi-dimensional scaling (MDS: Multi-Dimensional Scaling) for visualization (Figure 15).

6. Discussion

With Kinect v2 skeleton data for ten subjects, the distance-feature combinations of Table 10 performed best, while the angle features A5 and A6 yielded high EERs (Table 11).

7. Conclusion

We studied gait-based authentication using skeleton data captured with a Kinect v2 and obtained EERs between 0.25 and 0.50.
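A 2-D embedding like the MDS plot of Section 5.3 can be produced with classical (Torgerson) MDS from a pairwise distance matrix. This is a sketch of the generic technique only; the distance matrix below is illustrative, not the paper's data.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed points so pairwise distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centred squared distances
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # keep the top-dim components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Three collinear points with pairwise distances 1, 1 and 2.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
X = classical_mds(D)
# The embedding reproduces the original distances up to rotation/reflection.
print(round(float(np.linalg.norm(X[0] - X[2])), 6))  # 2.0
```

Applied to the per-sequence angle-feature vectors, the rows of X give the 2-D coordinates that are scatter-plotted per user.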

Table 10: The ten best combinations of distance features by EER.

Combination                                          EER
µ(D3), µ(D5)                                         0.25
µ(D3), µ(D4), µ(D5)                                  0.25
µ(D2), µ(D3), µ(D4), µ(D5)                           0.25
µ(D3), µ(D4), µ(D5), µ(D6)                           0.25
µ(D1), µ(D3), µ(D4), µ(D5), µ(D6)                    0.25
µ(D2), µ(D3), µ(D4), µ(D5), µ(D6)                    0.25
µ(D2), µ(D3), µ(D5)                                  0.26
µ(D1), µ(D2), µ(D3), µ(D4), µ(D5)                    0.26
µ(D4), µ(D5), µ(D6)                                  0.26
µ(D1), µ(D2), µ(D3), µ(D4), µ(D5), µ(D6)             0.26

Table 11: The ten best combinations of angle features by EER.

Combination                                          EER
µ(A6)                                                0.50
median(A6)                                           0.48
median(A5), µ(A6)                                    0.47
median(A2),                                          0.47
median(A2), median(A5)                               0.46
max(A5)                                              0.45
max(A5),                                             0.45
median(A2), median(A5), median(A6)                   0.45
median(A2), median(A6)                               0.45
median(D1)                                           0.45

References

[1] K. Shiraga, Y. Makihara, D. Muramatsu, T. Echigo and Y. Yagi, "GEINet: View-Invariant Gait Recognition Using a Convolutional Neural Network," Proc. 8th IAPR Int. Conf. on Biometrics (ICB 2016), pp. 1-8, Halmstad, Sweden, Jun 2016.
[2] S. D. Bakchy, M. R. Islam and A. Sayeed, "Human identification on the basis of gait analysis using Kohonen self-organizing mapping technique," 2nd Int. Conf. on Electrical, Computer & Telecommunication Engineering (ICECTE), Dec 2016.
[3] J. Han and B. Bhanu, "Individual recognition using gait energy image," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 2, pp. 316-322, 2006.
[4] V. Andersson and R. Araujo, "Person Identification Using Anthropometric and Gait Data from Kinect Sensor," Proc. 29th AAAI Conf. on Artificial Intelligence, 2015.
[5] L. Igual, À. Lapedriza and R. Borràs, "Robust gait-based gender classification using depth cameras," EURASIP Journal on Image and Video Processing, 2013.
[6] JointType Enumeration - MSDN - Microsoft, https://msdn.microsoft.com/ja-jp/library/microsoft.kinect.jointtype.aspx (accessed Aug 2017).
[7] Kinect - Microsoft Developer, https://developer.microsoft.com/ja-jp/windows/kinect/hardware (accessed Aug 2017).
[8] SCIS 2015, Jan 2015.