dthesis.dvi
Construction of Telepresence Systems Using an Omnidirectional Multi-camera System

Sei Ikeda

Abstract

A telepresence system using real images provides us with a rich sense of presence at a remote site. The sense of presence is created by reproducing a field of view according to changes in the position and direction of the user's view. Telepresence systems can be classified into two types according to whether the movement of the user's virtual viewpoint is active or passive. For both types, it is required to provide a user with a rich sense of presence without increasing the human cost of generating image contents. The purpose of this study is to develop image generation/presentation methods for passive and active telepresence systems to provide a high-quality sense of presence. To reproduce a rich sense of presence, high-resolution omnidirectional videos acquired with an omnidirectional multi-camera system are presented, and an image presentation method that maintains the temporal continuity of the videos is employed. The active type especially needs to consider some other issues, including a user interface to decide the movement of the viewpoint and image presentation methods suitable for that interface. For an active telepresence system, an image presentation system using a locomotion interface is proposed. In this system, the image is presented considering the variation in head position caused by the user's locomotion and the unintended movement of the omnidirectional multi-camera system during image acquisition. In this dissertation,

Doctoral Dissertation, Department of Information Systems, Graduate School of Information Science, Nara Institute of Science and Technology, NAIST-IS-DD, March 24.
Chapter 1 gives a perspective of the study in the area of telepresence. Chapter 2 describes a method for estimating the intrinsic camera parameters required for generating spherical video and for estimating the speed and pose of the omnidirectional multi-camera system. Chapter 3 describes a method for estimating the extrinsic parameters, which represent the position and pose of the omnidirectional multi-camera system. To avoid accumulative errors in parameter estimation, GPS position information is combined with a feature-tracking-based method. Chapter 4 describes an immersive display system using a locomotion interface and an image generation method based on the estimated parameters. Chapter 5 gives conclusions. Keywords: Telepresence, Image-based Rendering, Omnidirectional Multi-camera System, Camera Calibration, Sense of Presence
telepresence 1980 MIT Massachusetts Institute of Technology Marvin Minsky Minsky OMNI [Min80] Laurel Sheridan [LF92] (teleoperation) [ 00, GADS04] (teleconference) [Jou02, J. 04] (surveillance) [PGA + 00] Philco Comeau HEADSIGHT [CB61]
14 1 CRT [CB61] 1 CRT,,., 360 [ZSE86, SE87, Mov99, JXB + 91, YYM02, 02, YKNH98, SKN03, TKHN98, FC200, Poi02, HKCY00]., 360 [ 98, 05]. 2
15 1 Lipman Aspen Movie-Map [A. 80] Chen QuickTime VR [Che95] 2 QuickTime VR Taylor VideoPlus [C. 02] Aspen Movie-Map [A. 80] QuickTime VR [Che95] 3 [A. 80, Che95, C. 02] 3 Kanade Virtualized Reality [TPP97] [THKM00, DTF + 01] [JHNK00] 3
16 [ 98, 05] 4
17 [C. 02, MAS + 04] 1 Comeau HEADSIGHT [TG00] 1 HEADSIGHT [CB61] 5
18 (i) :,. (ii) :, (iii) : (iv) :,,. 6
19 (v) : (vi) :
20 1.4,.,, 1, ,.,. : [ZSE86, SE87, Mov99] 360.,.,,. : [JXB + 91, YYM02, 02],,. [JXB + 91], [YYM02], [ 02], [S. 97], 8
21 2 ( : NHK 21)., 2,.,, 1,. [KHN02], ,.,
22 ,. : [TKHN98, FC200], 360..,. : [Poi02, HKCY00].,,..,..,, ,,,,,.,,..,, 10
23 (a) [TKHN98] (b) SOS [HKCY00] 3 2 HD,.,,.,. 11
24 ,.,,., [TKHN98], 3(a),.,,.,.,.., [ 99] [ 02]., [TKHN98],,. CYLINDRA[JHNK00, 00],,., 3(b) 12
25 (SOS)[HKCY00] 6 COSMOS[TMY98],.,, SOS.,,,.,,.,,,., [ 03] [FZ98, PKV + 00, Dav03, SKYT02, CMC03, VLF04, GF03] [FZ98, PKV + 00] [Dav03, SKYT02, CMC03, VLF04] [GF03] [Dav03, SKYT02] 13
26 CAD [CMC03, VLF04] [Dav03, SKYT02] ( ) [CMC03, VLF04] CAD CAD CAD GPS 2 [GF03] RTK-GPS cm ( GPS ) GPS 1Hz 1.6, 2 (1) (2) 14
27 ,.,..,,.,. GPS [SKYT02] GPS GPS 15
28 GPS
29 2. 2.1,, 3 Ladybug 2.2 Ladybug Point Grey Research Ladybug [Poi02]. Ladybug 4( ) 5, 1 CCD 17
30 4 Ladybug ( ) ( ) 4( ) HDD. 3, Ladybug 5 768, 1,024 6, 75% 15fps 20. 6,., PC IEEE1394, PC 6 5fps
31 Ladybug ( ( ) ( )) ª ªª ªªªªª n ªªªª 6 Ladybug 19
Tsai [R. 87]. Following Tsai's model, the distorted image coordinates $(x_d, y_d)$ in Fig. 7 are related to the undistorted coordinates $(x_u, y_u)$ by

$$x_u = x_d \left( 1 + \kappa_1 r^2 + \kappa_2 r^4 + \kappa_3 r^6 \right) \quad (1)$$

$$y_u = y_d \left( 1 + \kappa_1 r^2 + \kappa_2 r^4 + \kappa_3 r^6 \right) \quad (2)$$

3 Ladybug 6 1, % 15fps 20
(Fig. 7: camera coordinate system C, image coordinate system I, and world coordinate system W, with origins $O_C$, $O_I$, $O_W$ and the transformation $M_c$.)

$$r = \sqrt{x_d^2 + y_d^2} \quad (3)$$

[J. 93]. The frame-buffer coordinates $(x_f, y_f)$ are converted to the distorted sensor coordinates $(x_d, y_d)$ by

$$x_d = \frac{d'_x}{s_x}\,(x_f - c_x), \qquad y_d = d_y\,(y_f - c_y) \quad (4)$$

where $(c_x, c_y)$ is the image center, $s_x$ the horizontal scale factor, and $d'_x = d_x N_{cx} / N_{fx}$.
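The radial distortion model of Eqs. (1)-(4) can be sketched in code. The dissertation itself contains no code; the coefficient names `k1`, `k2`, `k3` (for the three distortion coefficients) and `dx_prime`, as well as all numeric values in the usage, are illustrative placeholders, not the calibrated values reported later in Chapter 2.

```python
def undistort(x_d, y_d, k1, k2, k3):
    """Map distorted sensor coordinates (x_d, y_d) to undistorted
    coordinates (x_u, y_u) using the radial model of Eqs. (1)-(3)."""
    r2 = x_d * x_d + y_d * y_d                    # r^2 = x_d^2 + y_d^2, Eq. (3)
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x_d * scale, y_d * scale

def frame_to_sensor(x_f, y_f, c_x, c_y, s_x, dx_prime, d_y):
    """Eq. (4): convert frame-buffer pixel coordinates to sensor
    coordinates, given image center (c_x, c_y) and scale factors."""
    x_d = (dx_prime / s_x) * (x_f - c_x)
    y_d = d_y * (y_f - c_y)
    return x_d, y_d
```

With all distortion coefficients zero the mapping is the identity, which is a quick sanity check on a calibration implementation.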
The extrinsic parameters consist of a translation $T_c = (t_x, t_y, t_z)$ and a rotation $R_c(\alpha, \beta, \gamma)$, combined into the matrix $M_c$:

$$M_c = [\,R_c \mid T_c\,] = \begin{bmatrix} r_1 & r_2 & r_3 & t_x \\ r_4 & r_5 & r_6 & t_y \\ r_7 & r_8 & r_9 & t_z \end{bmatrix} = \begin{bmatrix} c_1 c_3 + s_1 s_2 s_3 & s_1 c_2 & -c_1 s_3 + s_1 s_2 c_3 & t_x \\ -s_1 c_3 + c_1 s_2 s_3 & c_1 c_2 & s_1 s_3 + c_1 s_2 c_3 & t_y \\ c_2 s_3 & -s_2 & c_2 c_3 & t_z \end{bmatrix} \quad (5)$$

$$s_1 = \sin\alpha,\; s_2 = \sin\beta,\; s_3 = \sin\gamma, \qquad c_1 = \cos\alpha,\; c_2 = \cos\beta,\; c_3 = \cos\gamma \quad (6)$$

A world point $[x_W, y_W, z_W]^T$ in Fig. 7 is mapped to the camera coordinates $[x_C, y_C, z_C]^T$ by

$$\begin{bmatrix} x_C \\ y_C \\ z_C \end{bmatrix} = M_c \begin{bmatrix} x_W \\ y_W \\ z_W \\ 1 \end{bmatrix} \quad (7)$$

The parameters to be estimated are the focal length $f$, the distortion coefficients $(\kappa_1, \kappa_2, \kappa_3)$, the image center $(c_x, c_y)$, the scale factor $s_x$, and the extrinsic parameters $T_c$ and $R_c$.
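The rotation matrix of Eqs. (5)-(7) can be checked numerically. This is a sketch only: the angle names `alpha`, `beta`, `gamma` stand in for symbols lost in the transcription, and the test angles are arbitrary.

```python
import math

def rotation_matrix(alpha, beta, gamma):
    """Rotation R_c of Eqs. (5)-(6), with s1 = sin(alpha), c1 = cos(alpha), etc."""
    s1, s2, s3 = math.sin(alpha), math.sin(beta), math.sin(gamma)
    c1, c2, c3 = math.cos(alpha), math.cos(beta), math.cos(gamma)
    return [
        [c1 * c3 + s1 * s2 * s3,  s1 * c2, -c1 * s3 + s1 * s2 * c3],
        [-s1 * c3 + c1 * s2 * s3, c1 * c2,  s1 * s3 + c1 * s2 * c3],
        [c2 * s3,                 -s2,      c2 * c3],
    ]

def world_to_camera(R, T, p_w):
    """Eq. (7): x_C = R x_W + T, i.e. M_c = [R | T] applied to a world point."""
    return [sum(R[i][j] * p_w[j] for j in range(3)) + T[i] for i in range(3)]
```

A useful property to verify is that the matrix is orthonormal for any angles, confirming the Euler-angle factorization was transcribed consistently.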
Table 4: Camera parameters to be estimated.
- Extrinsic: translation $T_c = (t_x, t_y, t_z)$; rotation $R_c(\alpha, \beta, \gamma)$
- Intrinsic: focal length $f$; image center $(c_x, c_y)$; distortion coefficients $\kappa_1, \kappa_2, \kappa_3$; scale factor $s_x$
36 .,., 8., 3. 9,,,.,.,,.,.,, ,, [ 80] 2,, ,. 5.,.,,, 2.,. 24
37 z y ªªªªªªªªª ªªª Ladybug x ªªªªªªªªªª 8 9 ªª d 10 25
Tsai [R. 87]. [TMNH02]. For each camera $c$ $(c = 0, 1, \ldots, 5)$, the extrinsic parameters $T_c$ and $R_c$, i.e. $M_c$, are estimated. For markers $m$ $(m = 1, 2, \ldots)$ with known 3-D positions $x_m$ observed by camera $c$ at image positions $u_m$, an initial estimate $M'_c$ is first obtained by the method of [90]; the 12 entries $(r_1, r_2, \ldots, r_9, t_x, t_y, t_z)$ of $M'_c$ are then reduced to the 6 parameters $(\alpha, \beta, \gamma, t_x, t_y, t_z)$ of $M_c$. With $v_m$ denoting the projection of marker $m$ under $M_c$, $M_c$ is refined by minimizing

$$E_c = \sum_m |u_m - v_m|^2 \quad (8)$$
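The objective of Eq. (8) is a plain sum of squared reprojection residuals; a minimal sketch (the projection step producing `projected` is left outside, since it depends on the full camera model above):

```python
def reprojection_error(observed, projected):
    """E_c of Eq. (8): sum of squared distances between observed marker
    positions u_m and their projections v_m, as 2-D (x, y) pairs."""
    return sum((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
               for u, v in zip(observed, projected))
```

The error is zero exactly when every marker projects onto its observed position, which is the stopping condition an optimizer over $M_c$ drives toward.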
2.3.2 The $\cos^4$ law of intensity falloff [B. 86] [NAM96]. Following Horn [B. 86], in Fig. 11 the image irradiance $I$ of a scene point with radiance $I_0$ observed at off-axis angle $\theta$ is

$$I = \frac{l^2 \cos^4\theta}{f^2}\, I_0 \quad (9)$$

where $l$ is the lens diameter and $f$ the focal length. Assuming a linear camera response $I = aL + b$, the intensities $I_c$ and $I_{c'}$ of corresponding pixels in cameras $c$ and $c'$ are related by

$$I_{c'} = a_c I_c + b_c \quad (10)$$
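The $\cos^4$ falloff of Eq. (9) is straightforward to evaluate; a small sketch, where `l` and `f` are illustrative placeholders for the lens diameter and focal length:

```python
import math

def irradiance(I0, theta, l, f):
    """Eq. (9), Horn's cos^4 law: image irradiance for scene radiance I0
    at off-axis angle theta, lens diameter l, focal length f."""
    return (l * l / (f * f)) * math.cos(theta) ** 4 * I0
```

The factor $\cos^4\theta$ is 1 on the optical axis and decreases monotonically toward the image border, which is why the blending weights used later in the mosaic favor pixels near each image center.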
40 I l θ I' f ˆ ªªª 11 cos 4 a c ;b c.,, c a c ;b c. 1., RGB., 2 a c ; b c,.,,,. 2. c c 0 i h c (i) h c0 (i) a c ;b c. RGB X ( e(a c ;b c )= h c0 (i) 0 1!) 2 i 0 b c h c (11) a i c a c, a c ;b c.,,. 3. (10), RGB. 28
41 2.4 12,,,.,,.,.,,,.,,,.,, S. G., S., S s I S (s),. c s u c c, s I S (s), s C(s). I S (s) = P c2c(s) c I c (u c ) Pc2C(s) c (12),, (; ), 13,,. 29
A scene point $x$ observed by two cameras $c$ and $c'$ projects to pixels $u_c$ and $u_{c'}$, i.e. to points $s_c$ and $s_{c'}$ on the sphere $S$. If the panoramic image is $N$ pixels around, the two projections fall on the same mosaic pixel when the angle $\angle s_c G s_{c'}$ at the sphere center $G$ is below the angular resolution $2\pi/N$. With baseline $d$ between the camera centers, this parallax stays below one pixel whenever the distance to $x$ satisfies

$$|x| > \frac{d}{2 \tan(\pi/N)} \quad (13)$$

For Ladybug, with $d \approx 40$ mm and $N = 3{,}340$ pixels, the parallax stays within one pixel beyond a distance of about 20 m.
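The threshold of Eq. (13) is easy to evaluate, and doing so reproduces the figures quoted in the text (40 mm baseline, 3,340-pixel panorama, roughly 20 m):

```python
import math

def parallax_free_distance(d, N):
    """Eq. (13): minimum scene depth beyond which the parallax between two
    cameras with baseline d stays below one pixel of a panorama that is
    N pixels around (angular resolution 2*pi/N)."""
    return d / (2.0 * math.tan(math.pi / N))
```

`parallax_free_distance(0.04, 3340)` evaluates to about 21 m, consistent with the stated 20 m figure; a larger baseline pushes the parallax-free region proportionally farther out.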
43 x z φ θ y φ θ ˆ 13 ˆ S ªª c s c x u c T c G s c u c T c ªª c 14 31
44 Ladybug, Ladybug, 50cm, , 561.,, 170, LEICA TCR1105 XR,.,,. 16,., , 5., 17.,., 18.,.,,., 6,,. 18 Ladybug, 6,. 32
Table 5: Estimated intrinsic parameters for each camera: $\kappa_1$ [1/mm$^2$], $\kappa_2$ [1/mm$^4$], $\kappa_3$ [1/mm$^6$], $f$ [mm], $s_x$, $c_x$ [pixel], $c_y$ [pixel].
46 15 34
47 16 (a) (b) 17 35
48 (a) (b) 18 6 [pixel]
49 (a) (b) ,,. 20,,.,., 21, 22, R, G, B., 0,.,.,,. 37
50 (a) (b) 20 38
51 Š ªª ªª ªª ªª ªª ªª (a) Š ªª ªª ªª ªª ªª ªª (b) 21 (R ) 39
52 Š ªª ªª ªª ªª ªª ªª (a) Š ªª ªª ªª ªª ªª ªª (b) 22 (G ) 40
53 Š ªª ªª ªª ªª ªª ªª (a) Š ªª ªª ªª ªª ªª ªª (b) 23 (B ) 41
54
55 25 Ladybug , 25 Ladybug. 26 ( : 76821,024)., 27., 27,.,,, 768pixel 5 3,340pixel.,, 1,670pixel. 27,,. 43
56 ( ( ) ( )) 44
57
58 ªª ,. 28 Ladybug. 29,. u c ;u c 0, s c Gs c 0., m, 30m, rad. 3, 6,. 3 46
59 [rad] , 30. 8,,, PC 3. 1, ,, 2,04821,024, JPEG PC. 30,,. Ladybug 15fps.,,,.,. 47
60 30 8 Elumens VisionStaion Microsoft SideWinder Game Pad Pro PC CPU:Intel Pentium4 1.7GHz, :1GB Nvidia GeForce4 48
61 2.7,,,.,,. Ladybug,,.,, 3,,. 49
62 GPS [ 05a] GPS GPS GPS GPS GPS GPS GPS 3.2 GPS
3.2 GPS. With camera $c = 0$ as the reference, $M_c$ denotes the fixed relative pose of camera $c$ within the multi-camera unit, and $N_{ic}$ the extrinsic parameters of camera $c$ at frame $i$:

$$N_{ic} = M_c\, (M_0)^{-1}\, N_{i0} \qquad (c = 0, 1, 2, \ldots) \quad (14)$$

$$N_{ic} = \begin{bmatrix} R_{ic} & t_{ic} \\ 0 & 1 \end{bmatrix} \quad (15)$$

where $R_{ic}$ and $t_{ic}$ are the rotation and translation of camera $c$ at frame $i$ $(R_i = R_{i0},\ t_i = t_{i0})$ [FZ98, PKV+00, SKYT02].
(Fig. 31: camera pose $R_i, t_i$, feature point $p_j$, its projection $\hat{q}_{ij}$, reprojection error $\Phi_{ij}$, and GPS antenna position $g_i$.) [TPMF00]. For feature $j$ tracked in frame $i$, the reprojection error between the detected position $q_{ij}$ and the predicted projection $\hat{q}_{ij}$ is

$$\Phi_{ij} = |q_{ij} - \hat{q}_{ij}| \qquad (j \in S_i) \quad (16)$$

where $S_i$ is the set of features visible in frame $i$. GPS. $g_i$ denotes the GPS measurement for frame $i$, related to the pose $R_i, t_i$.
With $d$ the position of the GPS antenna in the camera coordinate system, the measured GPS position $g_i$ ideally satisfies

$$R_i\, g_i + t_i = d \qquad (i \in \mathcal{F}) \quad (17)$$

where $\mathcal{F}$ is the set of frames with a GPS fix (Fig. 31). In practice Eq. (17) does not hold exactly, and the GPS error of frame $i$ is defined as

$$\Psi_i = |R_i\, g_i + t_i - d| \quad (18)$$

3.3 GPS. Fig. 32: (A), (B) every $k$ frames, (C) GPS, (C) (D), (C) GPS, (C) (D). Stages (A)-(D) minimize the feature reprojection error $\Phi_{ij}$ of Eq. (16) together with the GPS error $\Psi_i$ of Eq. (18).
(A) Feature point tracking: (1) detection of candidate feature positions, (2) tentative feature correspondence, (3) estimation of provisional extrinsic parameters, (4) re-correspondence of features. (B) Initial estimation of extrinsic parameters (when $i \bmod k = 0$). (C) Narrow-range optimization using GPS measurements. (D) Wide-range optimization using GPS measurements. (Fig. 32.)

The energy $E$ minimized in stages (C) and (D) combines the GPS term and the feature term:

$$E = \frac{\omega}{|\mathcal{F}|} \sum_{i \in \mathcal{F}} \Psi_i^2 + \frac{1}{\sum_i |S_i|} \sum_i \sum_{j \in S_i} w_j\, \Phi_{ij}^2 \quad (19)$$

where $w_j$ is a confidence weight for feature $j$ computed in (A) from its track $\{\Phi_{0j}, \Phi_{1j}, \ldots\}$, and $\omega$ balances the GPS error $\Psi_i$ against the reprojection error $\Phi_{ij}$. The normalizations by $|\mathcal{F}|$ and $\sum_i |S_i|$ make $\omega$ insensitive to the numbers of GPS fixes and features. Minimizing Eq. (19) with respect to $E$ yields the camera parameters $R_i, t_i$ and the feature positions $p_j$.
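The combined energy of Eq. (19) can be sketched directly; this is a numerical illustration only, with dictionaries standing in for the per-frame data structures, not an implementation of the optimization itself:

```python
def total_energy(psi, phi, S, F, omega, w):
    """Eq. (19): GPS term (Psi_i over frames F with a fix, weight omega)
    plus feature term (Phi_ij over feature sets S_i, confidences w_j),
    each normalized by its number of residuals."""
    gps = (omega / len(F)) * sum(psi[i] ** 2 for i in F)
    n_feat = sum(len(S[i]) for i in S)
    feat = sum(w[j] * phi[(i, j)] ** 2 for i in S for j in S[i]) / n_feat
    return gps + feat
```

Because each term is normalized by its own count, doubling the number of GPS fixes or tracked features does not change the balance set by `omega`, which is the stated purpose of the two normalizations.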
67 33 E E i GPS i mm!!
68 w š 34! GPS ! =10 09! = GPS E 32 (A) (D) (A) (B) (C) (D) 1 (A) 56
1. Harris [HS88], LMedS [ 00]. (B) Using the correspondences obtained in (A), the initial extrinsic parameters $R_i, t_i$ of frame $i$ are estimated by minimizing

$$\sum_j w_j\, \Phi_{ij}^2 \quad (20)$$
70 i-(k+2l)+1 i-l i ªª e lªªªª kªªªª lªªªª GPS e iªªªª ~ 35 [ 05b] (C) GPS (C) (A) (B) E GPS (A) 35 (A) (B) i i 0(k +2l)+1 i GPS E i0(k +l)+1 i 0 l k GPS (k ) k (C) k l GPS k 58
71 (D) GPS (D) (A) (C) (C) GPS (C) 2 (C) k +2l k 0 +2l 0 (C) k 0 l 0 l 0 GPS l 0 k (19) E (D) GPS GPS 59
72 b a (D) [ 05a] m 60
73 [ 05a] 36 a b 900 Ladybug GPS (60,-150,250)( mm) GPS GPS 1Hz 15 q ij GPS g i 9 R i ; t i 10 [ 05a] (19) E! (19) i GPS 1:0 GPS 2:0 1:0 GPS [ 05a] w ij GPS mm mm 0.020rad 61
74 15 GPS mm rad 30.7mm rad GPS GPS GPS GPS GPS 62
75 ªªªª w e 37 ªªªªªª ªªªªª ªªªªªª ªªªªª w ªªªª 38 63
76 39 GPS Ladybug GPS (Nikon LogPakII 63.0cm 64.0cm) 39 (Segway LOC Segway) 1.0km 7.6km 7800 RTK 1 GPS GPS GPS 300mm GPS (C) k =5 l =22 GPS 40 (a) 64
77 e e (a) ª ª ª ª ªª e ªªªª (b) 40 65
78
79
80 (b) GPS GPS 41 GPS GPS GPS RTK 43 GPS GPS (D) 44 l k mm mm Pentium4 3GHz, 2GB PC 14 3 l 0 68
81 d ªªªª 43 GPS 3.5 GPS GPS GPS GPS 69
82 ªªªª ªªª 44 w e ª ªªªª 45 70
83 71
84 (A) (D) 4 A B C D
85 [DTF + 01, MAS + 04, 01] (HMD) CAVE [CNSD + 92] CAVE CAVE 73
86 [KVJ03, 04] [CCEE98] CAVE [MAS + 04] [C. 02] 74
87 (a) (b), (c) (CPU: Intel Pentium4 1.8GHz, Graphics Card: Geforce4 Ti4600) (a) 47(a) ( 1.6m/sec) (Sick LMS200) 2 (CPU: Intel Pentium4 2.4GHz) 48 f y h y ( 47 (1)) ( 47 (2)) 48 LMS200 75Hz
88 (b) ÐÖÏÎ Ò ÐÖÏÎ Ò (c) ÒÏÑÔÖÏ ÔÖÑÏÐÑ gˆ ÑÐÖ Ö ÑÔÎÏ Ñ (3) (2) ste (1) (a) s Ï ÑÔÏ Ñ Ð Ó 47. LMS ; ; 2 y = (x + )
89 Ö Ð h y Ö Ñ ÔÎÏ Ñ ste b y f y Ö Ð y z Ö Ñ ÔÎÏ Ñ h b f y y y : e : ÔÖÒÊÑÖÏÒ : zê e 48. f y [Iwa99] b y v v = h y 0 b y v h y (b) 47 (b) 12 PC 100Mbps LAN h y 0 JPEG 77
90 ªªªª ªª 49 ªªªª ªª 50 78
91 (c) 47 (c) , (XGA)
Let $H = [h_x, h_y, h_z, 1]^T$ be the viewpoint (head) position and $P = [p_x, p_y, p_z, 1]^T$ a scene point projecting to pixel $u$. Shifting $P$ by $H$ and applying a scale $s$ gives

$$P'' = [\,p_x - h_x,\; p_y - h_y,\; p_z - h_z,\; 1/s\,]^T$$

and the corresponding pixel

$$u = d_c\!\left( N_{ci}\, R_i\, P'' \right) \quad (21)$$

where $d_c(X)$ projects a point $X$ onto the image of camera $c$. As $s \to \infty$, $P''$ approaches a point at infinity and the rendering reduces to the rotation $R_i$ of the original omnidirectional image. 1. Ladybug 2.
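The homogeneous-point construction $P''$ of Eq. (21) is simple enough to sketch on its own; the projection $d_c$ and the matrices $N_{ci}, R_i$ are left abstract, since they depend on the calibration of Chapters 2 and 3:

```python
def shifted_point(P, H, s):
    """P'' of Eq. (21): homogeneous scene point P = (p_x, p_y, p_z, 1)
    shifted by the head position H = (h_x, h_y, h_z, 1). As s grows the
    last component 1/s vanishes, so P'' tends to a point at infinity and
    the rendered view degenerates to the unshifted omnidirectional image."""
    px, py, pz, _ = P
    hx, hy, hz, _ = H
    return (px - hx, py - hy, pz - hz, 1.0 / s)
```

The resulting 4-vector is what would be fed into the camera's projection to look up the pixel for the user's current head position.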
93 Ladybug (A) (D)
94 (a) (b) (c)
95 (a) 54(b) 54(a) 54(a) 50cm 150cm 54(c),(d) 83
96 (a) (b) (c) (d) 53 84
97 (a) (h x ;h y ) (75cm,0cm) (b) (h x ;h y ) (-75cm,0cm) (c) (h x ;h y ) (0cm,25cm) (d) (h x ;h y ) (0cm,-25cm) 54 85
98 17msec 1.6km/h 1.6m/sec 40fps 7.6m/sec 15fps (A), (B), (C), (D) 2 X Y X Y A D (A), (B), (C) 86
99 , (D) (A) (D) A (A) (B) (D) B (B) Ladybug (A) (C) (D) C (C) 0 2.0[km/h] (A) (B) (D) D (D) (A) (C) A C 5 (A) (B) (C) D 87
100 D (D) 10cm cm 32cm A B C D
101 (A), (B), (C), (D) (A) (C) 89
102
103
104 92
105 ,.,,.,,.,,.,..,,,.,.,,,.,.,. 93
[A. 80] A. Lippman. Movie-Map: An application of the optical video-disc to computer graphics. In Proc. SIGGRAPH, pp. 32-42.
[B. 86] B. K. P. Horn. Robot Vision, chapter 10, pp. 206-209. MIT Press.
[C. 02] C. J. Taylor. VideoPlus: A method for capturing the structure and appearance of immersive environments. IEEE Trans. Visualization and Computer Graphics, Vol. 8, No. 2, pp. 171-182.
[CB61] C. Comeau and J. Bryan. Headsight television system provides remote surveillance. Electronics, pp. 86-90.
[CCEE98] G. U. Carraro, M. Cortes, J. T. Edmark, and J. R. Ensor. The peloton bicycling simulator. In Proc. 3rd Symp. on Virtual Reality Modeling Language (VRML '98), pp. 63-70. ACM Press.
[Che95] S. Chen. QuickTime VR: An image-based approach to virtual environment navigation. In Proc. SIGGRAPH '95, pp. 29-38.
[CMC03] A. I. Comport, E. Marchand, and F. Chaumette. A real-time tracker for markerless augmented reality. In Proc. 2nd ACM/IEEE Int. Symp. on Mixed and Augmented Reality (ISMAR2003), pp. 36-45.
[CNSD+92] C. Cruz-Neira, D. J. Sandin, T. A. DeFanti, R. V. Kenyon, and J. C. Hart. The CAVE: Audio visual experience automatic virtual environment. Communications of the ACM, Vol. 35, No. 6, pp. 64-72.
[Dav03] A. J. Davison. Real-time simultaneous localisation and mapping with a single camera. Proc. 9th IEEE Int. Conf. on Computer Vision (ICCV2003), Vol. 2, pp. 1403-1410.
[DTF+01] D. Kotake, T. Endo, F. Pighin, A. Katayama, H. Tamura, and M. Hirose. Cybercity Walker 2001: Walking through and looking around
a realistic cyberspace reconstructed from the physical world. In Proc. 2nd IEEE and ACM Int. Symp. on Mixed Reality (ISMR2001), pp. 205-206.
[FC200] FC
[FZ98] A. W. Fitzgibbon and A. Zisserman. Automatic camera recovery for closed or open image sequences. In Proc. 5th European Conf. on Computer Vision, Vol. I, pp. 311-326.
[GADS04] S. M. Goza, R. O. Ambrose, M. A. Diftler, and I. M. Spain. Telepresence control of the NASA/DARPA Robonaut on a mobility platform. In Proc. SIGCHI Conf. on Human Factors in Computing Systems (CHI '04), pp. 623-629. ACM Press.
[GF03] S. Guven and S. Feiner. Authoring 3D hypermedia for wearable augmented and virtual reality. In Proc. 7th IEEE Int. Symp. on Wearable Computers, pp. 118-126.
[HKCY00] H. Tanahashi, K. Yamamoto, C. Wang, and Y. Niwa. Development of a stereo omnidirectional imaging system (SOS). In Proc. IEEE Int. Conf. on Industrial Electronics, Control and Instrumentation (IECON2000), pp. 289-294.
[HS88] C. Harris and M. Stephens. A combined corner and edge detector. In Proc. Alvey Vision Conf., pp. 147-151.
[Iwa99] H. Iwata. Walking about virtual environments on an infinite floor. In Proc. IEEE Virtual Reality '99, pp. 286-293.
[J. 93] J. Z. C. Lai. On the sensitivity of camera calibration. Image and Vision Computing, Vol. 11, No. 10, pp. 656-664.
[J. 04] J. Mulligan, X. Zabulis, N. Kelshikar, and K. Daniilidis. Stereo-based environment scanning for immersive telepresence. IEEE Trans. on Circuits and Systems for Video Technology, Vol. 14, No. 3, pp. 171-182.
[JHNK00] J. Shimamura, H. Takemura, N. Yokoya, and K. Yamazawa. Construction of an immersive mixed environment using an omnidirectional stereo image sensor. In Proc. IEEE Workshop on Omnidirectional Vision, pp. 62-69.
[Jou02] N. P. Jouppi. First steps towards mutually-immersive mobile telepresence. In Proc. ACM Conference on Computer Supported Cooperative Work (CSCW '02), pp. 354-363. ACM Press.
[JXB+91] J. Hong, X. Tan, B. Pinette, R. Weiss, and E. M. Riseman. Image-based homing. In Proc. Int. Conf. on Robotics and Automation, pp. 620-625.
[KHN02] K. Yamazawa, H. Takemura, and N. Yokoya. Telepresence system with an omnidirectional HD camera. In Proc. 5th Asian Conf. on Computer Vision (ACCV2002), Vol. 2, pp. 535-538.
[KVJ03] K. J. Fernandes, V. Raja, and J. Eyre. Cybersphere: The fully immersive spherical projection system. Communications of the ACM, Vol. 46, No. 9, pp. 141-146.
[LF92] B. Laurel and S. Fisher. Be there here. InterCommunication.
[MAS+04] M. Uyttendaele, A. Criminisi, S. B. Kang, S. Winder, R. Hartley, and R. Szeliski. High-quality image-based interactive exploration of real-world environments. IEEE Computer Graphics and Applications.
[Min80] M. Minsky. Telepresence. Omni, Vol. 2, pp. 45-52.
[Mov99] Movingeye
[NAM96] N. Asada, A. Amano, and M. Baba. Photometric calibration of zoom lens systems. In Proc. Int. Conf. Pattern Recognition, Vol. A, pp. 186-190.
[PGA+00] P. Peixoto, J. Goncalves, H. Antunes, J. Batista, and H. Araujo. A surveillance system integrating visual telepresence. In Proc. 15th Int. Conf. on Pattern Recognition (ICPR'00), Vol. 4.
[PKV+00] M. Pollefeys, R. Koch, M. Vergauwen, B. Deknuydt, and L. V. Gool. Three-dimensional scene reconstruction from images. In Proc. SPIE, Vol. 3958, pp. 215-226.
[Poi02] Point Grey Research, Inc. Ladybug Omnidirectional Camera System User Guide version 1.0.
[R. 87] R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Jour. of Robotics and Automation, Vol. RA-3, No. 4, pp. 323-344.
[S. 97] S. K. Nayar. Catadioptric omnidirectional camera. In Proc. Computer Vision and Pattern Recognition, pp. 482-488.
[SE87] S. J. Oh and E. L. Hall. Guidance of a mobile robot using an omnidirectional vision navigation system. In Proc. Mobile Robots II, SPIE 852, pp. 288-300.
[SKN03] S. Morita, K. Yamazawa, and N. Yokoya. Internet telepresence by real-time view-dependent image generation with omnidirectional video camera. In Proc. SPIE Electronic Imaging, Vol. 5018, pp. 51-60.
[SKYT02] T. Sato, M. Kanbara, N. Yokoya, and H. Takemura. Dense 3-D reconstruction of an outdoor scene by hundreds-baseline stereo using a
hand-held video camera. Int. Jour. of Computer Vision, Vol. 47, No. 1-3, pp. 119-129.
[TG00] A. Treisman and G. Gelade. A feature integration theory of attention. Cognitive Psychology, Vol. 12, pp. 298-375.
[THKM00] T. Takahashi, H. Kawasaki, K. Ikeuchi, and M. Sakauchi. Virtual driving system with real-world image. In CD-ROM Proc. 7th World Congress on Intelligent Transportation Systems.
[TKHN98] T. Kawanishi, K. Yamazawa, H. Takemura, and N. Yokoya. Generation of high-resolution stereo panoramic images by omnidirectional imaging sensor using hexagonal pyramidal mirrors. In Proc. 14th Int. Conf. on Pattern Recognition (ICPR '98), Vol. 1, pp. 445-489.
[TMNH02] T. Sato, M. Kanbara, N. Yokoya, and H. Takemura. Dense 3D reconstruction of an outdoor scene by hundreds-baseline stereo using a hand-held video camera. Int. Journal of Computer Vision, Vol. 47, No. 1-3, pp. 110-129.
[TMY98] T. Yamada, M. Hirose, and Y. Iida. Development of complete immersive display. In Proc. 4th Int. Conf. on Virtual Systems and Multimedia (VSMM'98), Vol. 2, pp. 522-527.
[TPMF00] B. Triggs, P. McLauchlan, R. Hartley, and A. Fitzgibbon. Bundle adjustment: A modern synthesis. Vision Algorithms: Theory and Practice, pp. 298-375.
[TPP97] T. Kanade, P. Rander, and P. J. Narayanan. Virtualized reality: Constructing virtual worlds from real scenes. IEEE MultiMedia, Vol. 4, No. 1, pp. 34-47.
[VLF04] L. Vacchetti, V. Lepetit, and P. Fua. Combining edge and texture information for real-time accurate 3D camera tracking. In Proc. 3rd
IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR '04), pp. 48-57.
[YKNH98] Y. Onoe, K. Yamazawa, N. Yokoya, and H. Takemura. Telepresence by real-time view-dependent image generation from omnidirectional video streams. Computer Vision and Image Understanding, Vol. 71, No. 2, pp. 154-165.
[YYM02] Y. Yagi, Y. Nishizawa, and M. Yachida. Estimating location and avoiding collision against unknown obstacle for the mobile robot using omnidirectional image sensor COPIS. In Proc. Int. Workshop on Intelligent Robots and Systems, pp. 909-914.
[ZSE86] Z. L. Cao, S. J. Oh, and E. L. Hall. Dynamic omnidirectional vision for mobile robots. Jour. Robotic Systems, Vol. 3, No. 1, pp. 5-17.
[ 04] Vol. J87-A, No. 1, pp. 87-95. [ 99] 4, pp. 211-212. [ 00] MVE99-82. [ 80] Vol. 4, No. J63-D, pp. 349-356. [ 00] (MIRU2000), Vol. I, pp. 65-70.
[ 03] NAIST-IS-DT. [ 05a] (D-II), No. 2, pp. 347-357. [ 05b] (D-II), Vol. J88-D-II, No. 2, pp. 347-357. [ 01] CG. Vol. 42, No. SIG6 (CVIM2), pp. 44-53. [ 02] (SOS). Vol. 56, No. 4, pp. 603-610. [ 90] PnP. 90, pp. 41-50. [ 05] Vol. J88-D-II, No. 5, pp. 864-875. [ 98] Vol. J81-D-II, No. 5, pp. 880-887. [ 02] HyperOmni Vision.
Vol. J79-D-II, No. 5, pp. 698-707. [ 00] PRMU.
114 1.,, : \ ", Vol. 8, No. 4, pp , Dec ( 2 ) 2.,, : \ " D-II), Vol. J88-D-II, No. 2, pp , Feb ( 3 ) 3.,,,,, : \ " D-II), Vol. J88-D-II, No. 8, pp , Aug ,,, : \ GPS ", Vol. 47, No. SIG5 (CVIM13), pp , Mar ( 4 ) 1.,, : \ ", Vol. 1, pp , Sep ( 2 ) 2.,,,,,, : \ ", Vol. 2, pp , Sep
3. : " ", Vol. 3, pp. , Sep. 4. : " ", Vol. 3, pp. , Sep.
1. S. Ikeda, T. Sato, and N. Yokoya: "A calibration method for an omnidirectional multi-camera system", Proc. SPIE Electronic Imaging, Vol. 5006, pp. , Jan. ( 2 )
2. S. Ikeda, T. Sato, and N. Yokoya: "High-resolution panoramic movie generation from video streams acquired by an omnidirectional multi-camera system", Proc. IEEE Int. Conf. on Multisensor Fusion and Integration for Intelligent Systems (MFI2003), pp. , July. ( 2 )
3. S. Ikeda, T. Sato, and N. Yokoya: "Panoramic movie generation using an omnidirectional multi-camera system for telepresence", Proc. 13th Scandinavian Conf. on Image Analysis (SCIA2003), pp. , July. ( 2 )
4. S. Ikeda, T. Sato, M. Kanbara, and N. Yokoya: "Telepresence system using high-resolution omnidirectional movies and a reactive display", Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR '03), pp. , Oct. ( 4 )
5. T. Sato, S. Ikeda, M. Kanbara, A. Iketani, N. Nakajima, N. Yokoya, and K. Yamada: "High-resolution video mosaicing for documents and photos by estimating camera motion", Proc. SPIE Electronic Imaging, Vol. 5299, pp. , Jan.
6. S. Ikeda, T. Sato, M. Kanbara, and N. Yokoya: "Immersive telepresence system using high-resolution omnidirectional movies and a locomotion interface", Proc. SPIE Electronic Imaging, Vol. 5291, pp. , Jan. ( 4 )
7. T. Sato, S. Ikeda, and N. Yokoya: "Extrinsic camera parameter recovery of a moving omni-directional multi-camera system", Proc. Asian Conf. on Computer Vision (ACCV2004), Vol. I, pp. , Jan. ( 3 )
8. T. Sato, S. Ikeda, and N. Yokoya: "Extrinsic camera parameter recovery from multiple image sequences captured by an omni-directional multi-camera system", Proc. European Conf. on Computer Vision (ECCV2004), Vol. 2, pp. , May. ( 3 )
9. S. Ikeda, T. Sato, M. Kanbara, and N. Yokoya: "An immersive telepresence system with a locomotion interface using high-resolution omnidirectional movies", Proc. 17th IAPR Int. Conf. on Pattern Recognition (ICPR2004), Vol. IV, pp. , Aug. ( 4 )
10. A. Iketani, T. Sato, S. Ikeda, M. Kanbara, N. Nakajima, and N. Yokoya: "Super-resolved video mosaicing for documents by extrinsic camera parameter estimation", Proc. Int. Conf. on Computer Vision and Graphics, Sep.
11. K. Yamazawa, T. Ishikawa, T. Sato, S. Ikeda, Y. Nakamura, K. Fujikawa, H. Sunahara, and N. Yokoya: "Web-based telepresence system using omnidirectional video streams", Proc. 5th Pacific Rim Conf. on Multimedia, Vol. 3, pp. , Dec.
12. S. Ikeda, T. Sato, M. Kanbara, and N. Yokoya: "Immersive telepresence system using high-resolution omnidirectional video with locomotion interface", Proc. 14th Int. Conf. on Artificial Reality and Telexistence (ICAT 2004), pp. , Dec. ( 4 )
13. T. Ishikawa, K. Yamazawa, T. Sato, S. Ikeda, Y. Nakamura, K. Fujikawa, H. Sunahara, and N. Yokoya: "Networked telepresence system using web browsers and omnidirectional video streams", Proc. SPIE Electronic Imaging, Vol. 5664, pp. , Jan.
14. S. Ikeda, T. Sato, M. Kanbara, and N. Yokoya: "Immersive telepresence system with a locomotion interface using high-resolution omnidirectional videos", Proc. IAPR Conf. on Machine Vision Applications (MVA2005), pp. , May. ( 4 )
15. A. Iketani, T. Sato, S. Ikeda, M. Kanbara, N. Nakajima, and N. Yokoya: "Video mosaicing for curved surface by 3D reconstruction using feature points", CD-ROM Proc. Int. Conf. on Computer Vision (ICCV2005), Demonstrations, Oct.
16. Y. Yokochi, S. Ikeda, T. Sato, and N. Yokoya: "Extrinsic camera parameter estimation based on feature tracking and GPS data", Proc. Asian Conf. on Computer Vision (ACCV2006), Vol. 1, pp. , Jan. ( 3 )
17. A. Iketani, T. Sato, S. Ikeda, M. Kanbara, N. Nakajima, and N. Yokoya: "Super-resolved video mosaicing for documents based on extrinsic camera parameter estimation", Proc. Asian Conf. on Computer Vision (ACCV2006), Vol. 2, pp. , Jan.
118 2.,, : \ ", Vol. 8, No. 2, pp , May ( 2 ) 3.,, : \ ", CVIM141-13, Nov ( 3 ) 4.,,,,,, : \ ", PRMU , Feb ,,,,,, : \ " (MIRU2004), Vol. I, pp , July ,,,,,,, : \Web " (MIRU2004), Vol. I, pp , July ,,,,, : \ ", pp , Nov ,,, : \ GPS " CVIM147-12, Jan ( 3 ) 9.,, : \ " CVIM148-20, March
119 10.,,, : \ GPS " (MIRU2005), pp , July ( 3 ) 11.,,,,, : \ " (MIRU2005), pp , July ,,,,, : \ ", PRMU, Feb ( ) 1.,, : \ ", No.G13-27, Nov ( 2 ) 2.,,, : \ " (FIT), Vol. 3, No. K-098, Sep ( 4 ) 3.,,,,,, : \ " (FIT), Vol. 3, No. I-028, Sep ,,, : \Web " 2004, No.D , March ,, : \ " 2004, No. D , March
120 6.,,, : \ GPS " (FIT), Vol. 3, pp , Sep ( 3 ) 7.,, : \ " (FIT), Vol. 3, pp , Sep ,,,,, : \ " 2005, No.D-12-12, March ,,, : \ " 10, pp , Sep ,, : \ ", Vol. 16, No. 2, pp. 5-9, Feb ( 2 ) 108
More informationIPSJ SIG Technical Report GPS LAN GPS LAN GPS LAN Location Identification by sphere image and hybrid sensing Takayuki Katahira, 1 Yoshio Iwai 1
1 1 1 GPS LAN GPS LAN GPS LAN Location Identification by sphere image and hybrid sensing Takayuki Katahira, 1 Yoshio Iwai 1 and Hiroshi Ishiguro 1 Self-location is very informative for wearable systems.
More information情報処理学会研究報告 IPSJ SIG Technical Report Vol.2014-GN-90 No.6 Vol.2014-CDS-9 No.6 Vol.2014-DCC-6 No /1/23 Bullet Time 1,a) 1 Bullet Time Bullet Time
Bullet Time 1,a) 1 Bullet Time Bullet Time Generation Technique and Eveluation on High-Resolution Bullet-Time Camera Work Ryuuki Sakamoto 1,a) Ding Chen 1 Abstract: The multi-camera environment have been
More informationIPSJ SIG Technical Report Vol.2012-CG-148 No /8/29 3DCG 1,a) On rigid body animation taking into account the 3D computer graphics came
3DCG 1,a) 2 2 2 2 3 On rigid body animation taking into account the 3D computer graphics camera viewpoint Abstract: In using computer graphics for making games or motion pictures, physics simulation is
More information( )
NAIST-IS-MT9951117 2001 2 9 ( ) 3 CG, VR.,,,.,,,,,.,, 2, 3 3,.,, 2, 3.,,,,,.,,,.,,.,,, 3, NAIST-IS- MT9951117, 2001 2 9. i Intaractive terrain generation within Immersive Modeling System 3 Ryutarou Morimoto
More information(3.6 ) (4.6 ) 2. [3], [6], [12] [7] [2], [5], [11] [14] [9] [8] [10] (1) Voodoo 3 : 3 Voodoo[1] 3 ( 3D ) (2) : Voodoo 3D (3) : 3D (Welc
1,a) 1,b) Obstacle Detection from Monocular On-Vehicle Camera in units of Delaunay Triangles Abstract: An algorithm to detect obstacles by using a monocular on-vehicle video camera is developed. Since
More informationIPSJ SIG Technical Report Vol.2009-DPS-141 No.20 Vol.2009-GN-73 No.20 Vol.2009-EIP-46 No /11/27 1. MIERUKEN 1 2 MIERUKEN MIERUKEN MIERUKEN: Spe
1. MIERUKEN 1 2 MIERUKEN MIERUKEN MIERUKEN: Speech Visualization System Based on Augmented Reality Yuichiro Nagano 1 and Takashi Yoshino 2 As the spread of the Augmented Reality(AR) technology and service,
More information2). 3) 4) 1.2 NICTNICT DCRA Dihedral Corner Reflector micro-arraysdcra DCRA DCRA DCRA 3D DCRA PC USB PC PC ON / OFF Velleman K8055 K8055 K8055
1 1 1 2 DCRA 1. 1.1 1) 1 Tactile Interface with Air Jets for Floating Images Aya Higuchi, 1 Nomin, 1 Sandor Markon 1 and Satoshi Maekawa 2 The new optical device DCRA can display floating images in free
More informationmiru2006_cr.dvi
y;yy yy y y;yy yy y;yy y 630 0192 8916 5 yy NEC 630 0101 8916 47 E-mail: yftomoka-s,sei-i,kanbara,yokoyag@is.naist.jp, yyiketani@cp, n-nakajima@ay.jp.nec.com (structure from motion), structure from motion,,,
More information光学
Range Image Sensors Using Active Stereo Methods Kazunori UMEDA and Kenji TERABAYASHI Active stereo methods, which include the traditional light-section method and the talked-about Kinect sensor, are typical
More information光学
Fundamentals of Projector-Camera Systems and Their Calibration Methods Takayuki OKATANI To make the images projected by projector s appear as desired, it is e ective and sometimes an only choice to capture
More information3 3 3 Knecht (2-3fps) AR [3] 2. 2 Debevec High Dynamic Range( HDR) [4] HDR Derek [5] 2. 3 [6] 3. [6] x E(x) E(x) = 2π π 2 V (x, θ i, ϕ i )L(θ
(MIRU212) 212 8 RGB-D 223 8522 3 14 1 E-mail: {ikeda,charmie,saito}@hvrl.ics.keio.ac.jp, sugimoto@ics.keio.ac.jp RGB-D Lambert RGB-D 1. Augmented Reality AR [1] AR AR 2 [2], [3] [4], [5] [6] RGB-D RGB-D
More informationIPSJ SIG Technical Report Vol.2012-CG-149 No.13 Vol.2012-CVIM-184 No /12/4 3 1,a) ( ) DB 3D DB 2D,,,, PnP(Perspective n-point), Ransa
3,a) 3 3 ( ) DB 3D DB 2D,,,, PnP(Perspective n-point), Ransac. DB [] [2] 3 DB Web Web DB Web NTT NTT Media Intelligence Laboratories, - Hikarinooka Yokosuka-Shi, Kanagawa 239-0847 Japan a) yabushita.hiroko@lab.ntt.co.jp
More information11) 13) 11),12) 13) Y c Z c Image plane Y m iy O m Z m Marker coordinate system T, d X m f O c X c Camera coordinate system 1 Coordinates and problem
1 1 1 Posture Esimation by Using 2-D Fourier Transform Yuya Ono, 1 Yoshio Iwai 1 and Hiroshi Ishiguro 1 Recently, research fields of augmented reality and robot navigation are actively investigated. Estimating
More information(a) 1 (b) 3. Gilbert Pernicka[2] Treibitz Schechner[3] Narasimhan [4] Kim [5] Nayar [6] [7][8][9] 2. X X X [10] [11] L L t L s L = L t + L s
1 1 1, Extraction of Transmitted Light using Parallel High-frequency Illumination Kenichiro Tanaka 1 Yasuhiro Mukaigawa 1 Yasushi Yagi 1 Abstract: We propose a new sharpening method of transmitted scene
More information24 21 21115025 i 1 1 2 5 2.1.................................. 6 2.1.1........................... 6 2.1.2........................... 7 2.2...................................... 8 2.3............................
More information,,.,.,,.,.,.,.,,.,..,,,, i
22 A person recognition using color information 1110372 2011 2 13 ,,.,.,,.,.,.,.,,.,..,,,, i Abstract A person recognition using color information Tatsumo HOJI Recently, for the purpose of collection of
More information4. C i k = 2 k-means C 1 i, C 2 i 5. C i x i p [ f(θ i ; x) = (2π) p 2 Vi 1 2 exp (x µ ] i) t V 1 i (x µ i ) 2 BIC BIC = 2 log L( ˆθ i ; x i C i ) + q
x-means 1 2 2 x-means, x-means k-means Bayesian Information Criterion BIC Watershed x-means Moving Object Extraction Using the Number of Clusters Determined by X-means Clustering Naoki Kubo, 1 Kousuke
More information27 VR Effects of the position of viewpoint on self body in VR environment
27 VR Effects of the position of viewpoint on self body in VR environment 1160298 2015 2 25 VR (HMD), HMD (VR). VR,.. HMD,., VR,.,.,,,,., VR,. HMD VR i Abstract Effects of the position of viewpoint on
More informationproc.dvi
M. D. Wheler Cyra Technologies, Inc. 3 3 CAD albedo Mapping textures on 3D geometric model using reflectance image Ryo Kurazume M. D. Wheler Katsushi Ikeuchi The University oftokyo Cyra Technologies, Inc.
More informationMicrosoft Word - GraphLayout1-Journal-ver2.doc
ÕÒÖÎ ÆÉ ÐÖÔÒ Ñ ˆ e Ê j ÉÏÏÔÐÏÒuu ËÊ o y * ÎÏ Ó ÏÕ( ) (* É ) An Improvement of Force-directed Hierarchical Graph Layout And Its Application to Web Site Visualization Jun DOI Takayuki ITOH IBM Research,
More informationHuman-Agent Interaction Simposium A Heterogeneous Robot System U
Human-Agent Interaction Simposium 2006 2A-3 277-8561 5 1-5 113-8656 7-3-1 E-mail: {hosoi,mori,sugi}@itl.t.u-tokyo.ac.jp 3 Heterogeneous Robot System Using Blimps Kazuhiro HOSOI, Akihiro MORI, and Masanori
More information1 3DCG [2] 3DCG CG 3DCG [3] 3DCG 3 3 API 2 3DCG 3 (1) Saito [4] (a) 1920x1080 (b) 1280x720 (c) 640x360 (d) 320x G-Buffer Decaudin[5] G-Buffer D
3DCG 1) ( ) 2) 2) 1) 2) Real-Time Line Drawing Using Image Processing and Deforming Process Together in 3DCG Takeshi Okuya 1) Katsuaki Tanaka 2) Shigekazu Sakai 2) 1) Department of Intermedia Art and Science,
More information本文6(599) (Page 601)
(MIRU2008) 2008 7 525 8577 1 1 1 E-mail: matsuzaki@i.ci.ritsumei.ac.jp, shimada@ci.ritsumei.ac.jp Object Recognition by Observing Grasping Scene from Image Sequence Hironori KASAHARA, Jun MATSUZAKI, Nobutaka
More informationWeb Social Networking Service Virtual Private Network 84
Promising business utilized five senses information media through the Next Generation Network Toshio ASANO Next Generation Network 2004 11 2010 6,000 3,000 2006 12 2008 83 Web Social Networking Service
More informationdews2004-final.dvi
DEWS2004 I-10-04 606 8501 E-mail: {akahoshi,hirotanaka,tanaka}@dl.kuis.kyoto-u.ac.jp A Basic Study on Ubiquitous Hypermedia Model Yuhei AKAHOSHI, Hiroya TANAKA, and Katsumi TANAKA Graduate School of Informatics,
More information194 6 日本バーチャルリアリティ学会誌第 17 巻 4 号 2012 年 12 月 VR Virtual Reality VR 3D 3D VR 3D 30 3D 3D VR 3D 19 3D D D D D 3D VR 2
194 6 日本バーチャルリアリティ学会誌第 17 巻 4 号 2012 年 12 月 VR Virtual Reality VR 3D 3D VR 3D 30 3D 3D VR 3D 19 3D 1920 3D 1950 3D 1980 2 3D 2010 3 3D 3D 10 1960 VR 2 3D 10 1990 VR 3D 10 VR 2 VR 2020 3D 3D 1838 Binocular
More information「hoge」
ICS-06M-404 255 1 7 1.1................................... 7 1.1.1........................... 7 1.1.2........................ 8 1.1.3............................ 9 1.2..................................
More information1 Table 1: Identification by color of voxel Voxel Mode of expression Nothing Other 1 Orange 2 Blue 3 Yellow 4 SSL Humanoid SSL-Vision 3 3 [, 21] 8 325
社団法人人工知能学会 Japanese Society for Artificial Intelligence 人工知能学会研究会資料 JSAI Technical Report SIG-Challenge-B3 (5/5) RoboCup SSL Humanoid A Proposal and its Application of Color Voxel Server for RoboCup SSL
More informationVol. 48 No. 1 Jan MR, MR A Sharing Method of Real Objects Differ in Syntax each other Based on a Virtual Sheet between Remote Mixed Reality Spac
Vol. 48 No. 1 Jan. 2007 MR, MR A Sharing Method of Real Objects Differ in Syntax each other Based on a Virtual Sheet between Remote Mixed Reality Spaces Kazuhiro Miyasa, Yuichi Bannai,, Yuji Suzuki, Hidekazu
More informationIEEE HDD RAID MPI MPU/CPU GPGPU GPU cm I m cm /g I I n/ cm 2 s X n/ cm s cm g/cm
Neutron Visual Sensing Techniques Making Good Use of Computer Science J-PARC CT CT-PET TB IEEE HDD RAID MPI MPU/CPU GPGPU GPU cm I m cm /g I I n/ cm 2 s X n/ cm s cm g/cm cm cm barn cm thn/ cm s n/ cm
More information(MIRU2008) HOG Histograms of Oriented Gradients (HOG)
(MIRU2008) 2008 7 HOG - - E-mail: katsu0920@me.cs.scitec.kobe-u.ac.jp, {takigu,ariki}@kobe-u.ac.jp Histograms of Oriented Gradients (HOG) HOG Shape Contexts HOG 5.5 Histograms of Oriented Gradients D Human
More informationNAIST-IS-MT
NAIST-IS-MT1251002 2014 3 13 ( ) Augmented Reality AR AR AR AR AR (1) (2) (3) AR AR, NAIST-IS-MT1251002, 2014 3 13. i AR AR AR ii Augmented Reality Using Pre-captured Images Considering Change of Real-world
More information2007/8 Vol. J90 D No. 8 Stauffer [7] 2 2 I 1 I 2 2 (I 1(x),I 2(x)) 2 [13] I 2 = CI 1 (C >0) (I 1,I 2) (I 1,I 2) Field Monitoring Server
a) Change Detection Using Joint Intensity Histogram Yasuyo KITA a) 2 (0 255) (I 1 (x),i 2 (x)) I 2 = CI 1 (C>0) (I 1,I 2 ) (I 1,I 2 ) 2 1. [1] 2 [2] [3] [5] [6] [8] Intelligent Systems Research Institute,
More informationVol. 23 No. 4 Oct. 2006 37 2 Kitchen of the Future 1 Kitchen of the Future 1 1 Kitchen of the Future LCD [7], [8] (Kitchen of the Future ) WWW [7], [3
36 Kitchen of the Future: Kitchen of the Future Kitchen of the Future A kitchen is a place of food production, education, and communication. As it is more active place than other parts of a house, there
More informationB HNS 7)8) HNS ( ( ) 7)8) (SOA) HNS HNS 4) HNS ( ) ( ) 1 TV power, channel, volume power true( ON) false( OFF) boolean channel volume int
SOA 1 1 1 1 (HNS) HNS SOA SOA 3 3 A Service-Oriented Platform for Feature Interaction Detection and Resolution in Home Network System Yuhei Yoshimura, 1 Takuya Inada Hiroshi Igaki 1, 1 and Masahide Nakamura
More informationIPSJ SIG Technical Report Vol.2014-GN-90 No.16 Vol.2014-CDS-9 No.16 Vol.2014-DCC-6 No /1/24 1,a) 2,b) 2,c) 1,d) QUMARION QUMARION Kinect Kinect
1,a) 2,b) 2,c) 1,d) QUMARION QUMARION Kinect Kinect Using a Human-Shaped Input Device for Remote Pose Instruction Yuki Tayama 1,a) Yoshiaki Ando 2,b) Misaki Hagino 2,c) Ken-ichi Okada 1,d) Abstract: There
More information九州大学学術情報リポジトリ Kyushu University Institutional Repository 多視点動画像処理による 3 次元モデル復元に基づく自由視点画像生成のオンライン化 : PC クラスタを用いた実現法 上田, 恵九州大学システム情報科学研究院知能システム学部門 有田, 大
九州大学学術情報リポジトリ Kyushu University Institutional Repository 多視点動画像処理による 3 次元モデル復元に基づく自由視点画像生成のオンライン化 : PC クラスタを用いた実現法 上田, 恵九州大学システム情報科学研究院知能システム学部門 有田, 大作九州大学システム情報科学研究院知能システム学部門 谷口, 倫一郎九州大学システム情報科学研究院知能システム学部門
More information25 AR 3 Property of three-dimensional perception in the wearable AR environment
25 AR 3 Property of three-dimensional perception in the wearable AR environment 1140378 2014 2 28 AR 3 AR.. AR,. AR. 2, [2]., [3]., AR. AR. 3D 3D,,., 3D..,,,,. AR,, HMD,, 3 i Abstract Property of three-dimensional
More informationInput image Initialize variables Loop for period of oscillation Update height map Make shade image Change property of image Output image Change time L
1,a) 1,b) 1/f β Generation Method of Animation from Pictures with Natural Flicker Abstract: Some methods to create animation automatically from one picture have been proposed. There is a method that gives
More informationIPSJ SIG Technical Report Vol.2014-HCI-158 No /5/22 1,a) 2 2 3,b) Development of visualization technique expressing rainfall changing conditions
1,a) 2 2 3,b) Development of visualization technique expressing rainfall changing conditions with a still picture Yuuki Hyougo 1,a) Hiroko Suzuki 2 Tadanobu Furukawa 2 Kazuo Misue 3,b) Abstract: In order
More information2 T. SICE Vol.41 No.12 December Figure PC (HMD) (Figure 2) HMD 9.8[m/s 2 ] 0 (Figure 3-(a)) RS-232C NTSC HMD DSC PC Converter Controlle
Vol.41, No.12, 1/6 2005 AR-Based Assistance System to Search Disaster Victims Using Teleoperated Unmanned Helicopter Masanao Koeda,YoshioMatsumoto and Tsukasa Ogasawara In this paper, we introduce an immersive
More information25 D Effects of viewpoints of head mounted wearable 3D display on human task performance
25 D Effects of viewpoints of head mounted wearable 3D display on human task performance 1140322 2014 2 28 D HMD HMD HMD HMD 3D HMD HMD HMD HMD i Abstract Effects of viewpoints of head mounted wearable
More informationmain.dvi
A 1/4 1 1/ 1/1 1 9 6 (Vergence) (Convergence) (Divergence) ( ) ( ) 97 1) S. Fukushima, M. Takahashi, and H. Yoshikawa: A STUDY ON VR-BASED MUTUAL ADAPTIVE CAI SYSTEM FOR NUCLEAR POWER PLANT, Proc. of FIFTH
More information14 2 5
14 2 5 i ii Surface Reconstruction from Point Cloud of Human Body in Arbitrary Postures Isao MORO Abstract We propose a method for surface reconstruction from point cloud of human body in arbitrary postures.
More informationÅÇÈÉ ÂÃ ÂÃÄ Â ÉÂÄ ÉÂÄ Â!Â" ÄÅÇÈÉ ÈÈ Â ÂÃ 2 1 ARCHEOGUIDE[14, 15] AR CG [16, 17] PDA ÂÃÄ ÅÇÈÉ 2 [18] CG
630-0192 8916-5 E-mail: {ryuhei-t, kanbara, yokoya}@is.naist.jp PDA Nara Palace Site Navigator Mobile Tour Guide System Using Multimedia Contents Ryuhei TENMOKU Masayuki KANBARA and Naokazu YOKOYA Graduate
More informationVol. 42 No. SIG 8(TOD 10) July HTML 100 Development of Authoring and Delivery System for Synchronized Contents and Experiment on High Spe
Vol. 42 No. SIG 8(TOD 10) July 2001 1 2 3 4 HTML 100 Development of Authoring and Delivery System for Synchronized Contents and Experiment on High Speed Networks Yutaka Kidawara, 1 Tomoaki Kawaguchi, 2
More information22_05.dvi
Vol. 1 No. 2 41 49 (July 2008) 3 1 1 3 2 1 1 3 Person-independent Monocular Tracking of Face and Facial Actions Yusuke Sugano 1 and Yoichi Sato 1 This paper presents a monocular method of tracking faces
More informationJournal of Geography 116 (6) Configuration of Rapid Digital Mapping System Using Tablet PC and its Application to Obtaining Ground Truth
Journal of Geography 116 (6) 749-758 2007 Configuration of Rapid Digital Mapping System Using Tablet PC and its Application to Obtaining Ground Truth Data: A Case Study of a Snow Survey in Chuetsu District,
More informationIPSJ SIG Technical Report Vol.2014-CG-155 No /6/28 1,a) 1,2,3 1 3,4 CG An Interpolation Method of Different Flow Fields using Polar Inter
,a),2,3 3,4 CG 2 2 2 An Interpolation Method of Different Flow Fields using Polar Interpolation Syuhei Sato,a) Yoshinori Dobashi,2,3 Tsuyoshi Yamamoto Tomoyuki Nishita 3,4 Abstract: Recently, realistic
More informationA Navigation Algorithm for Avoidance of Moving and Stationary Obstacles for Mobile Robot Masaaki TOMITA*3 and Motoji YAMAMOTO Department of Production
A Navigation Algorithm for Avoidance of Moving and Stationary Obstacles for Mobile Robot Masaaki TOMITA*3 and Motoji YAMAMOTO Department of Production System Engineering, Kyushu Polytecnic College, 1665-1
More information(a) (b) (c) Fig. 2 2 (a) ; (b) ; (c) (a)configuration of the proposed system; (b)processing flow of the system; (c)the system in use 1 GPGPU (
1 1 1 (a) (b) imperceptible A Realtime and Adaptive Technique for Projection onto Non-Flat Surfaces Using a Mobile Projector Camera System Eiji Seki, 1 Dao Vinh Ninh 1 and Masanori Sugimoto 1 In this paper,
More information(a) (b) 1 JavaScript Web Web Web CGI Web Web JavaScript Web mixi facebook SNS Web URL ID Web 1 JavaScript Web 1(a) 1(b) JavaScript & Web Web Web Webji
Webjig Web 1 1 1 1 Webjig / Web Web Web Web Web / Web Webjig Web DOM Web Webjig / Web Web Webjig: a visualization tool for analyzing user behaviors in dynamic web sites Mikio Kiura, 1 Masao Ohira, 1 Hidetake
More informationProceedings of the 61st Annual Conference of the Institute of Systems, Control and Information Engineers (ISCIE), Kyoto, May 23-25, 2017 The Visual Se
The Visual Servo Control of Drone in Consideration of Dead Time,, Junpei Shirai and Takashi Yamaguchi and Kiyotsugu Takaba Ritsumeikan University Abstract Recently, the use of drones has been expected
More informationLED a) A New LED Array Acquisition Method Focusing on Time-Gradient and Space- Gradient Values for Road to Vehicle Visible Light Communication Syunsuk
VOL. J97-B NO. 7 JULY 2014 本 PDF の扱いは 電子情報通信学会著作権規定に従うこと なお 本 PDF は研究教育目的 ( 非営利 ) に限り 著者が第三者に直接配布することができる 著者以外からの配布は禁じられている LED a) A New LED Array Acquisition Method Focusing on Time-Gradient and Space-
More information28 TCG SURF Card recognition using SURF in TCG play video
28 TCG SURF Card recognition using SURF in TCG play video 1170374 2017 3 2 TCG SURF TCG TCG OCG SURF Bof 20 20 30 10 1 SURF Bag of features i Abstract Card recognition using SURF in TCG play video Haruka
More informationOptical Flow t t + δt 1 Motion Field 3 3 1) 2) 3) Lucas-Kanade 4) 1 t (x, y) I(x, y, t)
http://wwwieice-hbkborg/ 2 2 4 2 -- 2 4 2010 9 3 3 4-1 Lucas-Kanade 4-2 Mean Shift 3 4-3 2 c 2013 1/(18) http://wwwieice-hbkborg/ 2 2 4 2 -- 2 -- 4 4--1 2010 9 4--1--1 Optical Flow t t + δt 1 Motion Field
More informationVol. 44 No. SIG 9(CVIM 7) ) 2) 1) 1 2) 3 7) 1) 2) 3 3) 4) 5) (a) (d) (g) (b) (e) (h) No Convergence? End (f) (c) Yes * ** * ** 1
Vol. 44 No. SIG 9(CVIM 7) July 2003, Robby T. Tan, 1 Estimating Illumination Position, Color and Surface Reflectance Properties from a Single Image Kenji Hara,, Robby T. Tan, Ko Nishino, Atsushi Nakazawa,
More informationVirtual Window System Virtual Window System Virtual Window System Virtual Window System Virtual Window System Virtual Window System Social Networking
23 An attribute expression of the virtual window system communicators 1120265 2012 3 1 Virtual Window System Virtual Window System Virtual Window System Virtual Window System Virtual Window System Virtual
More informationIP IIS Construction of Overhead View Images by Estimating Intrinsic and Extrinsic Camera Parameters of Multiple Fish-Eye Cameras Shota Kas
I-08- IIS-08- Construction of Overead View Images by Estimating Intrinsic and Extrinsic Camera arameters of Multiple Fis-Eye Cameras Sota Kase, Ryota Okutsu, Hisanori Mitsumoto (Cuo University) Yoei Aragaki,
More informationipod touch 1 2 Apple ipod touch ipod touch 3 ( ) ipod touch ( 1 ) Apple ( 2 ) Web 1),2) 3. ipod touch 1 2 ipod touch x y z i
ipod touch 1 1 ipod touch. 1) 6 2) 3) A library for detecting movements of an ipod touch by 3D acceleration Akira Kotaki 1 and Mariko Sasakura 1 The aim of this study is to develop a library for detecting
More information17 Multiple video streams control for the synchronous delivery and playback 1085404 2006 3 10 Web IP 1 1 1 3,,, i Abstract Multiple video streams control for the synchronous delivery and playback Yoshiyuki
More information(a) (b) 2 2 (Bosch, IR Illuminator 850 nm, UFLED30-8BD) ( 7[m] 6[m]) 3 (PointGrey Research Inc.Grasshopper2 M/C) Hz (a) (b
(MIRU202) 202 8 AdrianStoica 89 0395 744 89 0395 744 Jet Propulsion Laboratory 4800 Oak Grove Drive, Pasadena, CA 909, USA E-mail: uchino@irvs.ait.kyushu-u.ac.jp, {yumi,kurazume}@ait.kyushu-u.ac.jp 2 nearest
More information1 Fig. 1 Extraction of motion,.,,, 4,,, 3., 1, 2. 2.,. CHLAC,. 2.1,. (256 ).,., CHLAC. CHLAC, HLAC. 2.3 (HLAC ) r,.,. HLAC. N. 2 HLAC Fig. 2
CHLAC 1 2 3 3,. (CHLAC), 1).,.,, CHLAC,.,. Suspicious Behavior Detection based on CHLAC Method Hideaki Imanishi, 1 Toyohiro Hayashi, 2 Shuichi Enokida 3 and Toshiaki Ejima 3 We have proposed a method for
More informationSilhouette on Image Object Silhouette on Images Object 1 Fig. 1 Visual cone Fig. 2 2 Volume intersection method Fig. 3 3 Background subtraction Fig. 4
Image-based Modeling 1 1 Object Extraction Method for Image-based Modeling using Projection Transformation of Multi-viewpoint Images Masanori Ibaraki 1 and Yuji Sakamoto 1 The volume intersection method
More informationMicrosoft Word - GrCadSymp1999.doc
u u Ê É Îf ÈÉ uõòñõçí uõòñõëêi oy * ÎÏ Ó ÏÕ( ) **Ï ÓÐ ÕÖ *** ÎÏ Ó ÏÕ( ) APÑÖÕ ÑÕ { itot, inoue, furuhata} @trl.ibm.co.jp shimada@cmu.edu Automated Conversion of Triangular Mesh to Quadrilateral Mesh with
More information2.2 (a) = 1, M = 9, p i 1 = p i = p i+1 = 0 (b) = 1, M = 9, p i 1 = 0, p i = 1, p i+1 = 1 1: M 2 M 2 w i [j] w i [j] = 1 j= w i w i = (w i [ ],, w i [
RI-002 Encoding-oriented video generation algorithm based on control with high temporal resolution Yukihiro BANDOH, Seishi TAKAMURA, Atsushi SHIMIZU 1 1T / CMOS [1] 4K (4096 2160 /) 900 Hz 50Hz,60Hz 240Hz
More informationスライド 1
swk(at)ic.is.tohoku.ac.jp 2 Outline 3 ? 4 S/N CCD 5 Q Q V 6 CMOS 1 7 1 2 N 1 2 N 8 CCD: CMOS: 9 : / 10 A-D A D C A D C A D C A D C A D C A D C ADC 11 A-D ADC ADC ADC ADC ADC ADC ADC ADC ADC A-D 12 ADC
More information橡上野先生訂正2
(SIS) NII) 101-8430 tel 03-4212-2516 E-mail ueno@nii.ac.jp 1 NII 2 (symbiosis) 2 (parasitism) 2 Knowledge Creation The Symbiotic partnership of University, Government and Industry, Proc. Information Environment
More information3 webui [1] 3 3 3D e- 3D 1 1a 1b 3 2. AR 3 3 AR Autodesk 123D Catch [3] Autodesk 3 Martin [4] Shape From Sillhouette 3 [5] 3 3 Watanabe [6]
情 報 処 理 学 会 インタラクション 2013 IPSJ Interaction 2013 2013-Interaction (3EXB-50) 2013/3/2 1,a) 2 2 e- 2 2 ( ) e- 3 Intuitive Recording and Viewing of Multiple Images for E-Commerce Tatsuhito Oe 1,a) Shigaku
More informationfi¡ŒØ.dvi
(2001) 49 1 77 107 ( 2000 10 18 2001 1 18 ) * 2 2 3 Structure from motion, 1. 1.1 3 2 2 3 2 3 3 2 1 * 305 8568 1 1 1 2. 78 49 1 2001 3 3 2 2 3 3 2 3 2 2 3 3 2 3 2 3 2 3 3 2 3 3 Image-Based Rendering, Virtualized
More informationHIS-CCBASEver2
Information Access Interface in the Immersive Virtual World Tetsuro Ogi, *1*2*3 Koji Yamamoto, *3*4 Tadashi Yamanouchi *3 and Michitaka Hirose *2 Abstract - In this study, in order to access database server
More information3 2 2 (1) (2) (3) (4) 4 4 AdaBoost 2. [11] Onishi&Yoda [8] Iwashita&Stoica [5] 4 [3] 3. 3 (1) (2) (3)
(MIRU2012) 2012 8 820-8502 680-4 E-mail: {d kouno,shimada,endo}@pluto.ai.kyutech.ac.jp (1) (2) (3) (4) 4 AdaBoost 1. Kanade [6] CLAFIC [12] EigenFace [10] 1 1 2 1 [7] 3 2 2 (1) (2) (3) (4) 4 4 AdaBoost
More information(4) ω t(x) = 1 ω min Ω ( (I C (y))) min 0 < ω < C A C = 1 (5) ω (5) t transmission map tmap 1 4(a) 2. 3 2. 2 t 4(a) t tmap RGB 2 (a) RGB (A), (B), (C)
(MIRU2011) 2011 7 890 0065 1 21 40 105-6691 1 1 1 731 3194 3 4 1 338 8570 255 346 8524 1836 1 E-mail: {fukumoto,kawasaki}@ibe.kagoshima-u.ac.jp, ryo-f@hiroshima-cu.ac.jp, fukuda@cv.ics.saitama-u.ac.jp,
More informationSICE東北支部研究集会資料(2012年)
77 (..3) 77- A study on disturbance compensation control of a wheeled inverted pendulum robot during arm manipulation using Extended State Observer Luis Canete Takuma Sato, Kenta Nagano,Luis Canete,Takayuki
More information5 インチ PDP カメラ (a) (b) 1 Fig. 1 Information display. (a) f=25mm (b) f=16mm 2 UXGA Fig. 2 Examples of captured image. [3] [4] 1 [5] [7] 1 3pixel 5 1 7pi
THE INSTITUTE OF ELECTRONICS INFORMATION AND COMMUNICATION ENGINEERS TECHNICAL REPORT OF IEICE. 619 289 3 5 66 851 E-mail: {j-satakeakihiro-k}@nict.go.jp {hirayamakawashimatm}@i.kyoto-u.ac.jp UXGA 3fps
More information20 Method for Recognizing Expression Considering Fuzzy Based on Optical Flow
20 Method for Recognizing Expression Considering Fuzzy Based on Optical Flow 1115084 2009 3 5 3.,,,.., HCI(Human Computer Interaction),.,,.,,.,.,,..,. i Abstract Method for Recognizing Expression Considering
More information卒業論文2.dvi
15 GUI A study on the system to transfer a GUI sub-picture to the enlarging viewer for operational support 1040270 2004 2 27 GUI PC PC GUI Graphical User Interface PC GUI GUI PC GUI PC PC GUI i Abstract
More informationsoturon.dvi
12 Exploration Method of Various Routes with Genetic Algorithm 1010369 2001 2 5 ( Genetic Algorithm: GA ) GA 2 3 Dijkstra Dijkstra i Abstract Exploration Method of Various Routes with Genetic Algorithm
More information1 Web [2] Web [3] [4] [5], [6] [7] [8] S.W. [9] 3. MeetingShelf Web MeetingShelf MeetingShelf (1) (2) (3) (4) (5) Web MeetingShelf
1,a) 2,b) 4,c) 3,d) 4,e) Web A Review Supporting System for Whiteboard Logging Movies Based on Notes Timeline Taniguchi Yoshihide 1,a) Horiguchi Satoshi 2,b) Inoue Akifumi 4,c) Igaki Hiroshi 3,d) Hoshi
More informationDEIM Forum 2012 E Web Extracting Modification of Objec
DEIM Forum 2012 E4-2 670 0092 1 1 12 E-mail: nd11g028@stshse.u-hyogo.ac.jp, {dkitayama,sumiya}@shse.u-hyogo.ac.jp Web Extracting Modification of Objects for Supporting Map Browsing Junki MATSUO, Daisuke
More information24 Depth scaling of binocular stereopsis by observer s own movements
24 Depth scaling of binocular stereopsis by observer s own movements 1130313 2013 3 1 3D 3D 3D 2 2 i Abstract Depth scaling of binocular stereopsis by observer s own movements It will become more usual
More informationHonda 3) Fujii 4) 5) Agrawala 6) Osaragi 7) Grabler 8) Web Web c 2010 Information Processing Society of Japan
1 1 1 1 2 Geographical Feature Extraction for Retrieval of Modified Maps Junki Matsuo, 1 Daisuke Kitayama, 1 Ryong Lee 1 and Kazutoshi Sumiya 1 Digital maps available on the Web are widely used for obtaining
More informationIPSJ SIG Technical Report Vol.2012-HCI-149 No /7/20 1 1,2 1 (HMD: Head Mounted Display) HMD HMD,,,, An Information Presentation Method for Weara
1 1,2 1 (: Head Mounted Display),,,, An Information Presentation Method for Wearable Displays Considering Surrounding Conditions in Wearable Computing Environments Masayuki Nakao 1 Tsutomu Terada 1,2 Masahiko
More informationIPSJ SIG Technical Report Vol.2014-MBL-70 No.49 Vol.2014-UBI-41 No /3/15 2,a) 2,b) 2,c) 2,d),e) WiFi WiFi WiFi 1. SNS GPS Twitter Facebook Twit
2,a) 2,b) 2,c) 2,d),e) WiFi WiFi WiFi 1. SNS GPS Twitter Facebook Twitter Ustream 1 Graduate School of Information Science and Technology, Osaka University, Japan 2 Cybermedia Center, Osaka University,
More information(a) (b) (c) Canny (d) 1 ( x α, y α ) 3 (x α, y α ) (a) A 2 + B 2 + C 2 + D 2 + E 2 + F 2 = 1 (3) u ξ α u (A, B, C, D, E, F ) (4) ξ α (x 2 α, 2x α y α,
[II] Optimization Computation for 3-D Understanding of Images [II]: Ellipse Fitting 1. (1) 2. (2) (edge detection) (edge) (zero-crossing) Canny (Canny operator) (3) 1(a) [I] [II] [III] [IV ] E-mail sugaya@iim.ics.tut.ac.jp
More information理工ジャーナル 23‐1☆/1.外村
Yoshinobu TONOMURA Professor, Department of Media Informatics 1 10 YouTube 2 1900 100 1 3 2 3 3 3 1 2 3 4 90 1 90 MIT Project Athena 1983 1991 2 3 4 5 6 7 8 9 10 2 90 11 12 7 13 14 15 16 17 18 19 390 5
More information