
Mapping textures on 3D geometric model using reflectance image

Ryo Kurazume (The University of Tokyo), M. D. Wheeler (Cyra Technologies, Inc.), Katsushi Ikeuchi (The University of Tokyo)

Texture mapping, that is, the method of mapping color images onto a 3D geometric model measured by a range sensor, is a key technique of photometric modeling for virtual reality. Usually, range and color images are obtained from different viewing positions through two independent range and color sensors. Thus, in order to map the color images, the desired textures, onto the geometric model, it is necessary to determine the relative relation between the two viewpoints. In this paper, we propose a new calibration method for texture mapping that uses reflectance images and iterative pose estimation based on a robust M-estimator.

1. Introduction

For constructing virtual reality (VR) models of real objects, the "modeling-from-reality (MFR)" approach builds 3D models by measuring actual objects [1], [2], [3], [4]. This paper addresses photometric modeling, that is, mapping color texture images onto a measured 3D geometric model. Commercial scanning systems, such as those of OGIS and Cyberware, acquire range and color images through sensors whose relative pose is fixed and calibrated in advance [10], [9]. When the range sensor and the color camera are independent, however, the relative pose between the two viewpoints must be estimated from the measured data themselves. Viola [8] aligns a 2D image with a 3D model by maximizing mutual information; Stamos and Allen [7] integrate range and image sensing for photorealistic 3D modeling; Elstrom et al. [6] register multi-sensor imagery by stereo using albedo, computing a closed-form pose estimate from the recovered correspondences [5]. The method proposed here proceeds as follows: 1. extract edges from the reflectance image, which is precisely aligned with the range data; 2. extract edges from the color image; 3. estimate the relative pose by minimizing the distances between corresponding edges with a robust M-estimator; 4. map the color texture onto the 3D geometric model.

2. Reflectance image

A laser range sensor such as the ERIM, Perceptron, or Cyrax sensors measures, in addition to the distance to each surface point, the power of the reflected laser light. Arranging these values in image form yields a reflectance image with the following properties:

- it is precisely aligned with the range image, since both are obtained through the same optics;
- it resembles an intensity image of the scene, so its edges correspond well to edges in a color image;
- it is largely insensitive to external illumination, since the sensor supplies its own light source.

3. Pose estimation using reflectance and color edges

The relative pose between the range sensor and the color camera is estimated by aligning edges extracted from the reflectance image with edges extracted from the color image. The procedure is: 1. extract 3D edges from the geometric model and the reflectance image; 2. extract 2D edges from the color image; 3. estimate the relative pose that minimizes the distances between corresponding 3D and 2D edges. Sample points are taken along each edge line (Fig. 1).

Figure 1: Edge sampling (3D and 2D edge lines with sample points).

3.1 Edge extraction

2D edges are extracted from the color image with the Canny operator. 3D edges are extracted from the range data and the reflectance image as follows.
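The occluding-edge selection used for the 3D edges can be sketched as follows. This is a minimal illustration, assuming NumPy and hypothetical per-point arrays of unit surface normals and unit viewing directions (pointing from the surface toward the sensor); the function name and threshold value are illustrative, not the paper's implementation:

```python
import numpy as np

def classify_model_points(normals, view_dirs, t=0.1):
    """Classify 3D model points by the dot product n . v.

    A point is taken as visible when n . v > 0, and as an
    occluding-edge candidate when 0 < n . v <= t, i.e. when the
    surface there is nearly tangent to the viewing ray.
    """
    ndotv = np.einsum('ij,ij->i', normals, view_dirs)  # per-point n_i . v_i
    visible = ndotv > 0.0
    edge = visible & (ndotv <= t)
    return visible, edge

# Example: a face toward the sensor, a grazing face, and a back face.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.0, 1.0],
                    [0.0, 0.0, -1.0]])
view_dirs = np.array([[0.0, 0.0, 1.0],
                      [0.0, 0.9995, 0.03],
                      [0.0, 0.0, 1.0]])
visible, edge = classify_model_points(normals, view_dirs)
```

Sample points for the alignment would then be drawn along the contours formed by the `edge` points.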

Each point on the 3D geometric model is assigned its reflectance value \rho when it is visible from the current viewpoint:

  P_i = \rho  (visible),    if n \cdot v > 0,
  P_i = 0     (invisible),  otherwise,                          (1)

where n is the surface normal at the point and v is the viewing direction. Occluding-edge points, where the surface turns away from the viewer, are detected by thresholding the same dot product:

  P_i = \rho  (edge),      if 0 < n \cdot v \le t,
  P_i = 0     (not edge),  otherwise,                           (2)

where t is a small positive threshold.

3.2 2D distance and 3D distance

For each sample point on a projected 3D edge, the 2D distance d in the image plane to the nearest 2D color edge (Fig. 2) is converted into a 3D distance

  z_i = Z_i \sin \theta,                                        (3)

where Z_i is the distance from the camera to the 3D edge point and \theta is the angle between the ray through the projected 3D edge point and the ray through the corresponding 2D color edge.

Figure 2: 2D distance and 3D distance.

3.3 Pose estimation by the M-estimator

The relative pose P is estimated by minimizing the total error

  E(P) = \sum_i \rho(z_i),                                      (4)

where \rho is a robust error function. Setting the derivative to zero gives

  \partial E / \partial P = \sum_i (\partial \rho(z_i) / \partial z_i)(\partial z_i / \partial P) = 0.   (5)

Defining the weight function

  w(z) = (1/z)(\partial \rho / \partial z),                     (6)

this becomes the weighted least-squares condition

  \partial E / \partial P = \sum_i w(z_i) \, z_i \, (\partial z_i / \partial P) = 0.   (7)

We use the Lorentzian function, whose weight is

  w(z) = 1 / (1 + (1/2)(z / \sigma)^2),                         (8)

where \sigma is a scale parameter.

4. Experiments

4.1 Simulation

The proposed 3D-2D registration was first evaluated in simulation (Fig. 3 shows the simulation model; Fig. 4 shows the registration from the initial position). Table 1 summarizes the resulting position errors.
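A toy sketch of the robust estimation loop in Eqs. (4)-(8): here the pose P is reduced to a 2D translation between corresponding edge points, so the iteratively reweighted update has a closed form (a weighted mean of the residuals). The data, function names, and parameter values are illustrative assumptions, not the paper's implementation; in particular, the way \theta is obtained from d and the focal length is an assumption:

```python
import numpy as np

def lorentzian_weight(z, sigma):
    # Weight of the Lorentzian error function, Eq. (8):
    # w(z) = 1 / (1 + (1/2)(z / sigma)^2)
    return 1.0 / (1.0 + 0.5 * (z / sigma) ** 2)

def edge_distance_3d(d, Z, f):
    # Eq. (3)-style conversion of an image-plane distance d into a 3D
    # distance z = Z sin(theta), assuming theta = atan(d / f) for
    # focal length f.
    return Z * np.sin(np.arctan2(d, f))

def robust_translation(src, dst, sigma=1.0, iters=50):
    """Minimize E(t) = sum_i rho(|dst_i - (src_i + t)|) (cf. Eq. (4))
    by iteratively reweighted least squares (cf. Eq. (7))."""
    t = np.zeros(2)
    for _ in range(iters):
        r = dst - (src + t)               # residual vectors
        z = np.linalg.norm(r, axis=1)     # residual magnitudes z_i
        w = lorentzian_weight(z, sigma)   # outliers get weight ~ 0
        t = t + (w[:, None] * r).sum(axis=0) / w.sum()  # weighted mean step
    return t

# Example: recover a translation despite a few gross outliers.
rng = np.random.default_rng(0)
src = rng.uniform(-10.0, 10.0, size=(40, 2))
true_t = np.array([3.0, -2.0])
dst = src + true_t + rng.normal(0.0, 0.05, size=src.shape)
dst[:4] += 40.0                           # four gross mismatches
t_est = robust_translation(src, dst)
```

Because the Lorentzian weight decays as 1/z^2 for large residuals, the four outliers contribute almost nothing and `t_est` stays close to `true_t`, whereas an ordinary least-squares mean would be pulled far off.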

Figure 3: Simulation model.

Figure 4: Simulation results: (1) initial position, (2) after 1 iteration, (3) after 2 iterations, (4) after 5 iterations.

Table 1: Position errors [mm (pixel)]

           x             y              z          theta [deg.]
Average    0.14 (0.12)   -0.20 (0.16)   -967.81    4.0
STD        0.13 (0.11)    1.89 (1.56)      5.94    4.1

4.2 Registration of a dish

Range and reflectance images of a dish were acquired with the Cyrax laser range sensor, and a color image with a digital camera (Nikon D1). Fig. 5 shows the reflectance image and Fig. 6 the texture image. 3D edges were extracted from the CAD model and reflectance image by Eq. (2), 2D edges from the color image, and the relative pose was estimated by the M-estimator. Fig. 7 shows the intensity edges aligned with the reflectance edges, and Fig. 8 the color texture mapped onto the dish model.

Figure 5: Reflectance image of the dish.

Figure 6: Texture image of the dish.

Figure 7: Aligned intensity edges with reflectance edges.

Figure 8: Aligned color texture on the dish.

4.3 The Kamakura Great Buddha

The method was also applied to a large-scale object, the Great Buddha of Kamakura, whose 3D geometric model was constructed in [11] (Fig. 9). Fig. 10 shows the reflectance image and Fig. 11 the texture image. The M-estimator aligned the edges of Figs. 10 and 11 (Fig. 12), and the color texture was then mapped onto the geometric model (Figs. 13 and 14).

Figure 9: Geometric model of the Kamakura Great Buddha.

Figure 10: Reflectance image of the Kamakura Great Buddha.

Figure 11: Texture image of the Kamakura Great Buddha.

5. Conclusion

We proposed a calibration method for texture mapping that extracts 2D edges from the color image with the Canny operator, extracts 3D edges from the range data and the reflectance image, and estimates the relative pose between the two sensors with a robust M-estimator. Experiments with a simulation model, a dish, and the Great Buddha of Kamakura confirmed the effectiveness of the method.

This work was supported by CREST.

References

[1] (in Japanese), Vol. 16, No. 6, pp. 29-32, 1998.

Figure 12: Aligned intensity edges with reflectance edges.

Figure 13: Aligned color texture on the 3D geometric model.

[2] Y. Sato, M. D. Wheeler, and K. Ikeuchi, "Object shape and reflectance modeling from observation", Proceedings of ACM SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pp. 379-387, August 1997.

[3] K. Nishino, Y. Sato, and K. Ikeuchi, "Eigen-Texture Method: Appearance Compression based on 3D Model", Proc. of Computer Vision and Pattern Recognition '99, Vol. 1, pp. 618-624, June 1999.

[4] I. Sato, Y. Sato, and K. Ikeuchi, "Acquiring a radiance distribution to superimpose virtual objects onto a real scene", IEEE Trans. Visualization and Computer Graphics, Vol. 5, No. 1, pp. 1-12, January 1999.

[5] (in Japanese), Trans. IEICE D-II, Vol. J83-D-II, No. 2, pp. 525-534, 2000.

[6] Mark D. Elstrom and Philip W. Smith, "Stereo-Based Registration of Multi-Sensor Imagery for Enhanced Visualization of Remote Environments", Proceedings of the 1999 IEEE International Conference on Robotics and Automation, pp. 1948-1953, 1999.

[7] Ioannis Stamos and Peter K. Allen, "Integration of Range and Image Sensing for Photorealistic 3D Modeling", Proceedings of the 2000 IEEE International Conference on Robotics and Automation, pp. 1435-1440, 2000.

[8] P. Viola and W. M. Wells III, "Alignment by maximization of mutual information", International Journal of Computer Vision, Vol. 24, No. 2, pp. 137-154, 1997.

[9] M. D. Wheeler, "Automatic Modeling and Localization for Object Recognition", Technical Report (Ph.D. Thesis), CMU-CS-96-188, School of Computer Science, Carnegie Mellon University, October 1996.

[10] M. D. Wheeler and Katsushi Ikeuchi, "Sensor Modeling, Probabilistic Hypothesis Generation, and Robust Localization for Object Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 3, March 1995.

[11] Daisuke Miyazaki, Takeshi Ooishi, Taku Nishikawa, Ryusuke Sagawa, Ko Nishino, Takashi Tomomatsu, Yutaka Takase, and Katsushi Ikeuchi, "The Great Buddha Project: Modelling Cultural Heritage through Observation", VSMM2000 (6th International Conference on Virtual Systems and Multimedia), pp. 138-145, 2000.

Figure 14: The Kamakura Great Buddha with the color texture image.