M. D. Wheler Cyra Technologies, Inc. 3 3 CAD albedo Mapping textures on 3D geometric model using reflectance image Ryo Kurazume M. D. Wheler Katsushi Ikeuchi The University oftokyo Cyra Technologies, Inc. The University oftokyo Texture mapping, that is the method to map current color images on a 3D geometric model measured by a range sensor, is the key technique of the photometric modeling for the Virtual Reality. Usually range and color images are obtained from different viewing positions, through two independent range and color sensors. Thus, in order to map those color images, current textures, on the geometric model, it is necessary to determine relative relations between these two viewpoints. In this paper, we propose a new calibration method for the texture mapping using reflectance images and the iterative pose estimation using robust M-estimator. 1
1 (VR) "modelingfrom-reality (MFR)" 3 1. 2. 3. (1) (2) (3) 3 [1] [2] [3] 2 [4] OGIS Cyberwares 3 2 3 [10], [9] 2 Viola[8] Allen [7] [5] albedo Elstrom [6] 1. 2. 3. Closed-form 4. 3 2
ERIM Perceptron Cyrax Canny 3 Cyrax Cyrax G 3 3 ffl 3 3 ffl 3 3 ffl 2 2 Fig.1 3 3 2 2 3D -2D Edge line Sample points Figure 1: Edge sampling 3D -2D 3 1. 3 3 2. 3 2 3. 3.1 Canny 3
3 P i = ρ visible n v 0 invisible otherwise (1) n v 3 P i = ρ edge 0 <n v» t not edge otherwise (2) t w(z) w(z) = 1 @ρ (6) z @z @E @P = X i w(z i )z i @z i @P =0 (7) w(z) w(z) = 1+ 1 z 2 1 (8) 2 ff P 3.2 3 2 3 2 2 3 2D Color Edge θ Z (x,y,z) d y u 3D Edge Projection of 3D Edge 3.3 M 2 3 M 2 3 3 Fig.2 2 3 3 3 z i z i = Z i sin (3) Z i 3 3 E E(P )= X i ρ(z i ) (4) ρ P E(P ) P @E @P = X i @ρ(z i ) @z i @z i @P =0 (5) Figure 2: 2D distance and 3D distance 4 4.1 3D -2D 10mm 20mm 1m 3 300dpi 70mm Fig.3 Fig.4 50mm 20 10 y Table 1 Table 1 z 4
y x z Figure 3: Simulation model Fig. 7 M CAD Fig. 8 3D edge 2D edge (1) Initial position (2) After 1 iteration (3) After 2 iterations (4) After 5 iterations Figure 4: Simulation results Figure 5: Reflectance image of the dish Eq.2 3D -2D Figure 6: Texture image of the dish 4.2 Cyrax 3 Fig. 5 Fig. 6 (Nikon, D1) Fig. 5 Table 1: Position errors [mm (pixel)] x y z [deg.] Average 0.14 (0.12) -0.20 (0.16) -967.81 4.0 STD. 0.13 (0.11) 1.89 (1.56) 5.94 4.1 Figure 7: edges Aligned intensity edges with reflectance 5
Figure 8: Aligned color texture on the dish 4.3 Figure 10: Reflectance image of the Kamakura Great Buddha 3 [11] Fig. 9 Figure 11: Buddha Texture image of the Kamakura Great Figure 9: Geometric model of the Kamakura Great Buddha Fig. 10 Fig. 11 Fig. 12 M Fig. 10 Fig. 11 Fig. 13 Fig. 14 5 Canny 3 3 3 2 3 2 3 M CREST References [1],,, Vol.16, No.6, pp.29-32, 1998. 6
radiance distribution to superimpose virtual objects onto a real scene," IEEE Trans Visualization and Computer Graphics, Vol. 5, No. 1, pp.1-12, January 1999. [5] D-II, Vol,J83-D-II, No.2, pp.525-534, 2000. [6] Mark D. Elstrom and Philip W. Smith,Stereo- Based Registration of Multi-Sensor Imagery for Enhanced Visualization of Remote Environments,Proceedings of the 1999 IEEE International Conference on Robotics & Automation, pp.1948-1953, 1999. Figure 12: Aligned intensity edges with reflectance edges [2] Y. Sato, M. D. Wheeler, and K. Ikeuchi, "Object shape and reflectance modeling from observation", Proceedings of ACM SIGGRAPH 97, In Computer Graphics Proceedings, Annual Conference Series 1997, ACM SIGGRAPH, pp.379-387, August 1997. [3] K.Nishino, Y.Sato and K.Ikeuchi, "Eigen- Texture Method: Appearance Compression based on 3D Model", in Proc. of Computer Vision and Pattern Recognition '99, vol.1, pp.618-624, Jun., 1999. [4] I. Sato, Y. Sato, and K. Ikeuchi, "Acquiring a [7] Ioannis Stamos and Peter K. Allen, Integration of Range and Image Sensing for Photorealistic 3D Modeling, Proceedings of the 2000 IEEE International Conference on Robotics and Automation, pp.1435-1440, 2000. [8] P. Viola and W.M. Wells III, Alignment by maximization of mutual information, International Journal of Computer Vision, Vol.24, No.2, pp.137-154, 1997. [9] M. D. Wheeler, "Automatic Modeling and Localization for Object Recognition", Technical Report (Ph.D. Thesis), CMU-CS-96-188, School of Computer Science, Carnegie Mellon University, October, 1996. [10] M. D. Wheeler and Katsushi Ikeuchi, "Sensor Modeling, Probabilistic Hypothesis Generation, and Robust Localization for Object Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence,Vol. 17, No. 3, March 1995. [11] Daisuke Miyazaki, Takeshi Ooishi, Taku Nishikawa, Ryusuke Sagawa, Ko Nishino, Takashi Tomomatsu, Yutaka Takase, Katsushi Ikeuchi,The Great Buddha Project: Modelling Cultural Heritage through Observation,VSMM2000 (6th international conference on virtual systems and multimedia), pp.138-145, 2000. Figure 13: Aligned color texture on the 3D geometric model 7
Figure 14: The Kamakura Great Buddha with the color texture image 8