Focus Sweep Imaging for Depth From Defocus

SHUHEI MATSUI,1 HAJIME NAGAHARA1 and RIN-ICHIRO TANIGUCHI1

Depth From Defocus (DFD) recovers scene depth from the defocus appearance of images. DFD usually uses two differently focused images, one near-focused and the other far-focused, and estimates depth from the size of the defocus blur in the captured images. However, the depth estimation is not very accurate, since the point spread function (PSF) of a regular circular aperture changes its size and shape only moderately along the depth. In recent years, the coded aperture technique, which uses a special pattern as the aperture to engineer the PSF, has been used to improve the accuracy. DFD applications often require recovering an all-in-focus image as well as the depth estimate. A coded aperture has a disadvantage for image deblurring, since deblurring requires a high SNR in the captured images, while the aperture pattern always attenuates the incoming light to control the PSF and thus lowers the input SNR. In this paper, we propose a new DFD approach that uses focus changes during the image integration time to engineer the PSF. We can capture higher-SNR input images, since we control the PSF with a wide-open aperture, unlike a coded aperture. We confirmed the effectiveness of the method in experiments, in comparison with conventional DFD and coded aperture approaches.

1. Introduction

DFD recovers 3D scene depth from defocus blur 1),2). Conventional DFD captures two images with different focus settings, but the PSF of a regular circular aperture varies only slowly with depth, which limits the accuracy of the estimation 3). Levin et al. 4) engineer the PSF with a coded aperture, and Zhou et al. 5) use a pair of coded apertures; however, coded apertures attenuate light and reduce the SNR of the captured images. PSF engineering has also been studied for extending the depth of field without blocking light 10),11).

1 Graduate School of Information Science and Electrical Engineering, Kyushu University
In this paper we engineer depth-dependent PSFs without sacrificing SNR by changing the focus during the image integration time. The exposure is split into two halves, and the focus is swept over a different range in each half, so that the two captured images have two different, depth-dependent PSFs while the aperture stays wide open. The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 describes the half-sweep imaging model and its PSFs. Section 4 presents depth estimation and all-in-focus image recovery based on the half-sweep PSFs. Sections 5 and 6 evaluate the method in simulation and on a real scene, and Section 7 concludes.

2. Related Work

Engineering the PSF through the aperture has been studied for improving DFD 3). Levin et al. 4) designed a coded aperture pattern by maximizing the Kullback-Leibler (KL) divergence between the distributions of images blurred by different depths, and recover depth and an all-in-focus image from a single coded image. Zhou et al. 5) optimized a pair of coded apertures so that the two captured images complement each other for DFD and deblurring. Programmable and controllable apertures have also been proposed 7),8), and Green et al. 9) split the aperture into four parts to capture multiple images simultaneously, at the cost of SNR. Levin 6) analyzed depth estimation from coded aperture sets. All of these approaches control the PSF by blocking part of the incoming light at the aperture, so the captured images have a lower SNR. In contrast, the proposed method obtains two depth-dependent PSFs with a fully open aperture, as the coded aperture pair of Zhou et al. 5) and the multiple apertures of Green et al. 9) cannot, and therefore does not sacrifice SNR.
PSF engineering has also been used to extend the depth of field. Dowski and Cathey 10) use wavefront coding, and Levin et al. 11) analyze computational cameras for depth of field extension in the 4D frequency domain. Focus sweep imaging 12),13) moves the focus across the scene during a single exposure and produces a PSF that is nearly invariant to depth, which allows deblurring without knowing the depth but, for the same reason, cannot be used to estimate depth. The proposed half-sweep imaging splits the sweep into two exposures so that the two resulting PSFs do depend on depth, while keeping the light efficiency of a wide-open aperture, unlike the coded aperture pair of Zhou et al. 5).

3. Half Sweep Imaging

We first model the defocus blur of a lens, following the focus sweep model of 12).

Fig. 1  Projective geometry of the lens (scene point M at distance u, lens of focal length f and aperture diameter a, focused position v, image point m, sensor at position p).

Figure 1 shows the projective geometry of the lens. A scene point M at distance u from a lens with focal length f is focused at the distance v that satisfies the thin-lens law

  \frac{1}{f} = \frac{1}{u} + \frac{1}{v}.   (1)

When the image sensor is placed at p = v, M is imaged as a sharp point m. When the sensor is at a position p \neq v, M is spread over a blur circle whose diameter b is

  b(p) = \frac{a\,|v - p|}{v},   (2)

where a is the aperture diameter. We model the PSF P(r, u, p) as a pillbox function,

  P(r, u, p) = \frac{4}{\pi b(p)^2}\,\Pi\!\left(\frac{r}{b(p)}\right),   (3)

where r is the distance from the blur center m, and \Pi(x) is 1 for x \leq 1/2 and 0 otherwise.
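To make the blur model concrete, the following Python sketch evaluates Eqs. (1)-(3) numerically. It is only an illustration of the equations above; the function names, the use of millimetres, and the small guard on b(p) are our own choices.

```python
import numpy as np

def focus_position(u, f):
    """Focused position v for an object at distance u, from the thin-lens law (Eq. (1))."""
    return u * f / (u - f)

def blur_diameter(p, v, a):
    """Blur-circle diameter b(p) on a sensor placed at position p (Eq. (2))."""
    return a * abs(v - p) / v

def pillbox_psf(r, u, p, f, a):
    """Pillbox PSF P(r, u, p) of Eq. (3): constant inside a disc of diameter b(p), zero outside."""
    v = focus_position(u, f)
    b = max(blur_diameter(p, v, a), 1e-9)   # guard against b = 0 at exact focus
    r = np.asarray(r, dtype=float)
    return np.where(r <= b / 2.0, 4.0 / (np.pi * b * b), 0.0)
```

For example, with f = 9 mm and u = 2000 mm, focus_position returns v = 9.04 mm, the first entry of Table 1 below.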
Fig. 2  Half sweep imaging. (a) Sensor motion: the sensor sweeps from p_0 through p_1 to p_2 along the optical axis, between the near and far ends of the scene depth range. (b) Sweep motion and image integrations: the exposure is split into e_1 = [t_0, t_1] and e_2 = [t_1, t_2], which are read out as the two images f_1 and f_2.

Fig. 3  Half sweep PSF. (a) PSF profiles h_1, h_2 and h_all. (b) Log of the power spectra H_1, H_2 and H_all.

As shown in Fig. 2-a, the sensor is swept along the optical axis from p_0 to p_2 during the exposure, so that its position at time t is p(t) = st + p_0, where s is the sweep speed. As shown in Fig. 2-b, the exposure is divided into the two integration periods e_1 = [t_0, t_1] and e_2 = [t_1, t_2], which produce the images f_1 and f_2; during f_1 the sensor moves from p_0 to p_1, and during f_2 from p_1 to p_2 (Fig. 2-a). The two captured images are modeled as

  f_i = h_i * f_0 + \xi, \quad i = 1, 2,   (4)

where f_0 is the latent all-in-focus image, h_i is the PSF of the i-th half sweep, \xi is noise, and * denotes convolution. The PSF h_i is the pillbox PSF of Eq. (3) accumulated over the sensor positions covered by the i-th half,

  h_i(r, u) = \frac{1}{s}\int_{p_{i-1}}^{p_i} P(r, u, p)\, dp.   (5)

Substituting Eqs. (1)-(3) and writing \lambda_p = 1 when b(p) \geq 2r and \lambda_p = 0 otherwise, the integral evaluates, for a scene point whose focused position v = uf/(u-f) lies inside the swept interval, to

  h_i(r, u) = \frac{2uf}{(u-f)\pi a s}\left(\frac{\lambda_{p_{i-1}} + \lambda_{p_i}}{r} - \frac{2\lambda_{p_{i-1}}}{b(p_{i-1})} - \frac{2\lambda_{p_i}}{b(p_i)}\right);   (6)

when v lies outside [p_{i-1}, p_i], the closed form follows from Eq. (5) in the same way with the correspondingly clipped integration limits. Figure 3-a plots the PSF profiles h_1 and h_2 of the two half sweeps for a scene point whose focused position v lies between p_0 and p_2, together with their mean h_all = (h_1 + h_2)/2. The individual PSFs h_1 and h_2 change with depth, whereas h_all equals the PSF of a full focus sweep 12),13) and is nearly invariant to depth. Figure 3-b shows the log power spectra H_1, H_2 and H_all of h_1, h_2 and h_all.
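The half-sweep PSFs can also be obtained by evaluating Eq. (5) numerically instead of using the closed form (6). The sketch below does so with a simple Riemann sum, reusing pillbox_psf from the previous sketch; the sample values in the comments are illustrative choices, not the paper's settings.

```python
import numpy as np

def half_sweep_psf(r, u, p_lo, p_hi, f, a, s, n_steps=2000):
    """Half-sweep PSF h_i(r, u) of Eq. (5): the pillbox PSF of Eq. (3) accumulated
    while the sensor moves from p_lo to p_hi at sweep speed s (Riemann sum over p)."""
    p_samples = np.linspace(p_lo, p_hi, n_steps)
    dp = p_samples[1] - p_samples[0]
    r = np.asarray(r, dtype=float)
    h = np.zeros_like(r)
    for p in p_samples:
        h += pillbox_psf(r, u, p, f, a) * dp
    return h / s

# Illustrative use: the two half-sweep PSFs and their mean, the full-sweep PSF h_all
# r = np.linspace(0.0, 0.2, 400)                       # radius [mm]
# h1 = half_sweep_psf(r, u=312.1, p_lo=9.04, p_hi=9.565, f=9.0, a=9.0 / 1.4, s=1.0)
# h2 = half_sweep_psf(r, u=312.1, p_lo=9.565, p_hi=10.09, f=9.0, a=9.0 / 1.4, s=1.0)
# h_all = (h1 + h2) / 2.0                               # cf. Fig. 3-a
```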
H_1 and H_2 preserve different frequency components, and their zero crossings shift with depth, so the two spectra complement each other for discriminating depth. H_all, on the other hand, is broadband without zero crossings, like the PSF of a full focus sweep 12),13), and is therefore well suited to deblurring (cf. Levin et al. 4)). Unlike the coded aperture pair of Zhou et al. 5), these PSFs are obtained with a fully open aperture, so the captured images keep a high SNR.

4. DFD and All-in-Focus Image Recovery

DFD from the two half-sweep images is formulated in the frequency domain. Taking the Fourier transform of Eq. (4) for a hypothesized depth d gives

  F_i = F_0 H_i^{(d)} + N, \quad i = 1, 2,   (7)

where F_i and F_0 are the Fourier transforms of the captured image f_i and the all-in-focus image f_0, H_i^{(d)} is the Fourier transform of the PSF h_i at depth d, and N is the noise spectrum. Given a blurred spectrum F and a PSF spectrum H, the all-in-focus image can be estimated by Wiener deconvolution,

  \hat{F}_0 = \frac{F\,\bar{H}}{|H|^2 + |C|^2},   (8)

where \hat{F}_0 is the estimated spectrum, \bar{H} is the complex conjugate of H, |H|^2 = H\bar{H}, and C is a constant determined by the SNR of the captured image. To apply Eq. (8) to the two half-sweep images with the combined PSF h_all = (h_1 + h_2)/2 of Section 3, we define

  F_{all} = \frac{F_1 + F_2}{2}, \qquad H_{all}^{(d)} = \frac{H_1^{(d)} + H_2^{(d)}}{2}.   (9)

Substituting Eq. (9) into Eq. (8) gives

  \hat{F}_0^{(d)} = \frac{(F_1 + F_2)\,(\bar{H}_1^{(d)} + \bar{H}_2^{(d)})}{|H_1^{(d)} + H_2^{(d)}|^2 + 4|C|^2}.   (10)

For each hypothesized depth d we deblur the images with Eq. (10), re-blur the result with the corresponding PSFs, and measure the residual

  W^{(d)} = \sum_{i=1,2}\left|\mathrm{IFFT}\!\left(\hat{F}_0^{(d)} H_i^{(d)} - F_i\right)\right|,   (11)

where \hat{F}_0^{(d)} is given by Eq. (10) and IFFT denotes the 2D inverse Fourier transform. The residual W^{(d)}(x, y) becomes small at pixels whose true depth is d. The depth map U is therefore obtained per pixel as

  U(x, y) = \arg\min_{d \in D} W^{(d)}(x, y),   (12)

where D is the set of depth candidates. Finally, the all-in-focus image I is composed by selecting, at each pixel, the deblurred image of the estimated depth,

  I(x, y) = \hat{f}_0^{(U(x,y))}(x, y),   (13)

where \hat{f}_0^{(d)} = \mathrm{IFFT}(\hat{F}_0^{(d)}).

5. Simulation

We compared the proposed half-sweep DFD with conventional DFD and with the coded aperture pair of Zhou et al. 5) in simulation. The simulated lens has a focal length of 9 mm and an aperture of f/1.4. Table 1 lists the relation, given by Eq. (1), between the object depth u and the focused position v: depths from u = 83 mm to 2000 mm correspond to focus positions from v = 9.04 mm to 10.09 mm. The depth range is divided into 20 levels, whose focus positions are spaced at roughly \Delta v = 0.055 mm.
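A compact sketch of the estimation procedure of Eqs. (7)-(13) is given below. It assumes the OTF stacks H1_stack[d] and H2_stack[d] (Fourier transforms of h_1 and h_2 at each candidate depth, at the image resolution) have been precomputed; the scalar noise term C and all function names are our own simplifications.

```python
import numpy as np

def dfd_half_sweep(f1, f2, H1_stack, H2_stack, C=0.01):
    """Depth map and all-in-focus image from two half-sweep images (Eqs. (7)-(13)).

    f1, f2      : captured half-sweep images, 2-D arrays
    H1_stack[d] : OTF of h_1 for candidate depth d, same shape as the images
    H2_stack[d] : OTF of h_2 for candidate depth d
    C           : scalar noise-to-signal term of the Wiener filter (Eq. (8))
    """
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    n_depths = len(H1_stack)
    residual = np.empty((n_depths,) + f1.shape)       # W^(d)(x, y), Eq. (11)
    deblurred = np.empty_like(residual)               # spatial-domain estimates of f_0
    for d in range(n_depths):
        H1, H2 = H1_stack[d], H2_stack[d]
        # Eq. (10): Wiener estimate using both images and the combined PSF
        F0_hat = (F1 + F2) * np.conj(H1 + H2) / (np.abs(H1 + H2) ** 2 + 4.0 * C ** 2)
        deblurred[d] = np.real(np.fft.ifft2(F0_hat))
        # Eq. (11): re-blur with each hypothesized OTF and compare with the inputs
        residual[d] = (np.abs(np.real(np.fft.ifft2(F0_hat * H1 - F1)))
                       + np.abs(np.real(np.fft.ifft2(F0_hat * H2 - F2))))
    U = np.argmin(residual, axis=0)                   # depth map, Eq. (12)
    ys, xs = np.indices(U.shape)
    I = deblurred[U, ys, xs]                          # all-in-focus image, Eq. (13)
    return U, I
```

In practice one would typically smooth or aggregate the residual over a small window before the per-pixel arg min, to stabilize the depth estimate in textureless regions.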
Table 1  Relation between object depth and focus position (f = 9 mm)

  Object depth u [mm]:   2000.0  803.1  524.6  390.7  312.1  260.3  223.6  196.3  175.1  158.2
  Focus position v [mm]:   9.04   9.10   9.15   9.21   9.26   9.32   9.37   9.43   9.48   9.54

  Object depth u [mm]:    144.5  133.1  123.4  115.1  108.0  101.8   96.2   91.4   87.0   83.0
  Focus position v [mm]:   9.59   9.65   9.70   9.76   9.81   9.87   9.92   9.98  10.03  10.09

Fig. 4  Estimated depth map. (a) Ground truth, (b) conventional DFD, (c) coded aperture pair, (d) half sweep.

Fig. 5  Error map of deblurred image. (a) True texture, (b) conventional DFD, (c) coded aperture pair, (d) half sweep.

The simulated scene contains the 20 depth levels of Table 1; the ground-truth depth map is shown in Fig. 4-a with the Jet colormap, and the true texture in Fig. 5-a. For the proposed method, the focus is swept from p_0 = 9.04 mm to p_2 = 10.09 mm (Fig. 2-a) with p_1 at the midpoint, and the half-sweep PSFs h_1 and h_2 of Eq. (6) are generated for each of the 20 depths. Conventional DFD uses two images focused at p_0 and p_2, and the coded aperture pair of Zhou et al. 5) uses two images captured with the optimized aperture pair at the same two focus settings. Figure 4 compares the estimated depth maps of (b) conventional DFD, (c) the coded aperture pair and (d) the proposed half sweep against the ground truth (a), and Fig. 5 shows the corresponding error maps of the deblurred textures. The depth maps are evaluated by the RMS (Root Mean Square) error in depth level and the recovered textures by the PSNR (Peak Signal-to-Noise Ratio); the results are summarized in Table 2.
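The focus positions in Table 1 follow directly from the thin-lens law of Eq. (1) with f = 9 mm; the short check below reproduces them to within rounding.

```python
f = 9.0  # focal length [mm]
depths = (2000.0, 803.1, 524.6, 390.7, 312.1, 260.3, 223.6, 196.3, 175.1, 158.2,
          144.5, 133.1, 123.4, 115.1, 108.0, 101.8, 96.2, 91.4, 87.0, 83.0)
for u in depths:
    v = u * f / (u - f)          # Eq. (1) solved for v
    print(f"u = {u:6.1f} mm  ->  v = {v:5.2f} mm")
```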
Table 2  Depth and deblurring error

  Method                DepthMap (RMS)   Texture (PSNR)
  Conventional DFD          26.98          30.21 [dB]
  Coded aperture pair       25.91          32.24 [dB]
  Half sweep                 7.81          39.98 [dB]

As Table 2 shows, the proposed half sweep gives by far the smallest RMS depth error and the highest PSNR of the recovered texture among the three methods.

6. Real Scene Experiment

Since no off-the-shelf camera can sweep the focus within a single exposure and split the integration, we simulated half-sweep imaging from a focal stack, in the same manner as 12),13). We used a Canon EOS 20D with a 30 mm lens at f/# = 1.4 and captured a focal stack of 14 images. The scene spans object depths from about u = 671 mm to u = 4840 mm, which by Eq. (1) correspond to focus positions from v = 30.2 mm to 31.4 mm; the 14 focus settings sample this range, with the middle setting at v = 30.8 mm. We set p_0 = 30.2 mm, p_1 = 30.8 mm and p_2 = 31.4 mm, and synthesized the half-sweep image f_1 from the slices focused between p_0 and p_1 and f_2 from the slices focused between p_1 and p_2, approximating the PSF integration of Eq. (5) (Fig. 6).

Fig. 6  Simulated half sweep imaging from focal stack.

Fig. 7  Experimental results of real scene. (a) Input image f_1, (b) input image f_2, (c) depth map, (d) all-in-focus image, (e) close up: backward, (f) close up: forward.

Figure 7 shows the results. Figures 7-a and 7-b are the two input images f_1 and f_2, Fig. 7-c is the estimated depth map, and Fig. 7-d is the recovered all-in-focus image. Figures 7-e and 7-f show close-ups of the backward and forward regions of Fig. 7-d; both regions are deblurred consistently with the estimated depth map, which confirms that the proposed DFD works on a real scene.
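The focal-stack synthesis described above can be sketched as follows; slice selection by focus position and simple averaging are assumptions of this illustration, and the array layout and function name are ours.

```python
import numpy as np

def simulate_half_sweep(stack, focus_positions, p0, p1, p2):
    """Approximate the two half-sweep images f_1 and f_2 from a focal stack by
    averaging the slices whose focus position lies in [p0, p1] and [p1, p2].

    stack           : array of shape (n, H, W), one image per focus setting
    focus_positions : length-n array of focus positions [mm] for the slices
    """
    v = np.asarray(focus_positions)
    f1 = stack[(v >= p0) & (v <= p1)].mean(axis=0)
    f2 = stack[(v >= p1) & (v <= p2)].mean(axis=0)
    return f1, f2

# Settings of Section 6: 14 slices covering v = 30.2 ... 31.4 mm
# f1, f2 = simulate_half_sweep(stack, np.linspace(30.2, 31.4, 14), 30.2, 30.8, 31.4)
```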
7. Conclusion

We proposed half-sweep imaging, which engineers two depth-dependent PSFs by sweeping the focus during a single exposure that is split into two image integrations, and a DFD method that estimates a depth map and recovers an all-in-focus image from the two captured images. Because the PSFs are controlled with a wide-open aperture, the input images keep a high SNR. Simulation and real-scene experiments confirmed that the proposed method outperforms conventional two-focus DFD and the coded aperture pair of Zhou et al. 5).

References

1) A. Pentland: A New Sense for Depth of Field, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 9, No. 4, pp. 423-430, 1987.
2) M. Subbarao and N. Gurumoorthy: Depth Recovery from Blurred Edges, Proc. CVPR, pp. 498-503, 1988.
3) IEICE Trans., Vol. J82-D-II, No. 11, pp. 1912-1920, 1999 (in Japanese).
4) A. Levin, R. Fergus, F. Durand and W. Freeman: Image and Depth from a Conventional Camera with a Coded Aperture, ACM Transactions on Graphics, Vol. 26, No. 3, 2007.
5) C. Zhou, S. Lin and S. Nayar: Coded Aperture Pairs for Depth from Defocus, Proc. IEEE International Conference on Computer Vision, 2009.
6) A. Levin: Analyzing Depth from Coded Aperture Sets, Proc. European Conference on Computer Vision, Sep. 2010.
7) H. Nagahara, C. Zhou, T. Watanabe, H. Ishiguro and S. K. Nayar: Programmable Aperture Camera Using LCoS, Proc. European Conference on Computer Vision, Sep. 2010.
8) C. Zhou and S. K. Nayar: IPSJ SIG Technical Report, Vol. CVIM-174, No. 28, 2010.
9) P. Green, W. Sun, W. Matusik and F. Durand: Multiple-Aperture Photography, Proc. ACM SIGGRAPH, 2007.
10) E. R. Dowski and W. T. Cathey: Single-Lens Single-Image Incoherent Passive-Ranging Systems, Applied Optics, Vol. 33, No. 29, Oct. 1994.
11) A. Levin, S. Hasinoff, P. Green, F. Durand and W. T. Freeman: 4D Frequency Analysis of Computational Cameras for Depth of Field Extension, ACM Transactions on Graphics (Proc. SIGGRAPH), 2009.
12) H. Nagahara, S. Kuthirummal, C. Zhou and S. Nayar: Flexible Depth of Field Photography, Proc. European Conference on Computer Vision, 2008.
13) S. Kuthirummal, H. Nagahara, C. Zhou and S. K. Nayar: Flexible Depth of Field Photography, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 33, 2011 (to appear).