
Focus Sweep Imaging for Depth From Defocus

SHUHEI MATSUI,1 HAJIME NAGAHARA1 and RIN-ICHIRO TANIGUCHI1

1 Graduate School of Information Science and Electrical Engineering, Kyushu University

Depth From Defocus (DFD) recovers scene depth from defocus appearances in images. DFD usually uses two images with different focus settings, one focused near and the other far, and estimates depth from the size of the defocus blur in the captured images. The depth estimation is not very accurate, however, because the point spread function (PSF) of a regular circular aperture changes only moderately in size and shape with depth. In recent years, the coded aperture technique, which places a special pattern in the aperture to engineer the PSF, has been used to improve this accuracy. DFD applications often require an all-in-focus image in addition to the depth estimate, and coded apertures are at a disadvantage for this deblurring step: deblurring needs a high SNR in the captured images, but the coded pattern always attenuates the incoming light in order to control the PSF and therefore lowers the input image SNR. In this paper, we propose a new DFD approach that engineers the PSF by changing the focus during the image integration time. Since the PSF is controlled by the focus sweep rather than by stopping down the aperture, we can capture higher-SNR input images with a wide aperture setting, unlike coded aperture. We confirmed the effectiveness of the method in experiments comparing it with previous DFD and coded aperture approaches.

1. Introduction

Depth From Defocus (DFD) estimates the 3D structure of a scene from the defocus blur observed in captured images 1),2). Conventional DFD captures two images with different focus settings and estimates depth from the difference in blur between them; because the PSF of an ordinary circular aperture varies only gradually with depth, the estimate is not very accurate 3). Coded apertures, such as those of Levin et al. 4) and Zhou et al. 5), engineer the PSF so that the blur is more discriminative with respect to depth, but the aperture pattern blocks part of the incoming light, lowering the SNR of the captured images and degrading the quality of the deblurred all-in-focus image. In this paper, we instead engineer the PSF by sweeping the focus during exposure, in the spirit of computational imaging approaches that control the PSF optically 10),11). Because the aperture can be kept fully open, the input images retain a high SNR while the resulting PSFs remain depth dependent and thus usable for DFD.

The remainder of this paper is organized as follows. Section 2 reviews related work on PSF engineering for DFD and deblurring. Section 3 describes the proposed half-sweep imaging and its PSF. Section 4 presents the depth estimation and deblurring algorithm. Sections 5 and 6 report simulation and real-scene experiments, and Section 7 concludes the paper.

2. Related Work

Several methods improve DFD by engineering the PSF 3). Levin et al. 4) designed a coded aperture pattern by maximizing the KL divergence between the blurs produced at different depths, and estimated depth and an all-in-focus image from a single coded-aperture photograph. Zhou et al. 5) extended this idea to a pair of coded apertures and two captured images, improving both depth estimation and deblurring over single-image coded-aperture DFD. Programmable apertures 7),8) make such patterns switchable at capture time, and Green et al. 9) split the aperture into multiple parts to obtain several differently blurred images in one shot. Levin 6) analyzed the depth discrimination of coded aperture sets. All of these approaches, however, control the PSF by blocking part of the aperture, which attenuates the incoming light and lowers the SNR of the captured images; this loss directly degrades the quality of the deblurred result. In contrast, the proposed method engineers the PSF by sweeping the focus during exposure with the aperture fully open, so that depth discrimination is obtained without sacrificing light.

Other work engineers the PSF optically. Dowski and Cathey 10) shaped the PSF with a phase mask so that depth can be recovered passively from a single image, and Levin et al. 11) analyzed PSF-engineering cameras, including coded apertures 4), in a common 4D frequency framework. Nagahara et al. 12),13) swept the focus across the whole scene during a single exposure; the integrated PSF becomes nearly invariant to depth, so one deconvolution recovers an all-in-focus image with high light efficiency, but depth can no longer be estimated from the blur. The proposed method keeps the light efficiency of the focus sweep of 12),13) but divides the sweep into two halves, so that the two integrated PSFs remain depth dependent and DFD with two captured images becomes possible, without the light loss of the coded aperture pair 5).

3. Half-Sweep Imaging

Fig. 1 Projective geometry of a lens (scene point M, aperture diameter a, lens, image sensor).

We first review the defocus geometry of a thin lens, illustrated in Fig. 1. Let f be the focal length, a the aperture diameter, u the distance from the lens to a scene point M, and v the distance from the lens to the plane on which M is in focus. These quantities are related by the thin lens law

    \frac{1}{f} = \frac{1}{u} + \frac{1}{v}.                      (1)

When the image sensor is placed at a distance p from the lens, the image m of M is blurred into a circle of diameter

    b(p) = \frac{a\,|v - p|}{v}.                                   (2)

Assuming the blur energy is distributed uniformly inside this circle, the PSF at radius r from the blur center is the pillbox

    P(r, u, p) = \frac{4}{\pi b^2}\,\Pi\left(\frac{r}{b}\right),   (3)

where \Pi(x) = 1 for |x| \le 1/2 and 0 otherwise, and b = b(p) is given by Eq. (2).
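As a concrete illustration of Eqs. (1)-(3), the following Python sketch (not from the paper; the grid size, pixel pitch, and function names are our own choices) computes the in-focus distance, the blur diameter, and a sampled pillbox PSF.

```python
import numpy as np

def in_focus_distance(u, f):
    """Eq. (1): sensor-side distance v at which a point at depth u is in focus."""
    return 1.0 / (1.0 / f - 1.0 / u)

def blur_diameter(u, p, f, a):
    """Eq. (2): diameter of the blur circle when the sensor sits at distance p."""
    v = in_focus_distance(u, f)
    return a * abs(v - p) / v

def pillbox_psf(u, p, f, a, grid_half_size=32, pixel_pitch=0.005):
    """Eq. (3): pillbox PSF sampled on a square pixel grid, normalized to unit sum."""
    b = blur_diameter(u, p, f, a)
    coords = np.arange(-grid_half_size, grid_half_size + 1) * pixel_pitch
    xx, yy = np.meshgrid(coords, coords)
    r = np.sqrt(xx ** 2 + yy ** 2)
    if b < pixel_pitch:
        # (Nearly) in focus: the blur circle is smaller than a pixel, use a delta.
        psf = np.zeros_like(r)
        psf[grid_half_size, grid_half_size] = 1.0
        return psf
    psf = np.where(r <= b / 2.0, 4.0 / (np.pi * b ** 2), 0.0)
    return psf / psf.sum()

# Example with the lens used in the simulation of Section 5 (f = 9 mm, f/1.4):
f_mm = 9.0
a_mm = f_mm / 1.4
print(in_focus_distance(2000.0, f_mm))           # ~9.04 mm, matching Table 1
print(blur_diameter(2000.0, 10.09, f_mm, a_mm))  # blur diameter at the far end of the sweep
```

The printed in-focus distance for u = 2000 mm is about 9.04 mm, which agrees with the first entry of Table 1 below.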

Fig. 2 Half-sweep imaging: a. sensor (focus) motion along the optical axis; b. sweep motion and the two image integration intervals.

In half-sweep imaging, the sensor (equivalently, the focus position) is translated along the optical axis at a constant speed s, so that its position at time t is p(t) = s t + p_0, moving from p_0 to p_2 during capture (Fig. 2-a). The sweep is divided into two exposures e_1 and e_2: the first image f_1 is integrated during [t_0, t_1], while the sensor moves from p_0 to the midpoint p_1, and the second image f_2 is integrated during [t_1, t_2], while the sensor moves from p_1 to p_2 (Fig. 2-b). The two captured images are modeled as

    f_i = h_i * f_0 + \xi, \quad i = 1, 2,                        (4)

where f_0 is the latent all-in-focus image, h_i is the PSF accumulated during the i-th half sweep, \xi is noise, and * denotes convolution. Because the sensor keeps moving during the exposure, the half-sweep PSF is the instantaneous pillbox PSF of Eq. (3) integrated over the traversed sensor positions,

    h_i(r, u) = \int_{p_{i-1}}^{p_i} P(r, u, p)\, dp.              (5)

Substituting Eqs. (2) and (3) into Eq. (5) and evaluating the integral gives a closed-form expression for h_i(r, u) in terms of u, f, a, the sweep speed s, the blur diameters b(p_{i-1}) and b(p_i), and indicator terms \lambda_p, where \lambda_p = 1 if b(p) \ge 2r and 0 otherwise (Eq. (6)).

Fig. 3 Half-sweep PSF: a. PSF profiles h_1, h_2, h_all; b. log power spectra H_1, H_2, H_all.

Fig. 3-a plots the two half-sweep PSFs h_1 and h_2 together with the full-sweep PSF h_all = (h_1 + h_2)/2, which corresponds to the focus sweep of 12),13). While h_all is nearly invariant to depth, which is the property exploited in 12),13) for extended depth of field, h_1 and h_2 clearly change their shapes with depth, which is exactly what DFD requires. Fig. 3-b shows the logarithm of the power spectra H_1, H_2, and H_all of h_1, h_2, and h_all.
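Because the closed form of Eq. (6) is not reproduced here, one simple way to obtain half-sweep PSFs like those plotted in Fig. 3-a is to evaluate Eq. (5) numerically by averaging the pillbox PSF of Eq. (3) over densely sampled sensor positions. The sketch below does this; it reuses the pillbox_psf helper from the previous snippet, takes the sweep range and depth value from the simulation settings of Section 5, and n_samples is an arbitrary choice of ours.

```python
import numpy as np

def half_sweep_psf(u, p_start, p_end, f, a, n_samples=200, **psf_kwargs):
    """Eq. (5): accumulate the pillbox PSF over the sensor positions visited
    during one half of the sweep (numerical stand-in for the closed form of Eq. (6))."""
    positions = np.linspace(p_start, p_end, n_samples)
    psf = np.zeros_like(pillbox_psf(u, positions[0], f, a, **psf_kwargs))
    for p in positions:
        psf += pillbox_psf(u, p, f, a, **psf_kwargs)
    return psf / psf.sum()

# Half-sweep PSFs for one object depth, using the sweep range of Section 5:
f_mm, a_mm = 9.0, 9.0 / 1.4
p0, p2 = 9.04, 10.09              # ends of the sweep [mm]
p1 = (p0 + p2) / 2.0              # midpoint separating the two exposures
u = 390.7                         # object depth [mm], one row of Table 1
h1 = half_sweep_psf(u, p0, p1, f_mm, a_mm)
h2 = half_sweep_psf(u, p1, p2, f_mm, a_mm)
h_all = (h1 + h2) / 2.0           # full-sweep PSF, as in 12),13)
```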

The depth-dependent differences between H_1 and H_2, which H_all lacks, are what make the half-sweep pair usable for depth discrimination: as Levin et al. 4) showed, DFD accuracy depends on how distinguishable the blurs at different depths are. The full-sweep PSF of 12),13) gives up this information in exchange for depth invariance, and the coded aperture pair of Zhou et al. 5) obtains it by sacrificing light, whereas the half-sweep pair obtains it with the aperture fully open.

4. Depth Estimation and Deblurring

We estimate scene depth and recover the all-in-focus image from the two half-sweep images with a frequency-domain formulation. Taking the Fourier transform of Eq. (4) for a hypothesized depth d gives

    F_i^{(d)} = F_0 H_i^{(d)} + N, \quad i = 1, 2,                 (7)

where F_i and F_0 are the Fourier transforms of the captured image f_i and the all-in-focus image f_0, H_i^{(d)} is the transfer function of the half-sweep PSF at depth d, and N is the noise spectrum. For a single image F blurred with a known transfer function H, the latent image is estimated by Wiener deconvolution,

    \hat{F}_0 = \frac{F \bar{H}}{|H|^2 + |C|^2},                    (8)

where \bar{H} is the complex conjugate of H, |H|^2 = H\bar{H}, and |C|^2 is a regularization term determined by the noise-to-signal ratio. To use both captured images at once, we combine them and their transfer functions as

    F_{all} = \frac{F_1 + F_2}{2}, \quad H_{all}^{(d)} = \frac{H_1^{(d)} + H_2^{(d)}}{2},   (9)

and substituting Eq. (9) into Eq. (8) gives the deblurred estimate for the hypothesized depth d,

    \hat{F}_0^{(d)} = \frac{(F_1 + F_2)\,(\bar{H}_1^{(d)} + \bar{H}_2^{(d)})}{|H_1^{(d)} + H_2^{(d)}|^2 + 4|C|^2}.   (10)

If the hypothesized depth d matches the true depth, re-blurring \hat{F}_0^{(d)} with H_i^{(d)} should reproduce the observations, so we measure the magnitude of the per-pixel residual

    W^{(d)} = \sum_{i=1,2} \left| \mathrm{IFFT}\left( \hat{F}_0^{(d)} H_i^{(d)} - F_i \right) \right|,   (11)

where IFFT denotes the 2D inverse Fourier transform and W^{(d)}(x, y) is evaluated for each candidate depth. The depth map U is obtained by choosing, at each pixel, the candidate depth that minimizes this residual,

    U(x, y) = \underset{d \in D}{\arg\min}\; W^{(d)}(x, y),          (12)

where D is the set of candidate depths. The all-in-focus image I is assembled by taking each pixel from the deblurred image of its estimated depth,

    I(x, y) = \hat{f}_0^{(U(x,y))}(x, y),                            (13)

where \hat{f}_0^{(d)} is the inverse Fourier transform of \hat{F}_0^{(d)}.
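A minimal sketch of how Eqs. (10)-(13) could be implemented with FFTs is shown below. For each candidate depth it Wiener-deblurs the combined observations, re-blurs the estimate, accumulates the per-pixel residual, and finally selects the depth and deblurred pixel value with the smallest residual. It assumes the captured images and per-depth half-sweep PSFs (zero-padded to the image size) are already available; the function name and the scalar C are illustrative, and any spatial smoothing of the residual that the original implementation may apply is omitted.

```python
import numpy as np

def dfd_half_sweep(f1, f2, h1s, h2s, C=0.01):
    """Depth estimation and all-in-focus recovery from two half-sweep images.

    f1, f2   : captured images (2D arrays)
    h1s, h2s : lists of half-sweep PSFs, one pair per candidate depth,
               zero-padded to the image size and centered at (0, 0)
    C        : Wiener regularization constant (noise-to-signal ratio)
    """
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    n_depths = len(h1s)
    residuals = np.empty((n_depths,) + f1.shape)
    deblurred = np.empty((n_depths,) + f1.shape)

    for d in range(n_depths):
        H1, H2 = np.fft.fft2(h1s[d]), np.fft.fft2(h2s[d])
        # Eq. (10): Wiener deconvolution of the combined observation
        F0_hat = (F1 + F2) * np.conj(H1 + H2) / (np.abs(H1 + H2) ** 2 + 4 * C ** 2)
        # Eq. (11): magnitude of the re-blurring residual for this depth
        residuals[d] = (np.abs(np.fft.ifft2(F0_hat * H1 - F1)) +
                        np.abs(np.fft.ifft2(F0_hat * H2 - F2)))
        deblurred[d] = np.real(np.fft.ifft2(F0_hat))

    # Eq. (12): per-pixel depth label minimizing the residual
    U = np.argmin(residuals, axis=0)
    # Eq. (13): assemble the all-in-focus image from the per-depth deblurred images
    I = np.take_along_axis(deblurred, U[None, ...], axis=0)[0]
    return U, I
```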

5. Simulation

We compared the proposed half-sweep method with conventional two-focus DFD and with the coded aperture pair of Zhou et al. 5) in simulation. The simulated lens has a focal length of f = 9 mm and an aperture of f/1.4. Scene depths u were taken between 83 mm and 2000 mm, and the corresponding in-focus sensor positions v, computed from Eq. (1), range from 9.04 mm to 10.09 mm. This range was quantized into 20 depth levels spaced at Δv = 0.055 mm on the sensor side; Table 1 lists the correspondence between object depth u and focus position v.

Table 1 Relation between object depth and focus position (f = 9 mm)

    Object depth u [mm]  : 2000.0  803.1  524.6  390.7  312.1  260.3  223.6  196.3  175.1  158.2
    Focus position v [mm]:   9.04   9.10   9.15   9.21   9.26   9.32   9.37   9.43   9.48   9.54

    Object depth u [mm]  :  144.5  133.1  123.4  115.1  108.0  101.8   96.2   91.4   87.0   83.0
    Focus position v [mm]:   9.59   9.65   9.70   9.76   9.81   9.87   9.92   9.98  10.03  10.09

The sweep covers the whole depth range: it starts at p_0 = 9.04 mm and ends at p_2 = 10.09 mm, with the midpoint p_1 between them. For conventional DFD, two images focused at p_0 and p_2 were synthesized. For the coded aperture pair of Zhou et al. 5), two images blurred with the paired aperture patterns were synthesized for the same scene. For the proposed method, the two half-sweep images correspond to sweeps over [p_0, p_1] and [p_1, p_2], and the PSFs h_1 and h_2 of Eq. (6) were generated for all 20 depth levels. Each method estimates a per-pixel depth label out of the 20 candidates and recovers an all-in-focus image.

Fig. 4 Estimated depth map: a. ground truth, b. conventional DFD, c. coded aperture pair, d. half sweep.

Fig. 5 Error map of deblurred image: a. true texture, b. conventional DFD, c. coded aperture pair, d. half sweep.

Fig. 4 shows the estimated depth maps, rendered with the Jet color map: (a) the ground truth, (b) conventional DFD, (c) the coded aperture pair, and (d) the proposed half sweep. Fig. 5 shows the corresponding error maps of the deblurred images against the true texture. We evaluated the depth maps by the RMS (root mean square) error of the estimated depth and the deblurred images by the PSNR (peak signal-to-noise ratio) against the ground truth.
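The two metrics reported in Table 2 can be computed as follows; this is a small sketch assuming 8-bit intensity images (peak value 255) and depth maps expressed in the same units as the table, which the excerpt does not specify.

```python
import numpy as np

def depth_rms(estimated, ground_truth):
    """RMS error between estimated and ground-truth depth maps."""
    diff = estimated.astype(np.float64) - ground_truth.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(recovered, reference, peak=255.0):
    """Peak signal-to-noise ratio of the deblurred texture against the reference [dB]."""
    mse = np.mean((recovered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```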

Table 2 Depth and deblurring error

    Method                 Depth map (RMS)    Texture (PSNR)
    Conventional DFD            26.98           30.21 [dB]
    Coded aperture pair         25.91           32.24 [dB]
    Half sweep                   7.81           39.98 [dB]

Table 2 summarizes the comparison. The proposed half-sweep method yields a much smaller RMS error in the estimated depth map and a higher PSNR for the recovered texture than both conventional DFD and the coded aperture pair.

6. Real-Scene Experiment

Fig. 6 Simulated half-sweep imaging from a focal stack.

We also evaluated the method on a real scene. Since we do not have a camera that sweeps the focus within a single exposure, we simulated half-sweep imaging from a focal stack, following 12),13) (Fig. 6). A Canon EOS 20D with a 30 mm lens at f/1.4 captured a focal stack of 14 images of a scene with depths u from 671 mm to 4840 mm, corresponding to focus positions v from 30.2 mm to 31.4 mm. The sweep range was set to p_0 = 30.2 mm, p_1 = 30.8 mm, and p_2 = 31.4 mm, and the two half-sweep images f_1 and f_2 were synthesized from the 7 focal-stack frames whose focus positions fall in [p_0, p_1] and the 7 frames in [p_1, p_2], respectively. The half-sweep PSFs for each candidate depth were obtained from Eq. (6), and depth estimation and deblurring were performed as in Section 4.

Fig. 7 Experimental results of the real scene: a. input image f_1, b. input image f_2, c. depth map, d. all-in-focus image, e. close-up (backward), f. close-up (forward).

Fig. 7 shows the results: the two input half-sweep images f_1 and f_2 (Fig. 7-a, b), the estimated depth map (Fig. 7-c), and the recovered all-in-focus image (Fig. 7-d). Close-ups of the backward (far) and forward (near) regions of the scene are shown in Fig. 7-e and 7-f.
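A minimal sketch of how the half-sweep inputs of this section could be synthesized from a focal stack is given below, under the assumption (ours, not stated in the excerpt) that each half-sweep image is a simple average of the frames whose focus positions fall in the corresponding half of the sweep; the function and argument names are illustrative.

```python
import numpy as np

def simulate_half_sweep(focal_stack, focus_positions, p0, p1, p2):
    """Average focal-stack frames into two simulated half-sweep images f1, f2.

    focal_stack     : array of shape (n_frames, H, W)
    focus_positions : sensor focus position [mm] of each frame
    p0, p1, p2      : start, midpoint, and end of the focus sweep [mm]
    """
    focus_positions = np.asarray(focus_positions)
    first_half = (focus_positions >= p0) & (focus_positions < p1)
    second_half = (focus_positions >= p1) & (focus_positions <= p2)
    f1 = focal_stack[first_half].mean(axis=0)   # integration over [p0, p1]
    f2 = focal_stack[second_half].mean(axis=0)  # integration over [p1, p2]
    return f1, f2

# Settings reported in this section (Canon EOS 20D, 30 mm, f/1.4):
# 14 frames with focus positions spanning 30.2-31.4 mm, split at 30.8 mm, e.g.
# f1, f2 = simulate_half_sweep(stack, positions, 30.2, 30.8, 31.4)
```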

7. Conclusion

We proposed a DFD method that engineers the PSF by sweeping the focus during the image integration time and splitting the sweep into two half exposures. Because the PSF is controlled without stopping down or coding the aperture, the two input images keep a high SNR while their half-sweep PSFs remain depth dependent. Simulation and real-scene experiments showed that the proposed method estimates depth more accurately and recovers a higher-quality all-in-focus image than conventional two-focus DFD and the coded aperture pair of Zhou et al. 5).

References

1) A. Pentland: A New Sense for Depth of Field, IEEE PAMI, Vol. 9, No. 4, pp. 423-430, 1987.
2) M. Subbarao and N. Gurumoorthy: Depth recovery from blurred edges, Proc. CVPR, pp. 498-503, 1988.
3) (Japanese article): IEICE Trans., Vol. J82-D-II, No. 11, pp. 1912-1920, 1999.
4) A. Levin, R. Fergus, F. Durand and W. Freeman: Image and depth from a conventional camera with a coded aperture, ACM Transactions on Graphics, No. 3, 2007.
5) C. Zhou, S. Lin and S. Nayar: Coded Aperture Pairs for Depth from Defocus, Proc. IEEE International Conference on Computer Vision, 2009.
6) A. Levin: Analyzing Depth from Coded Aperture Sets, Proc. European Conference on Computer Vision, Sep. 2010.
7) H. Nagahara, C. Zhou, T. Watanabe, H. Ishiguro and S. K. Nayar: Programmable Aperture Camera Using LCoS, Proc. European Conference on Computer Vision, Sep. 2010.
8) C. Zhou and S. K. Nayar: (Japanese article), IPSJ SIG Technical Report, Vol. CVIM-174, No. 28, 2010.
9) P. Green, W. Sun, W. Matusik and F. Durand: Multiple-Aperture Photography, Proc. ACM SIGGRAPH, 2007.
10) E. R. Dowski and W. T. Cathey: Single-lens single-image incoherent passive-ranging systems, Applied Optics, Vol. 33, No. 29, Oct. 1994.
11) A. Levin, S. Hasinoff, P. Green, F. Durand and W. T. Freeman: 4D Frequency Analysis of Computational Cameras for Depth of Field Extension, ACM Transactions on Graphics (Proc. SIGGRAPH), 2009.
12) H. Nagahara, S. Kuthirummal, C. Zhou and S. K. Nayar: Flexible Depth of Field Photography, Proc. European Conference on Computer Vision, 2008.
13) S. Kuthirummal, H. Nagahara, C. Zhou and S. K. Nayar: Flexible Depth of Field Photography, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 33, 2011 (to appear).

© 2010 Information Processing Society of Japan