Flexible Depth of Field Photography
January 2011 (vol. 33, no. 1)
pp. 58-71
Sujit Kuthirummal, Sarnoff Corporation, Princeton
Hajime Nagahara, Osaka University, Osaka
Changyin Zhou, Columbia University, New York
Shree K. Nayar, Columbia University, New York
The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which reduces the DOF. Moreover, today's cameras have DOFs that correspond to a single slab perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF, where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Deconvolving a captured image with a single blur kernel yields an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged sharply, while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
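To make the extended-DOF idea concrete, the sketch below approximates the integrated point spread function (PSF) of a detector swept along the optical axis during exposure as a time average of instantaneous defocus (pillbox) PSFs, and then deconvolves a captured image with that single kernel using a Wiener filter. This is a minimal illustration, not the authors' implementation: the function names (disk_psf, swept_psf, pad_to, wiener_deconvolve), the pillbox blur model, the linear blur-radius sweep, and the SNR constant are all assumptions made for the example.

```python
import numpy as np

def disk_psf(radius_px, size=65):
    """Pillbox (defocus) PSF with the given blur-circle radius, normalized to unit sum."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    psf = ((x ** 2 + y ** 2) <= max(radius_px, 0.5) ** 2).astype(float)
    return psf / psf.sum()

def swept_psf(blur_radii, size=65):
    """Integrated PSF of the detector sweep: the time average of the instantaneous
    defocus PSFs sampled at successive detector positions (assumed pillbox blurs)."""
    return sum(disk_psf(r, size) for r in blur_radii) / len(blur_radii)

def pad_to(psf, shape):
    """Embed the small kernel in a zero image of the given shape, centered at the
    origin so that FFT-based filtering introduces no spatial shift."""
    out = np.zeros(shape)
    out[:psf.shape[0], :psf.shape[1]] = psf
    return np.roll(out, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

def wiener_deconvolve(image, psf, snr=100.0):
    """Deconvolve a captured image with a single blur kernel via a Wiener filter."""
    H = np.fft.fft2(pad_to(psf, image.shape))
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

# The blur-circle radius varies roughly linearly as the detector sweeps through the
# plane of focus, so the integrated kernel is nearly the same over a wide range of
# scene depths and a single deconvolution suffices (hypothetical sweep parameters).
kernel = swept_psf(np.abs(np.linspace(-8.0, 8.0, 33)))
# extended_dof_image = wiener_deconvolve(captured_image, kernel)
```

In this toy model, any scene depth whose plane of focus falls inside the sweep contributes the same family of blur radii, which is why one kernel can serve the whole depth range; a real system would calibrate the kernel rather than assume a pillbox sweep.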

[1] G. Hausler, "A Method to Increase the Depth of Focus by Two Step Image Processing," Optics Comm., vol. 6, no. 1, pp. 38-42, 1972.
[2] H. Merklinger, "Focusing the View Camera," 1996.
[3] A. Krishnan and N. Ahuja, "Range Estimation from Focus Using a Non-Frontal Imaging Camera," Int'l J. Computer Vision, vol. 20, no. 3, pp. 169-185, 1996.
[4] T. Scheimpflug, "Improved Method and Apparatus for the Systematic Alteration or Distortion of Plane Pictures and Images by Means of Lenses and Mirrors for Photography and for Other Purposes," GB patent, 1904.
[5] H. Nagahara, S. Kuthirummal, C. Zhou, and S.K. Nayar, "Flexible Depth of Field Photography," Proc. European Conf. Computer Vision, pp. 60-73, 2008.
[6] E.R. Dowski and W. Cathey, "Extended Depth of Field Through Wavefront Coding," Applied Optics, vol. 34, pp. 1859-1866, 1995.
[7] N. George and W. Chi, "Extended Depth of Field Using a Logarithmic Asphere," J. Optics A: Pure and Applied Optics, vol. 5, pp. 157-163, 2003.
[8] A. Castro and J. Ojeda-Castaneda, "Asymmetric Phase Masks for Extended Depth of Field," Applied Optics, vol. 43, pp. 3474-3479, 2004.
[9] A. Levin, R. Fergus, F. Durand, and B. Freeman, "Image and Depth from a Conventional Camera with a Coded Aperture," Proc. ACM SIGGRAPH, 2007.
[10] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, "Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture," Proc. ACM SIGGRAPH, 2007.
[11] E. Adelson and J. Wang, "Single Lens Stereo with a Plenoptic Camera," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 99-106, Feb. 1992.
[12] R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, "Light Field Photography with a Hand-Held Plenoptic Camera," technical report, Stanford Univ., 2005.
[13] T. Georgiev, C. Zheng, B. Curless, D. Salesin, S.K. Nayar, and C. Intwala, "Spatio-Angular Resolution Tradeoff in Integral Photography," Proc. Eurographics Symp. Rendering, pp. 263-272, 2006.
[14] T. Darrell and K. Wohn, "Pyramid Based Depth from Focus," Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, pp. 504-509, 1988.
[15] S.K. Nayar, "Shape from Focus System," Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, pp. 302-308, 1992.
[16] M. Subbarao and T. Choi, "Accurate Recovery of Three-Dimensional Shape from Image Focus," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 17, no. 3, pp. 266-274, Mar. 1995.
[17] S.W. Hasinoff and K.N. Kutulakos, "Light-Efficient Photography," Proc. European Conf. Computer Vision, pp. 45-59, 2008.
[18] A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen, "Interactive Digital Photomontage," Proc. ACM SIGGRAPH, pp. 294-302, 2004.
[19] A. Levin, P. Sand, T.S. Cho, F. Durand, and W.T. Freeman, "Motion-Invariant Photography," Proc. ACM SIGGRAPH, 2008.
[20] M. Ben-Ezra, A. Zomet, and S. Nayar, "Jitter Camera: High Resolution Video from a Low Resolution Detector," Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, pp. 135-142, 2004.
[21] O. Ait-Aider, N. Andreff, J.-M. Lavest, and P. Martinet, "Simultaneous Object Pose and Velocity Computation Using a Single View from a Rolling Shutter Camera," Proc. European Conf. Computer Vision, pp. 56-68, 2006.
[22] D. Field, "Relations between the Statistics of Natural Images and the Response Properties of Cortical Cells," J. Optical Soc. of Am., vol. 4, pp. 2379-2394, 1987.
[23] H. Hopkins, "The Frequency Response of a Defocused Optical System," Proc. Royal Soc. of London Series A, Math. and Physical Sciences, vol. 231, pp. 91-103, 1955.
[24] P.A. Jansson, Deconvolution of Images and Spectra. Academic Press, 1997.
[25] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image Restoration by Sparse 3D Transform-Domain Collaborative Filtering," Proc. SPIE Electronic Imaging, 2008.
[26] L. Yuan, J. Sun, L. Quan, and H.-Y. Shum, "Progressive Inter-Scale and Intra-Scale Non-Blind Image Deconvolution," Proc. ACM SIGGRAPH, 2008.
[27] www.cs.columbia.edu/CAVE/projects/flexible_dof, 2010.
[28] P. Burt and R. Kolczynski, "Enhanced Image Capture through Fusion," Proc. Fourth IEEE Int'l Conf. Computer Vision, pp. 173-182, 1993.
[29] P. Haeberli, "Grafica Obscura," www.sgi.com/grafica/, 1994.
[30] Q. Shan, J. Jia, and A. Agarwala, "High-Quality Motion Deblurring from a Single Image," Proc. ACM SIGGRAPH, 2008.
[31] N. Asada, H. Fujiwara, and T. Matsuyama, "Seeing Behind the Scene: Analysis of Photometric Properties of Occluding Edges by the Reversed Projection Blurring Model," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 2, pp. 155-167, Feb. 1998.
[32] S. Bhasin and S. Chaudhuri, "Depth from Defocus in Presence of Partial Self Occlusion," Proc. Eighth IEEE Int'l Conf. Computer Vision, pp. 488-493, 2001.

Index Terms:
Imaging geometry, programmable depth of field, detector motion, depth-independent defocus blur.
Citation:
Sujit Kuthirummal, Hajime Nagahara, Changyin Zhou, Shree K. Nayar, "Flexible Depth of Field Photography," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 1, pp. 58-71, Jan. 2011, doi:10.1109/TPAMI.2010.66