Variational Light Field Analysis for Disparity Estimation and Super-Resolution
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 3, March 2014, pp. 606-619
Sven Wanner, Heidelberg Collaboratory for Image Processing (HCI), University of Heidelberg, Heidelberg, Germany
Bastian Goldluecke, Heidelberg Collaboratory for Image Processing (HCI), University of Heidelberg, Heidelberg, Germany
ABSTRACT
We develop a continuous framework for the analysis of 4D light fields, and describe novel variational methods for disparity reconstruction as well as spatial and angular super-resolution. Disparity maps are estimated locally using epipolar plane image analysis, without the need for expensive matching cost minimization. The method is fast and has inherent subpixel accuracy, since no discretization of the disparity space is necessary. In a variational framework, we employ the disparity maps to generate super-resolved novel views of a scene, which corresponds to increasing the sampling rate of the 4D light field in both the spatial and the angular direction. In contrast to previous work, we formulate view synthesis as a continuous inverse problem, which allows us to correctly take into account foreshortening effects caused by the scene geometry. All optimization problems are solved with state-of-the-art convex relaxation techniques. We test our algorithms on a number of real-world examples as well as on our new benchmark data set for light fields, and compare the results to a multiview stereo method. The proposed method is both faster and more accurate. Data sets and source code are provided online for further evaluation.
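The geometric idea behind the local disparity estimation, namely that a Lambertian scene point traces a straight line in an epipolar plane image (EPI) whose slope equals its disparity, can be sketched compactly. The following Python sketch estimates that slope per pixel with a standard structure-tensor orientation estimate; it is a minimal illustration in the spirit of the local EPI analysis described in the abstract, not the authors' released code, and the function name, smoothing scales, and coherence measure are our own choices.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def epi_disparity(epi, inner_scale=0.8, outer_scale=1.6):
    """Local disparity from a single epipolar plane image (EPI).

    epi: 2D array of shape (num_views, width); axis 0 is the view
    coordinate s, axis 1 is the spatial coordinate x. Under a Lambertian
    assumption a scene point traces a line x = x0 + d*s, so its slope
    dx/ds equals the disparity d (in pixels per view step; the sign
    depends on how the views are ordered). Returns (disparity, coherence),
    where coherence in [0, 1] indicates how reliable the estimate is.
    """
    e = gaussian_filter(epi.astype(np.float64), inner_scale)  # pre-smoothing
    gx = sobel(e, axis=1)   # derivative along x
    gs = sobel(e, axis=0)   # derivative along s

    # Structure tensor, smoothed component-wise at the outer scale.
    jxx = gaussian_filter(gx * gx, outer_scale)
    jss = gaussian_filter(gs * gs, outer_scale)
    jxs = gaussian_filter(gx * gs, outer_scale)

    # Orientation of the dominant gradient direction; the EPI line
    # (isophote) is perpendicular to it, so its slope is dx/ds = -tan(theta).
    # In practice the result should be clipped to the expected disparity range.
    theta = 0.5 * np.arctan2(2.0 * jxs, jxx - jss)
    disparity = -np.tan(theta)

    # Coherence (lambda1 - lambda2) / (lambda1 + lambda2):
    # 1 for a perfectly oriented pattern, 0 for isotropic structure.
    coherence = np.sqrt((jxx - jss) ** 2 + 4.0 * jxs ** 2) / (jxx + jss + 1e-12)
    return disparity, coherence

In the full method, such local estimates would be computed on every horizontal and vertical EPI slice of the 4D light field and then merged and regularized globally; the coherence value returned above is a natural weight for that merging step.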
INDEX TERMS
Spatial resolution, cameras, estimation, geometry, tensile stress, image reconstruction, variational methods, light fields, epipolar plane images, 3D reconstruction, super-resolution, view interpolation
CITATION
Sven Wanner and Bastian Goldluecke, "Variational Light Field Analysis for Disparity Estimation and Super-Resolution," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 3, pp. 606-619, March 2014, doi:10.1109/TPAMI.2013.147