Shape Estimation Using Polarization and Shading from Two Views
November 2007 (vol. 29 no. 11)
pp. 2001-2017
This paper presents a novel method for 3D surface reconstruction that uses polarization and shading information from two views. The method relies on polarization data acquired using a standard digital camera and a linear polarizer. Fresnel theory is used to process the raw images and to obtain initial estimates of the surface normals, assuming that the reflection is diffuse. Building on this, the paper makes two novel contributions to the problem of surface reconstruction. The first is a technique that enhances the surface normal estimates by incorporating shading information. Robust statistics are used to estimate how the measured pixel brightnesses depend on surface orientation, giving an estimate of the object material reflectance function, which is then used to refine the surface normal estimates. The second contribution uses the refined estimates to establish correspondence between two views of an object. To do this, a set of patches is extracted from each view and aligned by minimizing an energy functional based on the surface normal estimates and local topographic properties. The optimum alignment parameters for the different patch pairs are then used to establish stereo correspondence. This process yields an unambiguous field of surface normals, which can be integrated to recover the surface depth. Our technique is most suited to smooth, non-metallic surfaces. It complements existing stereo algorithms since it does not require salient surface features to obtain correspondences. An extensive set of experiments is presented, yielding reconstructed objects and reflectance functions that are compared to ground truth.
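
As a rough illustration of the first stage described in the abstract, the following Python sketch (not taken from the paper) fits the per-pixel brightness sinusoid measured through a rotating linear polarizer and inverts the standard Fresnel relation for the degree of diffuse polarization to obtain zenith and azimuth estimates of the surface normal. The refractive index n = 1.5 and all function names are assumptions made for illustration only.

```python
"""Minimal sketch of polarization-based normal estimation for a smooth
dielectric, assuming purely diffuse reflection and a guessed refractive
index n = 1.5 (not specified in the abstract)."""
import numpy as np


def fit_polarization_sinusoid(intensities, polarizer_angles):
    """Fit I(a) = c0 + c1*cos(2a) + c2*sin(2a) to brightness measured at
    the given polarizer angles (radians). Returns (degree of polarization,
    phase angle = normal azimuth modulo pi)."""
    a = np.asarray(polarizer_angles, dtype=float)
    A = np.column_stack([np.ones_like(a), np.cos(2 * a), np.sin(2 * a)])
    c0, c1, c2 = np.linalg.lstsq(A, np.asarray(intensities, float), rcond=None)[0]
    rho = np.hypot(c1, c2) / c0          # (Imax - Imin) / (Imax + Imin)
    phi = 0.5 * np.arctan2(c2, c1)       # azimuth, ambiguous by 180 degrees
    return rho, phi


def diffuse_dop(theta, n=1.5):
    """Degree of polarization of diffusely reflected light leaving a
    dielectric of refractive index n at zenith angle theta."""
    s2 = np.sin(theta) ** 2
    num = (n - 1.0 / n) ** 2 * s2
    den = (2 + 2 * n ** 2 - (n + 1.0 / n) ** 2 * s2
           + 4 * np.cos(theta) * np.sqrt(n ** 2 - s2))
    return num / den


def zenith_from_dop(rho, n=1.5, iters=60):
    """Invert diffuse_dop for theta in [0, pi/2) by bisection; the
    relation is monotonically increasing over this range."""
    lo, hi = 0.0, np.pi / 2 - 1e-6
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if diffuse_dop(mid, n) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


if __name__ == "__main__":
    # Synthetic single-pixel example: brightness at four polarizer angles.
    angles = np.deg2rad([0, 45, 90, 135])
    true_theta, true_phi = np.deg2rad(40.0), np.deg2rad(30.0)
    rho_true = diffuse_dop(true_theta)
    I = 1.0 + rho_true * np.cos(2 * angles - 2 * true_phi)

    rho, phi = fit_polarization_sinusoid(I, angles)
    theta = zenith_from_dop(rho)
    print(f"estimated zenith {np.degrees(theta):.1f} deg, "
          f"azimuth {np.degrees(phi):.1f} deg (180-degree ambiguity)")
```

The remaining stages summarized in the abstract (shading-based refinement of the normals, patch alignment across the two views, and integration of the disambiguated normal field) build on per-pixel estimates of this kind.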

Index Terms:
Polarization imaging, surface shape recovery, stereo, reflectance function estimation, patch alignment
Citation:
Gary A. Atkinson, Edwin R. Hancock, "Shape Estimation Using Polarization and Shading from Two Views," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 2001-2017, Nov. 2007, doi:10.1109/TPAMI.2007.1099