Orientation in Manhattan: Equiprojective Classes and Sequential Estimation
May 2005 (vol. 27 no. 5)
pp. 822-826
The problem of inferring the 3D orientation of a camera from video sequences has mostly been addressed by first computing correspondences of image features. This intermediate step is now seen as the main bottleneck of these approaches. In this paper, we propose a new 3D orientation estimation method for urban (indoor and outdoor) environments that avoids correspondences between frames. The scene property exploited by our method is that many edges are oriented along three orthogonal directions; this is the recently introduced Manhattan world (MW) assumption. The main contributions of this paper are: the definition of equivalence classes of equiprojective orientations; the introduction of a new small-rotation model, formalizing the fact that the camera moves smoothly; and the decoupling of elevation and twist angle estimation from that of the compass angle. We build a probabilistic sequential orientation estimation method, based on an MW likelihood model, with the above contributions allowing a drastic reduction of the search space for each orientation estimate. We demonstrate the performance of our method using real video sequences.
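The equivalence classes mentioned above arise because the Manhattan frame's three undirected axes are indistinguishable under projection: any rotation that permutes the coordinate axes (with possible sign flips) yields the same set of projected edge directions. A minimal sketch (illustrative only, not the paper's code) enumerates these symmetries as signed permutation matrices with determinant +1, confirming that each orientation belongs to a class of 24 equiprojective orientations:

```python
import itertools
import numpy as np

# Rotations mapping the Manhattan frame's three (undirected) axes onto
# themselves: signed 3x3 permutation matrices with determinant +1.
symmetries = []
for perm in itertools.permutations(range(3)):
    for signs in itertools.product([1, -1], repeat=3):
        R = np.zeros((3, 3))
        for row, (col, s) in enumerate(zip(perm, signs)):
            R[row, col] = s  # one signed entry per row/column
        if np.isclose(np.linalg.det(R), 1.0):  # keep proper rotations only
            symmetries.append(R)

print(len(symmetries))  # 24: the size of each equiprojective class
```

Since the 48 signed permutation matrices split evenly by determinant sign, 24 proper rotations remain; the search for the camera orientation can thus be restricted to a fundamental domain of the rotation group, one representative per class.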


Index Terms:
Camera orientation, sequential estimation, Manhattan world assumption, camera calibration.
Citation:
André T. Martins, Pedro M.Q. Aguiar, Mário A.T. Figueiredo, "Orientation in Manhattan: Equiprojective Classes and Sequential Estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 822-826, May 2005, doi:10.1109/TPAMI.2005.107