Reactive Control of Zoom while Fixating Using Perspective and Affine Cameras
January 2004 (vol. 26 no. 1)
pp. 98-112

Abstract—This paper describes reactive visual methods of controlling the zoom setting of the lens of an active camera while fixating upon an object. The first method assumes a perspective projection and adjusts zoom to preserve the ratio of focal length to scene depth. The active camera is constrained to rotate, permitting self-calibration from the image motion of points on the static background. A planar structure-from-motion algorithm is used to recover the depth of the foreground. The foreground-background segmentation exploits the properties of the two different inter-image homographies that are observed. The fixation point is updated by transfer via the observed planar structure. The planar method is shown to work on real imagery, but results from simulated data suggest that its extension to general 3D structure is problematic under realistic viewing and noise regimes. The second method assumes an affine projection. It requires no self-calibration, and the zooming camera may move generally. Fixation is again updated using transfer, but now via the affine structure recovered by factorization. Analysis of the projection matrices allows the relative scale of the affine bases in different views to be found in a number of ways and, hence, controlled to unity. The various methods of recovering the scale are compared, and the best is used on real imagery captured from an active camera fitted with a controllable zoom lens in both look-move and continuous operation.
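The three ingredients named in the abstract — holding the ratio of focal length to scene depth constant, transferring the fixation point through a plane-induced homography, and recovering affine structure by rank-3 factorization — can be sketched in a few lines. This is a minimal illustrative sketch only, not the authors' implementation; the function names and the use of an SVD for the Tomasi-Kanade-style factorization are assumptions of this sketch.

```python
import numpy as np

def zoom_for_depth(f0, z0, z_now):
    """Perspective zoom law sketched in the abstract: keep the ratio of
    focal length to scene depth constant, so the fixated object's image
    size is preserved as it moves in depth."""
    return f0 * (z_now / z0)

def transfer_fixation(H, x):
    """Transfer a fixation point between views via a 3x3 plane-induced
    homography H. x is an inhomogeneous 2-vector; returns the transferred
    inhomogeneous point."""
    xh = np.array([x[0], x[1], 1.0])
    yh = H @ xh
    return yh[:2] / yh[2]

def affine_structure(W):
    """Rank-3 factorization of centred image measurements, in the spirit
    of Tomasi-Kanade: W is 2F x N (F views, N points); returns motion M
    (2F x 3) and shape S (3 x N) with W ~ M @ S, up to an affine ambiguity."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S
```

For example, if the fixated object doubles its depth, the law returns double the focal length, so its image size is unchanged; the factorization recovers structure only up to a 3x3 affine transform, which is why the paper must separately fix the relative scale of the affine bases across views.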

Index Terms:
Active vision, zoom control, fixation, tracking, self-calibration, perspective projection, affine projection.
Citation:
Ben Tordoff, David Murray, "Reactive Control of Zoom while Fixating Using Perspective and Affine Cameras," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 98-112, Jan. 2004, doi:10.1109/TPAMI.2004.10000