Information Theoretic Sensor Data Selection for Active Object Recognition and State Estimation
February 2002 (vol. 24 no. 2)
pp. 145-157

Abstract—We introduce a formalism for optimal sensor parameter selection for iterative state estimation in static systems. Our optimality criterion is the reduction of uncertainty in the state estimation process, rather than an estimator-specific metric such as minimum mean-squared estimation error. The claim is that state estimation becomes more reliable if the uncertainty and ambiguity in the estimation process can be reduced. We use Shannon's information theory to select information-gathering actions that maximize mutual information, thus optimizing the information that the data conveys about the true state of the system. The technique explicitly takes into account the a priori probabilities entering the computation of the mutual information, so a sequential decision process can be formed by treating the a posteriori probability of one time step as the a priori probability of the next. We demonstrate the benefits of our approach in an object recognition application using an active camera for sequential gaze control and viewpoint selection, and we describe experiments with discrete and continuous density representations that suggest the effectiveness of the approach.
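To make the selection criterion concrete, the following is a minimal, hypothetical sketch (not the authors' code) of the idea described in the abstract for a discrete state space: each candidate sensor action has an assumed observation model p(o | s, a), the action maximizing the mutual information I(S; O | a) under the current belief is chosen, the resulting measurement is incorporated via Bayes' rule, and the posterior becomes the prior of the next step. All function names and the toy observation models are illustrative assumptions.

```python
# Hypothetical sketch: greedy sensor-action selection by maximizing
# mutual information I(state; observation | action) over a discrete state
# space, with each step's posterior reused as the next step's prior.
import numpy as np

def mutual_information(prior, likelihood):
    """I(S; O | a) for one action.
    prior:      shape (S,)   -- current belief p(s)
    likelihood: shape (S, O) -- observation model p(o | s, a) for this action
    """
    joint = prior[:, None] * likelihood          # p(s, o | a)
    p_obs = joint.sum(axis=0)                    # p(o | a)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0,
                         joint / (prior[:, None] * p_obs[None, :]),
                         1.0)                    # log(1) = 0 where p(s, o) = 0
    return float((joint * np.log(ratio)).sum())

def select_action(prior, likelihoods):
    """Index of the action whose expected observation is most informative."""
    scores = [mutual_information(prior, L) for L in likelihoods]
    return int(np.argmax(scores))

def bayes_update(prior, likelihood, observation):
    """Posterior p(s | o, a); it serves as the prior of the next time step."""
    posterior = prior * likelihood[:, observation]
    return posterior / posterior.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_states, num_obs, num_actions = 4, 5, 3
    prior = np.full(num_states, 1.0 / num_states)   # uniform initial belief
    # Toy observation models p(o | s, a), one per candidate action (e.g., viewpoint).
    likelihoods = [rng.dirichlet(np.ones(num_obs), size=num_states)
                   for _ in range(num_actions)]
    true_state = 2
    for step in range(5):
        a = select_action(prior, likelihoods)                    # most informative action
        o = rng.choice(num_obs, p=likelihoods[a][true_state])    # simulated measurement
        prior = bayes_update(prior, likelihoods[a], o)           # posterior -> next prior
    print("final belief:", np.round(prior, 3))
```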

Index Terms:
Computer vision, active camera control, state estimation, information theory.
Citation:
Joachim Denzler, Christopher M. Brown, "Information Theoretic Sensor Data Selection for Active Object Recognition and State Estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 145-157, Feb. 2002, doi:10.1109/34.982896