Issue No. 08 - August (2005 vol. 27)
John M. Galbraith, IEEE
Garrett T. Kenyon
Richard W. Ziolkowski, IEEE
A population-coded algorithm, built on established models of motion processing in the primate visual system, computes the time-to-collision of a mobile robot with real-world environmental objects from video imagery. A sequence of four transformations begins with motion energy, a spatiotemporal-frequency-based computation of motion features. The subsequent processing stages extract image velocity features that are similar to, but distinct from, optic flow; compute "translation" features, which correct velocity errors including those arising from the aperture problem; and finally estimate the time-to-collision. Biologically motivated population coding distinguishes this approach from previous methods based on optic flow. A comparison of the population-coded approach with the popular optic flow algorithm of Lucas and Kanade on three types of approaching objects shows that the proposed method produces more robust time-to-collision estimates from real-world input stimuli in the presence of the aperture problem and other noise sources. The improved performance comes at increased computational cost, which would ideally be mitigated by special-purpose hardware architectures.
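As context for the comparison above, the standard optic-flow route to time-to-collision (the baseline the paper measures against, not the population-coded method itself) recovers the collision time from the divergence of the image flow field: for a camera approaching a frontoparallel surface, the flow expands radially and div(v) = 2/tau. The sketch below illustrates this relationship on a synthetic expanding flow field; the function name and the synthetic stimulus are illustrative assumptions, not code from the paper.

```python
import numpy as np

def ttc_from_flow(vx, vy, spacing=1.0):
    """Estimate time-to-collision (in frames) from a dense 2D flow field.

    For an approaching frontoparallel surface the image flow expands
    radially about the focus of expansion, and its divergence relates
    to the time-to-collision tau by div(v) = 2 / tau.
    """
    dvx_dx = np.gradient(vx, spacing, axis=1)  # d(vx)/dx
    dvy_dy = np.gradient(vy, spacing, axis=0)  # d(vy)/dy
    divergence = dvx_dx + dvy_dy
    return 2.0 / np.mean(divergence)

# Synthetic stimulus: an object 40 frames from collision produces a
# radially expanding flow field v = (x, y) / tau about the image center.
tau_true = 40.0
y, x = np.mgrid[-16:17, -16:17].astype(float)
vx, vy = x / tau_true, y / tau_true
print(ttc_from_flow(vx, vy))  # -> 40.0 (up to floating-point error)
```

In practice the flow field would come from an estimator such as Lucas-Kanade rather than a synthetic model, and it is exactly there that the aperture problem corrupts the divergence estimate, which is the failure mode the paper's "translation" features are designed to correct.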
Index Terms - Motion processing, autonomous robotics, neuromorphic computing, computer vision, depth cues, time-to-collision, optic flow.
John M. Galbraith, Garrett T. Kenyon, Richard W. Ziolkowski, "Time-to-Collision Estimation from Motion Based on Primate Visual Processing", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1279-1291, August 2005, doi:10.1109/TPAMI.2005.168