Issue No. 9, September 2008 (vol. 30), pp. 1618-1631
ABSTRACT
We develop a novel method for class-based feature matching across large changes in viewing conditions. The method is based on the property that when objects share a similar part, the similarity is preserved across viewing conditions. Given a feature and a training set of object images, we first identify the subset of objects that share this feature. The transformation of the feature's appearance across viewing conditions is determined mainly by properties of the feature, rather than of the object in which it is embedded. Therefore, the transformed feature will be shared by approximately the same set of objects. Based on this consistency requirement, corresponding features can be reliably identified from a set of candidate matches. Unlike previous approaches, the proposed scheme compares feature appearances only in similar viewing conditions, rather than across different viewing conditions. As a result, the scheme is not restricted to locally planar objects or affine transformations. The approach also does not require examples of correct matches. We show that by using the proposed method, a dense set of accurate correspondences can be obtained. Experimental comparisons demonstrate that matching accuracy is significantly improved over previous schemes. Finally, we show that the scheme can be successfully used for invariant object recognition.
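The core idea in the abstract can be sketched in a few lines: each feature is summarized by the set of training objects that contain it, and a candidate match in a new viewing condition is accepted when it is shared by approximately the same set of objects. The sketch below is a toy illustration under simplifying assumptions, not the paper's implementation: presence vectors are hard-coded binary toy data (in the paper they are estimated by appearance matching within a single viewing condition), and agreement is measured by a simple fraction-of-agreements score rather than the rank correlation used in the paper.

```python
import numpy as np

# Toy setup: 8 training objects (e.g. faces). Each feature is summarized
# by a binary "presence vector" indicating which objects contain it.
# In the paper, presence is estimated by appearance matching performed
# within one viewing condition; here the vectors are hard-coded toy data.

def consistency(p, q):
    """Fraction of training objects on which two presence vectors agree."""
    return float(np.mean(p == q))

def best_match(source_presence, candidates):
    """Pick the candidate (from the new viewing condition) whose presence
    vector best agrees with the source feature's vector."""
    scores = [consistency(source_presence, c) for c in candidates]
    return int(np.argmax(scores)), scores

# Presence vector of a feature in the source viewing condition:
frontal = np.array([1, 1, 0, 0, 1, 0, 1, 0])

# Candidate features detected in the target viewing condition:
cands = [
    np.array([0, 0, 1, 1, 0, 1, 0, 1]),  # inconsistent candidate
    np.array([1, 1, 0, 1, 1, 0, 1, 0]),  # mostly consistent candidate
    np.array([1, 0, 1, 0, 0, 1, 0, 1]),  # weakly consistent candidate
]

idx, scores = best_match(frontal, cands)
print(idx)  # index of the most consistent candidate
```

Note that the comparison never matches appearances across viewing conditions: each presence vector is computed within its own condition, which is what frees the scheme from planarity or affine-transformation assumptions.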
INDEX TERMS
Feature matching, invariant recognition, parts.
CITATION
Evgeniy Bart and Shimon Ullman, "Class-Based Feature Matching Across Unrestricted Transformations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 9, pp. 1618-1631, September 2008, doi:10.1109/TPAMI.2007.70818
REFERENCES
[1] C. Tomasi and T. Kanade, “Detection and Tracking of Point Features,” Technical Report CMU-CS-91-132, Carnegie Mellon Univ., Apr. 1991.
[2] T. Tuytelaars and L.V. Gool, “Wide Baseline Stereo Matching Based on Local, Affinely Invariant Regions,” Proc. 11th British Machine Vision Conf., pp. 412-425, 2000.
[3] K. Mikolajczyk and C. Schmid, “An Affine Invariant Interest Point Detector,” Proc. Seventh European Conf. Computer Vision, pp. 128-142, 2002.
[4] M. Brown and D. Lowe, “Invariant Features from Interest Point Groups,” Proc. 13th British Machine Vision Conf., 2002.
[5] R. Basri and D. Jacobs, “Recognition Using Region Correspondences,” Int'l J. Computer Vision, vol. 25, no. 2, pp. 141-162, 1997.
[6] S. Agarwal and D. Roth, “Learning a Sparse Representation for Object Detection,” Proc. Seventh European Conf. Computer Vision, pp. 113-127, 2002.
[7] S. Ullman, M. Vidal-Naquet, and E. Sali, “Visual Features of Intermediate Complexity and Their Use in Classification,” Nature Neuroscience, vol. 5, no. 7, pp. 682-687, 2002.
[8] D.I. Perrett, P.A.J. Smith, D.D. Potter, A.J. Mistlin, A.S. Head, A.D. Milner, and M.A. Jeeves, “Visual Cells in the Temporal Cortex Sensitive to Face View and Gaze Direction,” Proc. Royal Soc. London, Series B: Biological Sciences, vol. 223, pp. 293-317, 1985.
[9] M.J. Black and P. Anandan, “A Framework for the Robust Estimation of Optical Flow,” Proc. Fourth Int'l Conf. Computer Vision, pp. 231-236, 1993.
[10] Y. Adini, Y. Moses, and S. Ullman, “Face Recognition: The Problem of Compensating for Changes in Illumination Direction,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 721-732, July 1997.
[11] C. Wallraven and H.H. Bülthoff, “Automatic Acquisition of Exemplar-Based Representations for Recognition from Image Sequences,” Proc. CVPR 2001—Workshop Models vs. Exemplars, 2001.
[12] S.-H. Lai, “Robust Image Matching under Partial Occlusion and Spatially Varying Illumination Change,” Computer Vision and Image Understanding, vol. 78, pp. 84-98, 2000.
[13] G.D. Hager and P.N. Belhumeur, “Efficient Region Tracking with Parametric Models of Geometry and Illumination,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 10, pp. 1025-1039, Oct. 1998.
[14] M.J. Black and A.D. Jepson, “EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation,” Proc. Fourth European Conf. Computer Vision, pp. 329-342, 1996.
[15] D. Tell and S. Carlsson, “Combining Appearance and Topology for Wide Baseline Matching,” Proc. Seventh European Conf. Computer Vision, pp. 68-81, 2002.
[16] V. Ferrari, T. Tuytelaars, and L.V. Gool, “Simultaneous Object Recognition and Segmentation by Image Exploration,” Proc. Eighth European Conf. Computer Vision, 2004.
[17] D.G. Lowe, “Three-Dimensional Object Recognition from Single Two-Dimensional Images,” Artificial Intelligence, vol. 31, no. 3, pp. 355-395, 1987.
[18] P. Viola and W. Wells, “Alignment by Maximization of Mutual Information,” Int'l J. Computer Vision, vol. 24, no. 2, pp. 137-154, 1997.
[19] E. Bart, E. Byvatov, and S. Ullman, “View-Invariant Recognition Using Corresponding Object Fragments,” Proc. Eighth European Conf. Computer Vision, Part II, pp. 152-165, 2004.
[20] A. Chowdhury, R. Chellappa, and T. Keaton, “Wide Baseline Image Registration with Application to 3D Face Modeling,” IEEE Trans. Multimedia, to appear.
[21] E. Sali and S. Ullman, “Combining Class-Specific Fragments for Object Recognition,” Proc. 10th British Machine Vision Conf., pp. 203-213, 1999.
[22] D.G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” Int'l J. Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[23] H. Abdi, “Kendall Rank Correlation,” Encyclopedia of Measurement and Statistics, N.J. Salkind, ed., Sage, 2007.
[24] J.-M. Laferte, P. Perez, and F. Heitz, “Discrete Markov Image Modeling and Inference on the Quadtree,” IEEE Trans. Image Processing, vol. 9, no. 3, pp. 390-404, 2000.
[25] F.R. Kschischang, B.J. Frey, and H.-A. Loeliger, “Factor Graphs and the Sum-Product Algorithm,” IEEE Trans. Information Theory, vol. 47, no. 2, pp. 498-519, 2001.
[26] http://vision.stanford.edu/~birch/klt/, 2007.
[27] http://www.cs.brown.edu/people/black/ignc.html, 2008.
[28] http://www.inrialpes.fr/lear/people/Mikolajczyk/, 2007.
[29] T. Sim, S. Baker, and M. Bsat, “The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces,” Technical Report CMU-RI-TR-01-02, Robotics Inst., Carnegie Mellon Univ., Jan. 2001.
[30] P.J. Phillips, H. Wechsler, J. Huang, and P. Rauss, “The FERET Database and Evaluation Procedure for Face Recognition Algorithms,” Image and Vision Computing, vol. 16, no. 5, pp. 295-306, 1998.
[31] O. Chum and J. Matas, “Randomized RANSAC with $t_{d, d}$ Test,” Proc. 13th British Machine Vision Conf., 2002.
[32] “Oxford Colleges,” http://www.robots.ox.ac.uk/~vgg/data2.html, 2008.
[33] “Weizmann Institute Toy Car Database,” http://www.wisdom.weizmann.ac.il/~cars, 2008.
[34] S. Ullman and E. Bart, “Recognition Invariance Obtained by Extended and Invariant Features,” Neural Networks, vol. 17, pp. 833-848, 2004.
[35] E. Bart and S. Ullman, “Class-Based Matching of Object Parts,” Proc. IEEE Workshop Image and Video Registration, 2004.