Abstract—Many recognition procedures rely on the consistency of a subset of data features with a hypothesis as sufficient evidence for the presence of the corresponding object. We analyze the performance of such procedures using a probabilistic model, and provide expressions for the sufficient size of such data subsets that, if consistent, guarantee the validity of the hypotheses with arbitrary confidence. We focus on 2D objects and the class of affine transformations and provide, for the first time, an integrated model which accounts for the shape of the objects involved, the accuracy of the data collected, the clutter present in the scene, the class of transformations involved, the accuracy of the localization, and the confidence we would like to have in our hypotheses. Interestingly, it turns out that most of these factors can be quantified cumulatively by one parameter, denoted "effective similarity," which largely determines the sufficient subset size. The analysis is based on representing the class of instances corresponding to a model object and a group of transformations as members of a metric space, and quantifying the variation of the instances by a metric cover.
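The paper's formal development is not reproduced on this page, but the metric-cover idea the abstract describes can be illustrated with a minimal Python sketch. This is not the paper's method; the metric, the sampling scheme, and all parameter values below are hypothetical choices made for illustration. The sketch samples affine instances of a 2D model and greedily estimates the size of an epsilon-cover of the sampled instance set under a maximum-displacement metric, giving a rough empirical proxy for how a cover quantifies the variation of an instance class.

```python
import numpy as np

def instance(model_pts, A, t):
    """Apply an affine map x -> A x + t to an array of 2D model points."""
    return model_pts @ A.T + t

def instance_distance(p, q):
    """Illustrative metric on instances: the maximum displacement
    between corresponding points of two instances."""
    return np.max(np.linalg.norm(p - q, axis=1))

def greedy_cover_size(instances, eps):
    """Greedily build an eps-cover of a finite sample of instances.

    Returns the number of cover centers needed so that every sampled
    instance lies within eps of some center; this upper-bounds the
    covering number of the sample (a proxy for the instance class)."""
    centers = []
    for inst in instances:
        if all(instance_distance(inst, c) > eps for c in centers):
            centers.append(inst)
    return len(centers)

# Example: random affine instances of a unit-square model
# (perturbation scale 0.2 and eps = 0.25 are arbitrary).
rng = np.random.default_rng(0)
model = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
samples = [instance(model,
                    np.eye(2) + 0.2 * rng.standard_normal((2, 2)),
                    rng.standard_normal(2))
           for _ in range(500)]
print(greedy_cover_size(samples, eps=0.25))
```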
Index Terms—Object recognition, localization, pose estimation, similarity measures, noise models, performance analysis.
Michael Lindenbaum, "An Integrated Model for Evaluating the Amount of Data Required for Reliable Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 11, pp. 1251-1264, November 1997, doi:10.1109/34.632984.