IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 10, Oct. 2013
A Graph Lattice Approach to Maintaining and Learning Dense Collections of Subgraphs as Image Features
Eric Saund, Palo Alto Research Center, Palo Alto
Effective object and scene classification and indexing depend on extraction of informative image features. This paper shows how large families of complex image features in the form of subgraphs can be built out of simpler ones through construction of a graph lattice: a hierarchy of related subgraphs linked in a lattice. Robustness is achieved by matching many overlapping and redundant subgraphs, which allows the use of inexpensive exact graph matching instead of relying on expensive error-tolerant graph matching to a minimal set of ideal model graphs. Efficiency in exact matching is gained by exploitation of the graph lattice data structure. Additionally, the graph lattice enables methods for adaptively growing a feature space of subgraphs tailored to observed data. We develop the approach in the domain of rectilinear line art, specifically for the practical problem of document forms recognition. We are especially interested in methods that require only one or very few labeled training examples per category. We demonstrate two approaches to using the subgraph features for this purpose. Using a bag-of-words feature vector we achieve essentially single-instance learning on a benchmark forms database, following an unsupervised clustering stage. Further performance gains are achieved on a more difficult dataset using a feature voting method and feature selection procedure.
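To illustrate the core idea of counting many overlapping subgraph features by exact matching, here is a minimal, self-contained sketch. It is not the paper's actual algorithm (the paper grows the lattice incrementally and exploits its structure for efficiency); this toy version simply enumerates every connected subgraph up to a small size in a junction graph, keys each by a brute-force canonical form, and tallies a bag-of-words histogram. All names (`canonical_form`, `subgraph_histogram`) and the junction labels are illustrative assumptions.

```python
from itertools import permutations
from collections import Counter

def canonical_form(nodes, labels, adj):
    """Brute-force canonical label of a small subgraph: the minimum over
    all node orderings of (label sequence, edge set). Feasible only for
    subgraphs of a few nodes, as at the lower levels of a graph lattice."""
    nodes = list(nodes)
    best = None
    for perm in permutations(nodes):
        pos = {v: i for i, v in enumerate(perm)}
        lab = tuple(labels[v] for v in perm)
        edges = tuple(sorted((pos[u], pos[v])
                             for u in perm for v in adj[u]
                             if v in pos and pos[u] < pos[v]))
        cand = (lab, edges)
        if best is None or cand < best:
            best = cand
    return best

def subgraph_histogram(labels, adj, max_size=3):
    """Count every connected subgraph of up to max_size nodes, keyed by
    canonical form: a bag-of-words over overlapping subgraph features."""
    counts = Counter()
    seen = set()

    def grow(sub, frontier):
        key = frozenset(sub)
        if key in seen:          # each distinct node set counted once
            return
        seen.add(key)
        counts[canonical_form(sub, labels, adj)] += 1
        if len(sub) == max_size:
            return
        for v in frontier:       # extend by one adjacent node at a time
            grow(sub | {v}, (frontier | adj[v]) - sub - {v})

    for v in labels:
        grow({v}, set(adj[v]))
    return counts

# Toy rectilinear line art: a rectangle, i.e. four 'L' junctions in a cycle.
labels = {1: 'L', 2: 'L', 3: 'L', 4: 'L'}
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
hist = subgraph_histogram(labels, adj, max_size=3)
# 3 distinct canonical forms (single L, L-L edge, L-L-L path), 4 instances each
```

Even in this toy form, the redundancy is visible: the single rectangle contributes twelve overlapping subgraph occurrences, so a noisy or missing junction perturbs only some of the counts rather than invalidating one brittle model match.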
Junctions, lattices, NIST, vocabulary, vectors, histograms, support vector machine classification, weighted voting, graph lattice, subgraph matching, document classification, line-art analysis, CMD distance
Eric Saund, "A Graph Lattice Approach to Maintaining and Learning Dense Collections of Subgraphs as Image Features", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 10, pp. 2323-2339, Oct. 2013, doi:10.1109/TPAMI.2012.267