IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 34, no. 12, Dec. 2012



pp. 2393-2406

Jun Wang , Bus. Analytics & Math. Sci. Dept., IBM T.J. Watson Res. Center, Yorktown Heights, NY, USA

S. Kumar , Google Res., New York, NY, USA

Shih-Fu Chang , Dept. of Electr. & Comput. Eng., Columbia Univ., New York, NY, USA

DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2012.48

ABSTRACT

Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. Popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections; the resulting hashes are either not very accurate or inefficient. Moreover, these methods are designed for a given metric similarity. In contrast, semantic similarity is usually given in terms of pairwise labels of samples. There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when the labeled data are scarce or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods: orthogonal hashing, nonorthogonal hashing, and sequential hashing. In particular, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques.
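The abstract's objective, empirical fitness on labeled pairs plus a variance-maximizing regularizer over all data, can be sketched in a few lines of NumPy. This is an illustrative sketch of the orthogonal variant only, under the common eigenvector relaxation; all variable names (`X`, `X_l`, `S`, `eta`) and the exact relaxation are assumptions, not the authors' reference implementation.

```python
import numpy as np

def ssh_orthogonal(X, X_l, S, n_bits, eta=1.0):
    """Sketch of semi-supervised hashing with orthogonal projections.

    X      : (d, n) all data, labeled + unlabeled, assumed zero-centered
    X_l    : (d, l) labeled subset of X
    S      : (l, l) symmetric pairwise label matrix
             (+1 similar, -1 dissimilar, 0 unknown)
    n_bits : number of hash bits to learn
    eta    : weight of the variance regularizer
    """
    # "Adjusted covariance": empirical fitness on labeled pairs plus
    # a variance-maximizing regularizer over both labeled and unlabeled data.
    M = X_l @ S @ X_l.T + eta * (X @ X.T)
    # Under orthogonality constraints, the relaxed problem is solved by
    # the top-n_bits eigenvectors of the symmetric matrix M.
    eigvals, eigvecs = np.linalg.eigh(M)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:n_bits]]  # (d, n_bits)
    # Binary codes: sign of the linear projections.
    codes = (W.T @ X) > 0  # (n_bits, n) boolean array
    return W, codes
```

The nonorthogonal and sequential variants described in the abstract modify this step, e.g., sequential hashing would reweight the pairs in `S` after each learned bit so the next bit focuses on pairs the previous bits got wrong, in a boosting-like manner.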

INDEX TERMS

learning (artificial intelligence), content-based retrieval, file organisation, image retrieval, orthogonal hashing, semisupervised hashing method, large-scale search, hashing-based approximate nearest neighbor search, ANN search, computational efficiency, memory efficiency, locality sensitive hashing, spectral hashing, random projections, principal projections, semantic similarity, SSH framework, information theoretic regularizer, unlabeled sets, nonorthogonal hashing, sequential learning paradigm, content-based image retrieval, sequential hashing method, Artificial neural networks, Semantics, Encoding, Extraterrestrial measurements, Binary codes, Semisupervised learning, Sequential analysis, sequential hashing, Hashing, nearest neighbor search, binary codes, semi-supervised hashing, pairwise labels

CITATION

Jun Wang, S. Kumar, Shih-Fu Chang, "Semi-Supervised Hashing for Large-Scale Search", *IEEE Transactions on Pattern Analysis & Machine Intelligence*, vol. 34, no. 12, pp. 2393-2406, Dec. 2012, doi:10.1109/TPAMI.2012.48