2011 22nd International Workshop on Database and Expert Systems Applications (DEXA 2011)
Toulouse, France
Aug. 29, 2011 to Sept. 2, 2011
ISSN: 1529-4188
ISBN: 978-0-7695-4486-1
pp: 185-189
ABSTRACT
Classification tasks in information retrieval deal with document collections of enormous size, which makes the ratio between the document set underlying the learning process and the set of unseen documents very small. With a ratio close to zero, the evaluation of a model-classifier combination's generalization ability with leave-n-out methods or cross-validation becomes unreliable: the generalization error of a complex model (with a more complex hypothesis structure) might be underestimated compared to the generalization error of a simple model (with a less complex hypothesis structure). In this situation, optimizing the bias-variance tradeoff to select among these models will lead one astray. To address this problem we introduce the idea of robust models, where one intentionally restricts the hypothesis structure within the model formation process. We observe that, although such a robust model entails a higher test error, its performance "in the wild" outperforms that of the model that would normally be chosen under the perspective of the best bias-variance tradeoff. We present two case studies: (1) a categorization task, which demonstrates that robust models are more stable in retrieval situations when training data is scarce, and (2) a genre identification task, which underlines the practical relevance of robust models.
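The setting described in the abstract can be illustrated with a small, self-contained experiment. The following Python sketch is not the authors' experimental setup; it uses synthetic data and assumed parameters (model choices, regularization strengths, sample sizes) to contrast a complex model with a deliberately restricted ("robust") one when labeled data is scarce relative to the pool of unseen documents.

# Illustrative sketch only (not the paper's method): a tiny labeled set versus a
# large pool of "unseen" documents, comparing a complex and a restricted model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Large synthetic collection; only a tiny fraction is available for learning.
X, y = make_classification(n_samples=20000, n_features=500, n_informative=20,
                           random_state=0)
X_train, y_train = X[:200], y[:200]      # scarce training data
X_wild, y_wild = X[200:], y[200:]        # documents "in the wild"

# Complex model: weak regularization, i.e. a richer effective hypothesis structure.
complex_model = LogisticRegression(C=100.0, max_iter=2000)
# Robust model: strong regularization intentionally restricts the hypothesis
# structure within the model formation process.
robust_model = LogisticRegression(C=0.01, max_iter=2000)

for name, model in [("complex", complex_model), ("robust", robust_model)]:
    cv_acc = cross_val_score(model, X_train, y_train, cv=5).mean()
    wild_acc = model.fit(X_train, y_train).score(X_wild, y_wild)
    print(f"{name:8s}  cross-validation acc: {cv_acc:.3f}   wild acc: {wild_acc:.3f}")

Comparing the cross-validation estimate on the scarce training set with the accuracy on the large held-out pool makes the abstract's point visible: the estimate obtained on the small set need not rank the two models the same way as their performance on the unseen collection.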
INDEX TERMS
retrieval model, bias, overfitting, machine learning
CITATION
Nedim Lipka, Benno Stein, "Robust Models in Information Retrieval," 2011 22nd International Workshop on Database and Expert Systems Applications (DEXA 2011), pp. 185-189, 2011, doi:10.1109/DEXA.2011.73