Issue No. 12, December 2011 (vol. 33)
ISSN: 0162-8828
pp. 2492-2505
Carole J. Twining, University of Manchester, Manchester
Christopher J. Taylor, The University of Manchester, Manchester
In statistical modeling, various techniques are used to build models from training data. Quantitative comparison of modeling techniques requires a method for evaluating the quality of the fit between the model probability density function (pdf) and the training data. One graph-based measure that has been used for this purpose is the specificity. We consider the large-numbers limit of the specificity and derive expressions which show that it can be considered an estimator of the divergence between the unknown pdf from which the training data were drawn and the model pdf built from the training data. Experiments using artificial data show that these limiting large-number relations yield good quantitative and qualitative predictions of the behavior of the measured specificity, even for small numbers of training examples and in some extreme cases. We demonstrate that specificity can provide a more sensitive measure of the difference between various modeling methods than some previous graph-based techniques. Key points are illustrated using real data sets. We thus establish a proper theoretical basis for the previously ad hoc concept of specificity and obtain useful insights into its application in the analysis of real data.
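The abstract describes specificity as a graph-based quantity relating samples drawn from the model pdf to the training examples. The full definition is not reproduced on this page, so the sketch below assumes the form commonly used in the shape-modeling literature: the mean Euclidean distance from each model-generated sample to its nearest training example. The function name specificity and the Gaussian toy data are illustrative assumptions, not code from the paper.

import numpy as np

def specificity(model_samples, training_data):
    # Mean nearest-neighbour distance from model-generated samples to the
    # training set (an assumed form of the graph-based specificity measure).
    model_samples = np.asarray(model_samples, dtype=float)
    training_data = np.asarray(training_data, dtype=float)
    # Pairwise squared Euclidean distances: shape (n_samples, n_training).
    d2 = ((model_samples[:, None, :] - training_data[None, :, :]) ** 2).sum(axis=-1)
    # Distance from each model sample to its nearest training example, averaged.
    return float(np.sqrt(d2.min(axis=1)).mean())

# Toy illustration: fit a Gaussian model to Gaussian training data,
# draw samples from the fitted model pdf, and measure specificity.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 2))
mean, cov = train.mean(axis=0), np.cov(train, rowvar=False)
samples = rng.multivariate_normal(mean, cov, size=1000)
print(specificity(samples, train))

As the number of model samples grows, this average approaches the large-numbers limit analyzed in the paper, which is what connects the measured specificity to a divergence between the unknown data-generating pdf and the model pdf.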
Specificity, generalization, assessment of modeling, graph-based estimators, entropy estimation, estimation of statistical distance, estimation of divergence, nearest-neighbor estimators, cross entropy, Kullback-Leibler divergence.
Carole J. Twining, Christopher J. Taylor, "Specificity: A Graph-Based Estimator of Divergence", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 33, no. 12, pp. 2492-2505, December 2011, doi:10.1109/TPAMI.2011.90