2018 24th International Conference on Pattern Recognition (ICPR) (2018)
Beijing, China
Aug. 20, 2018 to Aug. 24, 2018
ISSN: 1051-4651
ISBN: 978-1-5386-3789-0
pp: 874-879
Luis P. F. Garcia , InfAI/Leipzig University Leipzig, Germany
Ana C. Lorena , Universidade Federal de São Paulo and Instituto Tecnológico de Aeronáutica, São José dos Campos, SP, Brazil
Marcilio C. P. de Souto , LIFO/University of Orleans, Orleans, France
Tin Kam Ho , IBM Watson, Yorktown Heights, NY, USA
ABSTRACT
Applying machine learning to new and unfamiliar domains calls for increasing automation in choosing a learning algorithm suitable for the data arising from each domain. Meta-learning can address this need: it has been widely used in recent years to support the recommendation of the most suitable algorithms for a new dataset. Using complexity measures can increase systematic comprehension of the meta-models and also makes it possible to differentiate the performance of a set of techniques by taking into account the overlap between classes imposed by feature values, as well as the separability and distribution of the data points. In this paper we compare the effectiveness of several standard regression models in predicting the accuracies of classifiers for classification problems from the OpenML repository. We show that the models can predict the classifiers' accuracies with low mean squared error and can identify the best classifier for a problem, yielding statistically significant improvements over a randomly chosen classifier or a fixed classifier believed to be good on average.
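To make the meta-learning setup concrete, the sketch below illustrates the general idea in a minimal form: a data complexity measure (here a simple maximum Fisher's discriminant ratio, often called F1 in the complexity-measure literature) is computed per dataset and used as a meta-feature for a regressor that predicts a classifier's test accuracy. The choice of measure, classifier, regressor, and the synthetic datasets are illustrative assumptions, not the authors' exact experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def fisher_f1(X, y):
    """Maximum Fisher's discriminant ratio over features (binary classes)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return (num / den).max()


rng = np.random.RandomState(0)
meta_X, meta_y = [], []
for seed in range(40):  # 40 synthetic "datasets" of varying difficulty
    sep = rng.uniform(0.2, 2.0)  # vary class separability
    X, y = make_classification(n_samples=300, n_features=5,
                               n_informative=3, n_redundant=0,
                               class_sep=sep, random_state=seed)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=seed)
    acc = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)
    meta_X.append([fisher_f1(X, y)])  # meta-feature: complexity measure
    meta_y.append(acc)                # meta-target: observed accuracy

# Meta-level regression: predict accuracy from complexity alone.
meta_X, meta_y = np.array(meta_X), np.array(meta_y)
reg = RandomForestRegressor(random_state=0).fit(meta_X[:30], meta_y[:30])
mse = mean_squared_error(meta_y[30:], reg.predict(meta_X[30:]))
print(f"meta-level MSE: {mse:.4f}")
```

In the paper this idea is scaled up: many complexity measures serve as meta-features, real OpenML datasets replace the synthetic ones, and one regressor is fitted per candidate classifier so that the classifier with the highest predicted accuracy can be recommended.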
INDEX TERMS
Complexity theory, Prediction algorithms, Support vector machines, Measurement uncertainty, Density measurement, Volume measurement, Training
CITATION

L. P. Garcia, A. C. Lorena, M. C. de Souto and T. K. Ho, "Classifier Recommendation Using Data Complexity Measures," 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 2018, pp. 874-879.
doi:10.1109/ICPR.2018.8545110