Sum Versus Vote Fusion in Multiple Classifier Systems
January 2003 (vol. 25, no. 1)
pp. 110-115

Abstract—Amid conflicting experimental evidence on the relative merits of the two rules, we investigate the Sum and majority Vote combining rules in a two-class case, under the assumption that the experts are of equal strength and that their estimation errors are conditionally independent and identically distributed. We show analytically that, for Gaussian estimation error distributions, Sum always outperforms Vote. For heavy-tailed distributions, we demonstrate by simulation that Vote may outperform Sum. Results on synthetic data confirm the theoretical predictions. Experiments on real data support the general findings, but also show the effect of the usual assumptions of conditional independence, identical error distributions, and common expert target outputs not being fully satisfied.
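The behaviour described in the abstract can be illustrated with a small Monte Carlo sketch (this is not the paper's analytical derivation; the number of experts, the decision margin, and the noise scale below are illustrative choices). Each of the equally strong experts observes the true posterior of the correct class corrupted by i.i.d. additive estimation error; Sum averages the estimates, Vote takes the majority of per-expert decisions. With Gaussian errors, averaging shrinks the error variance, so Sum wins; with Cauchy (heavy-tailed) errors, the average of i.i.d. Cauchy variables has the same distribution as a single one, so Sum gains nothing while Vote still benefits from the majority:

```python
import numpy as np

rng = np.random.default_rng(0)

def fusion_error_rates(noise_sampler, n_experts=5, margin=0.2, trials=200_000):
    """Monte Carlo error rates of Sum and majority Vote fusion.

    The true a posteriori probability of the correct class is
    0.5 + margin; each expert observes it with i.i.d. additive
    estimation error drawn from `noise_sampler`.
    """
    true_p = 0.5 + margin
    estimates = true_p + noise_sampler((trials, n_experts))
    # Sum (average) rule: the fused decision is wrong when the mean
    # estimate falls below the 0.5 decision threshold.
    sum_err = np.mean(estimates.mean(axis=1) < 0.5)
    # Majority Vote: each expert votes for the correct class when its
    # own estimate exceeds 0.5; the fusion fails without a majority.
    votes = (estimates > 0.5).sum(axis=1)
    vote_err = np.mean(votes <= n_experts // 2)
    return sum_err, vote_err

sigma = 0.5  # illustrative noise scale
gauss = lambda shape: rng.normal(0.0, sigma, shape)
cauchy = lambda shape: sigma * rng.standard_cauchy(shape)  # heavy-tailed

print("Gaussian errors (sum, vote):", fusion_error_rates(gauss))
print("Cauchy errors   (sum, vote):", fusion_error_rates(cauchy))
```

Under these settings the Gaussian run shows a lower Sum error than Vote error, while the Cauchy run reverses the ordering, consistent with the paper's qualitative conclusion.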

[1] F.M. Alkoot and J. Kittler, “Experimental Evaluation of Expert Fusion Strategies,” Pattern Recognition Letters, vol. 20, no. 11, pp. 11-13, 1999.
[2] L. Breiman, “Bagging Predictors,” Machine Learning, vol. 24, pp. 123-140, 1996.
[3] The Extended M2VTS Database, Research/VSSPxm2vtsdb/, 2002.
[4] T. Dietterich, “An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization,” Machine Learning, pp. 1-22, 1998.
[5] R.P.W. Duin and D.M.J. Tax, “Experiments With Classifier Combining Rules,” Multiple Classifier Systems, J. Kittler and F. Roli, eds., pp. 16-29, Springer, 2000.
[6] L.K. Hansen and P. Salamon, “Neural Network Ensembles,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993-1001, Oct. 1990.
[7] T.K. Ho, J.J. Hull, and S.N. Srihari, “Decision Combination in Multiple Classifiers Systems,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 16, no. 1, pp. 66-75, Jan. 1994.
[8] J. Kittler, M. Hatef, R. Duin, and J. Matas, “On Combining Classifiers,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, Mar. 1998.
[9] L. Lam and C. Suen, “Application of Majority Voting to Pattern Recognition: An Analysis of Its Behaviour and Performance,” IEEE Trans. Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 27, no. 5, pp. 553-568, 1997.
[10] J. Matas, M. Hamouz, K. Jonsson, J. Kittler, Y. Li, C. Kotropoulos, A. Tefas, I. Pitas, T. Tan, H. Yan, F. Smeraldi, J. Bigun, N. Capdevielle, W. Gerstner, S. Ben-Yacoub, Y. Abdeljaoued, and E. Mayoraz, “Comparison of Face Verification Results on the XM2VTS Database,” Proc. Int'l Conf. Pattern Recognition, 2000.
[11] A.F.R. Rahman and M.C. Fairhurst, “Enhancing Multiple Expert Decision Combination Strategies through Exploitation of A Priori Information Sources,” Proc. Vision Image and Signal Processing, vol. 146, no. 1, pp. 40-49, 1999.
[12] C. Suen, R. Legault, C. Nadal, M. Cheriet, and L. Lam, “Building a New Generation of Handwriting Recognition Systems,” Pattern Recognition Letters, vol. 14, pp. 303-315, 1993.
[13] D.M.J. Tax, M. van Breukelen, R.P.W. Duin, and J. Kittler, “Combining Multiple Classifiers by Averaging or by Multiplying,” Pattern Recognition, vol. 33, no. 9, pp. 1475-1485, 2000.
[14] K. Tumer and J. Ghosh, “Analysis of Decision Boundaries in Linearly Combined Neural Classifiers,” Pattern Recognition, vol. 29, no. 2, pp. 341-348, 1996.
[15] L. Xu, A. Krzyzak, and C.Y. Suen, “Methods of Combining Multiple Classifiers and Their Applications to Handwriting Recognition,” IEEE Trans. Systems, Man, and Cybernetics, vol. 22, no. 3, pp. 418-435, 1992.

Index Terms:
Multiple classifiers, fusion rules, estimation error.
J. Kittler, F.M. Alkoot, "Sum Versus Vote Fusion in Multiple Classifier Systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 1, pp. 110-115, Jan. 2003, doi:10.1109/TPAMI.2003.1159950