Issue No. 01 - January (2003 vol. 25)
<p><b>Abstract</b>—Amidst conflicting experimental evidence about which rule is superior, we investigate the Sum and majority Vote combining rules in a two-class case, under the assumption that the experts are of equal strength and that their estimation errors are conditionally independent and identically distributed. We show analytically that, for Gaussian estimation error distributions, Sum always outperforms Vote. For heavy-tailed distributions, we demonstrate by simulation that Vote may outperform Sum. Results on synthetic data confirm the theoretical predictions. Experiments on real data support the general findings, but also reveal the effect of violations of the usual assumptions, namely conditional independence, identical error distributions, and common target outputs of the experts.</p>
<p><b>Index Terms</b>—Multiple classifiers, fusion rules, estimation error.</p>
F. Alkoot and J. Kittler, "Sum Versus Vote Fusion in Multiple Classifier Systems," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 1, pp. 110-115, Jan. 2003.
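The two fusion rules compared in the abstract can be illustrated with a small Monte Carlo sketch. This is not the paper's code: the expert count, noise scale, uniform prior on the posterior, and the use of a Cauchy distribution to model heavy tails are all assumptions made here for illustration.

```python
import numpy as np

def fusion_error_rates(n_experts=5, n_trials=200_000, noise="gaussian",
                       scale=0.3, seed=0):
    """Compare Sum and majority Vote fusion for a two-class problem with
    i.i.d. expert estimation errors (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    # True posterior probability of class 1, and the true class drawn from it.
    p_true = rng.uniform(0.0, 1.0, n_trials)
    y = (rng.uniform(0.0, 1.0, n_trials) < p_true).astype(int)
    # Each expert observes the true posterior plus an i.i.d. estimation error.
    if noise == "gaussian":
        err = rng.normal(0.0, scale, (n_experts, n_trials))
    else:
        # Heavy-tailed errors, modelled here with a Cauchy distribution.
        err = scale * rng.standard_cauchy((n_experts, n_trials))
    est = p_true + err  # raw scores; not clipped to [0, 1] in this sketch
    # Sum rule: average the experts' estimates, then threshold at 0.5.
    sum_pred = (est.mean(axis=0) > 0.5).astype(int)
    # Vote rule: threshold each expert first, then take the majority.
    vote_pred = ((est > 0.5).sum(axis=0) > n_experts / 2).astype(int)
    return float((sum_pred != y).mean()), float((vote_pred != y).mean())
```

The intuition matches the abstract's findings: under Gaussian errors, averaging shrinks the error standard deviation by a factor of the square root of the number of experts, so Sum should come out ahead; under Cauchy errors, the average of the estimates has the same heavy-tailed distribution as a single estimate, while the majority vote remains robust, so Vote may then win.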