Issue No. 04 - April (1999 vol. 21)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/34.761264
<p><b>Abstract</b>—The anticipated behavior of the n-tuple classification system is that it gives the highest output score for the class to which the input example actually belongs. By performing a theoretical analysis of how the output scores are related to the underlying probability distributions of the data, this paper shows that this is in general not to be expected. The theoretical results explain the behavior observed in experimental studies. The analysis also gives valuable insight into how the n-tuple classifier can be improved to deal with skewed training priors, which until now has been a hard problem for the architecture. It is shown that relating an output score to the probability that a given class generated the data makes it possible to design the n-tuple net to operate as a close approximation to the Bayes estimator, and it is specifically illustrated that this approximation can be obtained by modifying the decision criteria. In real cases, the underlying example distributions are unknown, so the optimum way to treat the output scores cannot be calculated theoretically. However, it is shown that the feasibility of performing leave-one-out cross-validation tests in n-tuple networks makes it possible to obtain proper processing of the scores in such cases.</p>
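For readers unfamiliar with the architecture the abstract discusses, the following is a minimal sketch of a basic n-tuple (RAM-net) classifier with the standard highest-score decision rule. The class name, parameters, and set-based RAM representation are illustrative assumptions, not from the paper; the paper's contribution concerns improving the decision criteria applied to the scores this kind of net produces.

```python
import random

class NTupleClassifier:
    """Minimal n-tuple (RAM-net) classifier sketch.

    Each of `num_tuples` tuples samples `n` bit positions from the
    binary input. Training marks the addressed RAM cell as seen for
    the example's class; a class's output score for a test input is
    the number of tuples whose addressed cell was set during training.
    """

    def __init__(self, input_bits, n=4, num_tuples=10, seed=0):
        rng = random.Random(seed)
        # Each tuple is a fixed random selection of n input bit positions.
        self.tuples = [rng.sample(range(input_bits), n)
                       for _ in range(num_tuples)]
        self.rams = {}  # class label -> one set of seen addresses per tuple

    def _addresses(self, x):
        # x is a sequence of 0/1 bits; form one RAM address per tuple.
        return [tuple(x[i] for i in positions) for positions in self.tuples]

    def fit(self, examples, labels):
        for x, y in zip(examples, labels):
            rams = self.rams.setdefault(y, [set() for _ in self.tuples])
            for ram, addr in zip(rams, self._addresses(x)):
                ram.add(addr)

    def scores(self, x):
        addrs = self._addresses(x)
        return {y: sum(addr in ram for ram, addr in zip(rams, addrs))
                for y, rams in self.rams.items()}

    def predict(self, x):
        # Standard decision rule: the class with the highest output
        # score wins. The paper argues this rule is not optimal in
        # general, e.g. under skewed training priors.
        s = self.scores(x)
        return max(s, key=s.get)
```

On two trivially separable bit patterns, training on an all-zeros example for one class and an all-ones example for another, the highest-score rule recovers the correct labels; the paper's analysis addresses the less benign cases where it does not.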
n-tuple classifier, maximum likelihood, Bayes, cross-validation, RAM-net.
C. Linneberg and T. M. Jørgensen, "Theoretical Analysis and Improved Decision Criteria for the n-Tuple Classifier," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 4, pp. 336-347, Apr. 1999.