Issue No. 02 - February (2001 vol. 23)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/34.908975
<p><b>Abstract</b>—Structuring the covariance matrix reduces the number of parameters that must be estimated from the training data and, asymptotically, does not increase the generalization error as both the dimensionality and the training sample size grow. We propose a method that benefits from approximately correct assumptions of first-order tree dependence between the components of the feature vector: a structured estimate of the covariance matrix is used to decorrelate and scale the data, and a single-layer perceptron is then trained in the transformed feature space. We show that training the perceptron can reduce the negative effects of inexact a priori information. Experiments on 13 artificial and 10 real-world data sets show that the first-order tree-type dependence model is the most preferable of the two dozen covariance matrix structures investigated.</p>
First-order tree-type dependence, a priori information, classification, generalization, sample size, dimensionality.
A. Saudargiene and S. Raudys, "First-Order Tree-Type Dependence between Variables and Classification Performance," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 23, no. 2, pp. 233-239, Feb. 2001.
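The pipeline the abstract describes, decorrelating and scaling the data with a structured covariance estimate and then training a single-layer perceptron in the transformed space, can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes the first-order tree is a fixed chain x1 → x2 → … → xd (a chain is a special case of a tree; the paper selects the dependence structure from data and compares some two dozen covariance structures). All function names are illustrative.

```python
import numpy as np

def fit_chain_whitener(X):
    """Estimate a chain-structured covariance model: each feature is
    regressed on its immediate predecessor only (first-order dependence)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    var = Xc.var(axis=0)
    b = np.zeros(X.shape[1])                      # regression coefficients on the chain parent
    b[1:] = (Xc[:, 1:] * Xc[:, :-1]).mean(axis=0) / var[:-1]
    s2 = var.copy()
    s2[1:] = var[1:] - b[1:] ** 2 * var[:-1]      # residual variances along the chain
    return mu, b, np.sqrt(s2)

def chain_whiten(X, mu, b, s):
    """Decorrelate and scale: replace each feature by its standardized
    residual after removing the contribution of its chain parent."""
    Xc = X - mu
    Z = Xc.copy()
    Z[:, 1:] -= b[1:] * Xc[:, :-1]
    return Z / s

def train_perceptron(Z, y, epochs=50, lr=0.1):
    """Plain single-layer perceptron on the transformed features; y in {-1, +1}."""
    w, bias = np.zeros(Z.shape[1]), 0.0
    for _ in range(epochs):
        for zi, yi in zip(Z, y):
            if yi * (zi @ w + bias) <= 0:         # misclassified sample -> update
                w += lr * yi * zi
                bias += lr * yi
    return w, bias
```

If the chain assumption is approximately correct, the transformed features are nearly uncorrelated with unit variance, so the perceptron starts from well-conditioned inputs; per the abstract, its training can then compensate for inexactness in the assumed structure.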