Issue No. 3, March 2010 (vol. 22)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TKDE.2009.107
H. Joel Trussell, NC State University, Raleigh
Huiwen Zeng, Synopsys, Portland
Reducing the dimensionality of a classification problem produces a more computationally efficient system. Since the dimensionality of a classification problem is equivalent to the number of neurons in the first hidden layer of a network, this work shows how to eliminate neurons in that layer and thereby simplify the problem. In cases where the dimensionality cannot be reduced without some degradation in classification performance, we formulate and solve a constrained optimization problem that allows a trade-off between dimensionality and performance. We introduce a novel penalty function and combine it with bilevel optimization to solve the constrained problem. On both synthetic and applied problems, our method outperforms other known penalty functions such as weight decay, weight elimination, and Hoyer's function. An example of dimensionality reduction for hyperspectral image classification demonstrates the practicality of the new method. Finally, we show how the method can be extended to multilayer and multiclass neural network problems.
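The abstract does not give the paper's exact penalty, but the neuron-pruning effect it describes can be illustrated with a generic mixed-norm (group l2/l1) penalty: the l2 norm is taken over each hidden neuron's incoming weight vector and the l1 norm across neurons, so entire neurons are driven to zero. The function name and matrix layout below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mixed_norm_penalty(W):
    """Illustrative mixed l2/l1 (group) penalty on first-layer weights.

    W has shape (n_inputs, n_hidden); each column holds the incoming
    weights of one hidden neuron. Taking the l2 norm within each column
    and summing (l1) across columns encourages whole columns -- i.e.,
    whole neurons -- to become zero, which is the pruning effect the
    abstract describes. This is a sketch of the general idea, not the
    paper's specific penalty function.
    """
    column_norms = np.sqrt(np.sum(W ** 2, axis=0))  # l2 norm per neuron
    return np.sum(column_norms)                     # l1 across neurons

# Example: neurons 0 and 2 active, neuron 1 fully pruned (all zeros).
W = np.array([[3.0, 0.0, 0.0],
              [4.0, 0.0, 1.0]])
penalty = mixed_norm_penalty(W)  # column norms 5.0, 0.0, 1.0 -> 6.0
```

Added to the training loss with a weighting coefficient, such a term trades classification performance against the number of surviving first-layer neurons, which is the trade-off the constrained formulation makes explicit.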
Index terms: pruning, neural networks, penalty function, mixed-norm penalty.
H. Joel Trussell, Huiwen Zeng, "Constrained Dimensionality Reduction Using a Mixed-Norm Penalty Function with Neural Networks", IEEE Transactions on Knowledge & Data Engineering, vol. 22, no. 3, pp. 365-380, March 2010, doi:10.1109/TKDE.2009.107