Issue No. 3 - March 1996 (vol. 29)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/2.485895
<p>The highly nonlinear input-to-output mapping of a neural network makes it difficult to explain how the network arrives at its predictions. Thus, although their predictive accuracy is satisfactory for applications ranging from finance to medicine, neural networks have long been regarded as "black boxes." The authors propose to understand a neural network via rules extracted from it. Their algorithm, NeuroRule, extracts rules from a standard feed-forward neural network that has been trained and pruned with the simple, widely used back-propagation method. The extracted rules, a one-to-one mapping of the pruned network, are compact and comprehensible and do not involve weight values. The authors' experiments show that neural-network-based rules are as accurate and compact as decision-tree-based rules, which are widely regarded as explicit and understandable. Thus, using rules extracted by NeuroRule, neural networks become understandable and could lose their black-box reputation. </p>
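<p>To illustrate the general idea of reading symbolic rules off a pruned network, here is a minimal sketch. The network, its weights, and the rule format are all hypothetical and purely illustrative; they are not taken from the paper, and NeuroRule's actual extraction procedure differs. The sketch shows only the core intuition: once pruning leaves few connections, the surviving binary input combinations that activate a unit can be enumerated and expressed as if-then rules that no longer mention weight values.</p>

```python
import itertools

# Hypothetical pruned hidden unit: inputs x1 and x3 survive pruning,
# x2 was removed. Weights are illustrative, not from the paper.
W_HIDDEN = {"x1": 2.0, "x3": -3.0}
BIAS_H = -1.0

def hidden_active(inputs):
    """Return True if the hidden unit's weighted sum exceeds zero."""
    s = BIAS_H + sum(W_HIDDEN[name] * inputs[name] for name in W_HIDDEN)
    return s > 0

def extract_rules():
    """Enumerate all combinations of the surviving binary inputs and
    express the activating ones as weight-free symbolic rules."""
    rules = []
    for x1, x3 in itertools.product([0, 1], repeat=2):
        if hidden_active({"x1": x1, "x3": x3}):
            rules.append(f"IF x1={x1} AND x3={x3} THEN unit active")
    return rules

print(extract_rules())  # one rule: only x1=1, x3=0 clears the bias
```

<p>Because only two inputs survive pruning, exhaustive enumeration is cheap; this is why pruning is a prerequisite for compact rules in this style of extraction.</p>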
R. Setiono and H. Liu, "Symbolic Representation of Neural Networks," in Computer, vol. 29, no. 3, pp. 71-77, 1996.