Issue No. 3 - March 1996 (vol. 29)
pp. 71-77
ABSTRACT
The highly nonlinear nature of neural networks' input-to-output mapping makes it difficult to describe how they arrive at predictions. Thus, although their predictive accuracy is satisfactory for applications from finance to medicine, they have long been regarded as "black boxes." The authors propose to understand a neural network via rules extracted from it. Their algorithm, NeuroRule, extracts rules from a standard feed-forward neural network, with network training and pruning via the simple, widely used back-propagation method. The extracted rules, a one-to-one mapping of the pruned network, are compact and comprehensible and do not involve weight values. The authors' experiments show that neural-network-based rules are as accurate and compact as decision-tree-based rules, which are widely regarded as explicit and understandable. With rules extracted by NeuroRule, neural networks become understandable and could lose their black-box reputation.
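The abstract describes a three-stage pipeline: train a feed-forward network with back-propagation, prune it, and then read symbolic rules off the pruned network. The sketch below is not the authors' NeuroRule implementation; it is a minimal illustration of that pipeline under stated assumptions. The synthetic dataset, network size, weight-decay penalty, pruning threshold, and the rule-extraction strategy (enumerating the truth table of the inputs that survive pruning) are all illustrative choices made to keep the example self-contained.

```python
# Minimal sketch of a train -> prune -> extract-rules pipeline.
# NOT the NeuroRule algorithm from the paper; an illustrative stand-in.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: the target is x0 AND (NOT x2); x1 and x3 are noise.
X = np.array(list(itertools.product([0, 1], repeat=4)), dtype=float)
y = (X[:, 0] * (1 - X[:, 2])).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer feed-forward network trained with plain back-propagation.
n_in, n_hid = X.shape[1], 3
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, 1))
b2 = np.zeros(1)

LR, DECAY = 0.1, 1e-3           # illustrative hyperparameters
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)     # hidden activations
    out = sigmoid(h @ W2 + b2)   # network prediction
    d_out = (out - y) * out * (1 - out)      # MSE gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # back-propagated to hidden layer
    W2 -= LR * (h.T @ d_out + DECAY * W2)    # weight decay nudges unused
    b2 -= LR * d_out.sum(axis=0)             # connections toward zero
    W1 -= LR * (X.T @ d_h + DECAY * W1)
    b1 -= LR * d_h.sum(axis=0)

# Prune: drop connections whose magnitude falls below a hand-picked threshold.
THRESHOLD = 0.5
W1[np.abs(W1) < THRESHOLD] = 0.0
W2[np.abs(W2) < THRESHOLD] = 0.0

def predict(x):
    out = sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)
    return int(out[0] >= 0.5)

# Extract rules: only inputs still connected after pruning can influence the
# output, so enumerate their values, read off the pruned network's prediction,
# and report each positive combination as an IF-THEN rule without weight values.
relevant = [i for i in range(n_in) if np.abs(W1[i]).sum() > 0]
rules = []
for values in itertools.product([0, 1], repeat=len(relevant)):
    x = np.zeros(n_in)
    x[relevant] = values
    if predict(x) == 1:
        cond = " AND ".join(f"x{i}={v}" for i, v in zip(relevant, values))
        rules.append(f"IF {cond} THEN class=1")

print("relevant inputs after pruning:", relevant)
print("\n".join(rules) if rules else "no positive rules extracted")
```

Because the rules are generated directly from the pruned network's behavior, they mirror it exactly (the "one-to-one mapping" the abstract mentions) while containing no weight values, only conditions on input attributes.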
CITATION
Rudy Setiono and Huan Liu, "Symbolic Representation of Neural Networks," Computer, vol. 29, no. 3, pp. 71-77, March 1996, doi:10.1109/2.485895.