Issue No. 03 - March 1993 (vol. 42)
pp. 281-290
ABSTRACT
Through parallel processing, low-precision fixed-point hardware can be used to build a very high speed neural network computing engine, where the low precision yields a drastic reduction in system cost. The reduced silicon area required to implement a single processing unit is exploited by placing multiple processing units on a single piece of silicon and operating them in parallel. The important question that arises is how much precision is required to implement neural network algorithms on such low-precision hardware. A theoretical analysis of the error due to finite-precision computation was undertaken to determine the precision necessary for successful forward retrieving and back-propagation learning in a multilayer perceptron. The analysis can readily be extended into a general finite-precision analysis technique by which most neural network algorithms, under any set of hardware constraints, may be evaluated.
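The abstract concerns forward retrieving and back-propagation learning under fixed-point arithmetic. Below is a minimal sketch, not the paper's analytical derivation, of how one might empirically probe the effect of quantizing weights and activations to a given number of fractional bits during an MLP forward pass. The layer sizes, bit widths, and helper names (quantize, mlp_forward) are illustrative assumptions.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round to the nearest value representable with `frac_bits`
    fractional bits (a simple uniform fixed-point quantizer)."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def mlp_forward(x, weights, frac_bits=None):
    """One forward pass through a sigmoid MLP; if `frac_bits` is given,
    weights, activations, and products are quantized at each step."""
    a = x
    for W in weights:
        if frac_bits is not None:
            W = quantize(W, frac_bits)
            a = quantize(a, frac_bits)
        z = a @ W
        if frac_bits is not None:
            z = quantize(z, frac_bits)
        a = 1.0 / (1.0 + np.exp(-z))  # sigmoid nonlinearity
    return a

# Compare full-precision and low-precision outputs on random inputs
# (network shape and bit widths are arbitrary assumptions).
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.5, size=(8, 16)),
           rng.normal(scale=0.5, size=(16, 4))]
x = rng.normal(size=(100, 8))

ref = mlp_forward(x, weights)
for bits in (4, 8, 12, 16):
    out = mlp_forward(x, weights, frac_bits=bits)
    print(f"{bits:2d} fractional bits: max output error = "
          f"{np.max(np.abs(out - ref)):.2e}")
```

Such a simulation only complements the theoretical analysis described in the abstract; it does not reproduce it.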
INDEX TERMS
neural network hardware; parallel processing; low precision; system cost; silicon area; neural network algorithms; finite precision computation; forward retrieving; back-propagation learning; multilayer perceptron; error analysis; feedforward neural nets; neural chips.
CITATION
J.L. Holt and J.-N. Hwang, "Finite Precision Error Analysis of Neural Network Hardware Implementations", IEEE Transactions on Computers, vol. 42, no. 3, pp. 281-290, March 1993, doi:10.1109/12.210171