Issue No. 03 - March (1993 vol. 42)

ISSN: 0018-9340

pp: 281-290

DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/12.210171

ABSTRACT

<p>Through parallel processing, low-precision fixed-point hardware can be used to build a very high-speed neural network computing engine, where the low precision yields a drastic reduction in system cost. The reduced silicon area required to implement a single processing unit is exploited by implementing multiple processing units on a single piece of silicon and operating them in parallel. The key question that arises is how much precision is required to implement neural network algorithms on such low-precision hardware. A theoretical analysis of the error due to finite-precision computation was undertaken to determine the precision necessary for successful forward retrieving and back-propagation learning in a multilayer perceptron. The analysis extends readily to a general finite-precision analysis technique by which most neural network algorithms may be evaluated under any set of hardware constraints.</p>
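The kind of finite-precision effect the abstract studies can be illustrated with a minimal sketch. The snippet below is not the paper's analysis; it simply simulates fixed-point rounding (a chosen number of fractional bits, a hypothetical parameter here) in a single-neuron forward pass and compares the quantized output against the floating-point result, showing how the error shrinks as precision grows.

```python
import math

def quantize(x, frac_bits):
    # Round x onto a fixed-point grid with frac_bits fractional bits.
    scale = 1 << frac_bits
    return round(x * scale) / scale

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, frac_bits=None):
    # One neuron: weighted sum then sigmoid. When frac_bits is given,
    # every operand and every partial sum is quantized, mimicking
    # low-precision fixed-point hardware.
    q = (lambda v: quantize(v, frac_bits)) if frac_bits is not None else (lambda v: v)
    acc = 0.0
    for x, w in zip(inputs, weights):
        acc = q(acc + q(x) * q(w))
    return q(sigmoid(acc))

inputs = [0.5, -0.25, 0.75]   # illustrative values only
weights = [0.3, 0.8, -0.6]
exact = forward(inputs, weights)
for bits in (4, 8, 16):
    err = abs(exact - forward(inputs, weights, frac_bits=bits))
    print(f"{bits:2d} fractional bits -> |error| = {err:.6f}")
```

Running the loop shows the output error falling roughly with the quantization step 2^-b, the sort of relationship a formal finite-precision error analysis makes exact.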

INDEX TERMS

neural network hardware; parallel processing; low precision; system cost; silicon area; neural network algorithms; finite precision computation; forward retrieving; back-propagation learning; multilayer perceptron; error analysis; feedforward neural nets; neural chips.

CITATION

J. Holi and J. Hwang, "Finite Precision Error Analysis of Neural Network Hardware Implementations," in *IEEE Transactions on Computers*, vol. 42, no. 3, pp. 281-290, March 1993.

doi:10.1109/12.210171
