Finite Precision Error Analysis of Neural Network Hardware Implementations
March 1993 (vol. 42 no. 3)
pp. 281-290

Through parallel processing, low-precision fixed-point hardware can be used to build a very high speed neural network computing engine, where the low precision yields a drastic reduction in system cost. The reduced silicon area required to implement a single processing unit is exploited by placing multiple processing units on a single piece of silicon and operating them in parallel. The important question that arises is how much precision is required to implement neural network algorithms on such low-precision hardware. A theoretical analysis of the error due to finite precision computation was undertaken to determine the precision necessary for successful forward retrieving and back-propagation learning in a multilayer perceptron. The analysis extends readily to a general finite precision analysis technique by which most neural network algorithms, under any set of hardware constraints, may be evaluated.
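The trade-off the abstract describes can be illustrated with a small simulation. The sketch below (not the authors' analysis; the `quantize` helper, the Q-format bit width, and the single tanh neuron are illustrative assumptions) quantizes each operand and each partial sum of a forward pass to a fixed-point grid with a chosen number of fractional bits, then compares the result against full floating-point precision:

```python
import math
import random

def quantize(x, frac_bits):
    """Round x to the nearest multiple of 2**-frac_bits,
    i.e. snap it to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def forward(weights, inputs, frac_bits=None):
    """Single-neuron forward pass with tanh activation.
    If frac_bits is given, every operand, product, and running sum
    is quantized, mimicking low-precision fixed-point hardware."""
    q = (lambda v: quantize(v, frac_bits)) if frac_bits is not None else (lambda v: v)
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc = q(acc + q(q(w) * q(x)))  # quantize product and accumulation
    return math.tanh(acc)

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(8)]
x = [random.uniform(-1, 1) for _ in range(8)]
exact = forward(w, x)
for bits in (4, 8, 16):
    err = abs(forward(w, x, bits) - exact)
    print(f"{bits:2d} fractional bits -> |error| = {err:.2e}")
```

Each halving of the quantization step roughly halves the worst-case rounding error per operation, which is the kind of relationship a finite precision error analysis makes precise for a full multilayer network.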

[1] D. Hammerstrom, "A VLSI architecture for high-performance, low-cost, on-chip learning," in Proc. IJCNN'90, vol. II, San Diego, CA, June 17-21, 1990, pp. 537-544.
[2] J. L. Holt and J. N. Hwang, "Finite precision error analysis for neural network hardware implementation," in Proc. Int. Joint Conf. Neural Networks, Seattle, WA, July 1991, pp. I:519-526.
[3] S. M. Pizer with V. L. Wallace, To Compute Numerically: Concepts and Strategies. Boston, MA: Little, Brown and Co., 1983.
[4] J. N. Hwang, J. A. Vlontzos, and S. Y. Kung, "A systolic neural network architecture for hidden Markov models," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 1967-1979, Dec. 1989.
[5] S. Y. Kung and J. N. Hwang, "A unified modeling of connectionist neural networks," J. Parallel Distributed Comput., vol. 6, pp. 358-387, 1989.
[6] A. Papoulis, Probability, Random Variables, and Stochastic Processes. New York: McGraw-Hill, 1984.
[7] P. J. Werbos, "Beyond regression: New tools for prediction and analysis in the behavioral sciences," Ph.D. dissertation, Harvard Univ., Cambridge, MA, 1974.
[8] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vols. 1 and 2. Cambridge, MA: MIT Press, 1986.
[9] J. N. Hwang and P. S. Lewis, "From nonlinear optimization to neural network learning," in Proc. 24th Asilomar Conf. Signals, Syst., & Comput., Pacific Grove, CA, Nov. 1990, pp. 985-989.
[10] T. E. Baker, "Implementation limits for artificial neural networks," Master's thesis, Dep. Comput. Sci. and Eng., Oregon Graduate Institute of Science and Technology, 1990.
[11] P. S. Lewis and J. N. Hwang, "Recursive least squares learning algorithms for neural networks," in Proc. SPIE's Int. Symp. Opt. and Optoelectron. Appl. Sci. and Eng., San Diego, CA, July 1990, pp. 28-39.
[12] J. L. Holt and T. E. Baker, "Back propagation simulations using limited precision calculations," in Proc. Int. Joint Conf. Neural Networks, Seattle, WA, July 1991, pp. II:121-126.

Index Terms:
neural network hardware; parallel processing; low precision; system cost; silicon area; neural network algorithms; finite precision computation; forward retrieving; back-propagation learning; multilayer perceptron; error analysis; feedforward neural nets; neural chips.
J. L. Holt and J.-N. Hwang, "Finite Precision Error Analysis of Neural Network Hardware Implementations," IEEE Transactions on Computers, vol. 42, no. 3, pp. 281-290, March 1993, doi:10.1109/12.210171