
J. L. Holt and J. N. Hwang, "Finite Precision Error Analysis of Neural Network Hardware Implementations," IEEE Transactions on Computers, vol. 42, no. 3, pp. 281-290, Mar. 1993.
Through parallel processing, low precision fixed point hardware can be used to build a very high speed neural network computing engine, where the low precision yields a drastic reduction in system cost. The reduced silicon area required to implement a single processing unit is exploited by placing multiple processing units on a single piece of silicon and operating them in parallel. The key question that arises is how much precision is required to implement neural network algorithms on this low precision hardware. A theoretical analysis of the error due to finite precision computation was undertaken to determine the precision necessary for successful forward retrieving and backpropagation learning in a multilayer perceptron. This analysis extends readily to a general finite precision analysis technique by which most neural network algorithms, under any set of hardware constraints, may be evaluated.
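As a rough illustration of the phenomenon the abstract describes (not the paper's analytical method), the effect of finite precision on forward retrieval can be simulated in software by rounding every operand and every intermediate result to a fixed-point grid with a chosen number of fractional bits. The function names below are hypothetical, chosen for this sketch:

```python
import math
import random

def quantize(x, frac_bits):
    """Round x to the nearest multiple of 2**-frac_bits (a fixed-point grid)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def forward(inputs, weights, frac_bits=None):
    """One sigmoid neuron. If frac_bits is given, quantize the inputs,
    weights, each product, and each partial sum, mimicking a low-precision
    multiply-accumulate datapath; otherwise compute in full float precision."""
    q = (lambda v: quantize(v, frac_bits)) if frac_bits is not None else (lambda v: v)
    acc = 0.0
    for x, w in zip(inputs, weights):
        acc = q(acc + q(q(x) * q(w)))  # finite-precision multiply-accumulate
    return 1.0 / (1.0 + math.exp(-acc))

random.seed(0)
inputs = [random.uniform(-1, 1) for _ in range(16)]
weights = [random.uniform(-1, 1) for _ in range(16)]

exact = forward(inputs, weights)
for bits in (4, 8, 12, 16):
    approx = forward(inputs, weights, frac_bits=bits)
    print(f"{bits:2d} fractional bits: |error| = {abs(approx - exact):.2e}")
```

Running the loop shows the retrieval error shrinking as fractional bits are added, which is the trade-off the paper quantifies analytically: enough precision for correct retrieval and learning, but no more silicon than necessary.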
[1] D. Hammerstrom, "A VLSI architecture for high-performance, low-cost, on-chip learning," in Proc. IJCNN'90, vol. II, San Diego, CA, June 17-21, 1990, pp. 537-544.
[2] J. L. Holt and J. N. Hwang, "Finite precision error analysis for neural network hardware implementation," in Proc. Int. Joint Conf. Neural Networks, Seattle, WA, July 1991, pp. I:519-526.
[3] S. M. Pizer with V. L. Wallace, To Compute Numerically: Concepts and Strategies. Boston, MA: Little, Brown and Co., 1983.
[4] J. N. Hwang, J. A. Vlontzos, and S. Y. Kung, "A systolic neural network architecture for hidden Markov models," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 1967-1979, Dec. 1989.
[5] S. Y. Kung and J. N. Hwang, "A unified modeling of connectionist neural networks," J. Parallel Distributed Comput., vol. 6, pp. 358-387, 1989.
[6] A. Papoulis, Probability, Random Variables, and Stochastic Processes. New York: McGraw-Hill, 1984.
[7] P. J. Werbos, "Beyond regression: New tools for prediction and analysis in the behavioral sciences," Ph.D. dissertation, Harvard Univ., Cambridge, MA, 1974.
[8] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vols. 1 and 2. Cambridge, MA: MIT Press, 1986.
[9] J. N. Hwang and P. S. Lewis, "From nonlinear optimization to neural network learning," in Proc. 24th Asilomar Conf. Signals, Syst., & Comput., Pacific Grove, CA, Nov. 1990, pp. 985-989.
[10] T. E. Baker, "Implementation limits for artificial neural networks," Master's thesis, Dep. Comput. Sci. and Eng., Oregon Graduate Institute of Science and Technology, 1990.
[11] P. S. Lewis and J. N. Hwang, "Recursive least squares learning algorithms for neural networks," in Proc. SPIE's Int. Symp. Opt. and Optoelectron. Appl. Sci. and Eng., San Diego, CA, July 1990, pp. 28-39.
[12] J. L. Holt and T. E. Baker, "Back propagation simulations using limited precision calculations," in Proc. Int. Joint Conf. Neural Networks, Seattle, WA, July 1991, pp. II:121-126.