Stochastic Neural Computation I: Computational Elements
September 2001 (vol. 50 no. 9)
pp. 891-905

Abstract—This paper examines a number of stochastic computational elements employed in artificial neural networks, several of which are introduced for the first time, together with an analysis of their operation. We briefly describe multiplication, squaring, addition, subtraction, and division circuits in both unipolar and bipolar formats, the principles of which are well known, at least for unipolar signals. We introduce several modifications to improve the speed of the division operation. The primary contribution of this paper, however, is the introduction of several state-machine-based computational elements for performing sigmoid nonlinearity mappings, linear gain, and exponentiation functions. We also describe an efficient method for the generation of, and conversion between, stochastic and deterministic binary signals. The validity of the present approach is demonstrated in a companion paper through a sample application, the recognition of noisy optical characters using soft competitive learning. The generalization capability of the stochastic network maintains a squared error within 10 percent of that of a floating-point implementation over a wide range of noise levels. While the accuracy of stochastic computation may not compare favorably with more conventional binary radix-based computation, the low circuit area, power, and speed of these stochastic elements may, in certain situations, make them attractive for VLSI implementation of artificial neural networks.
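The unipolar and bipolar arithmetic that the abstract summarizes can be sketched in software: a value p in [0, 1] is encoded as a bit stream whose bits are 1 with probability p, two independent unipolar streams are multiplied by a bitwise AND, and a bipolar value v in [-1, 1] encoded as P(1) = (v + 1)/2 is multiplied by a bitwise XNOR. The sketch below is a minimal software illustration of these well-known principles, not the paper's hardware implementation; the helper names are our own, and Python's pseudorandom generator stands in for the hardware random sources.

```python
import random

def to_unipolar(p, n, rng):
    # Encode p in [0, 1] as an n-bit stream: each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_unipolar(bits):
    # Decode by counting: the fraction of 1s estimates the encoded value.
    return sum(bits) / len(bits)

def and_multiply(a, b):
    # Unipolar multiply: AND of two independent streams has P(1) = pa * pb.
    return [x & y for x, y in zip(a, b)]

def to_bipolar(v, n, rng):
    # Bipolar encoding: v in [-1, 1] maps to P(1) = (v + 1) / 2.
    return to_unipolar((v + 1) / 2, n, rng)

def from_bipolar(bits):
    return 2 * from_unipolar(bits) - 1

def xnor_multiply(a, b):
    # Bipolar multiply: XNOR of two independent bipolar streams encodes va * vb.
    return [1 - (x ^ y) for x, y in zip(a, b)]

rng = random.Random(1)
n = 100_000

# Unipolar: 0.6 * 0.5 should decode to roughly 0.30.
a = to_unipolar(0.6, n, rng)
b = to_unipolar(0.5, n, rng)
print(from_unipolar(and_multiply(a, b)))

# Bipolar: -0.5 * 0.8 should decode to roughly -0.40.
c = to_bipolar(-0.5, n, rng)
d = to_bipolar(0.8, n, rng)
print(from_bipolar(xnor_multiply(c, d)))
```

Note that the AND- and XNOR-gate products are only unbiased when the two input streams are statistically independent, which is why the sketch draws each stream from fresh random numbers; in hardware this corresponds to using decorrelated pseudorandom sources such as the cellular-automata generators of [29], [30].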

[1] B.R. Gaines, “Stochastic Computing Systems,” Advances in Information Systems Science, J.F. Tou, ed., vol. 2, chapter 2, pp. 37-172, New York: Plenum, 1969.
[2] J. Hertz, A. Krogh, and R.G. Palmer, Introduction to the Theory of Neural Computation. Addison-Wesley, 1991.
[3] C.M. Bishop, Neural Networks for Pattern Recognition. Clarendon Press, 1995.
[4] Pulsed Neural Networks, W. Maass and C.M. Bishop, eds. Cambridge, Mass.: MIT Press, 1999.
[5] M.A. Mahowald, “Evolving Analog VLSI Neurons,” Single Neuron Computation, T. McKenna, J. Davis, and S. Zornetzer, eds., pp. 413-435, San Diego, Calif.: Academic Press, 1992.
[6] S.R. Deiss, R.J. Douglas, and A.M. Whatley, “A Pulse Coded Communications Infrastructure for Neuromorphic Systems,” Pulsed Neural Networks, W. Maass and C.M. Bishop, eds., chapter 6, Cambridge, Mass.: MIT Press, 1999.
[7] J.G. Elias, “Artificial Dendritic Trees,” Neural Computation, vol. 5, pp. 648-663, 1993.
[8] W. Maass, “Fast Sigmoidal Networks via Spiking Neurons,” Neural Computation, vol. 9, pp. 279-304, 1997.
[9] J. Meador, A. Wu, C. Cole, N. Nintunze, and P. Chintrakulchai, “Programmable Impulse Neural Circuits,” IEEE Trans. Neural Networks, vol. 2, pp. 101-109, Jan. 1991.
[10] A.F. Murray and A.V.W. Smith, “Asynchronous VLSI Neural Networks Using Pulse Stream Arithmetic,” IEEE J. Solid State Circuits, vol. 23, pp. 688-697, 1988.
[11] A.F. Murray, D. Del Corso, and L. Tarassenko, “Pulse-Stream VLSI Neural Networks Mixing Analog and Digital Techniques,” IEEE Trans. Neural Networks, vol. 2, no. 2, pp. 193-204, Mar. 1991.
[12] A.F. Murray, “Pulse Based Computation in VLSI Neural Networks,” Pulsed Neural Networks, W. Maass and C.M. Bishop, eds., chapter 3, Cambridge, Mass.: MIT Press, 1999.
[13] D. Del Corso, F. Gregoretti, and L.M. Reyneri, “Mixed Analog-Digital Basic Cells for Artificial Neural Systems Using Pulse Rate and Width Modulations,” Parallel Architectures and Neural Networks, pp. 279-287, Apr. 1988.
[14] D. Del Corso, E. Filippi, F. Gregoretti, C. Pellegrini, L.M. Reyneri, and M. Sartori, “An Artificial Neural System Based on Pulse Stream Neural Chips,” Parallel Architectures and Neural Networks, pp. 164-171, Apr. 1991.
[15] T.G. Clarkson, D. Gorse, J.G. Taylor, and C.K. Ng, “Learning Probabilistic RAM Nets Using VLSI Structures,” IEEE Trans. Computers, vol. 41, pp. 1552-1561, 1992.
[16] T.G. Clarkson, Y. Guan, J.G. Taylor, and D. Gorse, “Generalization in Probabilistic RAM Nets,” IEEE Trans. Neural Networks, vol. 4, pp. 1552-1561, 1993.
[17] J.E. Tomberg and K. Kaski, “Pulse Density Modulation Technique in VLSI Implementation of Neural Network Algorithms,” IEEE J. Solid-State Circuits, vol. 25, pp. 1277-1286, 1990.
[18] M.S. Tomlinson, D.J. Walker, and M.A. Sivilotti, “A Digital Neural Network Architecture for VLSI,” Proc. Int'l Joint Conf. Neural Networks, vol. 2, pp. 545-550, 1990.
[19] D.E. Van den Bout and T.K. Miller III, “A Digital Architecture Employing Stochasticism for the Simulation of Hopfield Neural Nets,” IEEE Trans. Circuits and Systems, vol. 36, pp. 732-738, 1989.
[20] M.S. Melton, T. Phan, D.S. Reeves, and D.E. Van den Bout, “The TinMANN VLSI Chip,” IEEE Trans. Neural Networks, vol. 3, pp. 375-384, May 1992.
[21] A. Torralba, F. Colodro, E. Ibanez, and L.G. Franquelo, “Two Digital Circuits for a Fully Parallel Stochastic Hopfield Neural Network,” IEEE Trans. Neural Networks, vol. 6, pp. 1264-1268, Sept. 1995.
[22] Y. Kondo and Y. Sawada, “Functional Abilities of a Stochastic Logic Neural Network,” IEEE Trans. Neural Networks, vol. 3, pp. 434-443, May 1992.
[23] C.L. Janer, J.M. Quero, J.G. Ortega, and L.G. Franquelo, “Fully Parallel Stochastic Computation Architecture,” IEEE Trans. Signal Processing, vol. 44, pp. 2110-2117, Aug. 1996.
[24] P.S. Burge, M.R. van Daalen, B.J.P. Rising, and J.S. Shawe-Taylor, “Stochastic Bit-Stream Neural Networks,” Pulsed Neural Networks, W. Maass and C.M. Bishop, eds., Cambridge, Mass.: MIT Press, 1999.
[25] J.A. Dickson, R.D. McLeod, and H.C. Card, “Stochastic Arithmetic Implementations of Neural Networks with In Situ Learning,” Proc. Int'l Conf. Neural Networks, pp. 711-716, 1993.
[26] J. Zhao, J. Shawe-Taylor, and M. van Daalen, “Learning in Stochastic Bit-Stream Neural Networks,” Neural Networks, vol. 9, pp. 991-998, 1996.
[27] Y.C. Kim and M.A. Shanblatt, “Architecture and Statistical Model of a Pulse-Mode Digital Multilayer Neural Network,” IEEE Trans. Neural Networks, vol. 6, pp. 1109-1118, 1995.
[28] B.D. Brown, “Soft Competitive Learning Using Stochastic Arithmetic,” MSc thesis, Dept. of Electrical and Computer Eng., Univ. of Manitoba, 1998.
[29] P.D. Hortensius, R.D. McLeod, and H.C. Card, “Parallel Random Number Generation for VLSI Systems Using Cellular Automata,” IEEE Trans. Computers, vol. 38, no. 10, pp. 1466-1473, Oct. 1989.
[30] H. Zhou, H.C. Card, and G.E. Bridges, “Parallel Pseudorandom Number Generation in GaAs Cellular Automata for High Speed Circuit Testing,” J. Electronic Testing: Theory and Applications, vol. 6, pp. 325-330, 1995.
[31] H.C. Card, “Doubly Stochastic Poisson Processes in Artificial Neural Learning,” IEEE Trans. Neural Networks, vol. 9, pp. 229-231, 1998.
[32] G.E. Hinton and T.J. Sejnowski, “Learning and Relearning in Boltzmann Machines,” Parallel Distributed Processing: Explorations in Microstructure of Cognition, D.E. Rumelhart and J.L. McClelland, eds., Cambridge, Mass.: MIT Press, 1986.
[33] G.E. Hinton, P. Dayan, B.J. Frey, and R.M. Neal, “The Wake-Sleep Algorithm for Unsupervised Neural Networks,” Science, vol. 268, pp. 1158-1161, 1995.
[34] R.S. Fetherston, I.P. Shaik, and S.C. Ma, “Testability Features of the AMD-K6 Microprocessor,” IEEE Design and Test of Computers, pp. 64-69, July-Sept. 1997.
[35] B.D. Brown and H.C. Card, “Stochastic Neural Computation II: Soft Competitive Learning,” IEEE Trans. Computers, vol. 50, no. 9, pp. 906-920, Sept. 2001.

Index Terms:
Pulsed neural networks, stochastic arithmetic, computational elements.
Bradley D. Brown, Howard C. Card, "Stochastic Neural Computation I: Computational Elements," IEEE Transactions on Computers, vol. 50, no. 9, pp. 891-905, Sept. 2001, doi:10.1109/12.954505