Issue No. 3, June 1995 (vol. 15)
Published by the IEEE Computer Society
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MM.1995.10011
Interest in neural and fuzzy systems remains undiminished and actually seems to be growing. The hype surrounding neural networks during the 1980s is gone, giving way to realistic expectations based on solid designs and a careful analysis of applications. A rapidly increasing number of industrial applications demonstrate that neural net hardware has moved beyond the laboratory stage and is proving itself in the real world. Nevertheless, many new concepts and ideas are still emerging, making it clear that this is far from a mature field.
While a few years ago the goal of many researchers was the development of general-purpose neural processors, today's designs tend to be more specific to applications. Processors sufficiently general to satisfy the requirements of many different applications have proven ineffective. The most general processors, the digital neural signal processors, are basically multiple digital signal processors.
Most researchers realize by now that it is not sufficient to present block diagrams of new types of synapses, neurons, or other components of a neural network and assume that everything else will fall into place. Designers must consider applications right from the beginning and cannot just design new synapses "in search of an application."
So, does this mean that a neural network is just another application-specific circuit? Not quite: the ambition of most designs is to cover not just one application, but a class of applications. This makes neural net designs particularly difficult, because commonalities among applications must be extracted and tradeoffs between flexibility and performance have to be analyzed carefully. The systems and circuits we collected for this issue are good examples of this process. Many of the articles do not so much describe the details of a circuit as they focus on the influence of various tradeoffs on the performance of a system.
This special issue contains six articles selected from the presentations at Microneuro 94, the Fourth International Conference on Microelectronics for Neural Networks and Fuzzy Systems, held last fall in Turin, Italy. Microneuro has emerged as the only international forum devoted exclusively to hardware implementations of neural and fuzzy systems. While centered on circuit and system implementations, the conference also welcomes applications of such systems as well as theoretical developments of direct interest to hardware implementation.
The program of the conference was full of interesting applications, ranging from pattern recognition and robotic vision through automatic control to time-series forecasting. In all these areas, neural networks and fuzzy systems continue to gain acceptance from the scientific and technological community.
With the large selection of excellent contributions at the meeting, we found it difficult to make a choice that would fit into the space of this issue. We tried our best to cover a broad range of approaches, emphasizing systems where applications had been demonstrated.
Today, fuzzy logic processors are more mature and more widely used than neural network hardware. Fuzzy logic engines form an integral part of many microcontrollers, and fuzzy logic supports countless control applications. The first article, by H. Eichfeld and coauthors, describes a large fuzzy processor. While the main application is in control, its large size also makes this processor of interest for pattern recognition applications. The chip computes up to 10 million rules/s, with up to 256 inputs, 64 outputs, and 16,384 rules. The article also introduces the basics of fuzzy computing and describes how these can be mapped onto the proposed device.
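To make the style of computation concrete, the following is a minimal software sketch of min-max (Mamdani-style) fuzzy rule evaluation of the kind such processors accelerate. The membership functions, rules, and values here are invented for illustration and do not describe the Eichfeld chip's actual datapath.

```python
# Toy sketch of fuzzy rule evaluation: fuzzify an input, fire rules,
# and defuzzify by a weighted centroid. All rules and membership
# functions below are invented examples.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(temp):
    # Two toy rules: IF temp is LOW  THEN fan is SLOW (output centroid 20)
    #                IF temp is HIGH THEN fan is FAST (output centroid 80)
    w_low = tri(temp, 0, 20, 50)     # degree to which "temp is LOW" holds
    w_high = tri(temp, 30, 70, 100)  # degree to which "temp is HIGH" holds
    # Weighted-centroid defuzzification over the fired rules
    num = w_low * 20 + w_high * 80
    den = w_low + w_high
    return num / den if den else 0.0

print(infer(40))  # both rules fire partially; result lies between 20 and 80
```

A hardware fuzzy engine evaluates thousands of such rules in parallel per input sample, which is where the rules/s figure comes from.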
R. Coggins and coauthors describe an analog VLSI neural network for biomedical applications. The system targets very-low-power applications, where digital devices are too power hungry. The accuracy of the computation is lower than that of a digital system, but performance is not affected in the described applications. The chip performs online classification of intracardiac electrograms to directly drive an implantable defibrillator. The whole system dissipates less than 200 nW.
A. König and coauthors present a digital, parallel VLSI implementation of a neural system designed for visual inspection. The system implements an algorithm that maps efficiently onto a parallel SIMD machine. It operates on images coming from a camera and classifies defects and anomalies in manufacturing in real time; as an example, electromechanical components are inspected for defects.
Another application to machine vision appears in the article by E. Cosatto and coauthor. Here, an analog neural network accelerator analyzes images. The system is based on analog matching of one input with many reference images. Two analog devices integrated on a board interface to a host computer and a camera. The system can process up to 20 images (512×512 pixels each) per second. An interesting application to the layout analysis of bank checks is also described.
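The matching idea can be sketched in a few lines of conventional code: compare an input patch against many stored reference templates and keep the closest. The chip performs this comparison in analog across all references at once; the array sizes and the sum-of-absolute-differences metric below are illustrative assumptions, not the device's specification.

```python
# Digital sketch of many-template matching: score an input patch against
# every stored reference and pick the best match. (The accelerator in the
# article does this comparison in analog; sizes here are invented.)
import numpy as np

rng = np.random.default_rng(0)
templates = rng.random((64, 16, 16))                 # 64 stored reference patches
patch = templates[7] + 0.01 * rng.random((16, 16))   # noisy copy of template 7

# Sum of absolute differences against every template simultaneously
sad = np.abs(templates - patch).sum(axis=(1, 2))
best = int(np.argmin(sad))
print(best)  # → 7
```

Doing this one multiply-accumulate at a time is what makes the task expensive on a conventional processor and attractive for parallel analog hardware.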
Pulse-stream circuits, an alternative to traditional analog designs, have become popular for neural net implementations. While less flexible than digital systems, they are considerably less power hungry and smaller in size, which makes them attractive for small, low-power systems such as those used in vision and image classification. M. Chiaberge and coauthor present a VLSI neurofuzzy device based on pulse-stream computation. The system combines finite-state automata with a mixed analog/digital neural engine to implement real-time intelligent controllers. It can compute up to 1 billion connections per second, with a power dissipation of about 15 mW. The authors also describe a development environment for training the system, which uses a mixture of neural, fuzzy, and genetic algorithms.
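The principle behind pulse-stream computation can be illustrated with a stochastic rate-coding toy model: a value in [0, 1] is carried by the density of pulses on a wire, and multiplying two values reduces to an AND gate on their independent pulse streams. This is a generic sketch of the technique, not the Chiaberge design.

```python
# Toy rate-coded arithmetic: values are pulse densities, and an AND of two
# independent pulse streams yields a stream whose density is the product.
import random

random.seed(1)
N = 100_000  # number of clock ticks observed

def stream(p):
    """Stochastic pulse stream with pulse density p (True = pulse)."""
    return [random.random() < p for _ in range(N)]

a, b = stream(0.6), stream(0.5)
product = sum(x and y for x, y in zip(a, b)) / N
print(product)  # close to 0.6 * 0.5 = 0.3
```

The hardware appeal is that the "multiplier" is a single gate and accuracy is traded directly against observation time, which suits low-power controllers well.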
The last article is a collection of four different projects, giving a brief overview of other technology and application areas that were presented and discussed at Microneuro 94.
Finally, we also want to mention the winners of the awards given during and after the conference. P. Masa shared the award for the best chip made using ES2 (European Silicon Structures) technology, for his work, "High-Speed VLSI Neural Network for High-Energy Physics," with M. Chiaberge for the work mentioned earlier. (Masa's work will appear in another publication.) Best Student Paper prizes were given to G. Cairns for "Learning with Analogue VLSI MLPs," and to G. Indiveri for "Analog Subthreshold VLSI Implementation of a Neuromorphic Model of the Visual Cortex for Preattentive Vision."
Hans Peter Graf, a member of the technical staff at AT&T Bell Laboratories in Holmdel, New Jersey, is conducting research on massively parallel processors and their applications to industrial machine vision problems. He has worked on neural net models, designing microelectronic processors and leading the construction of board systems. His theoretical work includes algorithms for the decomposition of complex images into elementary shapes. These algorithms, implemented on neural net processors, support such applications as analyzing bank checks and finding the locations and identities of people in complex images. Graf received a Diploma and a PhD in physics from the Swiss Federal Institute of Technology in Zurich, Switzerland. He is a senior member of the IEEE and a member of the American Physical Society. He has authored and coauthored more than 70 articles on neural networks and pattern recognition.
Leonardo M. Reyneri is an associate professor at the University of Pisa, Italy. He teaches applied electronics and microelectronics, and carries out research on applications of neural networks to intelligent control and pattern recognition. He is also involved in the design of VLSI implementations of high-performance neural networks and parallel SIMD architectures. His fields of personal research interest include the design of low-power mixed analog/digital integrated circuits for robotics and the development of dedicated architectures for neural networks and massively parallel systems. Reyneri holds a PhD in electronic engineering from the Polytechnic of Turin. He has published more than 60 papers and holds five patents. Direct questions concerning this article to either Hans Peter Graf, AT&T Bell Laboratories, Room 4G 320, Holmdel, NJ 07733; hpg@research.att.com; or to Leonardo M. Reyneri, Università di Pisa, Dip. di Ingegneria dell'Informazione, Via Diotisalvi, 2, 56126 Pisa, Italy; lmr@iet.unipi.it.