Issue No. 3 - May/June 2002 (vol. 22)
Published by the IEEE Computer Society
Progress in hardware integration technology for full- and semicustom implementations enables cost-efficient, innovative computing architectures that could provide alternatives or add-ons to the current von Neumann architecture. Radical alternative architectures that break with this classical imperative approach, such as those supporting functional programming, have not succeeded in replacing the von Neumann architecture. But several basic approaches, especially those that inherently exploit the parallelizability of certain problems, continue to motivate modifications of the imperative style. Although unorthodox, these architectures nevertheless constitute well-fitting add-ons to a basic von Neumann architecture. Such approaches include
• massively parallel processing,
• instruction-level parallelism,
• dataflow machines,
• associative architectures,
• neural networks, and
• biologically inspired architectures.
I based this IEEE Micro special issue mainly on material presented at special sessions on unorthodox computer architectures held in the past two years in connection with the Euromicro Workshops on Parallel and Distributed Processing. This issue presents five articles that offer solution-oriented approaches to the spectrum of potential architectural add-ons to the von Neumann architecture.
Three articles address different aspects of hardware support for neural networks. Ulrich Rückert from the University of Paderborn, Germany, presents some recently developed ultra-large-scale full-custom chips for realizing artificial-neural-network features of three different network classes: neural associative memories, self-organizing feature maps, and function approximators. The author explains these classes of networks and describes chip implementations that support them.
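To give a flavor of the first of these network classes, the following is a minimal sketch of a binary (Willshaw-style) associative memory. It illustrates only the principle; it is not the chip architecture presented in the article, and the vector sizes, training pairs, and threshold rule are invented for illustration.

```python
# Minimal Willshaw-style binary associative memory (illustrative sketch;
# not the full-custom chip implementation described in the article).
def train(pairs, n_in, n_out):
    """Superimpose all pattern pairs into a binary weight matrix."""
    W = [[0] * n_in for _ in range(n_out)]
    for x, y in pairs:
        for i in range(n_out):
            if y[i]:
                for j in range(n_in):
                    if x[j]:
                        W[i][j] = 1
    return W

def recall(W, x):
    """Output bit i fires if row i matches all active input bits."""
    theta = sum(x)  # threshold = number of active input bits
    return [1 if sum(w * xv for w, xv in zip(row, x)) >= theta else 0
            for row in W]

x1, y1 = [1, 0, 1, 0], [0, 1, 1]
x2, y2 = [0, 1, 0, 1], [1, 0, 1]
W = train([(x1, y1), (x2, y2)], 4, 3)
print(recall(W, x1))  # recovers y1: [0, 1, 1]
```

The appeal for hardware is that both storage and recall reduce to dense, uniform bit operations, which map naturally onto large regular chip structures.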
Giovanni Danese, Francesco Leporati, and Stefano Ramat from the University of Pavia, Italy, investigate ways to support the training and testing of multilayer perceptron neural networks. The authors compare the use of a Matlab simulation system running on a PC, a general-purpose multiprocessor (the TMS320C80), and a dedicated neural accelerator chip (the Neuricam Twin Chip Totem NC3001). These comparisons consider two application examples: the classification of points in a 2D space and a biomedical filtering problem.
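For readers unfamiliar with the network type, the sketch below shows the forward pass of a tiny multilayer perceptron classifying points in a 2D space. The hand-picked weights (realizing an XOR-like decision region) are purely illustrative and have nothing to do with the authors' trained networks or benchmarks.

```python
# Forward pass of a tiny multilayer perceptron over 2D input points.
# Weights are hand-wired for illustration; the article's networks are
# trained, not hand-wired.
def step(v):
    return 1 if v > 0 else 0

def mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit ~ logical OR
    h2 = step(x1 + x2 - 1.5)    # hidden unit ~ logical AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND -> XOR region

for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(p, mlp(*p))  # (0,0)->0  (0,1)->1  (1,0)->1  (1,1)->0
```

Training such a network means finding the weights automatically (typically by backpropagation), which is exactly the computation the compared platforms accelerate.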
Next, Jürgen Büddefeld from the University of Applied Sciences, Germany, and I introduce an approach for neural network support; we base the approach on integrating processing logic into a classical memory architecture. The article outlines this approach's merits—especially for neural-network pattern recognition—and discusses a design to realize it by means of reconfigurable logic.
The fourth article, which comes in two parts, is by Marek Perkowski, David Foote, Qihong Chen, and Anas Al-Rabadi from Portland State University, Oregon, and Lech Jozwiak from Eindhoven University of Technology, the Netherlands. This article addresses the field of learning hardware. Specifically, it summarizes strategies for hardware that include mechanisms for improving system operation. The authors introduce a symbolic-learning method based on multiple-valued logic and discuss the method's hardware support using a reconfigurable logic implementation.
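As background, multiple-valued logic generalizes the Boolean connectives to more than two truth values; a common convention takes the minimum as conjunction and the maximum as disjunction over the value set. A tiny ternary illustration of that convention (not the authors' symbolic-learning method) follows.

```python
# Multiple-valued logic over the ternary value set {0, 1, 2}, using the
# common min/max convention for conjunction and disjunction. Illustrative
# background only; not the learning method described in the article.
def mv_and(a, b):
    return min(a, b)  # ternary conjunction

def mv_or(a, b):
    return max(a, b)  # ternary disjunction

print(mv_and(2, 1), mv_or(0, 2))  # 1 2
```

Restricting the values to {0, 1} recovers ordinary Boolean AND and OR, which is why multiple-valued methods can subsume binary ones.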
Finally, Frank Eschmann, Bernd Klauer, Ronald Moore, and Klaus Waldschmidt from the University of Frankfurt, Germany, describe the self-distributing associative architecture (SDAARC). This approach extends the paradigm of automatic distribution of data within distributed, shared memories to the automatic distribution of instruction sequences within distributed processor systems. The architecture's goal is to balance the competing demands for parallelism and locality. SDAARC attacks this problem with a combination of static program analysis at compile time, partitioning of the dataflow graph into coarser subunits (so-called microthreads), and dynamic analysis at runtime to map these microthreads to available processors.
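The compile-time side of this idea can be caricatured in a few lines: partition a dataflow graph into coarser subunits by merging maximal one-to-one dependency chains, then map the resulting "microthreads" to processors. The chain heuristic, the round-robin mapping, and the example graph below are invented for illustration; they are not SDAARC's actual partitioning or runtime-mapping algorithms.

```python
# Toy partitioning of a dataflow graph into coarse "microthreads"
# (maximal one-to-one chains) plus a round-robin processor mapping.
# Illustrative only; not the SDAARC algorithms from the article.
def chain_partition(succ):
    """succ: node -> list of successors; nodes listed in topological order."""
    preds = {n: 0 for n in succ}
    for outs in succ.values():
        for m in outs:
            preds[m] += 1
    threads, seen = [], set()
    for n in succ:
        if n in seen:
            continue
        chain, cur = [n], n
        seen.add(n)
        # extend the chain while the dependency is strictly one-to-one
        while (len(succ[cur]) == 1 and preds[succ[cur][0]] == 1
               and succ[cur][0] not in seen):
            cur = succ[cur][0]
            chain.append(cur)
            seen.add(cur)
        threads.append(chain)
    return threads

def map_round_robin(threads, n_procs):
    return {tuple(t): i % n_procs for i, t in enumerate(threads)}

# a -> b -> c, with d -> c joining at c
g = {"a": ["b"], "b": ["c"], "c": [], "d": ["c"]}
ts = chain_partition(g)
print(ts, map_round_robin(ts, 2))
```

In this toy example, a and b fuse into one microthread because their link is one-to-one, while c (a join point) and d stay separate; a real system would weigh communication costs and defer part of the decision to runtime, as the article describes.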
Karl E. Grosspietsch is a senior scientist at the Fraunhofer Institute of Autonomous Intelligent Systems in Sankt Augustin, Germany. His research interests include computer architecture, dependable computing, and autonomous systems. Grosspietsch has a diploma in computer science from the University of Hamburg and a PhD in computer science from the University of Bonn. He is a member of the IEEE, Euromicro, and the German Computing Society (Gesellschaft für Informatik).