Issue No. 04 - April (1983 vol. 32)
SINCE the inception of electronic computers, much effort has been directed toward the search for faster arithmetic techniques. For all scientific computations, the arithmetic units have always been considered the heart of a digital computer. In the earlier approaches, emphasis on the arithmetic elements was limited to integer arithmetic with limited precision, as the cost of discrete components was the main driving factor. Circuit optimization with reduced component counts and techniques for high-speed arithmetic were crucial at that time. Later, software techniques were employed to include floating-point representation and to achieve the desired precision. The advances in LSI technology and the inception of microcomputers had a great deal of impact on the way people thought about these problems. Emphasis shifted from circuit component reduction to iterative logic and the minimization of chip types. This led to various uniformly structured and specialized units, such as cellular arrays. Now, several high-speed multipliers and FFT processors are commercially available. On-line processing requirements have also enhanced the acceptance of techniques such as overlapping and pipelining. Error analysis has always been a lively topic of interest, as have variable word lengths. Parallel processing requirements led to the introduction of bit-slice ALU's. The current trend is to apply high-speed arithmetic to various transforms and signal processing applications. This was clearly reflected in the papers submitted for this special issue.
T. Rao and D. Agrawal, "Introduction: Computer Arithmetic," IEEE Transactions on Computers, vol. 32, no. 4, pp. 329-330, Apr. 1983.