Issue No. 02 - February (2005 vol. 16)
ISSN: 1045-9219
pp: 145-162
<p><b>Abstract</b>—The bypass paths and multiported register files in microprocessors serve as an implicit interconnect to communicate operand values among pipeline stages and multiple ALUs. Previous superscalar designs implemented this interconnect using centralized structures that do not scale with increasing ILP demands. In search of scalability, recent microprocessor designs in industry and academia exhibit a trend toward distributed resources such as partitioned register files, banked caches, multiple independent compute pipelines, and even multiple program counters. Some of these partitioned microprocessor designs have begun to implement bypassing and operand transport using point-to-point interconnects. We call interconnects optimized for scalar data transport, whether centralized or distributed, <b>scalar operand networks</b>. Although these networks share many of the challenges of multiprocessor networks such as scalability and deadlock avoidance, they have many unique requirements, including ultra-low latency (a few cycles versus tens of cycles) and ultra-fast operation-operand matching. This paper discusses the unique properties of scalar operand networks (SONs), examines alternative ways of implementing them, and introduces the AsTrO taxonomy to distinguish between them. It discusses the design of two alternative networks in the context of the Raw microprocessor, and presents timing, area, and energy statistics for a real implementation. The paper also presents a 5-tuple performance model for SONs and analyzes their performance sensitivity to network properties for ILP workloads.</p>
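The abstract's 5-tuple performance model can be sketched as a simple cost function. This is a minimal illustration only, assuming the five components are send occupancy, send latency, per-hop network latency, receive latency, and receive occupancy; the component names, the `SonCostModel` class, and the example cycle counts are illustrative assumptions, not values taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class SonCostModel:
    """Hypothetical sketch of a 5-tuple scalar operand network cost model."""
    send_occupancy: int     # cycles the sending ALU is busy injecting the operand
    send_latency: int       # cycles before the operand enters the network
    hop_latency: int        # cycles consumed per network hop
    receive_latency: int    # cycles from arrival until the operand is usable
    receive_occupancy: int  # cycles the receiving ALU spends extracting the operand

    def transport_cost(self, hops: int) -> int:
        """End-to-end cycles to move one operand across `hops` network hops."""
        return (self.send_occupancy + self.send_latency
                + hops * self.hop_latency
                + self.receive_latency + self.receive_occupancy)

# Assumed example: a tightly coupled, Raw-like SON with 1-cycle hops.
model = SonCostModel(send_occupancy=0, send_latency=1, hop_latency=1,
                     receive_latency=1, receive_occupancy=0)
print(model.transport_cost(3))  # 0 + 1 + 3*1 + 1 + 0 = 5
```

A model of this shape makes the abstract's latency claim concrete: with single-cycle components, operand transport stays in the "few cycles" range, versus the tens of cycles typical of multiprocessor message passing.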
Index Terms: Interconnection architectures, distributed architectures, microprocessors.
Michael Bedford Taylor, Walter Lee, Saman P. Amarasinghe, Anant Agarwal, "Scalar Operand Networks", IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 2, pp. 145-162, February 2005, doi:10.1109/TPDS.2005.24