One Billion Transistors, One Uniprocessor, One Chip
September 1997 (vol. 30 no. 9)
pp. 51-57

Researchers from the University of Michigan conclude that billion-transistor processors will be much as they are today, just bigger, faster, and wider (issuing more instructions at once). The authors describe the key problems (instruction supply, data memory supply, and an implementable execution core) that prevent current superscalars from scaling up to issue widths of 16 or 32 instructions. They propose out-of-order fetching, Multi-Hybrid branch predictors, and trace caches to improve the instruction supply. They predict that replicated first-level caches, huge on-chip caches, and data value speculation will enhance the data supply. To provide a high-speed, implementable execution core capable of sustaining the necessary instruction throughput, they advocate a large, out-of-order-issue instruction window (2,000 instructions), clustered (separated) banks of functional units, and hierarchical scheduling of ready instructions. They contend that the current uniprocessor model can deliver sufficient performance and use a billion transistors effectively without changing the programming model or discarding software compatibility.
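
To illustrate the hybrid branch-prediction idea summarized above, the Python sketch below implements a simplified two-component predictor (a bimodal table plus a gshare table) with a per-branch chooser. This is a minimal, assumed design for illustration only; the paper's Multi-Hybrid predictor generalizes the same selection principle to several component predictors, and the class names, table sizes, and parameters here are hypothetical rather than taken from the article.

# Minimal sketch of a hybrid branch predictor (assumed two-component design:
# bimodal + gshare with a per-branch chooser). The Multi-Hybrid described in
# the article extends this selection idea to multiple component predictors.

class TwoBitCounter:
    """Saturating 2-bit counter: values 0-1 predict not-taken, 2-3 predict taken."""
    def __init__(self, value=2):
        self.value = value

    def predict(self):
        return self.value >= 2

    def update(self, taken):
        if taken:
            self.value = min(self.value + 1, 3)
        else:
            self.value = max(self.value - 1, 0)


class HybridPredictor:
    def __init__(self, table_bits=12):
        size = 1 << table_bits
        self.mask = size - 1
        self.bimodal = [TwoBitCounter() for _ in range(size)]  # indexed by PC
        self.gshare = [TwoBitCounter() for _ in range(size)]   # indexed by PC xor history
        self.chooser = [TwoBitCounter() for _ in range(size)]  # selects the component
        self.history = 0                                       # global branch history

    def _indices(self, pc):
        i = pc & self.mask
        g = (pc ^ self.history) & self.mask
        return i, g

    def predict(self, pc):
        i, g = self._indices(pc)
        use_gshare = self.chooser[i].predict()
        return self.gshare[g].predict() if use_gshare else self.bimodal[i].predict()

    def update(self, pc, taken):
        i, g = self._indices(pc)
        b_correct = self.bimodal[i].predict() == taken
        g_correct = self.gshare[g].predict() == taken
        # Train the chooser toward whichever component was right when they disagree.
        if b_correct != g_correct:
            self.chooser[i].update(g_correct)
        self.bimodal[i].update(taken)
        self.gshare[g].update(taken)
        self.history = ((self.history << 1) | int(taken)) & self.mask

A front end would call predict(pc) before resolving each conditional branch and update(pc, taken) afterward; the chooser counters learn, per branch, which component has been more accurate, which is the property a Multi-Hybrid exploits across many predictors.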

Citation:
Yale N. Patt, Sanjay J. Patel, Marius Evers, Daniel H. Friendly, Jared Stark, "One Billion Transistors, One Uniprocessor, One Chip," Computer, vol. 30, no. 9, pp. 51-57, Sept. 1997, doi:10.1109/2.612249