Dec. 2013 (Vol. 46, No. 12) pp. 24-25
0018-9162/13/$31.00 © 2013 IEEE
Published by the IEEE Computer Society
Computing Laws: Origins, Standing, and Impact
As the age of electronic digital computing reaches 75 years in 2014, we take a journey back to review some of the most popular computing laws introduced in the past century and to see where they stand now and what they tell us about the future.
The field of computing is still relatively young, and the pioneers who brought us from vacuum tubes to integrated circuits and then on to tablet computers are not only still with us but still making meaningful contributions and writing articles for Computer! We asked a handful of visionary computer scientists to look back on some key computing laws—observations on how various technology components evolved, how they are valued and leveraged, and how their rates of progress have played out, not only in computing, but in society at large.
We chose to focus on the following laws because of their transformative impact on the essence and appeal of computing: Metcalfe's law, Makimoto's wave, Amdahl's law, Moore's law, and Grosch's law. Each, with its particular technical focus, helps explain our rapid progression from giant, multiton government machines to the Internet of Things, digital nomads, and wearable computers.
Metcalfe's law, as Bob Metcalfe himself describes in this issue, has been controversial but never disproven. The law says that the value of a network grows as the square of the number of its users, and Metcalfe explores this vis-à-vis Facebook. It's fitting that the law that helped give legs to the LAN and described the explosion of networking through Ethernet technology should apply equally well to gauging how valuable networks are in the context of social networking. Metcalfe also examines some of the other networking laws and whether they are helpful in determining the value of connections in a network like Facebook. The article includes a sidebar from Sumi Helal describing the importance—and complexity—of determining value in network technologies and services.
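Metcalfe's quadratic-value observation is easy to sketch numerically; the function name, the constant k, and the simple n² form below are illustrative assumptions, not Metcalfe's exact formulation:

```python
# Hedged sketch of Metcalfe's law: a network's value grows as the
# square of its user count n, scaled by an arbitrary constant k.
# (Counting distinct pairs instead gives k * n * (n - 1) / 2, which
# has the same quadratic growth.)
def metcalfe_value(n, k=1.0):
    return k * n ** 2

# Doubling the user base quadruples the value.
print(metcalfe_value(200) / metcalfe_value(100))  # 4.0
```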
Another law that gave us a 50-year vantage point on semiconductor technology is Makimoto's wave. Tsugio Makimoto spent a career in semiconductors and early on observed that the industry had a somewhat predictable 10-year swing between innovating toward standardization, thus improving manufacturing, cost containment, and growth in market share, and then toward customization, which led to better implementation, differentiation, and performance, and decreased power consumption. Makimoto walks us through his experience in semiconductor research and development, and uses his wave to explore how changes in semiconductors played a key role in the computing revolution. While the technological victories certainly spurred market growth for computers, the more important impact might be on how society has changed, and how computing has been democratized in a relatively short period of time.
An exciting area of revolutionary progress is covered in Gene Amdahl's new article, which describes for the first time his early work and achievements leading up to Amdahl's law. At IBM's request, Amdahl gave a presentation back in 1967 at the Spring Joint Computer Conference held on the East Coast that essentially defined the relationship between specific application code and the underlying architecture in determining parallel computing performance. This was a pivotal event, and Amdahl's insights from both before and after it, which also build on the early work of Kenneth Knight,1 are quite compelling. The article provides much-needed clarifications and, most notably, includes the original performance formula as Amdahl presented it nearly five decades ago! No doubt, his mark on parallel computing continues to be highly influential today.
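Readers should consult Amdahl's article for his original 1967 formulation; the commonly cited modern form of the law can be sketched as follows (the function name and parameters here are illustrative):

```python
# Amdahl's law in its commonly cited form: the overall speedup on n
# processors, when a fraction p of the work can be parallelized, is
# limited by the serial remainder (1 - p).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A 95%-parallel program on 1,000 processors approaches, but never
# exceeds, the asymptotic limit 1 / (1 - p) = 20.
print(round(amdahl_speedup(0.95, 1000), 2))  # 19.63
```

The design point worth noting: adding processors attacks only the p / n term, so the serial fraction quickly dominates.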
Moore's law is arguably the most widely known computing law. First published in Electronics in 1965 and later revised, the law predicts a biennial doubling of chip capacity at minimal cost. It has borne out for the past 50 years, enabling mind-blowing expansion in performance capacity with simultaneous reductions in size and pricing, and is expected to hold at least until 2020.2 Authors Andrew Chien and Vijay Karamcheti explore the promises of Moore's law, and examine where we are on its trajectory—which was never thought to be without end—and how it could end. The authors provide insight into how new technologies, such as phase-change memory (PCM) and resistive RAM (ReRAM), might continue to carry forward Moore's principles. The article also includes a sidebar by Bob Colwell examining the all-important economics of Moore's law.
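The doubling cadence is simple to extrapolate. In the sketch below, the 1971 baseline (Intel's 4004, roughly 2,300 transistors) and the clean two-year period are illustrative assumptions, not data from the article:

```python
# Hedged extrapolation of Moore's law: transistor count doubling every
# two years from an illustrative 1971 baseline of ~2,300 transistors.
def transistors(year, base_year=1971, base_count=2300, period=2.0):
    return base_count * 2 ** ((year - base_year) / period)

# 21 doublings later, the model lands in the billions, broadly in line
# with the largest chips shipping around 2013.
print(f"{transistors(2013):,.0f}")
```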
Finally, Patrick Ryan, Sarah Falvey, and Ronak Merchant use Grosch's law as a jumping-off point from which to examine one of computing's current buzzwords: the cloud. Grosch's law, first put forward in 1953, says, essentially, that computing performance increases by the square of its cost—to do something 10 times cheaper, it has to be 100 times as fast—and that “relatively dumb terminals would tap into the power of large datacenters,” as the authors state in the article. It's from this vantage point that the authors examine the true nature of the cloud—is it really distinct from the Internet? Or are they one and the same? Will our efforts to define, regulate, and safeguard the cloud be successful, or will they have some unintended consequences? This interesting examination brings us around full circle: How will technology continue to evolve from here? What are some key patterns emerging now, and who will be the thought leaders that help us better understand and capitalize on them?
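Grosch's quantitative claim, that performance grows as the square of cost, can be sketched directly (the function name and the constant k are illustrative assumptions):

```python
# Hedged sketch of Grosch's law: performance grows as the square of
# cost, so economies of scale favor fewer, larger machines.
def grosch_performance(cost, k=1.0):
    return k * cost ** 2

# Spending 10x buys 100x the performance; equivalently, to do a unit
# of work 10 times cheaper, the machine must be 100 times as fast.
print(grosch_performance(10) / grosch_performance(1))  # 100.0
```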
We are very proud to have assembled this collection of articles from this very distinguished group of authors, and we hope you will use this review of the past to think on what directions future research might take. Finally, we would love to know what other pivotal laws you think we should have included; send your thoughts to email@example.com.
Vladimir Getov is a professor of distributed and high-performance computing at the University of Westminster, London. His research interests include parallel architectures and performance, autonomous distributed computing, and high-performance programming environments. Getov received a PhD and DSc in computer science from the Bulgarian Academy of Sciences. He is a senior member of IEEE, a member of ACM, a Fellow of the British Computer Society, and Computer's area editor for high-performance computing. Contact him at firstname.lastname@example.org.