Issue No. 1 - January/February 2006 (vol. 8)
pp. 15-17
Published by the IEEE Computer Society
Steven Gottlieb , Indiana University
ABSTRACT
My awareness of special-purpose computing goes back quite a long time. In the early 1980s, my former grad-school housemate and (still) good friend Doug Toussaint worked with Bob Pearson and John Richardson on a special-purpose computer to study the Ising model. They constructed and successfully used that computer, but by 1983, when Doug and I met up again at the University of California, San Diego (UCSD), he was using the Ising model computer in his office as a coffeemaker stand, not for studying hot Ising spins. By serving as guest editor, I hoped to find a mix of mature and emerging projects that would provoke readers to wonder whether their own problems would be suited to solution via a special-purpose computer.




My awareness of special-purpose computing goes back quite a long time. In the early 1980s, my former grad-school housemate and (still) good friend Doug Toussaint was working with Bob Pearson and John Richardson on a special-purpose computer to study the Ising model. That computer was constructed and successfully used, but by 1983, when Doug and I were reunited at the University of California, San Diego (UCSD), the Ising model computer in his office was being used as a stand for his stack of preprints, not for studying hot Ising spins. We were busy using his Sun workstation and DEC VAX computers to study lattice quantum chromodynamics (QCD). Doug concluded that the pace of advance in computing technology was such that we would get more science done exploiting what was commercially available.
From Humble Beginnings
In 1984, Bob Sugar and Doug Scalapino at the University of California, Santa Barbara, along with Doug Toussaint and Julius Kuti at UCSD, received a US$500,000 award from the US National Science Foundation to purchase an ST100 array processor that had a peak speed of 100 Mflops and was more cost-effective than the Cray supercomputers that were popular at the time. Although we had to use assembly code, we could achieve roughly 85 to 90 percent of peak speed for lattice QCD calculations that neglected quarks and about 30 percent of peak for calculations with quarks.
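To give a concrete sense of the arithmetic behind those percent-of-peak figures, the inner loops of lattice QCD codes are dominated by products of 3 x 3 complex SU(3) matrices with three-component color vectors. The sketch below is purely illustrative, not our production code; the data layout, names, and test values are assumptions. It shows only the small, regular kernel whose dense multiply-add structure lets an array processor run close to peak speed.

```c
/* Illustrative sketch only (not production lattice QCD code): the complex
 * 3x3 matrix times 3-vector product that dominates lattice QCD inner loops.
 * Array names, layout, and test values are assumptions for this example. */
#include <complex.h>
#include <stdio.h>

typedef double complex cplx;

/* w = U * v, where U is an SU(3) "link" matrix and v is a color 3-vector. */
static void su3_mat_vec(const cplx U[3][3], const cplx v[3], cplx w[3])
{
    for (int i = 0; i < 3; i++) {
        cplx acc = 0;
        for (int j = 0; j < 3; j++)
            acc += U[i][j] * v[j];   /* three complex multiply-adds per row */
        w[i] = acc;
    }
}

int main(void)
{
    /* An identity "link" and a simple test vector, just to exercise the kernel. */
    cplx U[3][3] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
    cplx v[3] = {1 + 2*I, 3, -1*I};
    cplx w[3];

    su3_mat_vec(U, v, w);
    for (int i = 0; i < 3; i++)
        printf("w[%d] = %g + %gi\n", i, creal(w[i]), cimag(w[i]));
    return 0;
}
```

The regularity of this kernel, and its high ratio of multiply-adds to memory references, is what made near-peak performance attainable even with hand-written assembly code.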
As the size of our collaborations grew, so did the power of the computers. We used the Cyber 205, ETA10, CM-2, CM-5, nCUBE, iPSC/860, and probably many other computers that I've long since forgotten. Over this period, others in the field of lattice gauge theory were involved with building computers—for example, the Array Processor Experiment (APE) in Italy (about which you will read in this issue), the Cosmic Cube at the California Institute of Technology (a forerunner of the Intel Parallel Scientific Computers), the Fermi 256 computer at Columbia, the Advanced Computer Program Multiple Array Processor System (ACPMAPS) at Fermilab (on which Doug Toussaint and I ran one of the first production codes), and the GF11 computer at IBM spearheaded by Don Weingarten. (Had Weingarten not left Indiana University for IBM, thus opening up an academic job for me, I might be rich today.)
In 1990, I got the special-purpose computing bug and joined the QCD Teraflops project. I spent my sabbatical in 1992 and 1993 working with a large group of people on the design of a special-purpose computer, an effort that eventually produced a design to be built in partnership with Thinking Machines. You might have guessed that this computer was never built. Norman Christ, who initiated the Fermi 256 and the QCD Teraflops projects, decided to pursue his own project, which eventually resulted in the QCDSP computer. (This name stands for QCD Digital Signal Processor because the floating-point performance came from Texas Instruments DSP chips.) Although he invited me to join his project, I thought I might get more physics done by not building a machine.
I'm currently involved in a large US project funded by the US Department of Energy under its Scientific Discovery through Advanced Computing (SciDAC) program. This project, called National Computational Infrastructure for Lattice Gauge Theory, is headed by Bob Sugar. To perform our calculations, we use a special-purpose computer, the QCDOC, and clusters of computers networked with Gigabit Ethernet, Myrinet, or InfiniBand. The QCDOC, which stands for QCD on Chip, is the successor to the QCDSP, and was built by Norman Christ and Bob Mawhinney at Columbia University, along with strong participation from individuals in the United Kingdom QCD (UKQCD) collaboration and IBM. As those of us involved in this large US project look to the future, we need to ask: what is the best platform for our very challenging calculations?
Grand Challenges
As you can see, my research involves a very difficult computational challenge and is one of several scientific problems that used to be called Grand Challenges. They greatly influenced my search for special-purpose computing projects: I sought problems that were computationally intensive, but whose solution could be greatly accelerated by a cleverly designed computer. I was also looking for successful projects. Although we could learn a great deal from a failed project, it might be difficult to find a willing author.
An interesting question is how to measure the success of special-purpose computing projects. We can consider it from either a professional or personal perspective. Professionally, we would hope that the primary measure of success is scientific discovery, but other measures of success such as performance increase or cost effectiveness can be easier to measure. (In my own field, many projects advertise their costs for production hardware, and sometimes it's very difficult to determine the true cost of development, especially for personnel.) In some cases, a competitive prize such as the Gordon Bell Award or IR100 recognition is evidence of success. Technology spin-off from a project can be of great practical importance as well, and can lead to future funding. In a competitive funding environment, it might be possible to fund a computer construction project with a prospect of spin-off when a simple computer purchase would have been too expensive or too uninteresting to the funding agency to be considered. In such a situation, computer construction might become a practical, not a scientific, necessity.
Turning to more personal issues, I also ask, what is the best way to spend my time? Will I get more science done by building a machine? Can I convince some clever computer designers that this is an interesting project, so I don't have to do it all on my own? Do I like to design and build computers? Would I be better off trying to design a new algorithm for my problem?
The Articles in This Issue
Because the question of how best to advance our scientific inquiries is still very much on my mind, I looked forward to the challenge of being guest editor for CiSE magazine and soliciting articles that would address the following issues:

    • What problem is being solved, and why is a special-purpose computer useful?

    • How was the special-purpose computer designed?

    • How did the special-purpose computer's performance compare with that of general-purpose computers available at the time of construction?

    • What results were achieved with the special-purpose computer?

    • Were technology spin-offs from the project incorporated into future designs?

    • By how many years, or by what cost factor, was the special-purpose computer an improvement on general-purpose computers?

I hoped to find a mix of mature and emerging projects that would provoke readers to wonder whether their own problems would be suited to solution via a special-purpose computer.
"Computing for LQCD: apeNEXT," by Francesco Belletti and his colleagues, tells the story of the most recent computer in a series that started approximately 20 years ago. APE computers use their own floating-point units and a custom network, and the authors have paid a lot of attention to software development. They developed their own language (called TAO) and compilers to enable efficient use of their computers; codes written in TAO port easily between generations of the APE computers.
"The GRAPE Project," by Jun Makino, reviews the history of the gravity pipe series of computers that have special-purpose processors designed for extreme speed on inverse square force laws. This project, not quite as old as APE, has gone from GRAPE-1 to GRAPE-6, achieving a million-fold speedup. Makino treats us to the details of the GRAPE-DR project, which is expected to achieve a peak speed of 2 Pflops/sec by the end of 2008 and explains some of the considerations in the design choice.
An additional article by Francesco Belletti and a different set of colleagues, "Ianus: An Adaptive FPGA Computer," is a little different from what I first envisioned. The predecessor to this project (the Spin Update Engine, or SUE) was a special-purpose computer built in Spain to study spin-glass systems. I wanted a project that wasn't as large, or as old, as the previous two, and this one seemed ideal because it was less than five years old, had only six authors, and used programmable logic rather than a custom-designed chip. The article I received indicates how quickly this project has grown, with 20 people now involved in constructing the next machine. Not only that, but the project now involves several APE collaborators.
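The computation behind SUE and Ianus is Monte Carlo simulation of spin-glass models, in which each step flips a spin according to the local field produced by its neighbors and the quenched random couplings. The sketch below of a Metropolis sweep over a small two-dimensional Edwards-Anderson lattice is purely illustrative; the lattice size, dimensionality, temperature, and update schedule are assumptions, and the machines themselves carry out such updates for many spins in parallel in programmable logic.

```c
/* Illustrative sketch only (not SUE or Ianus firmware): one Metropolis sweep
 * of a 2D Edwards-Anderson spin glass with quenched +/-1 couplings.  The
 * lattice size, temperature, and sweep count are assumptions. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define L    16     /* toy L x L lattice with periodic boundaries */
#define BETA 1.0    /* inverse temperature */

static int spin[L][L];          /* Ising spins, +1 or -1 */
static int Jx[L][L], Jy[L][L];  /* couplings to the +x and +y neighbors */

static double uniform(void) { return rand() / (RAND_MAX + 1.0); }

static void metropolis_sweep(void)
{
    for (int x = 0; x < L; x++)
        for (int y = 0; y < L; y++) {
            int xm = (x + L - 1) % L, xp = (x + 1) % L;
            int ym = (y + L - 1) % L, yp = (y + 1) % L;
            /* Local field from the four neighbors, weighted by the couplings. */
            int h = Jx[x][y] * spin[xp][y] + Jx[xm][y] * spin[xm][y]
                  + Jy[x][y] * spin[x][yp] + Jy[x][ym] * spin[x][ym];
            int dE = 2 * spin[x][y] * h;  /* energy change if this spin flips */
            if (dE <= 0 || uniform() < exp(-BETA * dE))
                spin[x][y] = -spin[x][y];
        }
}

int main(void)
{
    srand(12345);
    for (int x = 0; x < L; x++)
        for (int y = 0; y < L; y++) {
            spin[x][y] = uniform() < 0.5 ? -1 : 1;
            Jx[x][y]   = uniform() < 0.5 ? -1 : 1;
            Jy[x][y]   = uniform() < 0.5 ? -1 : 1;
        }

    for (int sweep = 0; sweep < 100; sweep++)
        metropolis_sweep();

    int m = 0;  /* report the magnetization as a simple sanity check */
    for (int x = 0; x < L; x++)
        for (int y = 0; y < L; y++)
            m += spin[x][y];
    printf("magnetization per spin = %g\n", (double)m / (L * L));
    return 0;
}
```

Each update needs only a few bits of neighboring state, a coupling sign, and a random number, which is why the problem fits so comfortably into field-programmable logic.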
"Chess Hardware in Deep Blue," by Feng-Hsiung Hsu, tells the story of IBM's famous Deep Blue computer, which defeated world chess champion Garry Kasparov in 1997. The problem of computer chess has the distinction of having achieved its goal, and the author has gone on to other challenges, but he discusses how someone might build hardware for a chess computer today. He suggests that a field-programmable gate array (FPGA) could prove to be better than designing an application-specific integrated circuit (ASIC). If this article on computer chess piques your interest, you might want to visit the Computer History Museum and see the exhibit called "Mastering the Game." If you can't get to Mountain View, California, to see it in person, visit www.computerhistory.org/chess/.
Conclusion
I hope you enjoy reading this issue and pondering whether your problems could be solved on a special-purpose computer as much as I've enjoyed learning about the projects and soliciting the articles.
Steven Gottlieb is professor of physics at Indiana University. His research interests include elementary particle theory—in particular, lattice QCD and computational physics. Gottlieb has a BA in mathematics and physics from Cornell University and an MA and a PhD in physics from Princeton University. He is a member of the American Physical Society and Sigma Xi, has served on the executive committee of the APS Division of Computational Physics, and is currently a divisional associate editor for Physical Review Letters. Contact him at sg AT indiana.edu.