Pages: pp. 4-5
A recent article in The New York Times described the work of a professor at Stanford who says that experts in literature should not spend time actually reading things. Instead, he thinks they should gather statistics, plot data, and generally do what scientists do (or what he appears to think they do). Naturally, not many experts in literature are taking him up on this approach, but it could have one significant advantage: it would shift emphasis away from ratings, prizes, and opinions and toward data. Of course, that's probably not what people want, and the data might be meaningless, but you can't have everything.
Certainly the functions of critics and pundits are important and useful. You can't really judge everything first-hand, and having access to experts is a great time-saver. It's also amusing to read what the critics say about things you've read. It's even interesting to read what critics say about other critics, although this does get a little far down on the creativity food chain. But blind trust in credentials and eminence is dangerous, very dangerous.
My mistrust of ratings and prizes was triggered long ago when I happened to be within earshot of a vigorous debate between two senior scientists. The topic they were discussing was not well known to either of them, but they had strong and opposing opinions. At a certain point, one of them said, "Well, you might be interested to know that a Nobel Prize winner disagrees with your view!" The other answered, "So what? There's nothing special about the Nobel Prize. They give it out every year!" This response was, I thought, startling and brilliant. The Nobel Prize winner who'd been mentioned was also not an expert in the subject of the argument, but even if he had been, so what? Mere fame itself shouldn't give someone special authority to speak on any and all subjects. If fame were enough, we'd turn to movie stars for guidance on policies and budgets for large states or even for the whole country.
Computational science is no exception. Most computational scientists belong to one or more professional societies. Virtually all of these give prizes and awards that are taken seriously. In fact, some of these organizations were created for the sole purpose of giving out awards. This tendency to rate and rank things based merely on hearsay and reputation plays a role in decision-making on some very important subjects. The two most dramatic but unfortunately meaningless questions I can think of are, "What's the fastest computer?" and "Where's the most powerful machine?" Neither of these questions has a precise answer; indeed, they're both stand-ins for more complex and urgent questions.
There's an obvious conclusion that I believe is worth stating yet again. A robust scientific approach to addressing these questions involves formulas, test runs, statistics, and graphs, not the tools used for lit-crit. No single numeric metric for machine performance will provide the information needed.
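The inadequacy of a single metric is easy to demonstrate with a small sketch. The two machines and all the numbers below are invented purely for illustration, and the calculation uses the simple roofline model (attainable throughput is the lesser of peak compute rate and memory bandwidth times arithmetic intensity). The familiar pattern emerges: the machine with the higher "peak FLOPS" wins on a compute-bound kernel but loses on a memory-bound one, so a single ranking answers neither question.

```python
# Two hypothetical machines; peak compute (GFLOP/s) and memory
# bandwidth (GB/s). All numbers are invented for illustration only.
machines = {
    "A": {"peak_gflops": 500.0, "mem_bw_gbs": 40.0},
    "B": {"peak_gflops": 300.0, "mem_bw_gbs": 120.0},
}

def roofline_gflops(m, arithmetic_intensity):
    """Attainable GFLOP/s under the simple roofline model:
    min(peak compute rate, bandwidth * flops-per-byte)."""
    return min(m["peak_gflops"], m["mem_bw_gbs"] * arithmetic_intensity)

# Compute-bound kernel (100 flops/byte): machine A's higher peak wins.
for name, m in machines.items():
    print(f"compute-bound, {name}: {roofline_gflops(m, 100.0)} GFLOP/s")

# Memory-bound kernel (0.5 flops/byte): machine B's bandwidth wins,
# even though B ranks lower by peak FLOPS alone.
for name, m in machines.items():
    print(f"memory-bound,  {name}: {roofline_gflops(m, 0.5)} GFLOP/s")
```

The point is not the particular model, which is deliberately crude, but that "fastest" depends on the workload: no scalar summary of a machine preserves that information.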
George K. Thiruvathukal is a visiting associate professor in the Department of Computer Science at Loyola University Chicago. He is also president and CEO of Nimkathana Corporation, which does research and development in high-performance cluster computing, data mining, handheld/embedded software, and distributed systems. Thiruvathukal's role on the board will be to serve as the new co-editor for the Scientific Programming department. His other research interests include scientific programming in Java and XML and open-source projects. He has written two books for Prentice Hall: one on concurrent, parallel, and distributed programming patterns and techniques in Java, and one on Web programming in Python. Thiruvathukal has a PhD from the Illinois Institute of Technology in Chicago. For further information, please visit his Web site at http://gkt-www.cs.luc.edu.