Issue no. 2, March/April 2004 (vol. 6), pp. 4-5. Published by the IEEE Computer Society.
A recent article in The New York Times described the work of a professor at Stanford who says that experts in literature should not spend time actually reading things. Instead, he thinks they should gather statistics, plot data, and generally do what scientists do (or what he appears to think they do). Naturally, not many experts in literature are taking him up on this approach, but it could have one significant advantage: it would shift emphasis away from ratings, prizes, and opinions and more toward data. Of course, that's probably not what people want, and the data might be meaningless, but you can't have everything.
Certainly the functions of critics and pundits are important and useful. You can't really judge everything first-hand, and having access to experts is a great time-saver. It's also amusing to read what the critics say about things you've read. It's even interesting to read what critics say about other critics, although this does get a little far down on the creativity food chain. But blind trust in credentials and eminence is dangerous, very dangerous.
My mistrust of ratings and prizes was triggered long ago when I happened to be within earshot of a vigorous debate between two senior scientists. The topic they were discussing was not well known to either of them, but they had strong and opposing opinions. At a certain point, one of them said, "Well, you might be interested to know that a Nobel Prize winner disagrees with your view!" The other answered, "So what? There's nothing special about the Nobel Prize. They give it out every year!" This response was, I thought, startling and brilliant. The Nobel Prize winner who'd been mentioned was also not an expert in the subject of the argument, but even if he had been, so what? Mere fame itself shouldn't give someone special authority to speak on any and all subjects. If fame were enough, we'd turn to movie stars for guidance on policies and budgets for large states or even for the whole country.
Computational science is no exception. Most computational scientists belong to one or more professional societies. Virtually all of these give prizes and awards that are taken seriously. In fact, some of these organizations were created for the sole purpose of giving out awards. This tendency to rate and rank things based merely on hearsay and reputation plays a role in decision-making on some very important subjects. The two most dramatic but unfortunately meaningless questions I can think of are, "What's the fastest computer?" and "Where's the most powerful machine?" Neither of these questions has a precise answer; indeed, they're both stand-ins for more complex and urgent questions, such as

    • How should I spend my dollars to get the computing capability and capacity I need to deal with my problem set?
    • Which metrics should I use to evaluate machines proposed by vendors?
    • What promises should I make to my boss about our next machine's performance and ease of programming?
    • How long before our best programmers get top performance from the machine? How long for our "pretty good" programmers?
    • How can I state my requirements in ways that lead vendors to build the right machine?
    • Does the fact that the Top-500 list has changed mean that we're making the wrong decisions?

There's an obvious conclusion that I believe is worth stating yet again. A robust scientific approach to addressing these questions involves formulas, test runs, statistics, and graphs, not the tools used for lit-crit. No single numeric metric for machine performance will provide the information needed.
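To make that concrete, here is a minimal sketch in Python of the kind of comparison the questions above call for: time a representative suite of jobs on each candidate machine, look at the per-benchmark results, and summarize them with a statistic rather than a single headline number. The machine names, benchmark names, and timings below are all invented for illustration; they are not measurements from any real system.

    from statistics import geometric_mean

    # Hypothetical wall-clock times (seconds) per benchmark; lower is better.
    # "baseline" is the current system; the candidates are machines under evaluation.
    baseline = {"cfd_solver": 420.0, "md_kernel": 310.0, "io_checkpoint": 95.0}
    candidates = {
        "machine_A": {"cfd_solver": 150.0, "md_kernel": 290.0, "io_checkpoint": 90.0},
        "machine_B": {"cfd_solver": 230.0, "md_kernel": 120.0, "io_checkpoint": 40.0},
    }

    for name, times in candidates.items():
        # Per-benchmark speedup relative to the baseline system.
        speedups = {b: baseline[b] / times[b] for b in baseline}
        # The geometric mean summarizes performance across the whole suite,
        # so no single benchmark dominates the way one peak number would.
        print(name,
              {b: round(s, 2) for b, s in speedups.items()},
              "geomean:", round(geometric_mean(speedups.values()), 2))

In this made-up example, the machine that "wins" the most visible kernel is not the one that serves the whole workload best, which is exactly why a single metric misleads.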