Every now and then, a peculiar kind of news story appears about some scientific topic. On first reading, it looks like "startling new results" or "the answer to everything" about some perpetually hot topic, such as the age of the universe, the origin of mankind, or the best diet for a healthy life. One characteristic these examples all share is that they fade quickly, only to be replaced by a new ultimate answer. Sometimes rather than fading, the thrilling discovery has a second life in checkout-line tabloids. A few of these items are hoaxes, some are merely consequences of over-enthusiasm about preliminary results, but many are honest mistakes carried to the point of pathology.
To be fair, let me say at the outset that computational science is not immune to this pathology. But a point I hope to make is that the widespread availability of fairly high-end computing has shortened the life span of the scientific pathologies that occur in computing.
The term "pathological science" goes back at least as far as Irving Langmuir's famous 1953 General Electric lecture, in which he discussed things like N-rays and ESP. He described pathological science this way:
These are cases where there is no dishonesty involved but where people are tricked into false results by a lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking or threshold interactions. These are examples of pathological science. These are things that attracted a great deal of attention. Usually hundreds of papers have been published on them. Sometimes they have lasted for 15 or 20 years and then gradually have died away.
Langmuir also identified six features that he thought characterized pathological science:
• The maximum effect observed is produced by a causative agent of barely detectable intensity; the magnitude of the effect is substantially independent of the intensity of the cause.
• The effect is of a magnitude that remains close to the limit of detectability, or many measurements are necessary because of the very low statistical significance of the results.
• Claims of great accuracy.
• Fantastic theories contrary to experience.
• Criticisms are met by ad hoc excuses thought up on the spur of the moment.
• The ratio of supporters to critics rises up to somewhere near 50 percent and then falls gradually to oblivion.
Langmuir's lecture did not put an end to pathological science. In 1966, the Soviet scientists Boris Vladimirovich Derjaguin and N.N. Fedyakin reported a new form of water that came to be known as "polywater." It had a density higher than that of normal water, a viscosity 15 times that of normal water, a boiling point higher than 100 degrees Centigrade, and a freezing point lower than zero degrees. After more experiments, it turned out that these strange properties were all due to impurities in the samples. An amusing sidenote is that the polywater episode occurred a few years after Kurt Vonnegut's book Cat's Cradle, which imagined a form of water, and more importantly a form of ice, with strange properties. The most well-publicized pathological case in recent years is arguably the cold fusion story.
Why do these things happen? Imagine working late into the night on a new algorithm that you feel sure will be much more efficient than existing methods, but it somehow doesn't seem to work. After many hours of effort, you make a few more changes to the code, and suddenly it works amazingly well. The results begin to appear almost as soon as you hit the enter key. Next you try another case, but that example doesn't work well at all. You go back to re-run the original wonderful case, and that doesn't work either! This is the danger point: you either find the error that made the one good case work, or you decide that there's a subtle effect here that can only be produced by doing things just so. If you choose the second path and get one more good result, you might end up believing you have an excellent method that only you know how to use. This is one way that legitimate science can descend into pathology.
Fortunately, your experiment was done with a computer rather than a complicated lab setup, which means that, in principle, others can repeat the experiment quickly and easily. And unless you're very stubborn indeed, you'll soon discover that your wonderful result was a fluke caused by a simple error, perhaps something like branching to a routine where the correct answer was stored for testing purposes.
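The kind of fluke described above can be embarrassingly mundane. As a purely hypothetical sketch (the function names and values here are invented for illustration), a leftover testing shortcut can make exactly one "wonderful" case appear to succeed while the real computation remains broken:

```python
# Hypothetical illustration of a leftover testing stub: the benchmark
# input silently branches to a stored answer instead of the computation.

KNOWN_ANSWER = 3.14159  # correct answer stored while testing the benchmark case

def new_algorithm(x):
    if x == 2.0:                # the one benchmark input
        return KNOWN_ANSWER     # branch to the stored answer, not a computation
    return x * x - 1.0          # the real (and still incorrect) computation

print(new_algorithm(2.0))  # the "amazing" benchmark result
print(new_algorithm(3.0))  # every other case quietly fails
```

Because the code is available to anyone, a colleague re-running it on a second input exposes the stub immediately, which is exactly the quick repeatability the paragraph above credits to computational experiments.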
A final caution: to guard against becoming too complacent about the use of computing as immunization against pathological science, recall the many instances where easily generated and beautiful "gratuitous graphics" are used in lieu of content in computational science presentations. I don't know if this is pathological science in the old sense, but it's a symptom of something spawned by the ease of computing.