Issue No. 5 (vol. 6), September/October 2004, p. 3
Published by the IEEE Computer Society
ABSTRACT
A fanciful way to describe the difference between magic and science is by way of humankind's relationship with the gods. Magic, in this view, is an attempt to force the gods to do what we humans want, but science is an attempt to determine what the gods will do next. Put another way, magic pays little attention to discovering causes of various outcomes, because practitioners of magic believe (or appear to believe) that apparent causes have little to do with observed outcomes. Science, in contrast, is all about discovering causes. But what about prediction? Is Yogi Berra's quote in the title of this column as true today as it was when he first uttered it?
A fanciful way to describe the difference between magic and science is by way of humankind's relationship with the gods. Magic, in this view, is an attempt to force the gods to do what we humans want, but science is an attempt to determine what the gods will do next. Put another way, magic pays little attention to discovering causes of various outcomes, because practitioners of magic believe (or appear to believe) that apparent causes have little to do with observed outcomes. Science, in contrast, is all about discovering causes.
But what about prediction? Is Yogi Berra's quote in the title of this column as true today as it was when he first uttered it? What is the status of prediction vis-à-vis magic and science? When Uri Geller "bends" a spoon, he always invites his audience to examine it both before and after the magical bending. So he does, in fact, predict that the spoon will be bent. However, this prediction is probably not what most readers of CiSE mean when they use the word "predict." A prediction claiming to be scientific must meet one requirement: that others be able to use the same methods, such as an experiment or a theoretical derivation, to reproduce the same prediction. These methods should be clear, understandable, and, in some sense, universal. They should not depend on the observer being a member of a particular audience at a particular time. When Dirac predicted the existence of the positron, others could follow his reasoning and reach the same conclusion, assuming, of course, a certain degree of mathematical brilliance and depth of understanding.
Obviously, computation on its own can be used for prediction. It can be argued that computation was invented for predicting the change of seasons (and also for calculations related to trade and commerce). Gauss predicted where in the sky to look for the planetoid Ceres, and computations based on quantum electrodynamics are probably the most precise in all of science. But these examples are not quite in the same league as the discovery of positrons. They're not predicting something entirely new and unsuspected; rather, they give computational confirmation of the results of theory.
Has any computation predicted something big and completely unknown before the computation was performed? Is it even possible? It's sometimes said that floating-point arithmetic limits the possibility of computational prediction. There's even the famous story that Alston Householder wouldn't fly in a plane designed using floating-point arithmetic. (He wouldn't fly in one not designed using floating-point, either.) I don't know the answer to this question, but I'm just about certain that it's not a question of the type of arithmetic. If anyone knows of examples, please let us know (write to fran@super.org). We'll try to choose the best ones and announce the winners in the next issue of CiSE.
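As a minimal illustration (my own sketch, not part of the column) of why floating-point arithmetic gets blamed in this way, consider that the same mathematical sum can produce different answers depending only on the order of the additions, because round-off error accumulates differently:

    # Python sketch: the order of summation changes the computed result.
    # A large value followed by many small ones: the small terms are lost
    # when each is added to the large one first, but they survive if they
    # are summed among themselves before meeting it.

    def naive_sum(values):
        """Sum values left to right in ordinary double precision."""
        total = 0.0
        for v in values:
            total += v
        return total

    values = [1.0e16] + [1.0] * 1000

    forward = naive_sum(values)                    # small terms absorbed
    backward = naive_sum(list(reversed(values)))   # small terms summed first

    print(forward)   # 1e+16               (the 1000 ones vanish entirely)
    print(backward)  # 1.0000000000001e+16 (matches the exact answer)

The point of the sketch is only that such effects are properties of a particular arithmetic and a particular ordering of operations, which is consistent with the claim above that the deeper question of computational prediction is not really a question of the type of arithmetic.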