Computing in Science & Engineering, vol. 8, no. 2, March/April 2006, pp. 7-8
Published by the IEEE Computer Society
Isabel Beichl , National Institute of Standards and Technology
Francis Sullivan , IDA Center for Computing Sciences
ABSTRACT
The term Monte Carlo method stands for any member of a very large class of computational methods that use randomness to generate "typical" instances of a problem under investigation. Typical instances are generated because it's impractical or even impossible to generate all instances. A set of typical instances is supposed to help us learn something about a problem of interest. Most of the time, Monte Carlo works amazingly well, but when used blindly, with no firm basis in theory, it can yield some very strange results or run for many, many hours and yield nothing. One of the triumphs of the modern period in Monte Carlo methods has been a dramatic improvement in our understanding of how to speed up the computation and how to know when the method will work.
Betting with Bits
The basic ideas of Monte Carlo go back at least to the founding of probability theory. In fact, if we're willing to think of gambling as a Monte Carlo method, we could say that the ideas go back to the dawn of civilization. Probability theory itself started with Pascal's analysis of an interrupted card game. Stated in modern terminology, gambling can be thought of as a Monte Carlo experiment designed to estimate the casino's house advantage, expressed as a rate of loss per hour for the typical patron. It's an expensive computation, but, as thousands of casino goers can attest, it works quite well.
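To make the analogy concrete, here is a minimal sketch of that "experiment" in Python (our illustration, not part of the original discussion); the game, bet size, and pace of play are hypothetical stand-ins for whatever the typical patron actually plays:

import random

# Hypothetical game: an even-money bet on red in American roulette
# (18 winning pockets out of 38). The bet size and pace of play are
# illustrative assumptions.
WIN_PROB = 18 / 38
BET = 10.0          # dollars per bet
BETS_PER_HOUR = 60  # rough pace for a casual player

def simulate_hourly_loss(n_bets=1_000_000, seed=0):
    """Estimate the player's expected loss per hour by Monte Carlo."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_bets):
        total += BET if rng.random() < WIN_PROB else -BET
    mean_per_bet = total / n_bets          # average result of one bet
    return -mean_per_bet * BETS_PER_HOUR   # expected loss per hour

if __name__ == "__main__":
    est = simulate_hourly_loss()
    exact = (2 / 38) * BET * BETS_PER_HOUR  # closed-form house edge for this game
    print(f"simulated loss/hour: ${est:.2f}  (exact: ${exact:.2f})")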
Of course, "ancient" (circa 1777) Monte Carlo ideas, such as Buffon's technique for determining the digits of π by repeatedly dropping a needle on a lined piece of paper, are of limited use in practical problems. Monte Carlo techniques got their real start with the birth of digital computers in the late 1940s. The origin of the key idea for the most important and widely used Monte Carlo method—namely, using a type of importance sampling to sample from the limit distribution—has been variously ascribed to John von Neumann, Stanislaw Ulam, Enrico Fermi, and others. Whatever its origin, the idea was first described in the now famous paper by Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller [1] and has come to be known as the Markov chain Monte Carlo (MCMC) method.
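For readers who would like to try the ancient method themselves, here is a minimal Python sketch of Buffon's experiment (our illustration; the drop count and needle geometry are arbitrary choices). A needle of length no greater than the line spacing crosses a line with probability 2L/(πd), which yields an estimate of π:

import math
import random

def buffon_pi(n_drops=1_000_000, needle=1.0, spacing=1.0, seed=0):
    """Estimate pi by Buffon's needle: drop a needle of length <= spacing
    onto ruled paper; the crossing probability is 2*needle/(pi*spacing)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_drops):
        center = rng.uniform(0.0, spacing / 2)   # distance from needle center to nearest line
        angle = rng.uniform(0.0, math.pi / 2)    # needle orientation relative to the lines
        if center <= (needle / 2) * math.sin(angle):
            hits += 1
    return 2 * needle * n_drops / (spacing * hits)

if __name__ == "__main__":
    print(buffon_pi())   # converges (slowly) toward 3.14159...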
This Issue
The trail from the classical Metropolis method to modern MCMC techniques is a long and interesting journey. Improvements in both theory and implementation have sped up convergence, opened up vast new areas of application, and generalized the basic method in many ways. This special issue of CiSE includes a few, though certainly not all, of the more important developments and trends. Even a brief survey of all important topics in Monte Carlo would, of course, require a door-stop-sized tome.
In the first article, Jacques Amar traces the path from the original Metropolis method to some of its present-day versions. He also includes a discussion of acceleration methods and a study of kinetic methods. In the case of kinetic Monte Carlo, we're interested in time-dependent behavior rather than a fixed probability distribution.
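For readers encountering the method for the first time, the heart of the classical algorithm is its accept/reject rule. The following minimal random-walk Metropolis sketch (our own toy example with a standard normal target, not code from Amar's article) shows that rule in isolation:

import math
import random

def metropolis_normal(n_samples=100_000, step=1.0, seed=0):
    """Classical Metropolis sampling of a standard normal target with a
    symmetric random-walk proposal and the rule
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    def log_target(x):
        return -0.5 * x * x                  # log of the unnormalized density
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)
        delta = log_target(proposal) - log_target(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = proposal                     # accept; otherwise keep the old state
        samples.append(x)
    return samples

if __name__ == "__main__":
    xs = metropolis_normal()
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    print(f"sample mean ~ {mean:.3f}, sample variance ~ {var:.3f}")  # near 0 and 1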
For many years, researchers thought Monte Carlo couldn't be applied to molecular dynamics (MD) simulations because MD aims to follow the motions and interactions of individual particles. Erik Luijten's article explains how Monte Carlo can, in fact, be used for MD.
Dana Randall's article provides an entry into the all-important theory of MCMC. Her emphasis is on the use of Monte Carlo in counting problems—an application that has developed strongly in the past decade.
Finally, your faithful guest editors, Isabel Beichl and Francis Sullivan, take up the theme of counting. In this case, however, the method of choice is not MCMC, but rather sequential importance sampling (SIS). We attempt to explain how and why SIS works, and we also try to correct some misconceptions about its efficiency.
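To give a flavor of the approach, here is a minimal SIS sketch in the spirit of the Rosenbluth method (our illustration, not code from the article itself): it estimates the number of self-avoiding walks on the square lattice by growing one walk at random and multiplying a weight by the number of available extensions at each step.

import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def saw_weight(n_steps, rng):
    """One SIS trial: grow a self-avoiding walk one step at a time,
    multiplying the weight by the number of choices available at each
    step. The weight is an unbiased estimate of the number of
    self-avoiding walks of length n_steps."""
    visited = {(0, 0)}
    x, y = 0, 0
    weight = 1
    for _ in range(n_steps):
        choices = [(x + dx, y + dy) for dx, dy in MOVES
                   if (x + dx, y + dy) not in visited]
        if not choices:          # walk is trapped: this trial contributes 0
            return 0
        weight *= len(choices)
        x, y = rng.choice(choices)
        visited.add((x, y))
    return weight

def estimate_saw_count(n_steps=10, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(saw_weight(n_steps, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    # For comparison, the exact count of 10-step self-avoiding walks
    # on the square lattice is 44,100.
    print(estimate_saw_count(10))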
Conclusion
We hope that readers skimming through these articles will begin to feel some of the excitement of recent work in Monte Carlo and will go on to read these and other papers in detail. In the best of all possible worlds, readers who aren't yet users of Monte Carlo will try it and contribute to the subject.

Reference
1. N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller, "Equation of State Calculations by Fast Computing Machines," J. Chemical Physics, vol. 21, no. 6, 1953, pp. 1087-1092.
Isabel Beichl is a mathematician in the Information Technology Laboratory at the National Institute of Standards and Technology. Contact her at isabel.beichl@nist.gov.
Francis Sullivan is the director of the IDA Center for Computing Sciences in Bowie, Maryland. From 2000 through 2004, he served as CiSE magazine's editor in chief. Contact him at fran@super.org.