<p>Probabilistic inference is an important technique for reasoning under uncertainty in areas such as medicine, software fault diagnosis, speech recognition, and automated vision. Although it could contribute to many more applications, probabilistic inference is extremely computationally intensive, making it impractical for applications that involve large databases. One way to address this problem is to exploit the technique's available parallelism. The authors evaluated the effectiveness of performing probabilistic inference in parallel and found that it presents interesting tradeoffs between load balance and data locality, two factors that are key to successful parallel applications yet often difficult to optimize together. To find the best tradeoff, they wrote two parallel programs, one static and one dynamic, that exploit different forms of parallelism available in probabilistic inference. They evaluated both programs on a 32-processor Stanford Dash and a 16-processor SGI Challenge XL using six medium-sized belief networks, analyzing in a series of experiments how computation time was spent and how data locality affected performance. They then tested the static program on a large medical diagnosis network. The static program, which maximizes data locality, outperformed the dynamic program and reduced the time probabilistic inference takes on the large medical network. The results suggest that maintaining good data locality is crucial for obtaining good speedups and that the speedups attained depend on the network's structure and size. </p>
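The load-balance versus locality tradeoff described above can be illustrated with a minimal scheduling sketch. This is a hypothetical example, not the authors' implementation: per-node inference costs and the four-processor count are invented, and the two schedulers stand in for the static (locality-preserving, contiguous blocks of network nodes per processor) and dynamic (load-balancing, greedy assignment) strategies, compared by makespan (the heaviest per-processor load).

```python
# Hypothetical sketch of the static-vs-dynamic scheduling tradeoff; costs and
# processor count are invented for illustration.

def static_makespan(costs, n_procs):
    """Static scheme: contiguous blocks of nodes per processor (good locality,
    but a block containing an expensive node creates imbalance)."""
    block = -(-len(costs) // n_procs)  # ceiling division
    loads = [sum(costs[i:i + block]) for i in range(0, len(costs), block)]
    return max(loads)

def dynamic_makespan(costs, n_procs):
    """Dynamic scheme modeled as greedy list scheduling: each task goes to the
    currently least-loaded processor (good balance, poorer locality)."""
    loads = [0] * n_procs
    for c in sorted(costs, reverse=True):
        loads[loads.index(min(loads))] += c
    return max(loads)

# Skewed per-node costs, as in an irregular belief network: one heavy clique.
costs = [1, 1, 1, 1, 20, 1, 1, 1]
print(static_makespan(costs, 4))   # → 21 (one block absorbs the heavy node)
print(dynamic_makespan(costs, 4))  # → 20 (heavy node isolated on one processor)
```

The sketch captures only load balance; the paper's finding is that the static program nevertheless wins overall because its locality reduces communication and cache-miss costs, which this makespan model does not account for.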

J. P. Singh and A. V. Kozlov, "Parallel Implementations of Probabilistic Inference," Computer, vol. 29, pp. 33-40, 1996.