Issue No. 07 - July (2015 vol. 64)

ISSN: 0018-9340

pp: 2060-2070

James Demmel , Mathematics Department and CS Division, University of California at Berkeley, Berkeley, CA

Hong Diep Nguyen , EECS Department, University of California at Berkeley, Berkeley, CA

ABSTRACT

Reproducibility, i.e., getting bitwise identical floating point results from multiple runs of the same program, is a property that many users depend on for debugging or correctness checking [10]. However, the combination of dynamic scheduling of parallel computing resources and floating point non-associativity makes attaining reproducibility a challenge, even for simple reduction operations like computing the sum of a vector of numbers in parallel. We propose a technique for floating point summation that is reproducible independent of the order of summation. Our technique uses Rump’s algorithm for error-free vector transformation [7] and is much more efficient than using (possibly very) high precision arithmetic. Our algorithm reproducibly computes highly accurate results with an absolute error bound of $n \cdot 2^{-28} \cdot \mathrm{macheps} \cdot \max_i |v_i|$ at a cost of $7n$ FLOPs and a small constant amount of extra memory. Higher accuracies are also possible by increasing the number of error-free transformations. As long as all operations are performed in round-to-nearest mode, results computed by the proposed algorithms are reproducible for any run on any platform. In particular, our algorithm requires the minimum number of reductions, i.e., one reduction of an array of six double precision floating point numbers per sum, and hence is well suited to massively parallel environments.
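The key primitive behind error-free transformations is that the rounding error of a single floating point addition is itself exactly representable as a floating point number. The minimal Python sketch below is not the paper's algorithm (which uses Rump's error-free *vector* transformation with pre-rounding); it only illustrates the two underlying facts: floating point addition is non-associative, and Knuth's classic TwoSum recovers the exact rounding error of one addition.

```python
def two_sum(a, b):
    """Knuth's TwoSum error-free transformation.

    Returns (s, t) with s = fl(a + b) and s + t == a + b exactly
    (in round-to-nearest, barring overflow).
    """
    s = a + b
    a_approx = s - b
    b_approx = s - a_approx
    t = (a - a_approx) + (b - b_approx)
    return s, t

# Floating point addition is non-associative, so the result of a
# parallel reduction depends on the order of summation:
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)   # False in IEEE double precision

# TwoSum captures the rounding error exactly:
s, t = two_sum(0.1, 0.2)
print(s, t)            # rounded sum and its (nonzero) rounding error
```

Reproducible summation schemes such as the one in this paper build on this idea: by accumulating both the rounded sums and their exact error terms in a fixed, order-independent way, the final result no longer depends on how the reduction was scheduled.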

INDEX TERMS

Vectors, Program processors, Accuracy, Standards, Algorithm design and analysis, Numerical analysis, Computational modeling, Reproducibility, Summation, Floating point, Rounding error, Parallel computing

CITATION

James Demmel, Hong Diep Nguyen, "Parallel Reproducible Summation",

*IEEE Transactions on Computers*, vol. 64, no. 7, pp. 2060-2070, July 2015, doi:10.1109/TC.2014.2345391