Issue No. 7 - July 2009 (vol. 58)
pp. 994-1000
Stef Graillat , Université Pierre et Marie Curie, Paris
Several different techniques and software tools aim to improve the accuracy of results computed in a fixed finite precision. Here, we focus on a method to improve the accuracy of the product of floating-point numbers. We show that the computed result is as accurate as if computed in twice the working precision. The algorithm is simple, since it requires only addition, subtraction, and multiplication of floating-point numbers in the same working precision as the given data. Such an algorithm is useful, for example, for computing the determinant of a triangular matrix or for evaluating a polynomial represented in root product form. It can also be used to compute an integer power of a floating-point number.
Index Terms: Accurate product, exponentiation, finite precision, floating-point arithmetic, faithful rounding, error-free transformations.
Stef Graillat, "Accurate Floating-Point Product and Exponentiation," IEEE Transactions on Computers, vol. 58, no. 7, pp. 994-1000, July 2009, doi:10.1109/TC.2008.215.