2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS) (2016)

Chicago, IL, USA

May 23, 2016 to May 27, 2016

ISSN: 1530-2075

ISBN: 978-1-5090-2141-3

pp: 912-922

DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/IPDPS.2016.67

ABSTRACT

As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data, assuming double precision. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 5000 on real-world data sets with negligible loss in accuracy. To enable computation on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
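The abstract's size arithmetic and the Tucker decomposition it describes can be illustrated on a small scale. The sketch below is a minimal, illustrative sequential HOSVD (higher-order SVD) in NumPy, one standard way to compute a Tucker decomposition of a dense tensor; it is not the paper's distributed-memory implementation, and the function name `hosvd` and the small test ranks are choices made here for illustration.

```python
import numpy as np

# Size arithmetic from the abstract: a 512^3 spatial grid, 64 variables,
# 128 time steps, 8 bytes per double-precision value = 8 TB.
nbytes = (512 ** 3) * 64 * 128 * 8
assert nbytes == 8 * 2 ** 40

def hosvd(X, ranks):
    """Truncated HOSVD: an illustrative Tucker decomposition of a dense
    tensor X, returning a core tensor and one factor matrix per mode."""
    factors = []
    for n, r in enumerate(ranks):
        # Mode-n unfolding: bring mode n to the front and flatten the rest.
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        # The leading r left singular vectors form the mode-n factor matrix.
        U, _, _ = np.linalg.svd(Xn, full_matrices=False)
        factors.append(U[:, :r])
    # Form the core by a tensor-times-matrix (TTM) product in each mode
    # with the transposed factor.
    core = X
    for n, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors
```

Storing the core plus factor matrices in place of the full tensor is what yields the compression ratios reported in the abstract; for a tensor with exact low multilinear rank, the reconstruction from the truncated core and factors is exact up to floating-point error.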

INDEX TERMS

Tensors, Computational modeling, Data models, Kernel, Matrix decomposition, Distributed databases, Algorithm design and analysis

CITATION

W. Austin, G. Ballard and T. G. Kolda, "Parallel Tensor Compression for Large-Scale Scientific Data,"

*2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS)*, Chicago, IL, USA, 2016, pp. 912-922.

doi:10.1109/IPDPS.2016.67
