IEEE Computer Graphics and Applications, vol. 30, no. 3, May/June 2010
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MCG.2010.51
Hank Childs, Lawrence Berkeley National Laboratory
David Pugmire, Oak Ridge National Laboratory
Sean Ahern, Oak Ridge National Laboratory
Brad Whitlock, Lawrence Livermore National Laboratory
Mark Howison, Lawrence Berkeley National Laboratory
Prabhat, Lawrence Berkeley National Laboratory
Gunther H. Weber, Lawrence Berkeley National Laboratory
E. Wes Bethel, Lawrence Berkeley National Laboratory
A series of experiments studied how visualization software scales to massive data sets. Although several paradigms exist for processing large data, the experiments focused on pure parallelism, the dominant approach in production software. The experiments ran multiple visualization algorithms on multiple architectures, targeting massive-scale processing (16,000 or more cores and one trillion or more cells) and weak scaling. They employed the largest data set sizes published to date in the visualization literature. The findings on scaling characteristics and bottlenecks will help researchers understand how pure parallelism performs at high levels of concurrency on very large data sets.
visualization, pure parallelism, many-core processing, petascale computing, VisIt, very large data sets, interprocess communication, I/O performance, Denovo, Dawn, computer graphics, graphics and multimedia
H. Childs et al., "Extreme Scaling of Production Visualization Software on Diverse Architectures," IEEE Computer Graphics and Applications, vol. 30, no. 3, pp. 22-31, May/June 2010.