Extreme Scaling of Production Visualization Software on Diverse Architectures
May/June 2010 (vol. 30 no. 3)
pp. 22-31
Hank Childs, Lawrence Berkeley National Laboratory
David Pugmire, Oak Ridge National Laboratory
Sean Ahern, Oak Ridge National Laboratory
Brad Whitlock, Lawrence Livermore National Laboratory
Mark Howison, Lawrence Berkeley National Laboratory
Prabhat, Lawrence Berkeley National Laboratory
Gunther H. Weber, Lawrence Berkeley National Laboratory
E. Wes Bethel, Lawrence Berkeley National Laboratory
Abstract:
A series of experiments studied how visualization software scales to massive data sets. Although several paradigms exist for processing large data, the experiments focused on pure parallelism, the dominant approach for production software. The experiments used multiple visualization algorithms and ran on multiple architectures. They focused on massive-scale processing (16,000 or more cores and one trillion or more cells) and weak scaling. These experiments employed the largest data set sizes published to date in the visualization literature. The findings on scaling characteristics and bottlenecks will help researchers understand how pure parallelism performs at high levels of concurrency with very large data sets.
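To make the pure-parallelism approach concrete, the sketch below illustrates the general pattern the abstract refers to: the data set is decomposed into blocks, every MPI task runs the same visualization operation on its own blocks, and the per-task results are combined at the end. This is a minimal illustration only, not the paper's VisIt implementation; the block count, the `load_block` and `contour_cell_count` helpers, and the dummy scalar field are all hypothetical stand-ins.

```python
# Minimal sketch of the pure-parallelism pattern: partition the data,
# run the identical pipeline on every MPI task, combine the results.
# Illustrative only -- not the production VisIt pipeline from the paper.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

TOTAL_BLOCKS = 1024  # hypothetical decomposition of the mesh into blocks

def load_block(block_id):
    """Stand-in for reading one block of the mesh from parallel storage."""
    rng = np.random.default_rng(block_id)
    return rng.random((32, 32, 32))  # dummy scalar field for this block

def contour_cell_count(field, isovalue=0.5):
    """Stand-in for an isosurface pass: count cells the surface crosses."""
    crossings = (field[:-1] < isovalue) & (field[1:] >= isovalue)
    return int(np.count_nonzero(crossings))

# Each task processes only the blocks assigned to it (round-robin here).
local_count = 0
for block_id in range(rank, TOTAL_BLOCKS, size):
    field = load_block(block_id)
    local_count += contour_cell_count(field)

# Combine per-task results; a real pipeline would gather geometry or
# composite images rather than reduce a single integer.
total = comm.reduce(local_count, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} tasks, {TOTAL_BLOCKS} blocks, {total} intersected cells")
```

Under weak scaling, as studied in the paper, the amount of data per task is held roughly constant while the task count grows, so in a sketch like this the total block count would increase in proportion to the number of MPI tasks.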
Index Terms:
visualization, pure parallelism, many-core processing, petascale computing, VisIt, very large data sets, interprocess communication, I/O performance, Denovo, Dawn, computer graphics, graphics and multimedia
Citation:
Hank Childs, David Pugmire, Sean Ahern, Brad Whitlock, Mark Howison, Prabhat, Gunther H. Weber, E. Wes Bethel, "Extreme Scaling of Production Visualization Software on Diverse Architectures," IEEE Computer Graphics and Applications, vol. 30, no. 3, pp. 22-31, May-June 2010, doi:10.1109/MCG.2010.51