2009 IEEE International Conference on Cluster Computing and Workshops (2009)
New Orleans, LA USA
Aug. 31, 2009 to Sept. 4, 2009
Sarah Loebman, University of Washington, Seattle, WA
Dylan Nunley, University of Washington, Seattle, WA
YongChul Kwon, University of Washington, Seattle, WA
Bill Howe, University of Washington, Seattle, WA
Magdalena Balazinska, University of Washington, Seattle, WA
Jeffrey P. Gardner, University of Washington, Seattle, WA
As the datasets used to fuel modern scientific discovery grow increasingly large, they become increasingly difficult to manage using conventional software. Parallel database management systems (DBMSs) and massive-scale data processing systems such as MapReduce hold promise to address this challenge. However, since these systems have not been expressly designed for scientific applications, their efficacy in this domain has not been thoroughly tested. In this paper, we study the performance of these engines in one specific domain: massive astrophysical simulations. We develop a use case that comprises five representative queries. We implement this use case in one distributed DBMS and in the Pig/Hadoop system. We compare the performance of the tools to each other and to hand-written IDL scripts. We find that certain representative analyses are easy to express in each engine's high-level language, and that both systems provide competitive performance and improved scalability relative to current IDL-based methods.
software management, data analysis, parallel databases, query processing, relational databases
S. Loebman, D. Nunley, Y. Kwon, B. Howe, M. Balazinska and J. P. Gardner, "Analyzing massive astrophysical datasets: Can Pig/Hadoop or a relational DBMS help?," 2009 IEEE International Conference on Cluster Computing and Workshops (CLUSTER), New Orleans, LA, USA, 2009, pp. 1-10.