Issue No. 3, May-June 2013 (vol. 15)
pp. 64-70
Teong Han Chew , Universiti Teknologi Malaysia
Kwee Hong Joyce-Tan , Universiti Kebangsaan Malaysia
Zeti Azura Mohamed Hussein , Universiti Kebangsaan Malaysia
Pek Iee Elizabeth-Chia , Universiti Teknologi Malaysia
Mohd Shahir Shamsir , Universiti Teknologi Malaysia
ABSTRACT
For simulation-based work, researchers often have limited hardware capacity and budgets, so they build a low-cost parallel computing platform from PCs and commodity hardware, assembled and configured as a Beowulf-class computing cluster. Here, an astute approach, illustrated with a molecular dynamics simulation example, greatly improves performance through hardware upgrades and software-level tuning.
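To make the described tuning workflow concrete, the short Python sketch below times the same molecular dynamics job at several MPI process counts, the kind of scaling check that underlies the article's performance comparisons. It assumes an MPI-enabled GROMACS binary named mdrun_mpi and a prebuilt run input benchmark.tpr encoding a short, fixed workload; those names and the process counts are illustrative assumptions, not values from the article.

# Hypothetical scaling sweep for a Beowulf-class cluster: run the same
# short GROMACS job with 1, 2, 4, and 8 MPI processes and compare wall
# times. The binary name (mdrun_mpi) and input file (benchmark.tpr) are
# assumptions for illustration.
import subprocess
import time

INPUT_TPR = "benchmark.tpr"  # prebuilt run input with a short, fixed step count

for nprocs in (1, 2, 4, 8):
    cmd = [
        "mpirun", "-np", str(nprocs),    # launch nprocs MPI ranks
        "mdrun_mpi",
        "-s", INPUT_TPR,                 # run input file
        "-deffnm", f"bench_np{nprocs}",  # per-run output file prefix
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=True)      # raises if the run fails
    elapsed = time.perf_counter() - start
    print(f"{nprocs:2d} processes: {elapsed:8.1f} s wall time")

On commodity Gigabit Ethernet clusters, sweeps like this often show scaling flattening after a few nodes (see reference 3 below), which makes them a cheap way to find the point where adding hardware stops paying off.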
INDEX TERMS
Scientific computing, Performance evaluation, Costs, Simulation, Parallel processing, Molecular computing, molecular dynamics, performance tuning, performance comparison, low-cost system, GROMACS, NAMD
CITATION
Teong Han Chew, Kwee Hong Joyce-Tan, Zeti Azura Mohamed Hussein, Pek Iee Elizabeth-Chia, Mohd Shahir Shamsir, "Improving Molecular Dynamics Simulation Performance on Low-Cost Systems", Computing in Science & Engineering, vol. 15, no. 3, pp. 64-70, May-June 2013, doi:10.1109/MCSE.2013.61
REFERENCES
1. National Grid Service (NGS)/Collaborative Computational Project for Biomolecular Simulation (CCPB), “Running Simulations on the NGS,” NGS/CCPB Training Workshop, training tutorial, 2008; www.ccpb.ac.uk/?q=workshopNGS.
2. E. Chia et al., “GridMACS Portal: A Grid Web Portal for Molecular Dynamics Simulation Using GROMACS,” Proc. 2010 Fourth Asia Int'l Conf. Mathematical/Analytical Modeling and Computer Simulation, IEEE, 2010, pp. 507–512.
3. C. Kutzner et al., “Improved GROMACS Scaling on Ethernet Switched Clusters,” Recent Advances in Parallel Virtual Machine and Message Passing Interface, Springer-Verlag, 2006, pp. 404–405.
4. H. Ong and P.A. Farrell, “Performance Comparison of LAM/MPI, MPICH and MVICH on a Linux Cluster Connected by a Gigabit Ethernet Network,” Proc. 4th Ann. Linux Showcase and Conf., Usenix Assoc., 2000; www.usenix.org/conference/als-2000/performance-comparison-lammpi-mpich-and-mvich-linux-cluster-connected-gigabit.
5. C. Mei et al., “Optimizing a Parallel Runtime System for Multicore Clusters: A Case Study,” Proc. 2010 TeraGrid Conf., ACM, 2010; http://doi.acm.org/10.1145/1838574.1838586.
6. HPC Advisory Council, NAMD Performance Benchmark and Profiling, tech. report, Nov. 2010; www.hpcadvisorycouncil.com/pdf/NAMD_intel_stmv.pdf.
7. B. Hess, C. Kutzner, and D. van der Spoel, “GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation,” J. Chemical Theory and Computation, vol. 4, no. 3, 2008, pp. 435–447.
8. H.H. Loeffler and M.D. Winn, Large Biomolecular Simulation on HPC Platforms I. Experiences with AMBER, Gromacs and NAMD, tech. report, 2009; http://epubs.stfc.ac.uk/work-details?w=50963.
9. D. Eadline, “The Lawnmower Law,” Linux Magazine, 2008; www.linux-mag.com/id/6020.
10. J.C. Phillips et al., “Scalable Molecular Dynamics with NAMD,” J. Computational Chemistry, vol. 26, no. 16, 2005, pp. 1781–1802.
11. S. Plimpton, “Fast Parallel Algorithms for Short-Range Molecular Dynamics,” J. Computational Physics, vol. 117, no. 1, 1995, pp. 1–19.
12. E. Lindahl, B. Hess, and D. van der Spoel, “GROMACS 3.0: A Package for Molecular Simulation and Trajectory Analysis,” J. Molecular Modeling, vol. 7, no. 8, 2001, pp. 306–317.