21st ACM/IEEE/SCS Workshop on Principles of Advanced and Distributed Simulation (PADS 2007)
San Diego, California, USA
June 12, 2007 to June 15, 2007
ISBN: 0-7695-2898-8
pp: 131-140
Rajive Bagrodia , University of California, Los Angeles, USA
Zhiguo Xu , University of California, Los Angeles, USA
High-fidelity simulations of mixed wired and wireless network systems depend on detailed simulation models, especially in the lower layers of the network stack. However, detailed modeling can incur prohibitive computation cost. In recent years, commercial graphics cards (GPUs) have drawn attention from the general computing community due to their superior computation capability. In this paper, we present our experience with using commercial graphics cards to speed up the execution of network simulation models. First, we propose a general simulation framework that supports GPU-accelerated simulation models; a software abstraction is designed to facilitate the use and development of GPU-based models. Second, we implement and evaluate two simulation models using GPUs. We observed that GPUs can yield significant performance improvements for large model configurations, compared with pure CPU-based computation, with no degradation in the accuracy of the results. This benefit is particularly impressive for models that include significant data-parallel computation. However, we also observed that the overhead introduced by GPUs makes them less effective at improving the execution time of other network models. This study suggests that, besides parallel computing and grid computing, network simulations can also be scaled by harnessing the computation capability of GPUs and, potentially, other external computational hardware.
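The abstract itself contains no code, but the data-parallel pattern it describes can be sketched concretely. The example below contrasts a per-link loop with a batched array formulation of free-space path loss; the formula choice, function names, and the use of NumPy on the CPU as a stand-in for a GPU kernel are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

C = 3e8  # speed of light, m/s


def path_loss_db_loop(distances_m, freq_hz=2.4e9):
    # Free-space path loss computed one link at a time, analogous to a
    # per-signal CPU computation in a detailed physical-layer model.
    out = []
    for d in distances_m:
        out.append(20 * np.log10(4 * np.pi * d * freq_hz / C))
    return np.array(out)


def path_loss_db_batch(distances_m, freq_hz=2.4e9):
    # The same computation expressed as a single data-parallel array
    # operation; this is the form that maps naturally onto a GPU.
    d = np.asarray(distances_m, dtype=np.float64)
    return 20 * np.log10(4 * np.pi * d * freq_hz / C)
```

Both functions produce identical results; the batched form simply exposes the independence of the per-link computations, which is what makes GPU offload profitable for large configurations.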
Rajive Bagrodia, Zhiguo Xu, "GPU-Accelerated Evaluation Platform for High Fidelity Network Modeling", 21st ACM/IEEE/SCS Workshop on Principles of Advanced and Distributed Simulation (PADS 2007), pp. 131-140, 2007, doi:10.1109/PADS.2007.20