PaRSEC: Exploiting Heterogeneity to Enhance Scalability
Issue No. 6, Nov.-Dec. 2013 (vol. 15), pp. 36-45
George Bosilca , Univ. of Tennessee, Knoxville, TN, USA
Aurelien Bouteiller , Univ. of Tennessee, Knoxville, TN, USA
Anthony Danalis , Univ. of Tennessee, Knoxville, TN, USA
Mathieu Faverge , Bordeaux Inst. of Technol., Bordeaux, France
Thomas Herault , Univ. of Tennessee, Knoxville, TN, USA
Jack J. Dongarra , Univ. of Tennessee, Knoxville, TN, USA
ABSTRACT
New high-performance computing system designs with steeply escalating processor and core counts, burgeoning heterogeneity and accelerators, and increasingly unpredictable memory access times call for one or more dramatically new programming paradigms. These new approaches must react and adapt quickly to unexpected contentions and delays, and they must provide the execution environment with sufficient intelligence and flexibility to rearrange the execution to improve resource utilization. The authors present an approach based on task parallelism that reveals the application's parallelism by expressing its algorithm as a task flow. This strategy allows the algorithm to be decoupled from the data distribution and the underlying hardware, since the algorithm is entirely expressed as flows of data. This kind of layering provides a clear separation of concerns among architecture, algorithm, and data distribution. Developers benefit from this separation because they can focus solely on the algorithmic level without the constraints involved with programming for current and future hardware trends.
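The core idea of the abstract, expressing an algorithm as a flow of data between tasks so that a runtime, rather than program order, decides when and where work executes, can be illustrated with a toy sketch. The Python below is purely illustrative and is not PaRSEC's actual interface (PaRSEC uses a compact, parameterized task-graph representation in C); the task names and scheduler here are invented for the example.

```python
# Illustrative task-flow sketch (NOT the PaRSEC API): each task declares
# the tasks whose outputs it consumes, and a tiny scheduler submits work
# as dependencies are satisfied, so independent tasks may overlap.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical task graph: name -> (input task names, function of those inputs).
tasks = {
    "load_a":   ([],                   lambda: 2),
    "load_b":   ([],                   lambda: 3),
    "multiply": (["load_a", "load_b"], lambda a, b: a * b),
    "add_one":  (["multiply"],         lambda m: m + 1),
}

def run_task_flow(tasks):
    """Submit each task after its inputs; dependencies, not source order,
    determine when a task may run."""
    futures = {}
    def make(name):
        if name in futures:
            return futures[name]
        deps, fn = tasks[name]
        dep_futs = [make(d) for d in deps]  # submit dependencies first
        # The executor is free to overlap tasks with no mutual dependency
        # (here, load_a and load_b).
        futures[name] = pool.submit(
            lambda: fn(*(f.result() for f in dep_futs)))
        return futures[name]
    with ThreadPoolExecutor() as pool:
        for name in tasks:
            make(name)
        return {name: f.result() for name, f in futures.items()}

print(run_task_flow(tasks))
# {'load_a': 2, 'load_b': 3, 'multiply': 6, 'add_one': 7}
```

Because the algorithm is stated only as data flowing between tasks, the same graph could be mapped onto different data distributions or hardware without changing the task definitions, which is the separation of concerns the authors describe.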
INDEX TERMS
Programming, Computer architecture, Runtime, Computational modeling, Parallel processing, Biological system modeling, Adaptation models, Scalability, programming paradigms, scientific computing, high-performance computing, HPC, scheduling and task partitioning, distributed programming
CITATION
George Bosilca, Aurelien Bouteiller, Anthony Danalis, Mathieu Faverge, Thomas Herault, Jack J. Dongarra, "PaRSEC: Exploiting Heterogeneity to Enhance Scalability," Computing in Science & Engineering, vol. 15, no. 6, pp. 36-45, Nov.-Dec. 2013, doi:10.1109/MCSE.2013.98