Symposium on Computer Architecture and High Performance Computing (2012)
New York, NY, USA
Oct. 24–26, 2012
In this paper, we present the Atomic Dataflow (ADF) model, a new task-based parallel programming model for C/C++ that integrates dataflow abstractions into the shared memory programming model. The ADF model provides pragma directives that allow a programmer to organize a program into a set of tasks and to explicitly define the input data for each task. This dependency information is conveyed to the ADF runtime system, which constructs the dataflow task graph and builds the necessary infrastructure for dataflow execution. Additionally, the ADF model allows tasks to share data. The key idea is that computation is triggered by dataflow between tasks but that, within a task, execution proceeds by making atomic updates to common mutable state. To that end, the ADF model employs transactional memory, which guarantees the atomicity of shared memory updates. We show examples that illustrate how the programmability of shared memory can be improved using the ADF model. Moreover, our evaluation shows that the ADF model performs well in comparison with programs parallelized using OpenMP and transactional memory.
transactional memory, parallel programming, dataflow, shared memory
Vladimir Gajinov, Srdjan Stipic, Osman S. Unsal, Tim Harris, Eduard Ayguade, Adrian Cristal, "Integrating Dataflow Abstractions into the Shared Memory Model", Symposium on Computer Architecture and High Performance Computing, pp. 243-251, 2012, doi:10.1109/SBAC-PAD.2012.24