BSSync: Processing Near Memory for Machine Learning Workloads with Bounded Staleness Consistency Models
2015 International Conference on Parallel Architecture and Compilation (PACT) (2015)
San Francisco, CA, USA
Oct. 18, 2015 to Oct. 21, 2015
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/PACT.2015.42
Parallel machine learning workloads have become prevalent in numerous application domains. Many of these workloads are iterative convergent, allowing different threads to compute in an asynchronous manner, relaxing certain read-after-write data dependencies to use stale values. While considerable effort has been devoted to reducing the communication latency between nodes by utilizing asynchronous parallelism, inefficient utilization of relaxed consistency models within a single node has caused parallel implementations to have low execution efficiency. The long latency and serialization caused by atomic operations have a significant impact on performance. The data communication is not overlapped with the main computation, which reduces execution efficiency. The inefficiency comes from the data movement between where data is stored and where it is processed. In this work, we propose Bounded Staled Sync (BSSync), hardware support for the bounded staleness consistency model, which adds simple logic layers in the memory hierarchy. BSSync overlaps the long-latency atomic operations with the main computation, targeting iterative convergent machine learning workloads. Compared to previous work that allows staleness for read operations, BSSync utilizes staleness for write operations, allowing stale writes. We demonstrate the benefit of the proposed scheme for representative machine learning workloads. On average, our approach outperforms the baseline asynchronous parallel implementation by 1.33x.
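The stale-write idea in the abstract can be illustrated with a minimal software sketch (the paper proposes hardware support; this analogy, including all class and method names, is purely illustrative and not the paper's API): a writer buffers its updates locally and merges them into the shared copy only once a staleness bound is reached, so the expensive atomic update is deferred and can overlap with computation.

```python
# Hypothetical sketch of bounded-staleness stale writes: updates are
# buffered locally and flushed to the shared value at most every `bound`
# iterations. In BSSync the flush would be an atomic update performed by
# logic in the memory hierarchy, overlapped with the main computation.

class BoundedStaleParam:
    def __init__(self, bound):
        self.bound = bound      # staleness bound: max deferred writes
        self.shared = 0.0       # globally visible value
        self.local_delta = 0.0  # buffered (stale) updates
        self.pending = 0        # writes since the last flush

    def write(self, delta):
        """Buffer an update; flush only when the staleness bound is hit."""
        self.local_delta += delta
        self.pending += 1
        if self.pending >= self.bound:
            self.flush()

    def flush(self):
        # Stands in for the long-latency atomic update.
        self.shared += self.local_delta
        self.local_delta = 0.0
        self.pending = 0

    def read(self):
        # Readers may miss up to `bound - 1` of the most recent writes.
        return self.shared
```

With `bound=3`, two writes of 1.0 leave the shared value at 0.0 (readers see a bounded-stale value), and the third write triggers the flush, making all three visible at once. Iterative convergent algorithms tolerate this because they still converge under bounded staleness.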
Instruction Sets, Computational Modeling, Synchronization, Parallel Processing, Convergence, Hardware, Atomic Operation, Iterative Convergent Machine Learning Workloads, Bounded Staleness Consistency Model, Asynchronous Parallelism
Joo Hwan Lee, Jaewoong Sim, Hyesoon Kim, "BSSync: Processing Near Memory for Machine Learning Workloads with Bounded Staleness Consistency Models", 2015 International Conference on Parallel Architecture and Compilation (PACT), pp. 241-252, 2015, doi:10.1109/PACT.2015.42