Issue No. 05 - May (2010 vol. 21)
ISSN: 1045-9219
pp: 620-630
Rahul Garg , IBM T.J. Watson Research Center, Yorktown Heights
Vijay K. Garg , University of Texas at Austin, Austin
Yogish Sabharwal , IBM India Research Laboratory, New Delhi
Existing algorithms for global snapshots in distributed systems are not scalable when the underlying topology is complete. There are primarily two classes of existing algorithms for computing a global snapshot. Algorithms in the first class use control messages of size O(1) but require O(N) space and O(N) messages per processor in a network with N processors. Algorithms in the second class use control messages of size O(N) (such as rotating tokens with the vector-counter method), use multiple control messages per channel, or require recording of message history. As a result, neither class is efficient in large systems when the logical topology of the communication layer, such as MPI, is complete. In this paper, we propose three scalable algorithms for global snapshots: a grid-based, a tree-based, and a centralized algorithm. The grid-based algorithm uses O(N) space but only O(√N) messages per processor, each of size O(√N). The tree-based and centralized algorithms use only O(1)-size messages. The tree-based algorithm requires O(1) space and O(log N log(W/N)) messages per processor, where W is the total number of messages in transit. The centralized algorithm requires O(1) space and O(log(W/N)) messages per processor. We also establish a matching lower bound for this problem. In addition, we present hybrids of the centralized and tree-based algorithms that allow a trade-off between decentralization and message complexity. Our algorithms have applications in checkpointing, detecting stable predicates, and implementing synchronizers.
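For background, the first class of algorithms the abstract describes behaves like the classic Chandy-Lamport marker protocol: O(1)-size control messages, but O(N) markers per processor on a complete topology. The sketch below is a minimal single-threaded simulation of that baseline, not of the paper's scalable algorithms; the `Process`/`Network` names and the toy counter state are illustrative assumptions.

```python
from collections import deque

class Process:
    """One node; records its local state and the state of its incoming channels."""
    def __init__(self, pid, n):
        self.pid, self.n = pid, n
        self.state = 0              # toy application state: count of app messages received
        self.recorded_state = None  # local state captured by the snapshot
        self.chan_state = {}        # sender pid -> messages caught in transit on that channel
        self.marker_seen = set()    # senders whose marker has arrived (channel closed)

    def start_snapshot(self, net):
        self.recorded_state = self.state
        self._broadcast_markers(net)

    def _broadcast_markers(self, net):
        # On a complete topology every process sends N-1 markers: O(N) per processor.
        for q in range(self.n):
            if q != self.pid:
                net.send(self.pid, q, "MARKER")

    def receive(self, sender, msg, net):
        if msg == "MARKER":
            if self.recorded_state is None:       # first marker: record local state now
                self.recorded_state = self.state
                self.chan_state[sender] = []      # channel from marker's sender is empty
                self._broadcast_markers(net)
            else:                                 # later markers just close that channel
                self.chan_state.setdefault(sender, [])
            self.marker_seen.add(sender)
        else:
            self.state += 1                       # ordinary application message
            if self.recorded_state is not None and sender not in self.marker_seen:
                # Sent before the snapshot cut, delivered after: part of channel state.
                self.chan_state.setdefault(sender, []).append(msg)

class Network:
    """Single global FIFO queue; preserves the per-channel FIFO delivery the protocol requires."""
    def __init__(self, n):
        self.procs = [Process(i, n) for i in range(n)]
        self.queue = deque()
    def send(self, s, d, msg):
        self.queue.append((s, d, msg))
    def run(self):
        while self.queue:
            s, d, msg = self.queue.popleft()
            self.procs[d].receive(s, msg, self)

# Demo: process 0 initiates; "hello" is in transit on channel 1 -> 2 across the cut.
net = Network(3)
net.procs[0].start_snapshot(net)
net.send(1, 2, "hello")        # sent before 1 records, delivered after 2 records
net.run()
print([p.recorded_state for p in net.procs])   # [0, 0, 0]
print(net.procs[2].chan_state[1])              # ['hello']
```

The snapshot is consistent: every recorded local state is 0, and the message crossing the cut appears exactly once, in the recorded state of channel 1 → 2. The paper's grid-based, tree-based, and centralized algorithms improve on this baseline's O(N) messages per processor.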
Keywords: Checkpointing, global snapshots, stable predicates.

Y. Sabharwal, V. K. Garg and R. Garg, "Efficient Algorithms for Global Snapshots in Large Distributed Systems," in IEEE Transactions on Parallel & Distributed Systems, vol. 21, no. 5, pp. 620-630, May 2010.