<p><b>Abstract</b>—Given a set <tmath>V</tmath> of active components in charge of a distributed execution, a <it>storage scheme</it> is a sequence <tmath>B_{0}, B_{1}, \ldots, B_{b-1}</tmath> of subsets of <tmath>V</tmath>, where successive global states are recorded. The subsets, also called <it>blocks</it>, have the same size and are scheduled according to some fixed and cyclic calendar of <tmath>b</tmath> steps. During the <tmath>i\rm th</tmath> step, block <tmath>B_{i}</tmath> is selected. Each component takes a copy of its local state and sends it to one of the components in <tmath>B_i</tmath>, in such a way that each component stores (approximately) the same number of local states. Afterward, if a component of <tmath>B_{i}</tmath> crashes, all of its stored data is lost and the computation cannot continue. If there exists a block with no failed components in it, then a recent global state can be retrieved and the computation does not need to start over from the very beginning. The goal is to design storage schemes that tolerate as many crashes as possible, while trying to have each component participating in as few blocks as possible and, at the same time, working with large blocks (so that a component in a block stores a small number of local states). In this paper, several such schemes are described and compared in terms of these measures. </p>
Load balancing and task assignment, distributed applications, checkpoint/restart, fault-tolerance, storage/repositories, distributed systems, network repositories/data mining/backup.
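The scheme described in the abstract can be sketched in a few lines. The following Python snippet is an illustrative sketch, not the authors' implementation: the round-robin assignment of components to block members is one plausible way to realize "approximately the same number of local states", and all names (`checkpoint`, `recoverable`, the component labels) are hypothetical.

```python
# Illustrative sketch of a cyclic storage scheme (not the paper's code).
# V is the set of components; blocks is the fixed cyclic calendar
# B_0, ..., B_{b-1}; at step i, block B_{i mod b} stores the checkpoint.

def checkpoint(components, blocks, step, states):
    """At step i, every component sends a copy of its local state to a
    member of block B_{i mod b}, assigned round-robin so that each
    member stores approximately the same number of local states."""
    block = blocks[step % len(blocks)]
    storage = {member: [] for member in block}
    for k, comp in enumerate(components):
        keeper = block[k % len(block)]  # balanced assignment (assumption)
        storage[keeper].append((comp, states[comp]))
    return storage

def recoverable(blocks, failed):
    """A recent global state can be retrieved iff some block
    contains no failed component."""
    return any(all(m not in failed for m in block) for block in blocks)

# Toy instance: |V| = 6 components, b = 3 blocks of size 2.
components = ["c0", "c1", "c2", "c3", "c4", "c5"]
blocks = [["c0", "c1"], ["c2", "c3"], ["c4", "c5"]]
states = {c: f"state-of-{c}" for c in components}

snap = checkpoint(components, blocks, 0, states)      # step 0 uses B_0
assert sorted(len(v) for v in snap.values()) == [3, 3]  # load is balanced
assert recoverable(blocks, failed={"c0"})             # B_1 and B_2 intact
assert not recoverable(blocks, failed={"c1", "c2", "c5"})  # every block hit
```

The last assertion illustrates the fault-tolerance measure the paper optimizes: here three well-placed crashes (one per block) already defeat the scheme, so the design goal is to choose blocks so that many more crashes are needed before every block contains a failure.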

R. Marcelín-Jiménez, S. Rajsbaum and B. Stevens, "Cyclic Storage for Fault-Tolerant Distributed Executions," in IEEE Transactions on Parallel & Distributed Systems, vol. 17, pp. 1028-1036, 2006.