2011 IEEE Third International Conference on Cloud Computing Technology and Science (2011)
Nov. 29, 2011 to Dec. 1, 2011
MapReduce is often used to run critical jobs such as scientific data analysis. However, evidence in the literature shows that arbitrary faults do occur and can corrupt the results of MapReduce jobs. MapReduce runtimes like Hadoop tolerate crash faults, but not arbitrary or Byzantine faults. We present a MapReduce algorithm and prototype that tolerate these faults. An experimental evaluation shows that the execution of a job with our algorithm uses twice the resources of the original Hadoop, instead of the 3 or 4 times more that would be required by the direct application of common Byzantine fault-tolerance paradigms. We believe this cost is acceptable for critical applications that require that level of fault tolerance.
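The resource figure in the abstract can be illustrated with a replication-and-voting sketch: instead of always running 2f+1 replicas of every task (as classic Byzantine fault-tolerant replication would), each task is first run on only f+1 executors and accepted when their output digests match, launching extra replicas only on disagreement. The sketch below is a hypothetical illustration of this idea, not the paper's actual implementation; the names `run_task_bft`, `digest`, and the executor callables are assumptions introduced for illustration.

```python
import hashlib


def digest(output: bytes) -> str:
    # Compare replica outputs by cryptographic hash rather than full payload.
    return hashlib.sha256(output).hexdigest()


def run_task_bft(task, executors, f=1):
    """Hypothetical voting scheme: launch f+1 replicas of `task`; accept an
    output once f+1 replicas agree, launching more (up to 2f+1) only on
    mismatch. In the fault-free common case this costs 2x with f=1, not 3-4x."""
    outputs = {}  # digest -> output bytes
    votes = {}    # digest -> number of replicas producing it
    launched = 0
    for executor in executors:
        if launched >= 2 * f + 1:
            break  # enough replicas for f arbitrary faults
        out = executor(task)
        launched += 1
        d = digest(out)
        outputs[d] = out
        votes[d] = votes.get(d, 0) + 1
        if votes[d] >= f + 1:
            return outputs[d]  # f+1 matching outputs: at least one is correct
    raise RuntimeError("no f+1 matching replica outputs")
```

With f=1 and two correct executors, only two executions happen; a faulty replica merely triggers a third execution for the affected task, which is the source of the roughly twofold (rather than three- or fourfold) overhead claimed above.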
Keywords: Hadoop MapReduce, arbitrary faults, Byzantine fault tolerance
A. N. Bessani, M. Pasin, M. Correia and P. Costa, "Byzantine Fault-Tolerant MapReduce: Faults are Not Just Crashes," 2011 IEEE Third International Conference on Cloud Computing Technology and Science (CLOUDCOM), Athens, Greece, 2011, pp. 32-39.