2014 IEEE International Conference on Data Mining Workshop (ICDMW) (2014)
Dec. 14, 2014
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/ICDMW.2014.58
Fuzzers, or random testing tools, are powerful tools for finding bugs. A major problem with using fuzzers is that they often trigger many bugs that are already known. The fuzzer taming problem addresses this issue by ordering the bug-triggering random test cases generated by a fuzzer so that test cases exposing diverse bugs appear early in the ranking. Previous work on fuzzer taming first reduces each test case to a minimal failure-inducing test case using delta debugging, then computes the ordering by applying the Furthest Point First algorithm to the reduced test cases. The delta debugging process itself generates a sequence of intermediate failing test cases (the "delta debugging trail"). We hypothesize that these additional failing test cases also carry relevant information about the underlying bug and can be useful for fuzzer taming. In this paper, we propose using the failing test cases generated during delta debugging to help tame fuzzers. Our experiments show that doing so allows more diverse bugs to be found early in the Furthest Point First ranking.
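The greedy Furthest Point First ranking mentioned in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: test cases are represented as whitespace-separated token strings, and Jaccard distance over token sets stands in for whatever distance metric the actual system uses.

```python
def jaccard_distance(a, b):
    """Distance between two test cases viewed as token sets.
    (Illustrative metric; the paper's distance function may differ.)"""
    sa, sb = set(a.split()), set(b.split())
    if not sa and not sb:
        return 0.0
    return 1.0 - len(sa & sb) / len(sa | sb)

def fpf_rank(tests, distance=jaccard_distance):
    """Greedy Furthest Point First: order tests so that each next pick
    is maximally distant from everything already ranked, putting
    diverse test cases early in the ordering."""
    if not tests:
        return []
    remaining = list(tests)
    ranked = [remaining.pop(0)]  # arbitrary seed element
    # dmin[i] = min distance from remaining[i] to the ranked set
    dmin = [distance(t, ranked[0]) for t in remaining]
    while remaining:
        i = max(range(len(remaining)), key=dmin.__getitem__)
        chosen = remaining.pop(i)
        dmin.pop(i)
        ranked.append(chosen)
        # update each residual distance against the newly chosen test
        dmin = [min(d, distance(t, chosen))
                for d, t in zip(dmin, remaining)]
    return ranked
```

Under the paper's proposal, the input to such a ranking would include not only the fully reduced test cases but also the intermediate failing test cases from each delta debugging trail.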
Computer bugs, Debugging, Testing, Engines, Software, Feature extraction, Conferences
Y. Pei, A. Christi, X. Fern, A. Groce and W. Wong, "Taming a Fuzzer Using Delta Debugging Trails," 2014 IEEE International Conference on Data Mining Workshop (ICDMW), Shenzhen, China, 2014, pp. 840-843.