Issue No. 5 - May 1986 (vol. 12)
Stuart H. Zweben , Department of Computer and Information Science, Ohio State University, Columbus, OH 43210
Cloze tests (i.e., fill-in-the-missing-parts tests) have been a long-standing measure of prose comprehension. They seem to offer software engineers several theoretical and practical advantages over multiple-choice comprehension quizzes, the most common software comprehension measurement tool. Through human-subject experimentation, evidence was gathered to support the practical advantages of using the cloze procedure for measuring software comprehension. Cloze tests were found to be easy to construct, administer, and score, and capable of discriminating between programs of varying comprehensibility. However, discrepancies between multiple-choice comprehension quiz results and some cloze test results for the same software suggested that certain forms of software cloze tests may not be valid. A model of software cloze tests was developed to identify a test characteristic that may produce invalid results: the relative proportion of "program-dependent" and "program-independent" cloze items within a test. (Program-dependent cloze items require at least some understanding of the purpose, functionality, algorithm, etc., of the program to be completed successfully. Program-independent items may be completed using only syntactic knowledge and general reasoning skills.) The model was shown to be consistent with software cloze test results of another researcher and led to suggestions for improving software cloze testing.
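The cloze construction described above can be illustrated with a minimal sketch. The classic scheme deletes every nth token of the text and asks the subject to restore the missing tokens; the function below applies that scheme to program source. This is an assumption-laden illustration, not the paper's actual test-generation procedure (the paper's tests may target particular token classes, and the function name `make_cloze` and the every-nth-token deletion rule are choices made here for clarity):

```python
def make_cloze(source: str, n: int = 5):
    """Blank out every n-th whitespace-delimited token of a program.

    Returns the cloze text (blanks shown as '____') and an answer key
    mapping item number to the deleted token. A minimal sketch of the
    every-nth-word deletion scheme used in classic cloze testing; note
    that whitespace splitting discards the program's line structure,
    which a real software cloze test would preserve.
    """
    tokens = source.split()
    answers = {}            # item number -> deleted token
    cloze_tokens = []
    for i, tok in enumerate(tokens, start=1):
        if i % n == 0:      # delete every n-th token
            answers[len(answers) + 1] = tok
            cloze_tokens.append("____")
        else:
            cloze_tokens.append(tok)
    return " ".join(cloze_tokens), answers

# Example program text (hypothetical test material):
program = "total = 0\nfor x in data:\n    total = total + x\nprint(total)"
cloze_text, key = make_cloze(program, n=4)
```

Scoring such a test is then a matter of counting exact (or synonym-tolerant) matches against the answer key. Under the paper's distinction, a blank like the deleted `for` above is largely program-independent (restorable from syntax alone), whereas a deleted variable name such as `total` is program-dependent, since restoring it requires understanding what the program computes.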
Keywords: software, software measurement, materials, testing, guidelines, syntactics, programming, validity, cloze, human-subject experimentation, multiple-choice test, software comprehension
Stuart H. Zweben, "The cloze procedure and software comprehensibility measurement", IEEE Transactions on Software Engineering, vol. 12, no. 5, pp. 608-623, May 1986, doi:10.1109/TSE.1986.6312957