A Novel Technique for Automated Linguistic Quality Assessment of Students' Essays Using Automatic Summarizers
Computer Science and Information Engineering, World Congress on (2009)
Los Angeles, California USA
Mar. 31, 2009 to Apr. 2, 2009
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/CSIE.2009.777
In this paper, experiments address the measurement of inter-annotator inconsistency in content selection during both manual and automatic summarization of sample TOEFL essays. A new finding is that the linguistic quality of the source essay correlates strongly with the degree of disagreement among human summarizers about what should be included in a summary. This leads to a fully automated essay evaluation technique based on the degree of disagreement among automatic summarizers. ROUGE evaluation is used to measure the degree of inconsistency among the participants (human summarizers and automatic summarizers). This automated essay evaluation technique is potentially an important contribution with wider significance.
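The core idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a simple unigram-overlap ROUGE-1 F1 score (the paper uses the full ROUGE evaluation toolkit) and treats the mean pairwise dissimilarity among summarizer outputs as the disagreement signal; the helper names and example sentences are hypothetical.

```python
from collections import Counter
from itertools import combinations

def rouge1_f1(candidate, reference):
    """Simplified ROUGE-1 F1: unigram overlap between two summaries."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def disagreement(summaries):
    """Mean pairwise (1 - ROUGE-1 F1) over all summarizer outputs.
    Under the paper's hypothesis, higher disagreement suggests the
    source essay is harder to summarize consistently, indicating
    lower linguistic quality."""
    pairs = list(combinations(summaries, 2))
    return sum(1 - rouge1_f1(a, b) for a, b in pairs) / len(pairs)

# Summarizers that agree closely vs. summarizers that diverge
consistent = ["the essay argues schools need funding",
              "the essay argues that schools need more funding"]
divergent = ["the essay argues schools need funding",
             "students often travel abroad during summer"]
print(disagreement(consistent) < disagreement(divergent))  # → True
```

In a full pipeline, each essay would be fed to several automatic summarizers and its disagreement score compared against human quality ratings.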
Automatic Summarization, Summarization Evaluation, ROUGE Evaluation
M. M. Wood and S. Latif, "A Novel Technique for Automated Linguistic Quality Assessment of Students' Essays Using Automatic Summarizers," 2009 WRI World Congress on Computer Science and Information Engineering (CSIE), Los Angeles, CA, 2009, pp. 144-148.