A Novel Technique for Automated Linguistic Quality Assessment of Students' Essays Using Automatic Summarizers
Los Angeles, CA
March 31, 2009 to April 2, 2009
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/CSIE.2009.777
In this paper, experiments address the calculation of inter-annotator inconsistency in content selection during both manual and automatic summarization of sample TOEFL essays. A new finding is that the linguistic quality of the source essay correlates strongly with the degree of disagreement among human assessors as to what should be included in a summary. This leads to a fully automated essay evaluation technique based on the degree of disagreement among automatic summarizers. ROUGE evaluation is used to measure the degree of inconsistency among the participants (human summarizers and automatic summarizers). This automated essay evaluation technique is potentially an important contribution with wider significance.
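The core idea, measuring disagreement among summarizers via ROUGE overlap, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a simplified ROUGE-1 F-score (unigram overlap) and defines disagreement as the mean pairwise (1 - ROUGE-1) across all summaries of one essay; the function names and scoring choices are illustrative assumptions.

```python
from itertools import combinations

def rouge1_f(candidate, reference):
    """Simplified ROUGE-1 F-score: unigram overlap between two texts.

    Hypothetical helper for illustration; real ROUGE toolkits apply
    stemming, stopword handling, and n-gram variants beyond unigrams.
    """
    cand, ref = candidate.lower().split(), reference.lower().split()
    overlap = sum(min(cand.count(w), ref.count(w)) for w in set(cand))
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def disagreement(summaries):
    """Mean pairwise (1 - ROUGE-1) over all summaries of one essay.

    Higher values mean the summarizers selected more divergent content,
    which the paper links to lower linguistic quality of the source essay.
    """
    pairs = list(combinations(summaries, 2))
    return sum(1 - rouge1_f(a, b) for a, b in pairs) / len(pairs)
```

Under this sketch, several automatic summarizers would each summarize the same essay, and `disagreement` would score the essay: identical summaries yield 0.0, while summaries with no shared vocabulary yield 1.0.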
Automatic Summarization, Summarization Evaluation, ROUGE Evaluation
Seemab Latif, Mary McGee Wood, "A Novel Technique for Automated Linguistic Quality Assessment of Students' Essays Using Automatic Summarizers", 2009 WRI World Congress on Computer Science and Information Engineering (CSIE 2009), pp. 144-148, doi:10.1109/CSIE.2009.777