Los Angeles, California, USA
Mar. 31, 2009 to Apr. 2, 2009
ISBN: 978-0-7695-3507-4
pp: 144-148
In this paper, experiments address the measurement of inter-annotator disagreement in content selection during both manual and automatic summarization of sample TOEFL essays. A new finding is that the linguistic quality of the source essay correlates strongly with the degree of disagreement among human assessors about what should be included in a summary. This leads to a fully automated essay evaluation technique based on the degree of disagreement among automatic summarizers. ROUGE evaluation is used to measure the degree of inconsistency among the participants (human summarizers and automatic summarizers). This automated essay evaluation technique is potentially an important contribution with wider significance.
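As a rough illustration of the idea described in the abstract, and not the authors' implementation, the following minimal Python sketch computes pairwise ROUGE-1 F-scores among several automatic summaries of the same essay and treats one minus their mean as a disagreement score. The summaries, function names (rouge1_f, disagreement_score), and the use of ROUGE-1 specifically are illustrative assumptions.

from itertools import combinations
from collections import Counter


def rouge1_f(candidate: str, reference: str) -> float:
    """ROUGE-1 F-measure: unigram overlap between two summaries."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def disagreement_score(summaries: list[str]) -> float:
    """1 minus the mean pairwise ROUGE-1 F over all summary pairs.

    Higher values mean the summarizers selected less overlapping content,
    which, per the paper's finding, would suggest lower linguistic quality
    of the source essay.
    """
    pairs = list(combinations(summaries, 2))
    mean_overlap = sum(rouge1_f(a, b) for a, b in pairs) / len(pairs)
    return 1.0 - mean_overlap


if __name__ == "__main__":
    # Hypothetical summaries of the same essay from three automatic summarizers.
    summaries = [
        "The essay argues that studying abroad broadens cultural awareness.",
        "Studying abroad is said to broaden awareness of other cultures.",
        "The author discusses financial costs of living overseas as a student.",
    ]
    print(f"Disagreement score: {disagreement_score(summaries):.3f}")

In practice, ROUGE-2 or ROUGE-L scores computed with a standard ROUGE toolkit could be substituted for the simple unigram measure above; the disagreement score itself is only a stand-in for whatever aggregation the paper uses.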
Automatic Summarization, Summarization Evaluation, ROUGE Evaluation
Seemab Latif, Mary McGee Wood, "A Novel Technique for Automated Linguistic Quality Assessment of Students' Essays Using Automatic Summarizers", 2009 World Congress on Computer Science and Information Engineering (CSIE), pp. 144-148, doi:10.1109/CSIE.2009.777
