Issue No. 03 - Third Quarter (2012, vol. 5), pp. 251-263
Published by the IEEE Computer Society
Ming Liu, School of Electrical & Information Engineering, University of Sydney, Sydney, NSW, Australia
R. A. Calvo, School of Electrical & Information Engineering, University of Sydney, Sydney, NSW, Australia
A. Aditomo, University of Sydney, NSW, Australia
L. A. Pizzato, University of Sydney, NSW, Australia
ABSTRACT
In this paper, we present a novel approach for semiautomatic question generation to support academic writing. Our system first extracts key phrases from students' literature review papers. Each key phrase is matched with a Wikipedia article and classified into one of five abstract concept categories: Research Field, Technology, System, Term, and Other. Using the content of the matched Wikipedia article, the system then constructs a conceptual graph representation for each key phrase, and questions are generated based on that structure. To evaluate the quality of the computer-generated questions, we conducted a version of the Bystander Turing test, which involved 20 research students who had written literature reviews for an IT methods course. The pedagogical value of the generated questions was evaluated using a semiautomated process. The results indicate that the students had difficulty distinguishing between computer-generated and supervisor-generated questions. Computer-generated questions were also rated as being as pedagogically useful as supervisor-generated questions, and more useful than generic questions. The findings also suggest that the computer-generated questions were more useful for first-year students than for second- or third-year students.
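The pipeline described above (key-phrase classification followed by template-based question generation) can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: the keyword lists stand in for the Wikipedia-article matching step, and the question templates are invented examples of category-specific prompts.

```python
# Hypothetical sketch: classify a key phrase into one of the paper's five
# concept categories, then instantiate a category-specific question template.
# Keywords and templates are illustrative assumptions, not the actual system.

CATEGORY_KEYWORDS = {
    "Research Field": ["learning", "mining", "retrieval"],
    "Technology": ["network", "algorithm"],
    "System": ["system", "platform"],
}

QUESTION_TEMPLATES = {
    "Research Field": "What are the open problems in {phrase}?",
    "Technology": "What are the strengths and limitations of {phrase}?",
    "System": "How does {phrase} compare with alternative systems?",
    "Term": "How is {phrase} defined in your literature review?",
    "Other": "Why is {phrase} relevant to your research topic?",
}

def classify(phrase: str) -> str:
    """Assign a key phrase to one of the five abstract concept categories.

    A toy keyword match replaces the Wikipedia-based classification
    used in the actual system.
    """
    lowered = phrase.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    # Short phrases default to Term; longer unmatched phrases to Other.
    return "Term" if len(phrase.split()) <= 2 else "Other"

def generate_question(phrase: str) -> str:
    """Generate one question for a key phrase via its category template."""
    return QUESTION_TEMPLATES[classify(phrase)].format(phrase=phrase)

print(generate_question("machine learning"))
```

A production version would replace `classify` with the Wikipedia-article matching the paper describes, and would generate questions from the conceptual graph rather than flat templates.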
