2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC) (2015)
Nov. 4, 2015 to Nov. 6, 2015
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/3PGCIC.2015.7
The increasing popularity of Massive Open Online Courses (MOOCs) raises new issues related to the huge number of participants in such courses. Among the main challenges is the difficulty of assessing students, especially on complex assignments such as essays or open-ended exercises, where evaluation is limited by the ability of teachers to grade and provide feedback at large scale. A feasible approach to tackle this problem is peer assessment, in which students also play the role of assessor for assignments submitted by others. Unfortunately, as students may have different levels of expertise, peer assessment often does not deliver results as accurate as those of human experts. In this paper, we describe and compare different methods aimed at mitigating this issue by adaptively combining peer grades on the basis of the detected expertise of the assessors. The possibility of further improving these results through optimized techniques for assigning assessors is also discussed. Experimental results with synthetic data are presented and show better performance compared to standard aggregation operators (i.e. median or mean) as well as to similar existing approaches.
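The core idea of the abstract (down-weighting grades from less reliable assessors instead of taking a plain median or mean) can be illustrated with a minimal sketch. This is not the authors' model; it assumes a simple inverse-error weighting scheme in which each assessor's expertise is estimated from the error they made on a calibration item with a known grade. All names (`expertise_weights`, `weighted_grade`) and the numbers are hypothetical.

```python
# Illustrative sketch (not the paper's method): adaptively combine peer
# grades with weights proportional to each assessor's estimated expertise,
# here taken as the inverse of their error on a calibration submission.

def expertise_weights(calibration_errors, eps=1e-6):
    """Map each assessor's absolute calibration error to a weight:
    smaller error -> larger weight (inverse-error weighting)."""
    return [1.0 / (e + eps) for e in calibration_errors]

def weighted_grade(grades, weights):
    """Weighted mean of the peer grades for one submission."""
    return sum(g * w for g, w in zip(grades, weights)) / sum(weights)

# Example: three peers grade the same essay on a 0-10 scale.
grades = [6.0, 9.0, 7.0]
# Absolute error each peer made on a calibration item with a known grade:
calibration_errors = [0.5, 3.0, 1.0]
weights = expertise_weights(calibration_errors)
adaptive = weighted_grade(grades, weights)
# The unreliable second peer (calibration error 3.0) is down-weighted,
# so the adaptive estimate stays closer to the two reliable peers than
# the plain mean does.
```

In this toy setting the adaptive estimate lands below the unweighted mean, since the assessor who overgraded is also the one with the worst calibration error; more elaborate schemes (e.g. iterative estimation of assessor reliability) follow the same weighted-aggregation pattern.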
Mathematical model, Calibration, Standards, Reliability, Peer-to-peer computing, Electronic learning, Market research
N. Capuano and S. Caballe, "Towards Adaptive Peer Assessment for MOOCs," 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, Poland, 2015, pp. 64-69.