October-December 2010 (vol. 3 no. 4)
pp. 358-363
Javier Sanz-Rodríguez, Universidad Carlos III de Madrid, Leganés
Juan Manuel Dodero, Universidad de Cádiz, Cádiz
Salvador Sánchez-Alonso, Universidad de Alcalá de Henares, Alcalá de Henares
The solutions used to date for recommending learning objects have proved unsatisfactory. In an attempt to improve the situation, this article highlights the shortcomings of existing approaches and identifies quality indicators that can inform which materials to recommend to users. It then proposes a synthesized quality indicator that facilitates ranking learning objects according to their overall quality. In this way, explicit evaluations carried out by users or experts are combined with usage data, completing the information on which the recommendation is based. Taking a set of learning objects from the Merlot repository, we analyze the relationships between the different quality indicators to form an overall quality indicator that can be calculated automatically, guaranteeing that all resources will be rated.

[1] Y. Akpinar, "Validation of a Learning Object Review Instrument: Relationship between Ratings of Learning Objects and Actual Learning Outcomes," Interdisciplinary J. Knowledge and Learning Objects, vol. 4, pp. 291-302, 2008.
[2] R.G. Baraniuk, "Challenges and Opportunities for the Open Education Movement: A Connexions Case Study," Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge, T. Iiyoshi and M.S.V. Kumar, eds., pp. 229-246, MIT Press, 2008.
[3] T. Boyle, "Design Principles for Authoring Dynamic, Reusable Learning Objects," Australian J. Educational Technology, vol. 19, no. 1, pp. 46-58, 2003.
[4] N. Boskic, "Faculty Assessment of the Quality and Reusability of Learning Objects," PhD dissertation, Athabasca Univ., 2003.
[5] G. Brownfield and R. Oliver, "Factors Influencing the Discovery and Reusability of Digital Resources for Teaching and Learning," Proc. 20th Ann. Conf. Australasian Soc. for Computers in Learning in Tertiary Education (ASCILITE '03), G. Crisp, D. Thiele, I. Scholten, S. Barker, and J. Baron, eds., pp. 74-83, 2003.
[6] M. Claypool, P. Le, M. Wased, and D. Brown, "Implicit Interest Indicators," Proc. Sixth Int'l Conf. Intelligent User Interfaces, pp. 33-40, 2001, doi:10.1145/359784.359836.
[7] S. Downes, "Models for Sustainable Open Educational Resources," Interdisciplinary J. Knowledge and Learning Objects, vol. 3, pp. 29-44, 2007.
[8] E. Duval, "A Learning Object Manifesto—Towards Share and Reuse on a Global Scale," Proc. Conf. Towards a Learning Soc. (ELearning '05), pp. 113-117, May 2005.
[9] E. Duval, "LearnRank: Towards a Real Quality Measure for Learning," Handbook on Quality and Standardisation in E-Learning, pp. 457-463, Springer, 2006.
[10] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar, "Rank Aggregation Methods for the Web," Proc. 10th Int'l Conf. World Wide Web, pp. 613-622, 2001, doi:10.1145/371920.372165.
[11] S. Fox, K. Karnawat, M. Mydland, S. Dumais, and T. White, "Evaluating Implicit Measures to Improve Web Search," ACM Trans. Information Systems, vol. 23, no. 2, pp. 147-168, 2005.
[12] E. García-Barriocanal and M.A. Sicilia, "Preliminary Explorations on the Statistical Profiles of Highly-Rated Learning Objects," Proc. Third Int'l Conf. Metadata and Semantic Research (MTSR '09), pp. 108-117, 2009.
[13] R.L. Glass, "A Structure-Based Critique of Contemporary Computing Research," J. Systems and Software, vol. 28, no. 1, pp. 3-7, 1995.
[14] K. Han, "Quality Rating of Learning Objects Using Bayesian Belief Networks," master's thesis, Simon Fraser Univ., 2004.
[15] R. Kay and L. Knaack, "Assessing Learning, Quality, and Engagement in Learning Objects: The Learning Object Evaluation Scale for Students (LOES-S)," Educational Technology Research and Development, vol. 57, no. 2, pp. 147-168, 2009.
[16] R.H. Kay and L. Knaack, "Evaluating the Learning in Learning Objects," Open Learning: The J. Open and Distance Learning, vol. 22, no. 1, pp. 5-28, 2007, doi:10.1080/02680510601100135.
[17] C.M. Kelty, C.S. Burrus, and R.G. Baraniuk, "Peer Review Anew: Three Principles and a Case Study in Postpublication Quality Assurance," Proc. IEEE, vol. 96, no. 6, pp. 1000-1011, 2008.
[18] V. Kumar, J. Nesbit, and K. Han, "Rating Learning Object Quality with Distributed Bayesian Belief Networks: The Why and the How," Proc. Fifth IEEE Int'l Conf. Advanced Learning Technologies (ICALT '05), pp. 685-687, 2005.
[19] E. Kurilovas and V. Dagiene, "Learning Objects and Virtual Learning Environments Technical Evaluation Criteria," Electronic J. E-Learning (EJEL), vol. 7, no. 2, pp. 147-168, 2009.
[20] J.Z. Li, J.C. Nesbit, and G. Richards, "Evaluating Learning Objects Across Boundaries: The Semantics of Localization," J. Distance Education Technologies, vol. 4, no. 1, pp. 17-30, 2006.
[21] N. Manouselis and R. Vuorikari, "What If Annotations Were Reusable: A Preliminary Discussion," Proc. Eighth Int'l Conf. Advances in Web-Based Learning (ICWL), pp. 255-264, 2009.
[22] J.L. Marichal, "An Axiomatic Approach of the Discrete Choquet Integral as a Tool to Aggregate Interacting Criteria," IEEE Trans. Fuzzy Systems, vol. 8, no. 6, pp. 800-807, Dec. 2000.
[23] R. McGreal, T. Anderson, G. Babin, S. Downes, N. Friesen, K. Harrigan, M. Hatala, D. MacLeod, M. Mattson, G. Paquette, G. Richards, T. Roberts, and S. Schafer, "EduSource: Canada's Learning Object Repository Network," Int'l J. Instructional Technology and Distance Learning, vol. 1, no. 3, 2004.
[24] J.C. Nesbit and K. Belfer, "Collaborative Evaluation of Learning Objects," Online Education Using Learning Objects, R. McGreal, ed., pp. 124-138, RoutledgeFalmer, 2004.
[25] J.C. Nesbit and J. Li, "Web-Based Tools for Learning Object Evaluation," Proc. Int'l Conf. Education and Information Systems: Technologies and Applications, pp. 334-339, 2004.
[26] X. Ochoa and E. Duval, "Quality Metrics for Learning Object Metadata," Proc. World Conf. Educational Multimedia, Hypermedia and Telecomm. (EDMEDIA '06), E. Duval, R. Klamma, and M. Wolpers, eds., pp. 1004-1011, 2006.
[27] X. Ochoa and E. Duval, "Relevance Ranking Metrics for Learning Objects," IEEE Trans. Learning Technologies, vol. 1, no. 1, pp. 34-48, Jan. 2008.
[28] X. Ochoa and E. Duval, "Measuring Learning Object Reuse," Proc. Third European Conf. Technology Enhanced Learning: Times of Convergence, E. Duval, R. Klamma, and M. Wolpers, eds., pp. 322-325, 2008.
[29] K. Palmer and P. Richardson, "Learning Object Reusability Motivation, Production and Use," Proc. 11th Int'l Conf. Assoc. for Learning Technology (ALT), 2004.
[30] M.M. Recker and D.A. Wiley, "A Non-Authoritative Educational Metadata Ontology for Filtering and Recommending Learning Objects," Interactive Learning Environments, vol. 9, no. 3, pp. 255-271, 2001.
[31] J. Sanz, J.M. Dodero, and S. Sanchez-Alonso, "A Preliminary Analysis of Software Engineering Metrics-Based Criteria for the Evaluation of Learning Objects Reusability," Int'l J. Emerging Technologies in Learning (IJET), vol. 4, no. 1, pp. 30-34, 2009, doi:10.3991/ijet.v4s1.794.
[32] J. Sanz, S. Sánchez-Alonso, and J.M. Dodero, "Reusability Evaluation of Learning Objects Stored in Open Repositories Based on Their Metadata," Proc. Third Int'l Conf. Metadata and Semantic Research (MTSR), vol. 46, pp. 193-202, Oct. 2009.
[33] M.A. Sicilia and E. García-Barriocanal, "On the Concepts of Usability and Reusability of Learning Objects," Int'l Rev. Research in Open and Distance Learning, vol. 4, no. 2, 2003.
[34] M.A. Sicilia, "Reusability and Reuse of Learning Objects, Myths, Realities, and Possibilities," Proc. First Pluri-Disciplinary Symp. Design, Evaluation, and Description of Reusable Learning Contents, 2004.
[35] A. Tzikopoulos, N. Manouselis, and R. Vuorikari, "An Overview of Learning Object Repositories," Learning Objects for Instruction: Design and Evaluation, pp. 44-64, Idea Group Publishing, 2007.
[36] J. Vargo, J.C. Nesbit, K. Belfer, and A. Archambault, "Learning Object Evaluation: Computer-Mediated Collaboration and Inter-Rater Reliability," Int'l J. Computers and Applications, vol. 25, no. 3, pp. 198-205, 2003.
[37] R. Vuorikari and H. Poldoja, "Comparison of Educational Tagging Systems—Any Chances of Interplay?" Proc. Second Workshop Social Information Retrieval for Technology Enhanced Learning, 2008.
[38] R. Vuorikari, N. Manouselis, and E. Duval, "Using Metadata for Storing, Sharing, and Reusing Evaluations in Social Recommendation: The Case of Learning Resources," Social Information Retrieval Systems: Emerging Technologies and Applications for Searching the Web Effectively, D.H. Go and S. Foo, eds., pp. 87-107, Idea Group Publishing, 2008.
[39] N.Y. Yen, F.F. Hou, L.R. Chao, and T.K. Shih, "Weighting and Ranking the E-Learning Resources," Proc. Ninth IEEE Int'l Conf. Advanced Learning Technologies (ICALT '09), pp. 701-703, 2009, doi:10.1109/ICALT.2009.36.
[40] B. Zimmermann, M. Meyer, C. Rensing, and R. Steinmetz, "Improving Retrieval of Re-Usable Learning Resources by Estimating Adaptation Effort," Proc. First Int'l Workshop Learning Object Discovery and Exchange, 2007.

Index Terms:
Learning object, Merlot, ranking, quality.
Javier Sanz-Rodríguez, Juan Manuel Dodero, Salvador Sánchez-Alonso, "Ranking Learning Objects through Integration of Different Quality Indicators," IEEE Transactions on Learning Technologies, vol. 3, no. 4, pp. 358-363, Oct.-Dec. 2010, doi:10.1109/TLT.2010.23