IEEE Transactions on Learning Technologies, vol. 1, no. 4, pp. 229-234, October-December 2008. Published by the IEEE Computer Society.
Erik Duval , Katholieke Universiteit Leuven, Belgium
Katrien Verbert , Katholieke Universiteit Leuven, Belgium
ABSTRACT
This paper presents some of the lessons that we have learned during our involvement with the development of learning technology standards. We believe that the decision to develop a new standard is sometimes taken too quickly and that, when possible, existing generic standards should be profiled for the domain of learning. At this point in time, the development of tools, technologies, and methodologies that fully exploit the affordances of existing standards is more relevant than the development of new standards. Moreover, we believe that the relationship between technical standardization and research is often misunderstood: the main role of standards for research is to enable a large-scale technological infrastructure that promotes innovation through its open nature.
1. Introduction
Over the last 15 years, numerous learning technology standards and specifications have been developed in areas such as learning object metadata, learning content, and learning object repositories.

    • Standards for learning object metadata define how learning objects can be described and enable learning object discovery, sharing, reuse, and exchange. Examples are the IEEE Learning Object Metadata (LOM) standard [1] and the Dublin Core Metadata Element Set [2].

    • Content standards and specifications define how to package learning content to enable exchange [3], how to sequence content in a uniform way [4], or how to associate runtime behavior with content [5], [6].

    • Repository standards and specifications focus mainly on federated search [7] or harvesting [8]; a short sketch below illustrates the harvesting style.

Other areas in which standardization efforts have been made are learner profiling [9] and learning design [10].
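
To make the repository interoperability point concrete, here is a minimal harvesting sketch in the style of OAI-PMH, the protocol commonly used for metadata harvesting. The endpoint URL is hypothetical, and a real harvester would also handle resumption tokens and error responses.

    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    BASE_URL = "https://repository.example.org/oai"  # hypothetical endpoint

    def harvest_identifiers(metadata_prefix="oai_dc"):
        """Issue an OAI-PMH ListRecords request and yield record identifiers."""
        params = urlencode({"verb": "ListRecords",
                            "metadataPrefix": metadata_prefix})
        with urlopen(f"{BASE_URL}?{params}") as response:
            tree = ET.parse(response)
        ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
        for record in tree.iterfind(".//oai:record", ns):
            yield record.findtext("oai:header/oai:identifier", namespaces=ns)

    for identifier in harvest_identifiers():
        print(identifier)

Because the interface is standardized, the same client works against any conforming repository; this is exactly the kind of interoperability the repository specifications aim for.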
Major organizations that contribute to the development of e-learning standards and specifications are [11]:

    • accredited standards bodies with representation based on individual experts, like the IEEE Learning Technology Standards Committee (LTSC) [12] or the CEN/ISSS Workshop on Learning Technologies (WSLT) [13],

    • accredited standards bodies with representation by countries, like the ISO/IEC JTC1 SC36 [14] or the CEN TC353 [15], and

    • nonaccredited specification bodies with membership on an individual and/or organizational basis, like the ADL [16] (now LETSI) group [17], the IMS Global Learning Consortium [18], or the ARIADNE Foundation [19], [20].

These organizations typically play different roles: Accredited standards bodies define actual standards. Those developed by such organizations with representation based on countries typically carry substantial legal weight: Their use may be mandatory in the case of public procurement procedures for goods or services in many countries. Accredited standards developed by the likes of the IEEE LTSC are typically more industry oriented. Specifications from groups like IMS or ARIADNE are often used as the starting point for a more formal standardization process.
In the case of IEEE Learning Object Metadata, for instance, the original specifications were developed by ARIADNE and IMS. Together, these organizations proposed a base document to the IEEE LTSC. The CEN/ISSS WSLT developed several complementary standards once LOM was finalized.
This paper is structured as follows: In the next section, we present some lessons we have learned through our involvement in standards development. The section thereafter discusses the relationship between research and technical standards. We end with a brief conclusion.
2. Issues for Technical Standards in Learning Technologies
2.1 Introduction
We have actively contributed to a number of standards and continue to do so. We remain convinced that these are very worthwhile initiatives. However, we have learned some lessons from experience and want to share those in this section, as we are convinced that some of the ongoing standardization efforts in the domain of technology-enhanced learning are somewhat misguided.
2.2 Assess Carefully the Need for a New Standard
Our personal experience is that the decision to develop a new standard is often taken without proper due diligence. The enthusiasm and will to create something useful are of course positive, but channeling that energy into the development of a new standard is often misdirected. Developing a standard takes a long time—typically 3 to 6 years and sometimes many more—and many projects run out of steam before they deliver a standard.
More fundamentally, after a standard is finalized, it typically takes even longer before tools are developed that actually deliver the functionality to end users in a useful and usable way. In the beginning, the standard "shines through" and developers do not focus on the user experience. In the case of Learning Object Metadata, for instance, early tools typically presented an electronic form, like the one in Fig. 1, for querying repositories or inserting metadata into them.


Fig. 1. Electronic forms must die.
This approach suffered from many usability issues, and we launched the slogan "electronic forms must die" [21] to encourage developers to change their perspective on the kind of user experiences that could be supported. For querying purposes, this has led to the development of alternative user interfaces, like the elastic lists of Fig. 2 [22], recommender systems for learning objects [23], and even tabletops for direct manipulation of information about learning objects on architecture [24]. For inserting new objects with associated metadata, combinations of tagging and automated metadata generation are becoming available [21].


Fig. 2. Elastic lists.
In our opinion, the most pressing need with respect to learning technology standards nowadays is the development of tools, technologies, and methodologies that exploit the affordances of the standards more fully. For instance, rather than developing an alternative specification like Common Cartridge [6] or SCORM 2.0 [25], the most urgent and relevant work with respect to learning content is the development of more useful and usable authoring tools, delivery platforms, management systems, etc. We have often observed how even quite tech-savvy users are deeply challenged by the many bells and whistles that current SCORM tools typically include, and how they struggle to develop engaging content or to deploy it in their learning environments: not because the functionality that SCORM provides is broken or lacking, but because the tools still very much reflect the structure of SCORM and require a good understanding of how SCORM is implemented [26], rather than focusing on the needs, characteristics, and goals of the end user.
2.3 Avoid "Not Invented Here"
Even if there is a need for a specific technical standard, we should avoid continuing the "not invented here" approach that has made us develop learning-specific standards when quite appropriate generic standards are already available or under development.
In the early days of learning technologies standardization, this caveat applied less, as there were fewer relevant standards around. For instance, when the IEEE LTSC started the development of the LOM standard around 1997, the Dublin Core Metadata Initiative (DCMI) was starting as well. In retrospect, it would probably have been useful to collaborate more directly from the start, but in the early years, it was quite unclear which initiative would eventually lead to a mature standard and by when. As both DCMI and LOM matured, collaboration intensified [27]. Nowadays, there is quite a close collaboration [28].
Around 2002, when learning object repository services became the focus of specification and standards work, a new standard was developed in the CEN/ISSS Workshop on Learning Technologies: the Simple Query Interface (SQI) [29]. Related work led to the development of the PLQL query language [30]. As SQI was being developed, it was clear that there was substantial overlap with related developments in the digital libraries world, more specifically with Search/Retrieve via URL and the Search/Retrieve Web service (SRU/SRW), as well as with the Open Knowledge Initiative (OKI), which addresses learning technology interoperability [31]. In retrospect, we believe that SQI could probably have built on SRU/SRW more directly.
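
To give a flavor of how generic that digital library work is: an SRU searchRetrieve operation is simply an HTTP GET carrying a CQL query, as the sketch below shows. The endpoint is hypothetical, and supported protocol versions and CQL indexes vary per server.

    from urllib.parse import urlencode
    from urllib.request import urlopen

    SRU_ENDPOINT = "https://repository.example.org/sru"  # hypothetical endpoint

    def sru_search(cql_query, max_records=10):
        """Send an SRU searchRetrieve request and return the raw XML response."""
        params = urlencode({
            "operation": "searchRetrieve",
            "version": "1.1",
            "query": cql_query,          # CQL query, e.g., dc.title = "calculus"
            "maximumRecords": max_records,
        })
        with urlopen(f"{SRU_ENDPOINT}?{params}") as response:
            return response.read().decode("utf-8")

    print(sru_search('dc.title = "learning objects"'))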
There is a growing realization of the need to stop developing learning-specific standards in isolation: the ongoing development of the "Simple Publishing Interface" (SPI) [32], for instance, is seriously considering whether existing generic approaches like SWORD, itself an application profile of the Atom Publishing Protocol [33], could fulfill the requirements of the learning domain, or whether they can be extended to do so.
A nice vehicle for building standards on existing ones is the use of "application profiles." Basically, in an application profile, a standard is constrained to take into account the specific requirements and constraints of a particular community. In metadata schemas, for instance, depending on the base standard, this may involve including particular data elements and excluding others, adding local data elements, making data elements mandatory, restricting their domain to a subset of that of the standard, etc. [34].
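
The following minimal sketch illustrates these mechanics: a base element set is narrowed to a subset, some elements are made mandatory, and one value space is restricted to a local vocabulary. The element names echo LOM categories, but the specific profile rules are invented for illustration.

    # Base standard: a small, LOM-like excerpt of the full element set.
    BASE_ELEMENTS = {"general.title", "general.language",
                     "educational.context", "technical.format"}

    # Hypothetical community profile: a subset of the base with extra constraints.
    PROFILE = {
        "allowed":    {"general.title", "general.language", "educational.context"},
        "mandatory":  {"general.title", "general.language"},
        "vocabulary": {"educational.context": {"primary", "secondary", "higher"}},
    }
    assert PROFILE["allowed"] <= BASE_ELEMENTS  # this toy profile only restricts

    def violations(record):
        """Return the profile violations of a metadata record (a dict)."""
        errors = [f"missing mandatory element: {e}"
                  for e in PROFILE["mandatory"] if e not in record]
        for element, value in record.items():
            if element not in PROFILE["allowed"]:
                errors.append(f"element not in profile: {element}")
            allowed_values = PROFILE["vocabulary"].get(element)
            if allowed_values and value not in allowed_values:
                errors.append(f"value '{value}' outside vocabulary of {element}")
        return errors

    # Flags the missing language and the out-of-vocabulary context value.
    print(violations({"general.title": "Hydraulics 101",
                      "educational.context": "workplace"}))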
Application profiles should not be confused with bindings: Whereas application profiles restrict the set of conforming entities to a subset of that of the standard, bindings define how conforming entities can be represented in a particular technology. For example, there is a standard for the XML binding of LOM that defines how LOM can be represented using XML Schema Definition [35].
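
For contrast, here is a sketch of the binding side: the same kind of record serialized as LOM-flavored XML. The nesting (general/title/string with a language attribute) follows the general shape of the LOM XML binding, but the fragment is simplified and is not claimed to validate against the official schema.

    import xml.etree.ElementTree as ET

    LOM_NS = "http://ltsc.ieee.org/xsd/LOM"  # namespace of the LOM XML binding

    def to_lom_xml(title, language):
        """Serialize a title and language as a minimal LOM-like XML fragment."""
        ET.register_namespace("", LOM_NS)
        lom = ET.Element(f"{{{LOM_NS}}}lom")
        general = ET.SubElement(lom, f"{{{LOM_NS}}}general")
        title_el = ET.SubElement(general, f"{{{LOM_NS}}}title")
        # Free-text values in LOM are language-tagged ("LangString") pairs.
        string_el = ET.SubElement(title_el, f"{{{LOM_NS}}}string",
                                  language=language)
        string_el.text = title
        ET.SubElement(general, f"{{{LOM_NS}}}language").text = language
        return ET.tostring(lom, encoding="unicode")

    print(to_lom_xml("Hydraulics 101", "en"))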
Still another vehicle for working with existing standards is a reference model that defines how different standards can be used together as a complete solution for a particular need. For example, for learning technologies, SCORM is the most widely deployed reference model: It defines how LOM, content packaging, simple sequencing, and runtime communication can be used together for learning objects that can be deployed in state-of-the-art learning management systems [36].
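
A reference model is easiest to see in the artifact it prescribes. The sketch below assembles a SCORM-style content package: a zip archive whose imsmanifest.xml ties the organization of items and the packaged resources together. The manifest is heavily simplified; a conformant package carries additional namespaces, schema references, SCORM-specific attributes, and usually LOM metadata.

    import zipfile

    # Heavily simplified manifest in the IMS Content Packaging style.
    MANIFEST = """<?xml version="1.0"?>
    <manifest identifier="course-1"
              xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
      <organizations default="org-1">
        <organization identifier="org-1">
          <item identifier="item-1" identifierref="res-1">
            <title>Lesson 1</title>
          </item>
        </organization>
      </organizations>
      <resources>
        <resource identifier="res-1" type="webcontent" href="lesson1.html"/>
      </resources>
    </manifest>
    """

    with zipfile.ZipFile("course.zip", "w") as package:
        package.writestr("imsmanifest.xml", MANIFEST)
        package.writestr("lesson1.html", "<html><body>Lesson 1</body></html>")

A learning management system that understands the reference model can import such a package, display the item tree, and launch the referenced resources, regardless of which authoring tool produced it.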
2.4 Standards Are Not Research
It is very important to understand the differences between standards and research, as Fig. 3 illustrates. The basic idea is that research can produce (among many other outcomes) specifications that can be implemented in software artifacts. The experience gained during the development and practical use of the software typically leads to newer versions of the specifications. At some point in that feedback cycle, the specification can be used as input in the standardization process.


Fig. 3. Standards are not research.
Once the standardization process is started, the focus shifts to consensus building and deciding on what should be in the scope of the standard and what is best left out. As an example, the LOM standard was based on early specifications from IMS and ARIADNE that had already been implemented in several versions of tools that were being evaluated in practice. During the standardization track, consensus was built around, for instance, whether or not some elements would be made mandatory—and if so, which ones. Eventually, the decision was made not to make any data elements mandatory. Similarly, the clustering of data elements in several categories (technical, educational, etc.) was agreed upon during standardization.
The above is a rather ideal scenario: Often, researchers submit premature specifications. There sometimes seems to be an assumption that, once the outline of a specification has been drafted, it can be finalized during the standardization process. Researchers then expect standardization participants to consolidate, implement, and improve the specification without calling into question the basic approach or practical details of the original document. In our experience, this is very frustrating for both sides: the standardization group is disappointed that the specification is not more mature, and the originators of the specification worry that "their baby" is being mishandled.
Therefore, as a rule of thumb, we would suggest that a specification is only ready for the standardization process once it has been implemented by at least two independent development teams and has been evaluated in at least two independent user studies. (The users in question can be either developers who make use of the specification or end users of tools that implement it.) A better situation is one where several similar specifications exist that attempt to solve the same problem and that overlap significantly in scope and approach. In that case, the standardization process can help to build consensus between the different communities, and rapid development feedback cycles can be created by teams that implement the standard under development on top of existing, closely aligned specifications and tools.
Admittedly, this is a somewhat idealistic situation, but we believe that it is very important to caution against entering the standardization process prematurely. The focus of that process is on consensus building, not on developing the best solution. If one or more good solutions are not already available at the start, then the whole process is jeopardized and may never lead to a relevant result.
3. Standards Matter for Research
3.1 Introduction
Even though we believe that the standardization process is not the right venue for research activities, we do believe that standards can be very relevant for research. Indeed, as will be explained below, standards enable large-scale experimentation and research based on nontrivial, nontoy applications. Moreover, standards are essential for an open infrastructure in which innovation and research thrive.
3.2 Large Scale
Standards are infrastructures that work behind the scenes [9] and enable deployment on a large scale; they therefore make it possible to do research on global infrastructures with critical mass. This is, in our opinion, extremely valuable in the context of technology-enhanced learning, where researchers all too often draw rather wide-ranging conclusions from "toy application" development and minimal (if any) evaluation studies.
Moreover, standards help remedy the balkanization problem that used to plague the field of learning technologies, where every research project had to start from scratch, as none of the results of earlier projects remained available after those projects ended, nor did they interoperate with the results of other projects.
Certainly within the European Commission research and development funding programs, attention to the appropriate use of existing standards, and support for feeding project results into the standardization process, has become much more explicit, specifically to avoid the continuous redevelopment of identical, or at least very similar, tools and technologies and the associated rediscovery of the same issues. Indeed, many calls for projects now mention the adoption of existing standards or the creation of new standards as an explicit evaluation criterion.
Our vision is that technology standards should enable an open global learning infrastructure [37]. The basic building blocks in this infrastructure would enable

    • finding relevant learning resources (content, people, physical artifacts, events, etc.),

    • deploying them in an appropriate context-dependent way (through a learning management system, personal learning environment, etc., on a desktop, laptop, tabletop, mobile device, etc.), and

    • capturing the effects through attention metadata so that feedback loops can be established to improve human learning (a small sketch follows).
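
As a hedged sketch of the third building block, the snippet below records attention metadata events so that usage can later feed back into ranking and recommendation. The event fields are illustrative, loosely inspired by work on Contextualized Attention Metadata, not a normative schema.

    import json
    from datetime import datetime, timezone

    def attention_event(user, object_id, action):
        """Serialize one attention event as a JSON line for a usage log."""
        return json.dumps({
            "user": user,
            "object": object_id,   # e.g., a repository identifier
            "action": action,      # e.g., "viewed", "bookmarked", "reused"
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    print(attention_event("alice", "oai:repo.example.org:42", "viewed"))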

In the periphery of such an open infrastructure, novel research prototypes could then build on the global infrastructure to create exciting value-adding experiences, much in the same way that Web 2.0 mash-ups leverage the massive amounts of information and the mainstream functionalities provided by the likes of Google, Amazon, Twitter, etc. We acknowledge that such an application of standards is not without issues [38], but we believe that these issues are not that difficult to address if the goals being pursued are clear. Nice examples of this approach to research include our own work on the automated decomposition and recomposition of PowerPoint slides for learning [39], the development of relevance metrics for learning objects [40], the FOLKSEMANTIC project [41], and the Visual Understanding Environment [42] based on the OKI work, as well as the work mentioned above on elastic lists (Fig. 2) [22], recommender systems [23], and tabletops for learning objects on architecture [24].
3.3 Openness as a Driver for Innovation
Finally, standards enable openness, and openness enables innovation; this is another way in which standards are relevant to research.
In technology-enhanced learning, open source has so far probably been at least as important a driver for innovation as open standards. In fact, it is interesting to note that many of the early implementations of open standards used in research have been released under an open source license (like EXE [43] and RELOAD [44] for SCORM), whereas the main open source learning management system, Moodle [45], has been relatively slow to support open standards.
An example of how the absence of standards can restrict innovation and research is the situation with search engines, where the vast majority of research papers originate from Google, Yahoo, or Microsoft. Besides the financial leverage that these companies can apply to their research efforts, they are also the only ones who can, in effect, make use of the large data sets and user interactions that they collect.
If the Web crawls that they use were available through standardized interfaces, and if the user interactions were available through an open attention metadata format, then many more, smaller labs could apply a wider variety of research ideas to the same infrastructure. We strongly believe that this would accelerate scientific progress, an opinion shared by the "open science" movement [46], [47].
In this context, it is important to make sure that standards enable, and do not prevent, pedagogical innovation: Their role is to meet the expectation of students and teachers that "the e-learning infrastructure simply works" [48]. The authors acknowledge that learning technology standards do not guarantee learning: Excellent content can be nonconforming and useless content can be conforming. More importantly, learning is not in the first place about content, reuse, or discoverability; it is just that this is the part of the problem that standards can address!
4. Conclusion
Standardization work is time consuming and, due to its consensus-based approach, can be quite tedious. We are proud of our involvement in this work over more than a decade and of the valuable results it has contributed to.
However, as we explained in this paper, we caution against the premature development of new standards when generic standards are available that can be profiled for the domain of learning. We believe that the greater need at this moment is to develop more usable and useful tools that "hide everything but the benefits." Technical standards are very relevant for research and development in the domain of technology-enhanced learning, as they enable a global infrastructure for large-scale, realistic experimentation. However, that does not mean that research activities should be confused with the standardization process: Immature research results can derail standardization, and the consensus-based approach of standardization bodies leads to risk-averse, repetitive "research." We hope that by spelling out these risks, this paper can help to avoid them, so that research and standards for technology-enhanced learning can proceed in a mutually supportive, healthy relationship.

ACKNOWLEDGMENTS

The authors would like to thank those who commented on a very early draft of the ideas outlined in this paper on the first author's blog, http://erikduval.wordpress.com/2008/11/28/standards-for-technology-enhanced-learning. Special thanks go to Martyn Cooper, Wolfgang Reinhardt, and Brian Kelly. The authors also gratefully acknowledge the support of the European Commission through the ProLearn Network of Excellence and the MELT and MACE eContentPlus projects.

    The authors are with the Computer Science Department, Katholieke Universiteit Leuven, Celestijnenlaan 200 A, BE 3000 Leuven, Belgium. E-mail: {erik.duval, katrien.verbert}@cs.kuleuven.be.

Manuscript received 16 Dec. 2008; revised 3 Jan. 2009; accepted 5 Jan. 2009; published online 14 Jan. 2009.

For information on obtaining reprints of this article, please send e-mail to: lt@computer.org, and reference IEEECS Log Number TLT-2008-12-0114.

Digital Object Identifier no. 10.1109/TLT.2009.6.

REFERENCES



Erik Duval is a professor of computer science at the Katholieke Universiteit Leuven, Belgium. His research focuses on management of and access to data and content available in a more or less structured way. More specifically, his interests include metadata, reusable content, global infrastructures based on open standards, and mass personalization ("The Snowflake Effect"). This work is applied in the fields of technology-enhanced learning, music information retrieval, and technology support for "Science2.0." Dr. Duval is the president of the ARIADNE Foundation, chair of the IEEE LTSC working group on Learning Object Metadata, a fellow of the AACE, and a member of the ACM and the IEEE Computer Society. He cofounded two spin-offs that apply research results for access to music and scientific output.



Katrien Verbert is a post-doctoral researcher at the hypermedia and databases research group of the Katholieke Universiteit Leuven. Her research interests include learning object content models, metadata, learning object reuse, and interoperability. She participates in the RAMLET IEEE LTSC standardization project that is developing a reference model for resource aggregation. She participated in the ACKNOWLEDGE project, which provides Flanders with a proof of concept demonstrator for an easy and accessible service in the field of learning and content management, and the PUBELO project, which investigated technological and educational standards for publishing in an electronic environment.