1. System or Algorithmic relevance that represents how well the query and the object match.
2. Topical relevance that represents the relation between an object and the real-world topic of which the query is just a representation.
3. Pertinence, Cognitive, or Personal relevance that represents the relation between the information object and the information need, as perceived by the user.
4. Situational relevance that represents the relation between the object and the work task that generated the information need.
4.1.1 Basic Topical Relevance Metric (BT) This metric makes two naïve assumptions. The first is that the topic needed by the user is fully expressed in the query. The second is that each object is relevant to just one topic. As a consequence of these two assumptions, the degree of relevance of an object to the topic can be easily estimated as the relevance of the object to that specific query. That relevance is calculated by counting the number of times the object has been previously selected from the result list when the same (or similar) query terms have been used. Defining $N_Q$ as the total number of similar queries of which the system keeps a record, the BT relevance metric is the sum of the times that the object has been selected in any of those queries, weighted by query similarity (2). This metric is an adaptation of the Impact Factor metric [35], in which the relevance of a journal in a field is calculated by simply counting the number of references to papers in that journal during a given period of time:

$$\mathrm{selected}(o, q_i) = \text{number of times } o \text{ was selected from the result list of } q_i \qquad (1)$$

$$BT(o, q) = \sum_{i=1}^{N_Q} \mathrm{sim}(q, q_i) \cdot \mathrm{selected}(o, q_i) \qquad (2)$$
In (1) and (2), $o$ represents the learning object to be ranked, $q$ is the query performed by the user, and $q_i$ is the representation of a previous query. $\mathrm{sim}(q, q_i)$ is the similarity between the two queries. This similarity can be calculated either from the semantic distance between the query terms (for example, their distance in WordNet [43]) or from the number of objects that both queries have returned in common. $N_Q$ is the total number of queries.
Example. We assume that the query history of the search engine consists of queries $q_1$, $q_2$, and $q_3$. In $q_1$, objects $o_1$ and $o_2$ were selected; in $q_2$, objects $o_2$ and $o_3$; and in $q_3$, objects $o_3$ and $o_4$. A new query $q$ is performed, and objects $o_1$, $o_2$, $o_3$, and $o_4$ are present in the result list. The similarity between $q$ and $q_1$ is 1 (both are the same query), between $q$ and $q_2$ is 0.8 (they are similar queries), and between $q$ and $q_3$ is 0 (they are not related queries). The BT metric value of $o_1$ is equal to $1 \cdot 1 = 1$; for $o_2$, it is $1 + 0.8 = 1.8$; for $o_3$, it is 0.8; and for $o_4$, it is 0. The order of the final result list ranked by BT would be $(o_2, o_1, o_3, o_4)$.
Data and initialization. In order to calculate this metric, the search engine needs to log the selections made for each query. If no information is available, the metric assigns the value 0 to all objects, leaving the final rank unaffected. When information, in the form of user selections, starts entering the system, the BT rank starts to boost previously selected objects higher in the result list. One way to avoid this initial training phase is to provide query-object pairs given by experts or obtained from information logged in previous versions of the search engine.
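As a sketch, the BT computation of (2) can be written in a few lines of Python using the data of the worked example above (the `bt` helper, the query identifiers, and the similarity table are illustrative assumptions, not definitions from the paper):

```python
# Basic Topical relevance (BT): weight each past selection of the object
# by the similarity between the current query and the query it came from.
def bt(obj, query, history, sim):
    """history: past query -> list of objects selected from its result list."""
    return sum(sim(query, q) * selections.count(obj)
               for q, selections in history.items())

# Toy data mirroring the worked example: similarities of q to q1, q2, q3.
history = {"q1": ["o1", "o2"], "q2": ["o2", "o3"], "q3": ["o3", "o4"]}
similarity = {"q1": 1.0, "q2": 0.8, "q3": 0.0}
sim = lambda q, qi: similarity[qi]

scores = {o: bt(o, "q", history, sim) for o in ["o1", "o2", "o3", "o4"]}
# scores: o1 -> 1.0, o2 -> 1.8, o3 -> 0.8, o4 -> 0.0
```

Sorting the result list by `scores` in descending order reproduces the ranking of the example.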
4.1.2 Course-Similarity Topical Relevance Ranking (CST) In the context of formal learning, the course in which the object will be reused can be directly used as the topic of the query. Objects that are used in similar courses should be ranked higher in the list. The main problem in calculating this metric is establishing which courses are similar. A common way to establish this relationship is described by SimRank [44], an algorithm that analyzes object-to-object relationships to measure the similarity between those objects. In this metric, the relation graph is established between courses and learning objects. Two courses are considered similar if they have a predefined percentage of learning objects in common. This relationship can be calculated by constructing a bipartite graph where courses are linked to the objects published in them. This graph is folded over the object partition, leaving a graph that represents the existing relationships between courses and their strengths. The number of objects shared between two courses, represented in this new graph as the number of links between them, determines the strength of the relationship. A graphical representation of these procedures can be seen in Fig. 1. The ranking metric is then calculated by counting, weighted by course similarity, the number of times that a learning object in the list has been used in the universe of courses (5). This metric is similar to the calculation made by e-commerce sites such as Amazon [40], where, in addition to the current item, other items are recommended based on the probability of their being bought together:

$$\mathrm{used}(o, c_j) = \begin{cases} 1 & \text{if } o \text{ is used in } c_j \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

$$\mathrm{SimRank}(c_a, c_b) = \sum_{k=1}^{N_O} \mathrm{used}(o_k, c_a) \cdot \mathrm{used}(o_k, c_b) \qquad (4)$$

$$CST(o, c) = \sum_{j=1,\ c_j \neq c}^{N_C} \mathrm{SimRank}(c, c_j) \cdot \mathrm{used}(o, c_j) \qquad (5)$$
In (3), (4), and (5), $o$ represents the learning object to be ranked, $c$ is the course where it will be inserted or used, $c_j$ is a course present in the system, $N_C$ is the total number of courses, and $N_O$ is the total number of objects.
Example (Fig. 1). We assume that three courses are registered in the system: $c_1$, $c_2$, and $c_3$. Objects $o_1$, $o_2$, and $o_3$ are used in $c_1$; objects $o_3$, $o_4$, and $o_5$ in $c_2$; and objects $o_1$, $o_2$, $o_4$, and $o_6$ in $c_3$. The SimRank between $c_1$ and $c_2$ is 1, between $c_2$ and $c_3$ is 1, and between $c_1$ and $c_3$ is 2. A query is performed from $c_1$, and in the result list are the objects $o_5$, $o_4$, and $o_6$. The CST value for $o_5$ is $1 \cdot 1 = 1$; for $o_4$, it is $1 + 2 = 3$; for $o_6$, it is 2. The order of the final result list ranked by CST would be $(o_4, o_6, o_5)$.
Data and initialization. To apply the CST metric, the search engine should have access to information from one or several LMSs, such as Moodle or Blackboard, in which learning objects are being searched for and inserted. First, it needs to create a graph linking the current courses to the objects they use in order to calculate the SimRank between courses. Second, it needs to obtain, along with the query terms, the course from which the query was performed. In a system without this information, the CST will return 0, leaving the rank of the results unaffected. When the first insertion results are obtained from the LMS, the CST can start to calculate course similarities and, therefore, rankings for the already used objects. This metric could be bootstrapped from the information already contained in common LMSs or Open Courseware initiatives [45].
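A minimal sketch of the CST computation over the folded course-object graph, with toy data shaped like the example above (the `simrank` and `cst` helpers are illustrative names; the paper defines the metric only mathematically):

```python
# Course-Similarity Topical relevance (CST): SimRank between two courses is
# the number of learning objects they share (the folded bipartite graph);
# an object is boosted by the similarity of the other courses that use it.
def simrank(objs_a, objs_b):
    return len(set(objs_a) & set(objs_b))

def cst(obj, course, courses):
    """courses: course id -> set of objects published in it."""
    return sum(simrank(courses[course], objs)
               for cid, objs in courses.items()
               if cid != course and obj in objs)

courses = {"c1": {"o1", "o2", "o3"},
           "c2": {"o3", "o4", "o5"},
           "c3": {"o1", "o2", "o4", "o6"}}
scores = {o: cst(o, "c1", courses) for o in ["o5", "o4", "o6"]}
# scores: o5 -> 1, o4 -> 3, o6 -> 2
```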
4.1.3 Internal Topical Relevance Ranking (IT) If no usage information is available, but there exists a linkage between objects and courses, the Basic Topical Relevance Rank can be refined using an adaptation of the HITS algorithm [46], originally proposed to rank web pages. This algorithm states the existence of hubs, pages that mostly point to other useful pages, and authorities, pages with comprehensive information about a subject. The algorithm presumes that a good hub is a document that points to many good authorities, and a good authority is a document that many good hubs point to. In the context of learning objects, courses can be considered hubs and learning objects authorities. To calculate the metric, a bipartite graph is created with each object in the list linked to its containing courses. The hub value of each course is then calculated as the number of in-bound links that it has. A graphical representation can be seen in Fig. 2. Finally, the rank of each object is calculated as the sum of the hub values of the courses where it has been used:

$$IT(o) = \sum_{j=1}^{N_C} \mathrm{hub}(c_j) \cdot \mathrm{used}(o, c_j), \quad \text{where } \mathrm{hub}(c_j) = \sum_{k=1}^{N_O} \mathrm{used}(o_k, c_j) \qquad (6)$$
Example (Fig. 2). We assume that in response to a query, objects $o_1$, $o_2$, $o_3$, $o_4$, and $o_5$ are returned. From the information stored in the system, we know that $o_1$ is used in course $c_1$; $o_2$, $o_3$, and $o_4$ in $c_2$; and $o_4$ and $o_5$ in $c_3$. The hub value of $c_1$ (its degree in the graph) is 1, of $c_2$ is 3, and of $c_3$ is 2. The IT metric for $o_1$ is 1, the hub value of $c_1$. For $o_2$ and $o_3$, the value is 3, the hub value of $c_2$. For $o_4$, IT is the sum of the hub values of $c_2$ and $c_3$, i.e., 5. For $o_5$, it is 2. The order of the final result list ranked by IT would be $(o_4, o_2, o_3, o_5, o_1)$.
Data and initialization. The calculation of IT needs information from LMSs. Similarly to CST, IT uses the relationship between courses and objects. On the other hand, IT does not need information about the course at query time (QT), so it can be used in anonymous Web searches. The Course-Object relationship can be extracted from existing LMSs that contribute objects to the LOR and can be used as bootstrapping data for this metric. An alternative calculation of this metric can use User-Object relationships in case LMS information is not available.
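The hub-based IT calculation can be sketched as follows, with toy course-object data matching the example (the `it_rank` helper is an illustrative name):

```python
# Internal Topical relevance (IT): each course is a hub whose value is its
# number of linked objects; an object's rank is the sum of the hub values
# of the courses that use it.
def it_rank(obj, courses):
    return sum(len(objs) for objs in courses.values() if obj in objs)

courses = {"c1": {"o1"},
           "c2": {"o2", "o3", "o4"},
           "c3": {"o4", "o5"}}
scores = {o: it_rank(o, courses) for o in ["o1", "o2", "o3", "o4", "o5"]}
# scores: o1 -> 1, o2 -> 3, o3 -> 3, o4 -> 5, o5 -> 2
```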
4.2.1 Basic Personal Relevance Ranking (BP) The easiest and least intrusive way to generate user preference information is to analyze the characteristics of the learning objects that users have used previously. First, for a given user, a set of relative frequencies for the different metadata field values present in their objects is obtained:

$$\mathrm{eq}(f, v, o) = \begin{cases} 1 & \text{if } \mathrm{value}(f, o) = v \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$

$$\mathrm{freq}_u(f, v) = \frac{1}{N_{O_u}} \sum_{i=1}^{N_{O_u}} \mathrm{eq}(f, v, o_i) \qquad (8)$$
In these equations, $\mathrm{value}(f, o)$ represents the value of the field $f$ in the object $o$. The frequencies for each metadata field are calculated by counting the number of times that a given value is present in the given field of the metadata. For example, if a user has accessed 30 objects, of which 20 had "Spanish" as language and 10 had "English," the relative frequency set for the field "Language" will be {("Spanish", 0.66), ("English", 0.33)}. This calculation can be easily performed for each of the categorical fields (fields that can only take a value from a fixed vocabulary). Other types of fields (numerical and free text) can also be used in this calculation if they are "categorized." For example, the numerical field "Duration," which contains the estimated time needed to review the object, can be made categorical by clustering the duration values into meaningful buckets: (0-5 minutes, 5-30 minutes, 30 minutes-1 hour, 1-2 hours, more than 2 hours). For text fields, keywords present in a predefined thesaurus could be extracted. An example of this technique is presented in [50].
Once the frequencies are obtained, they can be compared with the metadata values of the objects in the result list. If a value present in the user preference set is also present in the object, the object receives a boost in its rank equal to the relative frequency of the value. This procedure is repeated for all the values present in the preference set and the $N_F$ selected fields of the metadata standard:

$$BP(o, u) = \sum_{j=1}^{N_F} \mathrm{freq}_u(f_j, \mathrm{value}(f_j, o)) \qquad (9)$$
This metric is similar to that used for automatically recording TV programs in Personal Video Recorders [ 41]. The metadata of the programs watched by the user, such as genre, actors, director, and so forth, is averaged and compared against the metadata of new programs to select which ones will be recorded.
In (7), $o$ represents the learning object to be ranked, $f$ represents a field in the metadata standard, and $v$ is a value that the field could take. Additionally, in (8), $u$ is the user, $o_i$ is an object previously used by $u$, and $N_{O_u}$ is the total number of those objects. In (9), $f_j$ is a field considered for the calculation of the metric and $N_F$ is the total number of those fields.
Example. We assume that a given learner has previously used three objects: $o_1$, $o_2$, and $o_3$. $o_1$ is a Computer Science-related slide presentation in English. $o_2$ is a Computer Science-related slide presentation in Spanish. $o_3$ is a Math-related text document in Spanish. If the previously mentioned technique is used to create the profile of the learner, the result will be {Topic: ("Computer Science", 0.66), ("Math", 0.33); Format: ("slides", 0.66), ("text", 0.33); Language: ("English", 0.33), ("Spanish", 0.66)}. The learner performs a query, and in the result list are the objects $o_4$, $o_5$, and $o_6$. $o_4$ is a Computer Science-related text document in English, $o_5$ is a Math-related figure in Dutch, and $o_6$ is a Computer Science-related slide presentation in Spanish. The BP value for $o_4$ is $0.66 + 0.33 + 0.33 = 1.33$. For $o_5$, it is 0.33. For $o_6$, it is $0.66 + 0.66 + 0.66 \approx 2$. The order of the final result list ranked by BP would be $(o_6, o_4, o_5)$.
Data and initialization. The BP metric requires metadata information about the objects previously selected by the user. The identifiers of the user and the objects can be obtained from the logs of the search engine (given that the user is logged in at the moment of the search). Once the identifiers are known, the metadata can be obtained from the LOR. A profile for each user can be created offline and updated regularly. To bootstrap this metric, the contextual information of the user can be transformed into a first profile. For example, if the user is registered in an LMS, we will have information about their major and educational level. Information collected at the registration phase could also be used to estimate the user's age and preferred language.
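A sketch of the profile construction (8) and BP scoring (9), assuming metadata records are plain dictionaries (the field names, values, and helper names are invented for illustration):

```python
from collections import Counter

# Basic Personal relevance (BP): profile a user by the relative frequency
# of metadata values in previously used objects, then boost objects whose
# metadata matches the profile.
def profile(used_objects, fields):
    prof = {}
    for f in fields:
        counts = Counter(o[f] for o in used_objects)
        prof[f] = {v: n / len(used_objects) for v, n in counts.items()}
    return prof

def bp(obj, prof):
    return sum(prof[f].get(obj[f], 0.0) for f in prof)

fields = ["topic", "format", "language"]
used = [{"topic": "CS",   "format": "slides", "language": "English"},
        {"topic": "CS",   "format": "slides", "language": "Spanish"},
        {"topic": "Math", "format": "text",   "language": "Spanish"}]
prof = profile(used, fields)

o4 = {"topic": "CS",   "format": "text",   "language": "English"}  # BP = 4/3
o5 = {"topic": "Math", "format": "figure", "language": "Dutch"}    # BP = 1/3
o6 = {"topic": "CS",   "format": "slides", "language": "Spanish"}  # BP = 2
```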
4.2.2 User-Similarity Personal Relevance Ranking (USP) The Basic Personal Relevance Metric relies heavily on the metadata of the learning object in order to be effective. But metadata is not always complete or reliable [51]. A more robust strategy to rank objects according to personal preferences is to count the number of times similar users have reused the objects in the result list. To find similar users, we can apply the SimRank algorithm previously used to obtain the CST metric. A bipartite graph links the objects to the users who have reused them. The graph is folded over the object partition, and a relationship graph between the users is obtained. This relationship graph is used to calculate the USP metric, as in (11). The final calculation is performed by adding the number of times similar users have reused the object. This kind of metric is the one used, for example, by Last.fm and other music recommenders [52], which present new songs based on what similar users are listening to; similarity is defined in this context as the number of shared songs in their playlists:

$$\mathrm{SimRank}(u_a, u_b) = \sum_{k=1}^{N_O} \mathrm{used}(o_k, u_a) \cdot \mathrm{used}(o_k, u_b) \qquad (10)$$

$$USP(o, u) = \sum_{j=1,\ u_j \neq u}^{N_U} \mathrm{SimRank}(u, u_j) \cdot \mathrm{used}(o, u_j) \qquad (11)$$

In (10) and (11), $\mathrm{used}(o, u)$ is 1 if user $u$ has reused object $o$ and 0 otherwise, and $N_U$ is the total number of users.
Example. We assume that there are four users registered in the system: $u_1$, $u_2$, $u_3$, and $u_4$. User $u_1$ has previously downloaded objects $o_1$, $o_2$, and $o_3$; user $u_2$, objects $o_2$, $o_3$, and $o_4$; user $u_3$, objects $o_1$, $o_4$, and $o_5$; user $u_4$, objects $o_6$ and $o_7$. User $u_1$ performs a query, and objects $o_2$, $o_4$, and $o_5$ are present in the result list. The SimRank between $u_1$ and $u_2$ is 2, between $u_1$ and $u_3$ is 1, and between $u_1$ and $u_4$ is 0. The USP metric for $o_2$ is $2 \cdot 1 = 2$; for $o_4$, it is $2 + 1 = 3$; and for $o_5$, it is 1. The order of the final result list ranked by USP would be $(o_4, o_2, o_5)$.
Data and initialization. The USP metric uses the User-Object relationships. These relationships can be obtained from the logging information of search engines (if the user is logged in during their interactions with the learning objects). The USP does not need metadata information about the learning objects and can work over repositories that do not store a rich metadata description. If no data is available, the metric returns 0 for all objects, leaving the final ranking unaffected. To bootstrap this metric when there is no previous User-Object relationship information, the User-Course and Course-Object relationships obtainable from LMSs could be used.
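The USP calculation mirrors CST with users in place of courses; a sketch with toy data shaped like the example (helper names are illustrative):

```python
# User-Similarity Personal relevance (USP): SimRank between users is the
# number of objects both have reused; an object is boosted by the
# similarity of the other users who reused it.
def simrank(objs_a, objs_b):
    return len(set(objs_a) & set(objs_b))

def usp(obj, user, users):
    """users: user id -> set of objects that the user has reused."""
    return sum(simrank(users[user], objs)
               for uid, objs in users.items()
               if uid != user and obj in objs)

users = {"u1": {"o1", "o2", "o3"},
         "u2": {"o2", "o3", "o4"},
         "u3": {"o1", "o4", "o5"},
         "u4": {"o6", "o7"}}
scores = {o: usp(o, "u1", users) for o in ["o2", "o4", "o5"]}
# scores: o2 -> 2, o4 -> 3, o5 -> 1
```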
4.3.1 Basic Situational Relevance Ranking (BS) In formal learning contexts, the description of the course, lesson, or activity in which the object will be inserted is a source of contextual information. Such information is usually written by the instructor to indicate to the students what the course, lesson, or activity will be about. Keywords can be extracted from these texts and used to calculate a ranking metric based on the similarity between the keyword list and the content of the textual fields of the metadata record. To perform this calculation, the similarity is defined as the cosine similarity between the TF-IDF vector of contextual keywords and the TF-IDF vector of words in the text fields of the metadata of each object in the result list:

$$BS(o, c) = \frac{\sum_{k=1}^{N_W} c_k \cdot o_k}{\sqrt{\sum_{k=1}^{N_W} c_k^2} \sqrt{\sum_{k=1}^{N_W} o_k^2}} \qquad (12)$$
TF-IDF is a measure of the importance of a word in a document that belongs to a collection. TF is the Term Frequency, the number of times that the word appears in the current text. IDF is the Inverse Document Frequency, the inverse of the number of documents in the collection in which the word appears. This procedure is based on the vector space model for information retrieval [53]. A parallel application of this type of metric has been developed by Yahoo for the Y!Q service [54], which can perform contextualized searches based on the content of the web page in which the search box is located.
In (12), $o$ represents the learning object to be ranked, $c$ is the course where the object will be used, $c_k$ is the $k$th component of the TF-IDF vector representing the keywords extracted from the course description, $o_k$ is the $k$th component of the TF-IDF vector representing the text in the object description, and $N_W$ is the dimensionality of the vector space (the number of different words).
Example. We assume that an instructor creates a new lesson inside an LMS with the following description: "Introduction to Inheritance in Java." The instructor then searches for learning objects using the term "inheritance." The result list is populated with three objects. $o_1$ has as description "Introduction to Object-Oriented languages: Inheritance," $o_2$ has "Java Inheritance," and $o_3$ has "Introduction to Inheritance." The universe of words, extracted from the descriptions of the objects, would be ("introduction," "inheritance," "java," "object-oriented," "languages"). The TF-IDF vector for the terms in the lesson description is then (1/2, 1/3, 1/1, 0/1, 0/1). For the description of object $o_1$, the vector is (1/2, 1/3, 0/1, 1/1, 1/1). For $o_2$, it is (0/2, 1/3, 1/1, 0/1, 0/1). For $o_3$, it is (1/2, 1/3, 0/1, 0/1, 0/1). The cosine similarity between the vector of the lesson description and that of $o_1$ is 0.20. For $o_2$, it is 0.90, and for $o_3$, it is 0.51. The order of the final result list ranked by BS would be $(o_2, o_3, o_1)$.
Data and initialization. To calculate the BS metric, the only information needed is the text available in the context and the object metadata. The contextual text should be provided at QT. The information needed to bootstrap this metric is a corpus of the text available in the object metadata, from which the IDF value of each word is obtained.
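A sketch of the BS computation, using the simplified IDF = 1/document-frequency that the worked example implies (a production system would typically use a logarithmic IDF; the descriptions below are those of the example, lowercased and pre-tokenized):

```python
import math

# Basic Situational relevance (BS): cosine similarity between the TF-IDF
# vector of the context text and that of each object description.
def tfidf(text, vocab, df):
    words = text.split()
    return [words.count(w) / df[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["introduction object-oriented languages inheritance",  # o1
        "java inheritance",                                    # o2
        "introduction inheritance"]                            # o3
vocab = ["introduction", "inheritance", "java", "object-oriented", "languages"]
df = {w: sum(w in d.split() for d in docs) for w in vocab}

lesson = tfidf("introduction inheritance java", vocab, df)
scores = [cosine(lesson, tfidf(d, vocab, df)) for d in docs]
# o2 ranks highest, then o3, then o1
```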
4.3.2 Context Similarity Situational Relevance Ranking (CSS) A fair representation of the kind of objects that are relevant in a given context can be obtained from objects that have already been used under similar conditions. For example, if we consider the case where the course represents the context, the objects already present in the course are a good representation of what is relevant in that context. Similar to the calculation of the BP metric, the objects contained in the course are "averaged" to create a set of relative frequencies for different fields of the learning object metadata record:

$$\mathrm{freq}_c(f, v) = \frac{1}{N_{O_c}} \sum_{i=1}^{N_{O_c}} \mathrm{eq}(f, v, o_i) \qquad (13)$$
This set of frequencies is then compared with the objects in the result list. The relative frequencies of the values present in the object's metadata are added to compute the final rank value:

$$CSS(o, c) = \sum_{j=1}^{N_F} \mathrm{freq}_c(f_j, \mathrm{value}(f_j, o)) \qquad (14)$$
This method can be seen as creating a different user profile for each context (in this case, each course) in which the learner is involved. The method can also be applied to more complex descriptions of context. For example, if a query is issued during the morning, a frequency profile can be obtained from objects that the learner has used during similar morning hours. That "time of the day" profile can later be used to rank the result list using the same approach presented above. Other contextual descriptors that can be used are place, type of task, access device, and so forth.
In (13) and (14), $o$ represents the learning object to be ranked, $c$ is the course where the object will be used, $o_i$ is an object contained in the course $c$, $f$ represents a field in the metadata standard, and $v$ is a value that the field could take. $\mathrm{value}(f, o)$ returns the value of the field $f$ in the object $o$. $f_j$ is a field considered for the calculation of the metric and $N_F$ is the total number of those fields. $\mathrm{eq}(f, v, o)$ is presented in (7). Here, $N_{O_c}$ represents the number of objects contained in the course.
Example. We assume that a learner issues a query from course $c$. Course $c$ contains three objects: $o_1$, $o_2$, and $o_3$. $o_1$ is a flash animation whose duration is between 0 and 5 minutes and is intended for higher education. $o_2$ is another flash animation whose duration is between 5 and 10 minutes, also for higher education. $o_3$ is a 20-minute video, also targeted at higher education. The profile for that specific course will be {Format: ("flash", 0.66), ("video", 0.33); Duration: ("0-5 min", 0.33), ("5-10 min", 0.33), ("10-30 min", 0.33); Context: ("higher education", 1.0)}. The result list contains the following objects: $o_4$, a text document with an estimated learning time of 1 hour, for higher education; $o_5$, a video whose duration is between 0 and 5 minutes, targeted at primary education; and $o_6$, a flash animation whose duration is between 10 and 30 minutes, targeted at higher education. The CSS value for $o_4$ is $0 + 0 + 1 = 1$. For $o_5$, it is 0.66. For $o_6$, it is 2. The order of the final result list ranked by CSS would be $(o_6, o_4, o_5)$.
Data and initialization. The CSS metric depends on the contextual information that can be captured during previous interactions of the user with the learning objects, as well as at QT. The most basic context that can be obtained from an LMS is the course from which the user submitted the query. Using the course as the context also makes it easy to capture information about the objects previously used in the same context, helping to bootstrap the metric. Nevertheless, more advanced context definitions can be used to calculate variations of this metric, at the cost of more detailed logging of user actions.
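CSS reuses the BP machinery with a per-course profile; a sketch with toy metadata mirroring the example (field names and helper names are invented for illustration):

```python
from collections import Counter

# Context-Similarity Situational relevance (CSS): the profile is built from
# the objects already published in the course, then each candidate object
# is scored by the summed frequencies of its metadata values.
def course_profile(course_objects, fields):
    prof = {}
    for f in fields:
        counts = Counter(o[f] for o in course_objects)
        prof[f] = {v: n / len(course_objects) for v, n in counts.items()}
    return prof

def css(obj, prof):
    return sum(prof[f].get(obj[f], 0.0) for f in prof)

fields = ["format", "duration", "audience"]
in_course = [{"format": "flash", "duration": "0-5",   "audience": "higher"},
             {"format": "flash", "duration": "5-10",  "audience": "higher"},
             {"format": "video", "duration": "10-30", "audience": "higher"}]
prof = course_profile(in_course, fields)

candidates = {"o4": {"format": "text",  "duration": "30-60", "audience": "higher"},
              "o5": {"format": "video", "duration": "0-5",   "audience": "primary"},
              "o6": {"format": "flash", "duration": "10-30", "audience": "higher"}}
scores = {o: css(m, prof) for o, m in candidates.items()}
# scores: o4 -> 1.0, o5 -> 0.66, o6 -> 2.0
```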
1. Obtain human-generated values of relevance (explicitly or implicitly) for different result lists.
2. Calculate the metrics for the same objects.
3. Use the metric values as input and the human-generated relevance values as output to train a machine learning algorithm.
4. Use the resulting trained machine learning model as the final ranking metric.
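The four steps above can be sketched with a simple linear model standing in for the machine learning algorithm (the paper's experiments used RankNet; the metric values and relevance judgments below are invented, and with four training examples and four weights the fit here happens to be exact):

```python
# Steps 1-3: human relevance judgments (y) for objects whose four metric
# values (BT, CST, IT, BP) are known (X); fit a linear model by solving
# the resulting system. Step 4: the model itself becomes the ranking metric.
def solve(A, b):
    """Gaussian elimination with partial pivoting for an n x n system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

X = [[1.0, 3.0, 5.0, 1.3],   # invented (BT, CST, IT, BP) metric vectors
     [1.8, 2.0, 3.0, 2.0],
     [0.8, 1.0, 2.0, 0.3],
     [0.0, 0.0, 1.0, 0.0]]
y = [0.8, 0.9, 0.3, 0.1]     # invented human relevance judgments
w = solve(X, y)

def rank_score(metrics):
    """Final ranking metric: the trained model's predicted relevance."""
    return sum(wj * mj for wj, mj in zip(w, metrics))
```

In practice there are far more judged objects than weights, so the model would be fitted by least squares or a pairwise method such as RankNet rather than solved exactly.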
• Information about the usage of the learning objects, as well as the context where this use took place, can be converted into a set of automatically calculable metrics related to all the dimensions of relevance proposed by Borlund [ 36] and Duval [ 12]. This information can be obtained implicitly from the interaction of the user with the system.
• The evaluation of the metrics through an exploratory study concludes that all the proposed basic metrics outperformed the ranking based on a purely text-based approach. This experiment shows that the most common of the current ranking methods is far from optimal, and that the addition of even simple metrics could improve the relevance of the results for LOR users.
• The use of methods to learn ranking functions, for example, RankNet, leads to a significant improvement of more than 50 percent over the baseline ranking. This result is very encouraging for the development of ranking metrics for learning objects, given that this improvement was reached with only four metrics as contributors to the ranking function.
• X. Ochoa is with the Centro de Tecnologías de Información, Escuela Superior Politécnica del Litoral, Campus Gustavo Galindo, Via Perimetral Km. 30.5, Apartado Guayaquil 09-01-5863, Ecuador. E-mail: email@example.com.
• E. Duval is with the Department of Computer Science, Katholieke Universiteit Leuven, Celestijnenlaan 200 A, B-3001 Leuven, Belgium. E-mail: Erik.Duval@cs.kuleuven.be.
Manuscript received 21 Mar. 2008; accepted 19 June 2008; published online 17 July 2008.
For information on obtaining reprints of this article, please send e-mail to: firstname.lastname@example.org, and reference IEEECS Log Number TLTSI-2008-03-0030.
Digital Object Identifier no. 10.1109/TLT.2008.1.
Xavier Ochoa received the degree in computer engineering from Escuela Superior Politécnica del Litoral (ESPOL), Guayaquil, Ecuador, in 2000 and the master's degree in applied computer science from Vrije Universiteit Brussel, Brussels, in 2002. He is an associate professor in the Faculty of Electrical and Computer Engineering, ESPOL, where he coordinates the research group on technology-enhanced learning at the Information Technology Center (CTI). His main research interests revolve around measuring the learning object economy and its impact on learning.
Erik Duval is a professor in the research unit on Hypermedia and Databases, Department of Computer Science, Katholieke Universiteit Leuven, where he teaches courses on human-computer interaction, multimedia, problem solving, and design. His current research interests are metadata in a wide sense, learning object metadata in particular, and how they enable finding rather than searching; global learning infrastructure based on open standards; human-computer interaction in general, and in a learning or digital repository context in particular, so that we can "hide everything but the benefits." He serves as a copresident of the ARIADNE Foundation, the chair of the IEEE LTSC working group on learning object metadata, and a member of the Scientific and Technical Council of the SURF Foundation. He is a fellow of the Association for the Advancement of Computers in Education (AACE) and a member of the ACM and the IEEE Computer Society.