1. From teaching deep domain knowledge in an array of disciplines toward teaching more general or meta-knowledge—principles that apply across areas, methodologies for answering research questions, and strategies for searching effectively and evaluating the credibility of information found.
2. In providing a physical social environment of peers. The university creates the notion of a "cohort" of others just like the learner, at the same stage of learning, so that they can communicate, collaborate, share, and compare with each other.
3. In providing motivation or incentives for learning, as well as certification and credentials. This becomes an increasingly important role of educational institutions, especially universities [20]. Interestingly, while credentials don't matter on the Internet, they remain very important in the real world for getting a job, and their importance will only increase. The reason is that, since, as mentioned before, the quality of learning that one can receive on the Participative Web cannot be guaranteed, credentials are required to ensure that job applicants have the needed skills and knowledge. This leads to increasing competition among universities for high ranking and reputation.
1. Learner-Centered, in Context. The Digital Natives initiate their learning experiences. They are purpose-driven, self-centered, and should always feel in control. Thus, social learning environments are no longer stand-alone isolated systems that "teach" the user. Instead, like epiphytes, they harvest and connect existing resources—content and people—from the Participative Social Web. Like search engines or "knowledge navigators," they respond to a learner's query to provide the best learning resources available.
The results are ranked in an order that depends on the learner, her context, purpose, and pedagogy. The results can also be sequenced, or the accompanying links or tags readjusted, so that the learner can maximize her browsing exploration around the results. Decisions or adaptations made for the benefit of the learner should be unobtrusive; the user has to remain in control and be able to steer these decisions/adaptations. Therefore, a social learning environment needs to:
• Help the learner find the "right puzzle piece" of knowledge.
• Help the learner find the "right" people—to collaborate or play with, to teach the learner, or to help find the answer—the missing "right puzzle piece."
2. Make Learning More Gratifying. Digital Natives cannot be easily coerced into learning, unlike previous generations. They need to be convinced, motivated to explore, and rewarded for achievement. Their need for instant gratification can be exploited and they can be "seduced" into learning by providing the right amounts of challenge, achievement, and rewards, similar to how players of online games are seduced into striving to achieve higher levels of skill.
Learning happens both by consuming and producing knowledge. For example, contributing to the collaborative writing of Wiki articles can be an effective way of learning [13] and contributing to an online discussion forum is widely acknowledged as being a valuable learning experience. However, engaging learners in the collaborative production of knowledge, in discussion or writing, is not easy. Therefore, a social learning environment needs to:
• create a feeling of achievement/self-actualization,
• tie learning more explicitly to social achievement related to status/reputation in the peer group, and
• tie learning more explicitly to social rewards in terms of marks and credentials.
To fit into the page constraints of this paper, in the next few sections I will discuss only work that addresses the learner-centered challenge in tandem with either the challenge of supporting social learning in context or the gratification challenge, rather than providing a full overview of related work.
2.1.1 What Is <whatever>? Representing Semantics
To be able to find the right stuff for the context, it is necessary to be able to distinguish among the different aspects and characteristics of content, context, learner, and pedagogy. For this, it is necessary to agree on the semantics. A lot of research has addressed semantic interoperability on the Web. Some form of annotation or metadata is necessary to distinguish among the content and to enable search. However, who defines the standard "dictionary" to be used in the metadata? Many metadata standards have been proposed specifically for learning objects, e.g., MERLOT, LOM, etc. However, they allow capturing mostly simple semantics. To allow for richness and consistency in the annotations, ontologies have been proposed as the basis for standards for content, learner characteristics, pedagogies, and learning context [30]. At the heart of the Semantic Web, ontologies allow complex objects and the relationships among them to be represented. Therefore, ontologies allow for very powerful representations of meaning in any domain and support sophisticated reasoning, recommendation, and adaptation mechanisms.
The problem is that ontologies are very hard to engineer, despite the availability of editing tools like Protégé. Just as MS Word does not help much in writing a meaningful essay, creating a consistent map of meaning in a given domain depends on the skill and art of the knowledge engineer and always reflects his or her individual viewpoint and understanding of the domain. As soon as the domain gets more realistic, complex, and interesting, people's viewpoints no longer agree and they start interpreting things in different ways. Different communities use different naming conventions. It is hard or impossible to agree on the semantics of relationships—people interpret them in different ways, even when they agree that there is a relationship between two entities.
In reality, most of the systems that claim to use ontologies are based on taxonomies or topic maps (categories connected with certain types of relationships). These simpler semantic representations allow straightforward mapping of terms using applications like WordNet. Just like translating, word by word, a text in a foreign language using a dictionary, this kind of mapping excludes the semantics of the relationships among objects (the relations can be mapped in name only). Research is currently going on to allow for more advanced structural ontology mapping, but the practical application of such mapping is still limited.
Even if there is agreement among designers about using a particular simple ontology, from a user's point of view, it imposes a cumbersome and inconvenient way of organizing or finding content. I will illustrate this with an experience with an early version of our Comtella system.
2.1.2 Finding Stuff with an Ontology-Based Interface
The first version of Comtella [5] was developed in the MADMUC Lab at the University of Saskatchewan in 2002/2003. It was a peer-to-peer system, similar to the music-sharing systems Kazaa and LimeWire, that allowed graduate students and faculty in the Department of Computer Science to share research articles of interest. In order to share an article found on the Web, the user had to enter the URL and select the content/semantic category of the paper. Categories were used to support search, since Comtella did not support full-text search of the papers. We used an "ontological" approach to content organization by adopting the ACM category index, which we considered a standard content indexing scheme for computer science.
Yet, finding the required category in a subject category index is challenging, as any author who has ever published a paper with the ACM or the IEEE knows. So, we simplified the ontology by selecting only the top categories corresponding to areas in which our department has active research projects and limiting the depth of the topic hierarchy to three (instead of six—the depth that the ACM category index reached at that time). We then organized the ontology as a set of three hierarchically nested menus (see Fig. 2), which were used to annotate a new contribution to the system and to search for articles that had been shared. The system saw very little use—the users indicated that it took too much effort both to categorize a contribution and to find one using the menus. The biggest difficulty for users was not knowing the "map" of categories: To access the right level-3 menu, one had to make the right choices in the level-1 and level-2 menus. Some of the top categories were vague and the users had no clue where the particular level-3 category they were looking for was hidden. This confusion resulted from the absence of a common agreement about how categories in computer science should be structured. The ACM category index, like any other category list, even though likely resulting from the dedicated work of a committee of experts, reflects the particular viewpoint of the committee that designed it. Luckily, in the design of the top-level menu, we had included a category "All" (Other) in case the user was not able to find the category they were looking for. Interestingly, most of the user contributions were labeled under this category. The users explained later that this was the easiest way to share an article. It also emerged as the easiest way to search for articles, since all the articles were in this category. Yet, it worked well only because there weren't too many articles shared and the list of results remained tractable.
However, categorizing everything in the category "All" is the same as not having categories at all. This is an anecdotal confirmation of the "Everything is miscellaneous" postulate by David Weinberger in his recently published book [47], which discusses whether it is possible to impose one ontology or unified classification schema on diverse and autonomous users. Weinberger answers this question negatively, with many convincing examples, and argues for collaborative tagging, a simpler, non-AI-based approach supporting "findability" rather than "interoperability."
2.1.3 Annotating and Finding Content through User Tagging and Folksonomies
In contrast to the predefined "ontologies," which users/developers have to adopt, the main idea of collaborative tagging is to let users tag content with whatever words they find personally useful. In this way, for example, the tags "X1" and "My Project" may be perfectly meaningful for a given individual at a given time, even though they have no meaning for anyone else and probably won't have any meaning for the same user a couple of months later. However, users who have a lot of content will have to use more informative tags to be able to find their own stuff at a later time. Also, users who share content will use tags that they expect to be meaningful to other users; otherwise, it does not make sense to share the content at all. In this way, with many users tagging the same documents, the pool of tags will capture some essential characteristics of the documents, possibly their meaning. Thus, "folksonomies," developed by collaborative communities of users tagging a pool of documents, emerge as an alternative to ontologies. Folksonomies provide a user-centered approach to semantic annotation because selfish users tag for themselves. Tags are very easy to add and there is no need to agree in advance on the semantics, taxonomy, relationships, or metadata standard. The tags can express different semantic dimensions: content, context, pedagogical characteristics, learner type, and media type. The tag clouds found in many Web 2.0 systems provide a summary of the documents in the repository. They are useful to guide users in their browsing and provide a very intuitive and easy search interface: instead of typing a query, the user just clicks on a tag. Different font sizes indicate the popularity of each tag, which gives an idea of the semantics of the entire content collection at a glance.
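As an aside, the familiar size scaling of a tag cloud can be produced with a simple logarithmic mapping from tag frequency to font size. The sketch below is illustrative only; the pixel bounds and the log scaling are common conventions, not details taken from any particular system.

```python
import math

def tag_cloud_sizes(tag_counts, min_px=10, max_px=32):
    """Map raw tag frequencies to font sizes using log scaling,
    so a few hugely popular tags don't dwarf everything else."""
    lo = math.log(min(tag_counts.values()))
    hi = math.log(max(tag_counts.values()))
    span = (hi - lo) or 1.0  # avoid division by zero when all counts are equal
    return {tag: round(min_px + (math.log(n) - lo) / span * (max_px - min_px))
            for tag, n in tag_counts.items()}

# Hypothetical tag frequencies for illustration.
sizes = tag_cloud_sizes({"wiki": 120, "e-learning": 45, "ontology": 8})
```

The most popular tag gets the maximum font size, the rarest the minimum, and everything else is interpolated on a log scale, which matches the at-a-glance popularity cue described above.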
For a person searching for something, it is acceptable if a document of potential interest is not found because it was tagged using a different language or different terminology (or ontology). The abundance of content guarantees that there will be something found that is suitable. Similarly to the collaborative knowledge negotiation process in Wikipedia, the abundance of actively tagging users guarantees that the quality of tags will improve under public scrutiny, especially for documents that are not too far to the right on the long tail (in which too few users are interested).
However, the tags in a folksonomy are not machine-understandable. From the machine's point of view, the tags are discrete labels with no relationship among them. The tags can be used for retrieval but not for machine reasoning and decision making. A machine cannot say how two documents, tagged with the same tag(s), are semantically related to each other or why they are similar. So, it would be, for example, very difficult to create a sequence of content from tagged learning objects. Tags are good for a "one-shot" retrieval by a user but are insufficient for inference or reasoning. There has been some interesting work comparing whether folksonomies capture the semantics of a document as well as automatic term extraction. Brooks and Montanez [3] did an experiment with the 250 most popular tags in Technorati. They grouped documents that share tags into clusters and then compared the similarity of all documents within a cluster. Their hypothesis was that documents in a cluster that shared a tag should be more similar to each other than a randomly constructed set of documents. As a benchmark, they also compared clusters of documents known to be similar (based on Google search results for the tag). Finally, they constructed tags automatically by extracting relevant keywords from documents and used these tags to construct clusters of documents. This was intended to determine whether humans were better at categorizing articles by tagging than automated techniques (semantic lexical analysis). The results showed that articles sharing a tag had a pairwise similarity of 0.3; the articles considered similar by Google had 0.4. Automated tagging performed best—the articles that shared the three words with the highest TF-IDF yielded similarities between 0.5 and 0.7 (mean 0.6). The authors then applied agglomerative clustering over the tags and obtained a hierarchy of tags very similar to a hand-made taxonomy of tags which can be browsed. This is an interesting and potentially useful result.
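The core measurement in this experiment can be reproduced in miniature. The sketch below builds TF-IDF vectors in plain Python and computes the mean pairwise cosine similarity within a cluster of documents that share a tag; the tiny corpus is invented for illustration and the weighting scheme is one common TF-IDF variant, not necessarily the one Brooks and Montanez used.

```python
import math
from collections import Counter
from itertools import combinations

def tfidf_vectors(docs):
    """Build one sparse TF-IDF vector (dict) per tokenized document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    return [{t: c / len(doc) * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in docs]

def cosine(a, b):
    """Cosine similarity of two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def mean_pairwise_similarity(cluster):
    """Average similarity over all document pairs in a cluster."""
    sims = [cosine(a, b) for a, b in combinations(cluster, 2)]
    return sum(sims) / len(sims)

# Hypothetical corpus: two documents sharing a tag's vocabulary, one unrelated.
docs = [["wiki", "learning", "tags"],
        ["wiki", "learning", "ontology"],
        ["auction", "market", "game"]]
vecs = tfidf_vectors(docs)
```

On such a corpus, the two topically related documents score a higher pairwise similarity than the unrelated pair, which is exactly the kind of contrast the experiment quantified at scale.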
Automatic ontology generation based on tagged documents may be a promising direction of research avoiding some of the pitfalls in ontology research so far (the need for agreement among experts and the development of standards that are then imposed on others to follow). Yet, there is no guarantee that the automatically generated ontology will be understandable for humans and that humans will agree with it. On the other hand, agreement may not be necessary, since for practical reasons, the ontology is better used by the machine and hidden away from human eyes. The humans can deal with tags, which are user-friendly; the ontology should stay in the background to support machine reasoning and more complex inference and adaptation.
2.1.4 Combining the Strengths of Ontologies and Folksonomies
Brooks and Montanez [4] also proposed an interesting approach that combines the powers of tags and ontologies and puts the human in the center. Machine learning/data mining is used to extract meaningful tags from text. The machine-generated folksonomy is augmented using existing ontologies, a process referred to as "snap to grid" by Gruber [15]. The resulting tags are provided as suggestions to the user, who decides whether to add them or not. Ultimately, the ease of use provided by the tags is preserved; the user is in control, empowered by an invisible intelligent mechanism and ontologies in the background [4]. These are exactly the main features that we want in social learning environments: user-centered (supporting a selfish user), easy to use, intelligent, adaptive, providing recommendations, and invisible in the background.
2.1.5 User Interfaces Supporting Exploratory Search
An important problem remains: how to develop user interfaces that allow convenient search and, at the same time, convey an intuitive idea about the structure of information so that the user can navigate by browsing. As shown in Section 2.1.2, interfaces based on ontologies are not a good solution. It is better to develop interfaces that reveal a structure focused on the perceived purpose of use and make explicit only those dimensions of knowledge or information that are relevant for the purpose. There may be a need to create such interfaces and structures for a variety of purposes; in this case, the environment should be able to decide automatically which interface should be activated depending on the anticipated or explicitly declared purpose of the user. Next, I will briefly present an illustration—an interactive visualization of social history that allows a user to perform exploratory search in a blog archive. The visualization is shown in Fig. 3.
It uses three dimensions (semantic categories): time (horizontal axis), content (posts, vertical axis, upward), and people (comments of readers, vertical axis, downward). In the space defined by these three dimensions, each post is represented as a dot on the horizontal time axis. A line stretching up from each post indicates the length of the post. A line stretching down from each post indicates the average length of the comments made on the article. The downward line ends with a bubble whose size represents the number of comments received. In the upper (content) part of the space, tags indicate the semantics of posts written in the corresponding period of time, while, in the lower (people) part of the space, user names indicate the commentators of the posts during that period. In a case study, users of the interactive visualization found it easy, intuitive, and efficient to find blog posts of interest in a large archive [19]. What makes this approach interesting is that it allows the combination of several fundamentally different ways of searching for information (corresponding to different purposes) that are not possible in current state-of-the-art tag-based systems: by time, content, and social interaction history. Providing a map-like overview of a blog archive allows the user to filter a large amount of information and discover interesting posts. It adds the power of query-based search to the freedom of browsing and the usability of a tag-oriented user interface.
• make it game-like, a combination of challenge and fun,
• boost the feeling of achievement by providing constant feedback on performance,
• relate performance to status in peer group (social reward), and
• relate performance to marks or credentials.
3.3.1 Incentive Mechanism Rewarding Student Participation with Status
The mechanism uses a utility matrix that assigns a certain number of reward points to desirable actions. In the context of Comtella, these are the following participation actions: logging in, downloading/reading an article, sharing a new article, rating an article, and commenting on an article. The reward for each type of action depends on how desirable the action is, which in turn depends on the goals of the designer and moderator of the community. For example, sharing new articles is very beneficial since it provides the community with materials to read and helps to overcome the "cold start" on a given topic. On the other hand, downloading/viewing/reading an article is useful for an individual student's learning of the topic. Logging in may not directly contribute to the community if the student remains a lurker, but it shows that the individual is keeping track of what is going on in the community, which should be encouraged. The goals of the community designer or moderator may be dynamic, and the rewards given for different related student actions may also be dynamic.
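In its simplest static form, such a utility matrix is just a mapping from action types to points. The values below are invented for illustration; the actual Comtella payoffs were deliberately not disclosed to students.

```python
# Hypothetical reward values reflecting the relative desirability of actions:
# sharing helps the whole community most, logging in least.
REWARDS = {"login": 1, "read": 2, "rate": 3, "comment": 4, "share": 10}

def weekly_points(action_log):
    """Total the reward points earned from a week's worth of logged actions."""
    return sum(REWARDS.get(action, 0) for action in action_log)

# A student who logged in twice, shared one article, read one, and rated one.
pts = weekly_points(["login", "share", "read", "rate", "login"])
```

A dynamic version, as hinted at in the text, would replace the fixed dictionary with a function of time and of the student's history; a sketch of that appears later with the second version of the mechanism.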
The points accumulated by each student through her participative actions allow students to be classified into different status levels, for example, gold, silver, or bronze. Explicit status categorization is used in many customer loyalty programs, such as the Star Alliance group of airlines, which awards certain status and related privileges to frequent flyers. The success of this marketing approach (part of a whole range of approaches in customer relationship management) can be explained by social psychology through the social comparison theory [11] and the theory of discrete emotions (fear). According to the social comparison theory, people strive to achieve higher status not just to gain the privileges (utility) associated with it. Belonging to an exclusive or elite club increases the individual's self-esteem because of downward comparison with people who have not achieved the same status.
It is important that status can be lost as a result of inactivity. Airlines usually award a certain level of status for one year, based on the miles accumulated during the previous year. According to the theory of discrete emotions (fear), people are generally more motivated by the fear of losing something they have than by the prospect of acquiring it. Therefore, people who have achieved a higher status, even once, will try to avoid losing it.
In the first version of Comtella that was used to support a class of students taking an Ethics and IT course, we defined three status levels: Gold (the top 10 percent of the students based on their participation points), Silver (the next 60 percent of the students), and Bronze (the remaining 30 percent of the students). The status was valid for one week and was based on the points collected from the participation actions of the student during the previous week. The status was displayed in the interface of the application as a shiny metallic card in the top left corner. By clicking on it, the student saw their level of participation in each of the rewarded activities compared to the top student in that activity. A social visualization accessible from the main interface of the application showed all of the students as stars of different sizes in a night sky, encouraging students to engage in social comparison. The students could choose to view the stars sorted by status, by number of new contributed papers, by number of downloaded papers, and also by login frequency (see Fig. 5). The incentive mechanism was introduced in Comtella in the middle of the term. The students used Comtella for 6 weeks without the incentive mechanism and for 4 weeks with the incentive mechanism. We saw a dramatic increase in the overall number of contributions during the first two weeks following the introduction of the mechanism and a decline in the next two weeks, but the contributions nevertheless remained at a level higher than in most of the weeks before introducing the mechanism. We also observed a correlation (0.66) between the number of new contributions shared by individual students and their accesses to the social visualization, which displayed the students compared by number of new contributions as a default view [37]. This shows that the students engaged in social comparison.
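The weekly status assignment amounts to a percentile ranking over accumulated points. The sketch below is a minimal interpretation of the 10/60/30 split; tie-breaking and rounding behavior are assumptions, not details from the paper.

```python
def assign_status(points_by_student):
    """Rank students by weekly points: top 10% Gold, next 60% Silver,
    remaining 30% Bronze."""
    ranked = sorted(points_by_student, key=points_by_student.get, reverse=True)
    n = len(ranked)
    gold_cut = max(1, round(0.10 * n))        # ensure at least one Gold student
    silver_cut = gold_cut + round(0.60 * n)
    status = {}
    for i, student in enumerate(ranked):
        status[student] = ("Gold" if i < gold_cut
                           else "Silver" if i < silver_cut
                           else "Bronze")
    return status

# Ten hypothetical students with strictly decreasing weekly points.
s = assign_status({f"s{i}": 100 - 10 * i for i in range(10)})
```

Because status is recomputed each week from the previous week's points only, a Gold student who goes inactive drops back automatically, which is what engages the loss-aversion effect discussed above.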
In fact, the mechanism was too successful in encouraging participation; it encouraged "gaming," or students submitting low-quality papers to achieve a higher status. This resulted in an excessive number of contributions during the second week after introduction of the mechanism, which led to cognitive overload and withdrawal of some students. We learned several lessons from this experience. First, in the next version of the mechanism, we needed to encourage students to submit high-quality papers. Second, to enable a metric for paper quality, we needed to encourage students to rate papers more often. Third, we needed to stimulate contributions early in the week: we found that most of the contributions came late in the week, when there was little time left for the students to read and rate them [8].
Gaming the system is a phenomenon that occurs almost whenever an incentive mechanism is in place. According to Levitt and Dubner [25], people are economic creatures who always try to minimize their efforts and maximize their rewards. In education, gaming the system is frequently found, for example, in course work as plagiarism or in exams as cheating. There are more sophisticated forms of gaming the system that aren't easily caught or punished, e.g., finding a critical path toward receiving a degree by selecting the easiest classes, persuading instructors to waive prerequisites, and so on. Generally, gaming the system means finding and exploiting loopholes in the rules of the incentive mechanism to gain advantage. This happens when students are under high pressure or when there aren't strong deterrents in place, such as strong penalties and successful policing. In the online world, gaming the system has been found in all online communities that make use of incentive mechanisms, e.g., Slashdot [24], in multiplayer online games, and even in students interacting with intelligent tutoring systems [2].
In game theory, a good mechanism is one that can be proven not to be gameable. Yet, in practical mechanism design, it is very hard to find such mechanisms outside very constrained domains, like markets and auctions, since the rules and their possible interactions are very complex. In practice, designers often try to obscure the rules so that it is more expensive for students to find ways to game the system than to put in the required effort. Slashdot, for example, does not publish its algorithms for awarding "karma." We, similarly, did not tell the students how many points they were awarded for different participation actions; they had to explore this themselves. The exploration was made more challenging by displaying students' participation statistics only for the previous week. The limited data meant that the students needed to keep track of all their actions for a full week in order to discover how many points each action earned.
As a result of the lessons learned from the first version, we developed in 2004/2005 a new version of the Comtella incentive mechanism that differed from the first in the following aspects:
1. Dynamic, adaptive rewards were used for desirable actions instead of predefined rewards. Rewards differed in time and by students depending on the current community needs and the contribution history of the student.
2. The visualization displayed a new dimension for students to compare with each other—the reputation of a student for bringing high-quality contributions, which was based on the ratings earned from other students.
3. Another incentive mechanism was used in conjunction with the status-based one. The purpose of this mechanism was to encourage students to rate the contributions of their colleagues by rewarding them with a virtual currency (Cpoints) for each act of rating.
These aspects are explained below in more detail. The needs of the community change and evolve over time. It was important that students submit new contributions early in the week, so that their colleagues had time to read and rate them before the topic changed the week after. Therefore, more points were awarded for papers shared early in the week than for papers shared late. As the number of shared papers increased over the week, it became increasingly important to rate the papers so that they could be sorted by quality, helping students find better papers to read. Therefore, the points awarded for rating papers increased with time. We defined a time-dependent reward function which reflected the community needs over time and served as a community model.
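A minimal sketch of such a time-dependent reward function follows, assuming simple linear schedules over a seven-day week; the base values and the linear shape are illustrative assumptions, since the actual Comtella functions are not specified here.

```python
def share_reward(day, base=10):
    """Sharing a paper early in the week earns more points.
    day: 0 (start of week) .. 6 (end of week)."""
    return base * (7 - day) / 7

def rate_reward(day, base=3):
    """Rating papers becomes more valuable as submissions accumulate,
    so the reward for rating grows toward the end of the week."""
    return base * (day + 1) / 7
```

Together, the two schedules encode the community model: the share reward decays from its full value on day 0 to near zero by day 6, while the rating reward does the opposite, nudging students toward whichever action the community needs most at that moment.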
Students are different. Some students tend to contribute a lot of lower quality articles, while others are more selective about their submissions. Therefore, an individual user model was created for each student, comprising two values: the average quality of the student's contributions, computed from the ratings each of his or her papers received from all students, and the quality of the student's own ratings, computed by comparing those ratings to the average rating the same papers received from other students. Based on the individual and community models, individual weights were computed for the actions of each student (the payoff matrix). The total number of points determined the student's status for the new week, which brought different feedback and privileges: a different color scheme in the interface, a higher number of ratings a student could give out, and personal complimentary messages. The status was reflected in the community visualization (Fig. 6). The visualization differed from the earlier one by showing only one view, the same for all students [36]. The stars representing the individual students looked more like real stars and differed by size (representing the number of papers contributed), color (representing the status of the student), brightness (showing the reputation of the student based on the ratings of his or her contributions), and whether the student was online at the moment.
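The two values of the individual model can be sketched as follows. The aggregation details (simple averages, a normalized 0..1 rating scale for the second measure) are illustrative assumptions rather than the paper's exact formulas.

```python
def contribution_quality(ratings_received):
    """Average quality of a student's contributions.
    ratings_received: one list of peer ratings per contributed paper."""
    per_paper = [sum(r) / len(r) for r in ratings_received if r]
    return sum(per_paper) / len(per_paper) if per_paper else 0.0

def rating_quality(own_ratings, community_averages):
    """How closely a student's ratings track the community consensus:
    1 minus the mean absolute deviation, assuming a 0..1 rating scale."""
    devs = [abs(own, ) if False else abs(own - avg)
            for own, avg in zip(own_ratings, community_averages)]
    return 1.0 - sum(devs) / len(devs) if devs else 1.0
```

A student whose ratings consistently match what everyone else thinks of the same papers scores close to 1, while a careless or contrarian rater scores lower; together with the contribution quality, this weights the student's individual payoff matrix.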
Students were encouraged to rate the papers submitted by their colleagues through the introduction of a new extrinsic reward (Cpoints) that the students could use like currency. The students were able to spend their Cpoints to make their own submissions appear at the top of the search results for the week (similar to Google's sponsored links, see Fig. 7). In a controlled experiment, the incentive mechanism motivated students to submit twice as many ratings as students who didn't have access to the Cpoints mechanism [8].
3.3.2 Incentive Mechanism Rewarding the Student through Self-Actualization and Reciprocation
According to the Social Identity Theory [34], many users contribute not to seek higher status but to help a shared cause because they identify with the community and its goals. Users gain a feeling of self-actualization [26] by seeing how their contributions support the cause or help the community achieve its goals. In the third version of the class-support Comtella (2005/2006), we designed an incentive mechanism that exploited this type of motivation and rewarded users for desirable actions by visualizing the impact of these actions on the community. There were two desirable actions: reading postings shared in the system and rating them.
Of course, to maintain the existence of the community, we needed to ensure that there were a sufficient number of postings submitted to the system. Instead of rewarding postings through the system, we decided to reward students for postings through the larger incentive system that existed in each university class in terms of course work and marks. We incorporated the use of Comtella in the required coursework for the Ethics and IT class in 2005/2006. Students were required to contribute one new post/article each week, comment on the posts of two of their colleagues, and respond to a question asked by the instructor. The use of the course work incentive mechanism ensured a sufficient level of activity in the Comtella community. Now, we could apply our novel mechanism targeted at yielding more reads and more ratings.
More reads were needed because the original educational purpose of the Comtella system was to make students read additional, more up-to-date material related to the class. In order to ensure quality control and to enable students to find the good posts more easily, we needed students to rate a high number of postings. We encouraged rating by designing the interface so that an aesthetically pleasing animation appeared after each act of rating: The post that was rated would change colors through a scale from violet to bright orange and finish with a color either brighter or darker than before (depending on whether the rating was positive or negative—see Fig. 8). Thus, the student immediately saw the effect of his or her rating on the list of search results that everyone in the community would see as well—the posts with the highest ratings appeared brighter and in a larger font. This emphasized the contribution made to the community and created a feeling of self-actualization.
To encourage students to read other students' posts, we turned to the Reciprocation and Fairness Theory [10] and the Common Bond Theory [34]. According to many experiments in behavioral economics [10], people tend to reciprocate and strive for fairness in their interactions with others. According to the Common Bond Theory [34], people may contribute to a community because they want to engage in relationships with its members. To engage the students in reciprocal relationships, we hypothesized that giving them visual feedback about the relationships they develop by reading each other's posts would stimulate them to balance those relationships, making them more symmetrical and "fair." We designed a new social visualization representing the symmetry of the relationship between the viewer and every other student (see Fig. 9).
The visualization divided the 2D space along dimensions corresponding to how often the viewer-student read the postings of another student (Y-axis) and how often other students read the postings of the viewer-student (X-axis). Each student in the community was represented as a point in this space, with the viewer always at position (0, 0) in the lower left corner. The distance between the viewer and a given student along each axis depended on how "close" the relationship between them was. In the beginning, when no student had read any posts yet, the distance between the viewer and each of the other students was at its maximum, so every student had coordinates (1, 1), or "double invisible" (the viewer hadn't read anything by the student and vice versa). Therefore, at the beginning, all of the other students were clustered in the upper right corner of the square. Later, as the students read each other's posts, they moved down and to the left, and asymmetries arose. Some students emerged as "pop-stars" and moved toward the top left corner: their posts were frequently read by the viewer, but they were not aware of the viewer's posts. Other students became "secret admirers" of the viewer's posts and moved toward the bottom right corner. As seen in Fig. 9, most students evolved a symmetrical relationship with the viewer by slowly moving toward the lower left corner along the diagonal. Our hypothesis was that the visualization would stimulate students who were "pop-stars" to look at the students in their "secret admirer" corner, read their postings, and respond or comment on them, thus balancing their relationships. We confirmed this hypothesis in a one-term classroom experiment with a control and a test group: the test-group students engaged in more symmetrical relationships with their colleagues and read more articles. More details about the approach and its evaluation can be found in [31].
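A minimal sketch of this geometry, assuming a simple 1/(1 + n) decay from "double invisible" at (1, 1) toward the viewer at (0, 0) as reading accumulates (the decay function and names are illustrative assumptions, not the published implementation):

```python
def peer_position(viewer_reads_of_peer, peer_reads_of_viewer):
    """Map mutual read counts to a point in the unit square.

    With zero reads in both directions a peer sits at (1, 1), the
    "double invisible" corner; more reading moves the point toward
    the viewer at (0, 0).
    """
    y = 1.0 / (1.0 + viewer_reads_of_peer)   # Y-axis: viewer reading the peer's posts
    x = 1.0 / (1.0 + peer_reads_of_viewer)   # X-axis: peer reading the viewer's posts
    return x, y

def asymmetry(x, y):
    """Distance from the diagonal; 0 means a perfectly symmetrical relationship."""
    return abs(x - y)
```

Points near the diagonal correspond to the balanced, reciprocal relationships the visualization was designed to encourage; off-diagonal points mark the one-sided "pop-star" and "secret admirer" relationships.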
Of course, we had to make the common assumption made in many adaptive and recommender systems: that viewing a post (the student clicks on it) is the same as reading it. The system can only track the number of "views"; it cannot know whether the viewed posts were actually read. In general, the evaluation of all three incentive mechanisms in classroom experiments using the different versions of Comtella, with control and test groups of students, showed that each mechanism was very effective in stimulating the intended student behavior (contributing more papers, more ratings, or more reading). Both the Cpoints and the immediate-gratification approaches stimulated twice as many ratings in the test group as in the control group. We showed that an adaptive rewards mechanism could orchestrate the desired pattern of collective behavior: the time-adaptation of the rewards stimulated students to make contributions earlier. We also learned that it is important to make the student aware of the rewards for different actions at any given time. More details about the evaluation of these systems are available in [8], [36], [37], and [43].
1. The design of pedagogically grounded, learner-centered social learning environments is a long-term direction where much work is needed. I illustrated some aspects of the problem of finding appropriate content, e.g., annotation and recommendation, emphasizing approaches that are user-friendly and user-centered, such as tags, folksonomies, and interfaces that allow users to understand and manipulate collaborative recommendations by adjusting the influence of their friends or other users. However, I did not even scratch the surface of the problem of making recommendations pedagogically sound. Currently, most of the content on the Participative Web is not annotated with respect to pedagogy, and it is unrealistic to expect that pedagogical annotations will be contributed in sufficient volume to keep up with the amount of newly added content. Data mining may provide a solution: similar to the approach suggested by Brooks and Montanez [4], it may be possible to generate annotations automatically. Data mining based on usage analysis, as suggested by McCalla [29], may help identify successful patterns of learning and, in combination with collaborative filtering, provide pedagogically sound, even if unexplained, suggestions. Techniques for this will probably emerge from the new area of educational data mining.
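The collaborative-filtering step alluded to here can be illustrated with a toy sketch: recommend resources to a learner based on what similar learners rated highly. All data, names, and the similarity choice below are illustrative assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts {resource: rating}."""
    common = set(u) & set(v)
    num = sum(u[r] * v[r] for r in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(target, others, top_n=3):
    """Rank resources unseen by `target`, weighted by similarity to other learners."""
    scores = {}
    for ratings in others:
        sim = cosine(target, ratings)
        for resource, rating in ratings.items():
            if resource not in target:
                scores[resource] = scores.get(resource, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Such unexplained, similarity-based suggestions become "pedagogically sound" only if the usage data they mine reflects successful learning paths, which is exactly the open problem the paragraph raises.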
2. Another interesting direction is content sequencing. While the learner-centered postulate dictates that the learner is always in control, the learner's behavior can be subtly influenced by the environment: by reordering search results, changing the available links or tags, and providing an appropriate visual interface that lets the learner browse in a way that makes pedagogical sense. The interface of iBlogVis [19] did not have any underlying pedagogical principles or goals, but many adaptive hypermedia systems [6], [42] have manipulated links and their appearance according to pedagogical goals. How to do this with content that is not designed "in house" but is provided by users, however, remains an open question.
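One minimal form such pedagogically biased reordering could take is re-ranking search results by a blend of search relevance and an assumed pedagogical-fit score (e.g., how well a resource's difficulty matches the learner's level). The scoring formula and field names are illustrative assumptions, not a method from the text:

```python
def reorder(results, learner_level, alpha=0.5):
    """Re-rank (title, relevance, difficulty) tuples for a given learner.

    Fit is highest when a resource's difficulty matches the learner's
    level; `alpha` balances search relevance against pedagogical fit.
    """
    def score(item):
        _, relevance, difficulty = item
        fit = 1.0 / (1.0 + abs(difficulty - learner_level))
        return alpha * relevance + (1 - alpha) * fit
    return sorted(results, key=score, reverse=True)
```

Because only the ordering changes, the learner still sees and chooses among all results, which keeps the influence subtle and leaves the learner in control.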
3. Finding collaborators is a very important direction that has also not been explored much. Trusting people is more important than trusting content. Integrating trust and reputation mechanisms with expertise matching and pedagogical matching needs a lot more research.
4. The design of incentive mechanisms to encourage learning, exploration, participation, and contributions in social learning environments is still underexplored. Most experiments have been done either in closed systems where other strong incentives, such as course grades, were present (as in Comtella) or in large but closed systems where users participate for fun (e.g., MovieLens). It is not clear what incentives would effectively encourage a self-centered learner to explore and learn more complex knowledge during her fragmentary learning experiences when searching for information for a given narrow purpose.
5. The design and grading of coursework can be regarded as a mechanism design problem. The importance of coursework design and grading schemes increases with the trend of educational institutions becoming accreditation/certification authorities that attest to the learning achievements and knowledge students obtain both in formal learning environments and in self-directed learning on the Web. There is very little work on how such accreditations can be put in place and how grading will work, considering the freedom and lack of structure of such learning. In any case, grades have a very strong motivational effect on students, and an appropriate grading or coursework-weighting scheme can be used as an incentive mechanism to focus learners' attention and effort in a desired way. Treating the design of grading schemes for coursework as a mechanism design problem seems to be an interesting, unexplored area.
• The author is with the Computer Science Department, University of Saskatchewan, 178.8 Thorvaldson Bldg., 110 Science Place, Saskatoon, SK, S7N 5C9 Canada. E-mail: email@example.com.
Manuscript received 22 Dec. 2008; revised 5 Jan. 2009; accepted 7 Jan. 2009; published online 14 Jan. 2009.
For information on obtaining reprints of this article, please send e-mail to: firstname.lastname@example.org, and reference IEEECS Log Number TLT-2008-12-0117.
Digital Object Identifier no. 10.1109/TLT.2009.4.
Julita Vassileva is a professor in the Computer Science Department at the University of Saskatchewan, Canada. She is a coeditor of the International Journal of Continuing Engineering Education and Life-Long Learning. She serves on the editorial board of the User Modeling and User-Adapted Interaction Journal and as the vice-president of User Modeling Inc. Dr. Vassileva holds the NSERC/Cameco Prairie Chair for Women in Science and Engineering, one of five such regional chairs in Canada sponsored by NSERC.