an ontology-folksonomy visualization and interaction which offers an intuitive interface for the maintenance and manipulation of a domain ontology and a tag cloud;
an efficient and automatic method to compute relations among tags and domain concepts using measures of semantic relatedness (MSRs);
an ontology-based enhancement of semantic relatedness; this enhancement relies on ontology subsumption relationships to contextualize values of the measures of semantic relatedness.
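The specific MSR formulations are defined later in the paper; purely as an illustration, one common family of corpus-based MSRs scores a tag-concept pair by the cosine similarity of the two terms' co-occurrence profiles. A minimal sketch under that assumption (the function names and toy corpus are ours, not the paper's):

```python
from collections import Counter
from math import sqrt

def context_vector(term, documents):
    """Co-occurrence profile: counts of words appearing in documents that contain the term."""
    vec = Counter()
    for doc in documents:
        words = doc.lower().split()
        if term in words:
            vec.update(w for w in words if w != term)
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (0.0 when either is empty)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def relatedness(tag, concept_label, documents):
    """Relatedness in [0, 1]: cosine of the tag's and the concept label's profiles."""
    return cosine(context_vector(tag, documents),
                  context_vector(concept_label, documents))
```

Because count vectors are non-negative, the resulting scores fall in [0, 1], matching the range the paper assumes for its measures.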
The visual representation of the ontology (Fig. 4, item C) changes to emphasize the concepts relevant for the selection being made. More precisely, ontological concepts referenced in the content of the selected lesson change color to become visually distinctive.
The tag cloud (Fig. 4, item B) is populated with tags related to the selected lesson.
The educator selects (in the visual representation of the ontology, Fig. 4, item C) a concept that (s)he wants to inspect. As soon as the concept is selected, the tag cloud changes, adjusting the color saturation of each tag according to its computed relatedness to the selected concept. The educator is then free to choose a tag (from the tag cloud) that (s)he finds the most relevant for the selected concept and drag-and-drop it onto the concept. Once this is done, a pop-up menu appears offering different kinds of relationships for connecting the selected concept-tag pair. As soon as the selection is made, the ontology is updated, allowing the educator to see his or her changes in real time. The educator can also postpone the decision, in which case the potential relation is automatically added to the user's notes for later reflection.
1. The perceived value of the tag-concept visualization and user interaction for ontology maintenance in learning environments (Section 4).
2. The effectiveness of MSR, WMSR, and nWMSR measures for ontology maintenance using folksonomies based on our proposed method (Section 5).
RQ1—What is the perceived intuitiveness and usability of the proposed method for ontology maintenance?
RQ2—Is there any relation between the perceived intuitiveness of the ontology maintenance process and the ontology visualization and interaction interfaces used?
RQ3—Is there any difference in the perceived value of the proposed ontology maintenance method between different groups of participants—instructors, teaching assistants, and research students/practitioners?
RQ4—What are the most and least valued characteristics of the proposed ontology maintenance method?
4.1.1 Design To investigate the perceived usefulness of the proposed ontology maintenance method, we studied users' impressions after a session with the tool supporting the method. Users' observations were collected through a questionnaire administered after the session with the tool. Once the data were collected, we used quantitative and qualitative (coding and content analysis) methods for data analysis.
4.1.2 Participants For our experiment, participants were recruited in October 2009 from Simon Fraser University, Athabasca University, the University of Belgrade, and a private Canada-based company developing and offering technology and content for professional training. Overall, 22 persons (17 men and 5 women) responded to our invitation, and all of them successfully completed all the steps of the experiment. The participants were also asked to state their role in online education. We distinguished between the following three roles:
Research students/practitioners—Persons who had done research related to online education, or practiced online education in industry through software and content development and delivery. There were eight participants in this group, with on average 6.75 years of experience.
4.1.3 Materials The LOCO-Analyst tool with its features for ontology maintenance was presented to the participants. To demonstrate the implemented features of the ontology maintenance process in the LOCO-Analyst tool, we created video clips describing each individual feature in detail. The clips also served as a guide on how to use the implemented functionality and ensured that its intended use was clearly conveyed to the participants of the study. These videos are available on the website of LOCO-Analyst.3 The participants were provided with a complete and correct domain ontology (i.e., ACM CCS) and a set of collaborative tags; the set is described later in Section 5.
The evaluation of the ontology maintenance method was done together with a general evaluation of all the other features of the LOCO-Analyst tool using a questionnaire. While the general questionnaire consisted of 21 questions, three questions specifically addressed the ontology maintenance method. The statements of these three questions are shown in Table 1, and answers to them had two parts: 1) a five-level Likert scale answer, where each level had an associated code on the 1-5 scale expressing the level of agreement with the statement (i.e., from Strongly Disagree—1 to Strongly Agree—5); and 2) an open-ended part allowing participants to further reflect on the question in free text form. The latter part was optional. Each question in the questionnaire included a URL of the specific video clip to which the question was related.
4.1.4 Procedures The participants were presented with guidelines that explained the purpose of the evaluation and outlined the steps they should take. In a nutshell, the participants were asked to watch the demo videos explaining the functionality of the tool. They were then asked to download the tool and try the presented functionality. They were also encouraged to send any clarification questions to the evaluation team. In the guidelines, we asked them to exercise the implemented functionality of the method's comprehension and maintenance operators outlined in Section 2.1. Together with the guidelines, we also supplied the questionnaire. Once finished, the participants were asked to send the completed evaluation questionnaire back within a week from the time of their initial acceptance to participate in the study. Finally, after receiving the answers from all the participants, we entered the answers into an Excel spreadsheet for further analysis.
4.1.5 Content Analysis To analyze the observations in open-ended questions, we followed the approach introduced in [35]. Initially, we developed a coding scheme based on the participants' answers. The coding scheme consisted of three general categories: 1) Positive comments—expressing positive opinions without any concerns; 2) Positive comments with some observations—expressing positive opinions, but where the participants either had some observations that questioned some decisions or suggested some improvements; and 3) Negative comments—expressing either negative observations or some concerns questioning the decisions made in the design. Each of these three categories was further divided into three subcategories, namely: 1) Feedback features—observations about specific feedback mechanisms supported by the user interface of LOCO-Analyst (not applicable to the ontology maintenance features of interest for this paper); 2) Intuitiveness—observations about the intuitiveness of the user interface; and 3) General comments—conceptual comments, applicable to different features of LOCO-Analyst (not necessarily to ontology maintenance).
The early version of the coding scheme was first tested by two raters, who applied the scheme to five randomly selected answers to each of the three questions (Table 1). Based on this test, they fine-tuned the scheme and revised the usage guidelines. In the next step, the two raters applied the fine-tuned scheme independently to rate all the answers. This was followed by a meeting of the two raters in which all differences in the codes assigned to each individual answer were reconciled. Finally, to evaluate the reliability of the inter-rater agreement, we used Cohen's kappa. The resulting Cohen's kappa of 0.88 can be interpreted as almost perfect agreement according to the conventional interpretation [4].
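For reference, Cohen's kappa corrects the raw agreement rate between two raters for the agreement expected by chance alone. A minimal illustrative implementation (the rating data used to exercise it would be hypothetical, not this study's):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters on the same items."""
    n = len(rater_a)
    # Observed proportion of items on which the raters assigned the same code.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if each rater assigned codes independently at their own rates.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)
```

A kappa of 1.0 indicates perfect agreement; by the conventional interpretation cited above, values above roughly 0.81 count as almost perfect.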
4.2.1 Quantitative Analysis Before discussing the specific results, we report the internal reliability of the collected Likert scale data. For this, we used the standard Cronbach's alpha coefficient. The obtained value was higher than 0.80, the value typically used as a minimal threshold for reliability.
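Cronbach's alpha relates the sum of the per-item variances to the variance of each respondent's total score. A self-contained sketch with made-up Likert data (not the study's responses):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores holds one list of respondent scores per question."""
    k = len(item_scores)                      # number of items (questions)
    n = len(item_scores[0])                   # number of respondents
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(variance(item) for item in item_scores)
                            / variance(totals))
```

When the items are perfectly correlated, alpha reaches its maximum of 1.0; values above 0.80 are the commonly cited threshold for good internal reliability.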
To evaluate the perceived level of intuitiveness and usability of the proposed method for ontology maintenance (i.e., RQ1), we used descriptive statistics (Table 1). The presented values are based on the participants' responses to the questions using the five-level Likert scale.
It is apparent that almost all participants strongly appreciated the ontology visualization and interaction proposed in our ontology maintenance method (Q2): 20 out of 22 participants strongly agreed that the process is intuitive and easy to accomplish. Rated just slightly lower, but still very highly, was the intuitiveness of the ontology maintenance process (Q1). For the question about the suitability of using student-generated collaborative tags (Q3), the descriptive statistics reveal high approval by participants. Overall, the participants expressed a very positive attitude toward the intuitiveness and usability of the proposed method. Still, some salient comments emerged in the open-ended answers, which are reported in the results of the qualitative analysis.
To determine if there is any relation between the perceived intuitiveness of the ontology maintenance process and the ontology visualization and interaction interfaces (i.e., RQ2), we calculated Pearson's bivariate correlation (two-tailed) between the answers to Q1 and Q2. The results reveal a significant association of the proposed ontology visualization and interaction with the intuitiveness and ease of use of the proposed maintenance method. These results corroborate our previous experimental results, where educators also indicated that a graph-based visualization of ontologies is rather intuitive for ontology representation [24]. Yet, that experiment [24] also revealed that an ontology visualization alone is not enough and can even be confusing if there is no effective interface for users to interact with the visualization.
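For readers wishing to reproduce this style of analysis, Pearson's r can be computed directly from paired Likert responses. A minimal pure-Python sketch (any data fed to it here would be hypothetical, not the study's):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's bivariate correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near +1 indicate that participants who rated Q1 highly also tended to rate Q2 highly; a two-tailed significance test of r would additionally require the sample size.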
To address RQ3, we used one-way ANOVA to test if there is any difference in the perceived value of the proposed ontology maintenance method among the three groups of participants. For each of the three questions, our results showed no significant difference between the three groups.
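The F statistic underlying one-way ANOVA compares between-group variability to within-group variability. A compact sketch (group scores shown in tests are illustrative, not the study's data):

```python
def one_way_anova_f(groups):
    """F statistic for one-way ANOVA: between-group over within-group mean squares."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    k, n = len(groups), len(all_scores)
    means = [sum(g) / len(g) for g in groups]
    # Variation of group means around the grand mean, weighted by group size.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Variation of individual scores around their own group mean.
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

An F near zero (group means nearly equal) corresponds to the "no significant difference" outcome reported above; a significance decision would additionally compare F against the F distribution with (k-1, n-k) degrees of freedom.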
4.2.2 Qualitative Analysis The goal of the qualitative analysis was to investigate the most and least valued characteristics of the proposed ontology maintenance method (i.e., to address RQ4). Table 2 presents the percentage of the total number of answers per category, obtained by applying the coding scheme in our content analysis. Please note that not every participant provided answers to all the open-ended parts of the questions, as they were optional (i.e., 72.74 percent of the participants provided open-ended answers to Q1, 68.19 percent to Q2, and 54.44 percent to Q3). The responses of the participants are predominantly grouped in the first two categories—positive comments and positive comments with some observations. This directly addresses our RQ1 and further corroborates the results of the Likert scale responses, confirming an overall positive perception of the intuitiveness and ease of use of the proposed method and its tooling.
To address RQ4, we provide here the specific qualitative observations of the participants. We start with the observations related to Q1. A large majority of positive comments stressed the importance of the visualization of the ontology in the process, reportedly a missing feature in other related tools (the participants had already used some of these tools, such as Protégé). The participants mentioned some specific features related to ontology maintenance and leveraging collaborative tags. In particular, a few participants appreciated the use of “drag-and-drop.” The participants also appreciated the supported navigation through ontologies/folksonomies and ontology editing, such as “the simplified method of adding new topics as subclasses or related topics.” Finally, the participants appreciated the implemented functionality to search ontologies with keywords as important for large-scale (real-world) ontologies.
On the other hand, some participants, in spite of appreciating the given visualization, expressed concerns about the lack of guidance: “Visualization is good. However, the interaction with drag-and-dropping the words from tag cloud to the ontology concepts could be made more evident in the interface (display some tips that it can be done, etc.).” In fact, this type of observation is in accordance with our other experiment in the area of ontology engineering [24], where participants also expressed a need for better guidance in the ontology development process. This is certainly an important topic to be investigated in future research and to be carefully addressed in the development of similar types of tools. It also indicates that a more explicit user interface intervention is needed in addition to the supported tag coloring. The participants raised another important concern—how to effectively support ontology comprehension when there are many crossing links representing properties among concepts in the ontology visualization. Indeed, this has recently been recognized as an important research challenge in the semantic technologies research community [33]. Only some preliminary work has been done proposing a more comprehensive visualization based on different coupling metrics [20].
An observation of another participant is even more critical in this regard, since it points out that the current approach lacks any indication of whether a tag has already been included in the ontology: “This can be a problem if [an] ontology has many concepts and it's hard to visually see if a tag doesn't appear in the ontology. In this case, [a] teacher must first search for the tag using [the] search field. A solution to this can be to color differently tags already included in the ontology, or filter just the tags which are not in the ontology (using, for instance, checkbox).” This can certainly be valuable input for improving the intuitiveness of the support tool. It is in line with HCI research indicating that differences in color are detected faster than any other visual variable [55]. Although this is to some extent leveraged in our research, there are certainly many other aspects that should be investigated.
From the above comments on the process, it is very clear that the majority of the participants fully equated the maintenance process with its actual tooling support, i.e., the visualization and interaction interfaces. This corroborates the association reported earlier in the quantitative results. In addition to the already mentioned observations, the participants, in response to Q2 from Table 1 about the proposed visualization and interaction interfaces, also indicated appreciation of the use of different colors, the effective use of small screen space for complex visualizations, and the fact that the tool uses “no excessive and useless options, no[t] trying flashy effects.” They also indicated that the tool had better visualization compared to the other ontology tools they knew of, such as Protégé.
When asked about the usefulness of collaborative tags for ontology maintenance (Q3 from Table 1), some participants wondered if collaborative tagging is useful at all, since students (users of ontology-based learning systems) do not see the ontology most of the time. We concur that ontologies should not be visible to the end users, just as most software artifacts are not. Yet, collaborative tags reflect, at least to a certain extent, the community's shared conceptualization of a given domain. As such, they have a rather similar purpose to ontologies in terms of knowledge sharing. Indeed, our motivation for the use of collaborative tags was consistent with the opinion of other participants: “...because students can be considered as people who are, at least partially, familiar with the area the topic is coming from, and because of the quantity of tags which help to make better tag cloud.” Another participant stated that “most of the times instructors/content authors are not sure what concepts they should include within their domain ontology. These tags come from a real context of usage and interaction and can perfectly reflect the concepts of the domain ontology.” While some participants indicated a need for more automation of the process and possible automatic inclusion of tags into the ontology, we intentionally did not pursue this functionality, as our previous study [24] indicated a strong preference of educators to be in control of the ontology engineering process. Thus, our ontology-folksonomy visualization and interaction only indicates (through color saturation) the relevant tags. Based on that, educators can decide which tags are to be integrated into the ontology under maintenance.
Some participants also pointed out possible threats of the use of collaborative tags: “Unless the students are familiar with the domain than the use of the collaborative tags for the ontology extension is not a reliable solution” and “the collaborative tags may not [be] correct and relevant to the ontology at all.” That is, students might not always tag things in terms relevant for the ontology, or, we would add, they might not tag content relevant to the ontology at all. The purpose of our MSR, WMSR, and nWMSR measures (from Sections 2.2-2.3) is exactly to compute the semantic relatedness of tags to a selected concept in the ontology visualization. The values of these measures are in the range 0-1. The color of strongly related tags (values closer to 1) is darker, and that of weakly related ones (values closer to 0) is lighter. The color saturation can decrease to the point of invisibility when there is no semantic relatedness. Based on these relatedness measures and their reflection through tag colors, educators (i.e., ontology maintainers) can make informed decisions.
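The described mapping from relatedness scores in [0, 1] to tag color can be sketched as follows; the base color and the linear fade-to-white are our illustrative choices, not necessarily the exact scheme used in LOCO-Analyst:

```python
def tag_rgba(relatedness, base_rgb=(0, 90, 160)):
    """Map a relatedness score in [0, 1] to an RGBA tag color: the full base color
    at 1.0, fading toward white and full transparency (invisible) at 0.0."""
    r = max(0.0, min(1.0, relatedness))  # clamp out-of-range scores
    faded = tuple(round(255 - (255 - c) * r) for c in base_rgb)
    return faded + (round(255 * r),)     # alpha channel also scales with relatedness
```

Driving the alpha channel with the score realizes the behavior described above: a tag with zero semantic relatedness becomes fully invisible, while strongly related tags stay dark and prominent.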
Finally, the participants from industry, although having positive comments about the tool, were reserved about the applicability of the approach for their target population of learners—workplace training, where learners typically want to go through the content in minimal time and are not interested in additional interaction (even tagging). Thus, collaborative tags would be hard to produce in that context. The observation is valid for some domains, but the adoption of social technologies in the corporate sector calls for similar studies in that context [8].
D. Gašević is with the School of Computing and Information Systems, Athabasca University, 1 University Drive, Athabasca, AB T9S 3A3, Canada. E-mail: email@example.com.
A. Zouaq is with the Department of Mathematics and Computer Science, Royal Military College of Canada, Office 320, CP 17000, Succursale Forces, Kingston, ON K7K 7B4, Canada. E-mail: firstname.lastname@example.org.
C. Torniai is with the Oregon Health & Science University Library, 3181 SW Sam Jackson Park Rd. - LIB, Portland, OR 97239-3098.
J. Jovanović is with the Department of Software Engineering, FON - Faculty of Organizational Sciences (School of Business Administration), University of Belgrade, Jove Ilica 154, Belgrade 11000, Serbia.
M. Hatala is with the School of Interactive Arts and Technology, Simon Fraser University, 250 102nd Avenue, Surrey, BC V3T 0A3, Canada.
Manuscript received 4 Sept. 2010; revised 17 Jan. 2011; accepted 23 Feb. 2011; published online 30 Mar. 2011.
For information on obtaining reprints of this article, please send e-mail to: email@example.com, and reference IEEECS Log Number TLT-2010-09-0112.
Digital Object Identifier no. 10.1109/TLT.2011.21.
1. It is important to stress that domain ontologies, in learning environments, can be connected with course ontologies, ontologies for competences, learning flows and designs, and other dimensions important for a learning process (e.g., the LOCO ontology framework [29]).
2. As we had several options for representing tag clouds, we conducted a small-scale pilot study showing participants different kinds of tag clouds, among which was a tag cloud that used different colors. The results of this pilot showed that the “standard” one (typical for the majority of apps that make use of tag clouds) was perceived by the participants as the most intuitive, hence its selection in our approach.
Dragan Gašević received the Dipl.-Ing., MSc, and PhD degrees in computer science from the University of Belgrade. He is a Canada Research Chair in semantic technologies and an associate professor at Athabasca University. He is also an adjunct professor at Simon Fraser University. His research interests are in semantic technologies, software language engineering, technology-enhanced learning, learning analytics, and services computing. He is a coauthor of numerous publications, frequent keynote speaker, and event organizer in the areas of his research interests. He was a cofounder of the International Conferences on Software Language Engineering (SLE) and Learning Analytics & Knowledge (LAK). More information can be found at http://dgasevic.athabascau.ca.
Amal Zouaq is an assistant professor at the Royal Military College of Canada. Her research interests include natural language processing, semantic web, ontology engineering, knowledge extraction, and technology-enhanced learning. She serves as a member of the program committee of various conferences in technology-enhanced learning and semantic web and she is part of the editorial review board of the Interdisciplinary Journal of E-Learning and Learning Objects and a member of the editorial board of the Journal of Emerging Technologies in Web Intelligence. She also serves as a reviewer for many conferences and journals in knowledge and data engineering, natural language processing, e-learning, and the semantic web. More information can be found at http://azouaq.athabascau.ca.
Carlo Torniai is currently an assistant professor at the Oregon Health & Science University in Portland. Throughout his work experience, he has developed and used semantic web technologies in several domains. In particular, his research interests include best practices of ontology development for multimedia and biomedical resource annotations and exploring interactions between social and semantic web in learning environments.
Jelena Jovanović received the BS, MSc, and PhD degrees in informatics and software engineering from the University of Belgrade in 2003, 2005, and 2007, respectively. She is an assistant professor of computer science with the Department of Software Engineering, FON - Faculty of Organizational Sciences (School of Business Administration), University of Belgrade, Serbia. Her research interests are in the areas of semantic technologies, web technologies, technology-enhanced learning, and knowledge management. She is a member of the GOOD OLD AI research network. More information can be found at http://jelenajovanovic.net.
Marek Hatala received the PhD degree in cybernetics and artificial intelligence from the Technical University of Kosice. He is an associate professor and a graduate chair in the School of Interactive Arts and Technology at Simon Fraser University. His research interests are in the areas of knowledge management, artificial intelligence, distributed systems, user modeling, interoperability, security and trust policies, e-learning, and collaborative systems. More information can be found at http://www.sfu.ca/~mhatala.