Issue No.03 - July-September (2011 vol.4)
pp: 275-290
Published by the IEEE Computer Society
Ilias Verginis, Dept. of Informatics & Telecommunications, National & Kapodistrian Univ. of Athens, Athens, Greece
Evaggelia Gouli, Dept. of Informatics & Telecommunications, National & Kapodistrian Univ. of Athens, Athens, Greece
Agoritsa Gogoulou, Dept. of Informatics & Telecommunications, National & Kapodistrian Univ. of Athens, Athens, Greece
Maria Grigoriadou, Dept. of Informatics & Telecommunications, National & Kapodistrian Univ. of Athens, Athens, Greece
ABSTRACT
The paper presents the facilities offered by the open learner model maintained in the web-based, adaptive, activity-oriented learning environment SCALE (Supporting Collaboration and Adaptation in a Learning Environment), which aim to guide online students who become disengaged and to support their reengagement. The open learner model (OLM_SCALE) contains and represents information about students' performance level; the minimum, maximum, and average performance level calculated from all the students enrolled in the specific subject matter; and students' interactions with the system. The paper gives an outline of the SCALE environment and OLM_SCALE and presents an empirical study that was conducted. The results of the study revealed that 1) students responded positively to the facilities of OLM_SCALE and 2) the exploitation of OLM_SCALE can effectively lead disengaged students to work in an engaged way.
Introduction
The contemporary tendencies for supporting and promoting students' learning in undergraduate curricula suggest the use of learning environments [29], [6], [32], [26]. A negative aspect of this trend is that students might become disengaged when using tutoring software and try to game the system by moving rapidly through problems without really studying them and by seeking the final hint that might give the answer away [1]. Recognizing this fact, many researchers have focused on developing pedagogical approaches for the detection and guidance of online students who become disengaged. The majority of these approaches are based on models that are trained through extensive analysis of the log files that record students' interactions with the learning environment.

Specifically, Cocea and Weibelzahl [19] propose the exploitation of several data mining techniques in order to find the best method and the best indicators for disengagement prediction. The authors argue that motivational level can be predicted from very basic data commonly recorded in log files, such as events related to reading pages and taking tests. Students identified as disengaged are then engaged in a dialog in order to assess their self-efficacy, self-regulation, and other related motivation concepts. Baker et al. [2] propose a machine-learned Latent Response Model that is highly successful at discerning which students frequently game the system in a way that is correlated with low learning. The research team uses three data sources to train the model to predict how frequently a student gamed the system. The results of the empirical study show that the model is successful at recognizing students who game the system and show poor learning. Johns and Woolf [22] propose a dynamic mixture model based on Item Response Theory (DMM-IRT) to detect students' motivational level and estimate their proficiency. The model infers the students' motivational level by assigning one of three values (1, 2, and 3) to the corresponding motivational variable. Values 1 and 2 refer to unmotivated students who either try to game the system by exhausting the hints to reach the final hint that gives the correct answer (value 1) or try to quickly guess answers until they find the correct one (value 2). Value 3 refers to motivated students. Data from 320 students were used to train the models (e.g., to estimate specific model parameters) and data from 80 students to test the models' accuracy. The results of the corresponding experiments suggest that the DMM-IRT model predicts students' responses better than a model that does not account for motivation. In the work of Baker et al. [3], a detailed log file analysis is used as input for the actions performed by the animated agent named "Scooter the Tutor." Scooter interacts with students (by expressing negative emotion to gaming students), aiming to reduce the incentive to game, and helps students learn the material that they were avoiding by gaming, while affecting nongaming students as little as possible. Whenever Scooter detects a gaming student, he provides him/her with supplementary exercises focused on exactly the material the student bypassed by gaming.
Although the aforementioned approaches manage to identify and guide disengaged students, they require time-consuming and skillful log file analysis in order to retrieve data suitable for training the specific models. Since a web-based learning environment can generate thousands of lines of information per hour, specific applications designed to analyze raw log file text and impart meaning to it are required.
Recently, a new proposal for the detection and guidance of online students who become disengaged has been introduced. This proposal is based on the principles of the Open Learner Model [14], [15], [13] and aims to help students focus reflection on their learning and progress. Learner models are models of learners' knowledge, difficulties, and misconceptions, and are essential for an adaptive learning environment to behave differently for different students. Learner models are usually not accessible to the students they model. Open learner models, in contrast, are learner models that are accessible to the student being modeled and sometimes also to other users (e.g., peers, teachers, instructors, tutors). It has been argued that the act of viewing representations of their understanding can raise students' awareness of their developing knowledge and of difficulties in the learning process [7] (for more details, see Section 3).
Arroyo et al. [ 1] argue that noninvasive interactions can change a student's engagement state. More specifically, they propose the use of an open learner model as a means to guide students into reengagement. Through the open learner model, performance and progress charts accompanied by tips and encouragement are presented to students, aiming to reduce gaming and enhance learning, while at the same time generating a more positive perception of the system and of the learning experience.
Along the same line as noninvasive interactions based on the principles of the open learner model, we propose the use of the open learner model as a means for the detection and guidance of online students who become disengaged. More specifically, we extend the work of Arroyo et al. [ 1] by including in the open learner model not only performance and progress charts, but also a representation of students' working behavior (see Section 3.2).
The open learner model described in this work (OLM_SCALE) was developed in the frame of a web-based, adaptive, activity-oriented learning environment referred to as Supporting Collaboration and Adaptation in a Learning Environment (SCALE) [21].
In order to investigate the impact of OLM_SCALE in guiding/stimulating disengaged students to work in a more effective and engaged way, we conducted an empirical study (see Section 4). The main research questions of the empirical study were: 1) Can OLM_SCALE stimulate students to work in an effective and engaged way? and 2) What are the students' opinions about the effectiveness of OLM_SCALE in supporting the learning process in the context of an introductory Informatics and Telecommunications course?
The rest of the paper is structured as follows: In the next sections, the principal functions and capabilities of the SCALE environment are presented, focusing on those characteristics that were exploited in the context of this work. Then, an empirical study of the SCALE environment in the context of an introductory computer science course is described and discussed. The paper concludes with further research directions.
2. An Outline of the SCALE Environment
The SCALE environment (available at http://hermes.di.uoa.gr:8080/scale) enables students to:

    1. work on individual and collaborative activities which are proposed by the environment with respect to the students' knowledge level,

    2. collaborate in synchronous or asynchronous mode depending on the underlying collaborative activity,

    3. access informative and tutoring feedback components according to their preferences,

    4. have control on the navigation route through the provided activities and feedback components, and

    5. inspect their learner model, which provides information about their interaction with the system (e.g., knowledge level, received feedback components, activities' elaboration attempts).

A detailed description of the design principles and the characteristics of the SCALE environment is given in [ 21]. In the following, the presentation focuses on the characteristics maintained in the open learner model and the characteristics of the individual activities that students worked on in the context of the study presented in this work.
2.1 Activities in SCALE
An activity in SCALE serves a specific learning goal, which corresponds to fundamental concept(s) of the subject matter (e.g., understand how the while loop works, in the context of the "Introduction to Informatics and Telecommunications" subject matter). The learning goal is further analyzed into learning outcomes that may be classified into the Comprehension, Application, Checking-Criticizing, and Creation levels [21] (e.g., specify the number of iterations of a while loop, identify the control variable(s), and correct a program to perform the desired number of repetitions). An activity may consist of one or more subactivities that address and realize the outcomes of the activity. In turn, a subactivity may consist of one or more question items. The activities/subactivities may have different difficulty levels and different degrees of importance for the accomplishment of the underlying goal, with respect to the educational function (e.g., elicitation/assessment of the student's prior knowledge concerning specific concepts, or construction of new knowledge) and the addressed learning outcomes. Also, an activity/subactivity may follow a didactical approach and may require the use of a specific educational tool that supports and facilitates its elaboration. The activities fall into the following categories:

    prerequisite activities that aim at the ascertainment of the student's prior knowledge concerning the prerequisite concepts and also at his/her familiarization with these concepts,

    learning activities that support the knowledge construction process, and

    evaluation activities that aim to assess the degree to which the expected learning outcomes have been achieved and the student's overall knowledge construction, as well as to enable the refinement of the student's knowledge in the context of the corresponding concept. These activities may serve either the formative/summative assessment of the underlying concept or the assessment of prior knowledge for those concepts which have the specific concept as a prerequisite.

Depending on the educational function that the activity serves and the underlying outcomes, the assessment may be done 1) automatically by the system, 2) by peers (peer and collaborative assessment), or 3) by the teacher.
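To make this structure concrete, the following minimal sketch encodes the activity hierarchy exactly as described above. It is an illustration only: SCALE's actual implementation is not published in this paper, and every class, field, and enum name here is a hypothetical choice of ours.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Category(Enum):
    # The three activity categories named in Section 2.1.
    PREREQUISITE = "prerequisite"
    LEARNING = "learning"
    EVALUATION = "evaluation"

class Assessment(Enum):
    # The three forms of assessment named above.
    SYSTEM = "automatic"
    PEER = "peer/collaborative"
    TEACHER = "teacher"

@dataclass
class QuestionItem:
    prompt: str
    options: List[str]        # empty for open-answer items
    correct: List[int]        # indices of the correct options

@dataclass
class Subactivity:
    title: str
    difficulty: int           # e.g., 1 (easy) to 3 (hard); the scale is assumed
    items: List[QuestionItem] = field(default_factory=list)

@dataclass
class Activity:
    title: str
    category: Category
    assessment: Assessment
    subactivities: List[Subactivity] = field(default_factory=list)
```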
Fig. 1 is a screenshot of the SCALE environment presenting the available activities for the concept of "Algorithms" in the context of the subject matter "Introduction to Informatics and Telecommunications."


Fig. 1. Screenshot (translated into English) of the concept "Algorithms," which includes four individual activities following various didactical approaches (activity types), namely Questions, Concept map, and Exploring. All the activities are automatically assessed by the system (form of assessment).




While working out the activities in SCALE, the student may have access to multiple informative and tutoring feedback components. The informative feedback components (i.e., correctness or incorrectness of response and performance feedback) inform students about their current state. The tutoring feedback components aim to tutor/guide students and are structured in two levels, activity level and subactivity level. The feedback components of the subactivity level refer to the concepts of the subactivity under consideration, while at the activity level, the available feedback components are more general and address concepts/topics of the activity. The tutoring feedback components are associated with various types of knowledge modules (feedback types) and are structured in two forms, explanatory form and exploratory form. The explanatory form may include knowledge modules such as a description or a definition of the concept/topic, and the correct response, while the exploratory form may include:

    1. an image,

    2. an example,

    3. advice or an instruction on how to proceed,

    4. a question giving students a hint on what to think about,

    5. a case study,

    6. a similar activity followed by its answer, and

    7. selected/indicative answers given to the specific activity by other students.

The way a student works out an activity/subactivity in SCALE can be reflected through 1) the feedback components he/she received in order to elaborate on the activity, 2) the corresponding elaboration attempts, and 3) the minimum, maximum, and average knowledge level, as well as the average elaboration attempts, for the specific activity/subactivity. We define a student's working behavior as the feedback components she/he received in order to elaborate on the activity and the elaboration attempts she/he made. The open learner model maintained in SCALE reflects the students' working behavior, focusing on the received feedback and the elaboration attempts. Moreover, at the peer level, students have the possibility to compare their own working behavior to that of their peers and of the average peer (for more details, see the following section).
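As a rough illustration of how the comparative figures mentioned above (minimum, maximum, and average knowledge level, average elaboration attempts) could be derived from interaction records, consider the following sketch. The record layout is our assumption for illustration, not SCALE's actual log format.

```python
from statistics import mean

# Assumed record layout (hypothetical): one tuple per student and activity:
# (student_id, activity_id, elaboration_attempts, knowledge_level).
def activity_summary(records, activity_id):
    """Class-wide min/max/average knowledge level and average attempts."""
    rows = [r for r in records if r[1] == activity_id]
    levels = [r[3] for r in rows]
    attempts = [r[2] for r in rows]
    return {
        "min_level": min(levels),
        "max_level": max(levels),
        "avg_level": round(mean(levels), 2),
        "avg_attempts": round(mean(attempts), 2),
    }

demo = [("s1", "seq_search", 2, 0.8), ("s2", "seq_search", 5, 0.6)]
print(activity_summary(demo, "seq_search"))
# {'min_level': 0.6, 'max_level': 0.8, 'avg_level': 0.7, 'avg_attempts': 3.5}
```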
3. Open Learner Model in SCALE
3.1 Open Learner Models
The learner model is a representation of the knowledge, difficulties, and misconceptions of the student. A learner model is essential for an adaptive educational environment in order to provide the adaptation effect, i.e., to behave differently for different students [5]. During a student's interaction with the environment, the learner model data are updated to reflect his/her current state and beliefs. Open learner models are learner models that not only drive the individualization of an adaptive educational environment but also externalize the maintained information to the user being modeled, and sometimes also to other users [23], [14]. According to Kay [23], the underlying principle of the open learner model is that, if the learner model can be helpful in determining how a machine teaches, it should be possible to make it available to students so that they can improve their own learning through better self-knowledge. Moreover, the open learner model aims to encourage reflection and awareness by provoking the student to think about his/her own beliefs and to defend himself/herself if he/she disagrees with any representation in the model [7]. In recent years, there has been increasing interest in the field of adaptive learning environments in opening their models to students. As stated in the work of Kay [23], the externalization (opening) of the learner model helps students become aware of their current knowledge in ways that enable them to 1) identify suitable learning goals and 2) pursue them more easily. Bull and Kay [14] identify several purposes for opening the learner model:

    1. improving the accuracy of the learner model by allowing students to contribute information,

    2. promoting reflection,

    3. helping students plan and/or monitor their learning based upon the information available in the learner model,

    4. facilitating collaboration, because students can improve their understanding of themselves and each other by gaining information from their respective learner models,

    5. affording students greater control over learning through greater control over their learner model, and

    6. addressing the privacy issue of the right to view data about oneself.

3.2 Simple and Complex Representation of Open Learner Models
As stated in the works of Bull [11] and Bull et al. [16], the externalization of the open learner model can take either a simple or a complex form. Simple learner models typically show a student's knowledge level as a subset of expert knowledge, whereas complex learner models include concept maps, hierarchical knowledge structures, and detailed descriptions of knowledge and misconceptions [12]. Complex representations are not only difficult to implement and integrate in an interface, but also require a detailed definition of the relations between the concepts of the subject matter under consideration. Simple presentations of open learner models display a student's level of achievement in a series of topics or concepts through a variety of graphical formats, such as skill meters, part-shaded bars, graphs, boxes, tables, and text.
Examples of adaptive educational systems that include simple representation of the learner model data can be found in the works of Bull [ 11], Bull et al. [ 16], and Bull and Kay [ 15]. The more recently developed of those systems, as well as the one described in the work of Arroyo et al. [ 1], are presented in Table 1 in terms of:

    1. the user's characteristics being modeled (knowledge, problematic areas, misconceptions, other characteristics),

    2. the domain/discipline applied,

    3. the graphical formats used,

    4. the alternative views offered, and

    5. the presentation of students' working behavior (i.e., the different ways in which students are informed about their interaction with the environment).

As can be seen in Table 1, the presented educational environments adopt two major approaches to presenting students' working behavior, mainly when students face difficulties in problematic or misconception areas. The first approach is the provision of individualized instructions, guiding questions, descriptions of how problems have been solved, tips, and encouragement whenever students face difficulties [25], [34], [10], [1]. The second approach is the visualization of 1) correct versus incorrect attempts, 2) coverage and correctness, and 3) knowledge, difficulties, and misconceptions [8], [27], [30], [9], [12].

Table 1. Presentation of Various Contemporary Research Efforts, Using Simple Learner Models




An open issue in the area is the representation of the students' working behavior regarding the exploitation of the system's feedback components: What feedback did a student receive in order to elaborate on the requested problems/exercises? Did she/he attempt more than once to answer the questions? To what extent did the system help him/her find the correct answers?
3.3 Open Learner Model Maintained in SCALE (OLM_SCALE)
In an attempt to resolve the aforementioned issue, following the same line of research as the approaches presented in Table 1, we developed the simple open learner model maintained in the SCALE environment (OLM_SCALE).
OLM_SCALE combines and expands ideas coming from the areas of computer-based interaction and collaboration analysis [20], [31] and open learner modeling. In particular, we collect raw data from students' interactions with the system using a set of indicators and visualize this information, along with comparative information coming from selected peers, aiming to support the learning process at the awareness and metacognitive levels.
Students' awareness is supported by presenting the values of the indicators in the same form in which they were retrieved by the interaction analysis process. For example, presenting a student's correct versus incorrect attempts to elaborate on an activity, and/or whether a specific student has already worked out an activity, supports awareness. Students' metacognition is supported by information that aims to promote self-evaluation and reflection on the results of the interaction analysis, so that students initiate self-regulating actions. This information is presented to students through the values of specific indicators, calibrated to a predefined form. For example, presenting a student's knowledge level and the minimum, maximum, and average knowledge levels through a graphical format (e.g., a progress bar) supports metacognition.
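The following sketch illustrates one plausible way to calibrate a raw indicator value into such a predefined form before rendering it; the thresholds and labels are our assumptions, not the ones used by SCALE.

```python
def calibrate_level(knowledge_level: float) -> str:
    """Map a raw 0..1 knowledge level onto a coarse skill-meter label.

    The four buckets below are illustrative assumptions.
    """
    if knowledge_level < 0.25:
        return "low"
    if knowledge_level < 0.50:
        return "moderate"
    if knowledge_level < 0.75:
        return "high"
    return "very high"
```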
Specifically, we designed a set of indicators that focus on individual activities and reflect the structure of SCALE's educational material (i.e., the Activities—see Section 2.1). The indicators aim to:

    Reflect the student's knowledge using skill meters (metacognitive level): indicators for the student's performance level at the activity, subactivity, and question-item levels.

    Offer comparison-to-peers views of the learner model data: indicators for answers given by peers (awareness level) and for the minimum, maximum, and average performance levels, calculated from all the students enrolled in the specific subject matter (metacognitive level).

    Present the student's working behavior (awareness and metacognitive levels): indicators for the student's interactions with the system (received feedback components, activity/subactivity elaboration attempts, minimum, maximum, and average knowledge levels, as well as average elaboration attempts).

OLM_SCALE follows the simple model representation using skill meters, since the structure of SCALE's educational material (i.e., the activities; see Section 2.1) is already hierarchical, and more complex learner views would require the definition of additional relations between the various concepts. As stated in the work of Mitrovic and Martin [28], a simple representation of the open learner model data that uses skill meters can have a positive effect on students' learning and metacognition. Moreover, the simple skill-meter representation of the open learner model data has been found to be an adequate representation for sharing learner models with peers and instructors [17], [24].
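A skill meter itself is just a rendering of the (calibrated) knowledge level. The snippet below is our own textual illustration of the idea, not SCALE's actual widget:

```python
def skill_meter(level: float, width: int = 20) -> str:
    """Render a 0..1 knowledge level as a textual progress bar."""
    filled = round(level * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {level:.0%}"

print(skill_meter(0.65))  # [#############-------] 65%
```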
Fig. 2 illustrates the main screen of OLM_SCALE for the concept of “Algorithms” in the context of the subject matter “Introduction to Informatics and Telecommunications.” Four activities are included in the specific concept, i.e., Definition of an Algorithm, Pseudocode, Sequential Search, and Binary Search. For each activity the following indicators are illustrated:

    1. the current knowledge level (metacognitive; based on skill meters),

    2. the elaboration attempts (awareness; i.e., how many times the activity's questions have been submitted),

    3. the activity's status (i.e., how many subactivities have already been elaborated) (awareness),

    4. the minimum, maximum, and average knowledge level (metacognitive), and

    5. the average elaboration attempts (awareness).

Also, the learner model includes functionalities that allow students 1) to choose whether the information held in the model will be visible to their costudents and 2) to select their preferred feedback types.


Fig. 2. Screenshot of OLM_SCALE's main screen for the concept "Algorithms" (translated into English).




As can be seen in Fig. 2, the specific student inspects his own model and compares his understanding of the target concept to that of the two peers whose models he has chosen to inspect. For each activity, the following information is externalized:

    1. the student's knowledge level,

    2. the minimum, average, and maximum knowledge levels,

    3. the activity's elaboration attempts (e.g., for the activity Definition of an Algorithm, the specific student attempted twice to elaborate on the corresponding subactivities),

    4. the activity's status, presented either by a fraction indicating elaborated subactivities/available subactivities (e.g., for the activity Pseudocode, the specific student has correctly elaborated two out of the three available subactivities) or by a specific icon indicating that all the available subactivities have already been elaborated (e.g., for the activity Definition of an Algorithm, the specific student elaborated correctly on all the available subactivities),

    5. the activity's average elaboration attempts, and

    6. the peers' knowledge level and the corresponding elaboration attempts and status. By pressing the Open/Close button, the student can choose either to open or to close the model to his/her peers. Through the Users button, the student can choose the peers whose models she/he would like to inspect.

Fig. 3 illustrates OLM_SCALE's main screen for the activity "Sequential Search" of the concept "Algorithms." The specific activity includes five subactivities. For each subactivity, the following indicators are illustrated, aiming to reflect the student's working behavior: 1) the knowledge level (metacognitive), 2) the elaboration attempts (awareness), and 3) the types of the available feedback components and the corresponding accesses (awareness). For example, Subactivity 1 is supported by three types of feedback components: Advice, Hint, and Correct Answer. The specific student worked out Subactivity 1 after one attempt, having received the advice-type feedback once and the hint-type feedback once. The advice-type and hint-type feedback components are knowledge modules aiming to guide the student toward the correct answer. The answer-type feedback component includes the correct answers to the corresponding question items.


Fig. 3. Screenshot of OLM_SCALE's screen for the activity "Sequential Search" (translated into English).




Moreover, the student can compare his/her working behavior with the average peer data (average knowledge level, average elaboration attempts, average feedback type/accesses) or with the working behavior of his/her selected peers.
Fig. 4 illustrates OLM_SCALE's screen for the question items of Subactivity 4 in the context of the "Sequential Search" activity. Both question items are multiple choice questions with one correct answer. For each possible answer, the number of students that selected the specific answer is shown (awareness). For example, regarding Question 1, five out of 100 students chose the first answer as the correct one. By pressing the Details button, the student has access to the answers given by the peers whose models he/she inspects.


Fig. 4. Screenshot of OLM_SCALE's screen for Subactivity 4 of the "Sequential Search" activity (translated into English).




4. The Empirical Study
During the 2007-2008 academic year, SCALE and the corresponding educational material were used for the first time, aiming to improve the teaching and learning processes of the undergraduate course "Introduction to Informatics and Telecommunications" [37]. The course is compulsory and is taught 3 hours per week. The course objectives are: 1) to give students a strong background in the following topics of computer science: data storage, data manipulation, operating systems, networking and the Internet, algorithms, and programming languages; 2) to make students comfortable with computers and eliminate any fears about them; and 3) to establish a basic foundation for further study.
The students who participated as members of the experimental group during the academic year 2007-2008 expressed their satisfaction with the supported facilities and characterized SCALE as a valuable and supportive educational environment. Although no significant difference was found between the experimental and control groups in the pretest, the students of the experimental group outperformed the control group students in the posttest (i.e., the course final exams). This suggests that limitations and problems imposed by the traditional teaching method in introductory computer science courses can be effectively addressed by working out the corresponding educational material embedded in the SCALE environment. On the other hand, 27.8 percent of the students worked in a rather disengaged way, elaborating on the educational material simply by trying to guess the correct answers. The results of the empirical study showed that these students scored significantly lower in the course final exams. This suggests that students working in a web-based learning environment should be guided/tutored toward the most effective way of exploiting the supported facilities.
In order to address the aforementioned limitation, we developed an open learner model for SCALE and conducted an empirical study in order to investigate the issue of guiding/tutoring the disengaged students to work in a more effective and engaged way. The new empirical study was conducted during the winter semester of the academic year 2008-2009 in the context of the aforementioned course “Introduction to Informatics and Telecommunications.”
4.1 Research Questions
The main research questions of the empirical study were:

    1. Can OLM_SCALE stimulate students to work in an effective and engaged way?

    2. What are students' opinions about the effectiveness of OLM_SCALE in supporting the learning process in the context of an “Introduction to Informatics and Telecommunications” course?

4.2 Subjects
One hundred and fifty-four first year students enrolled in the course "Introduction to Informatics and Telecommunications" in the Department of Informatics and Telecommunications at the National and Kapodistrian University of Athens participated in the study. All participants were aged between 18 and 23 years and attended a Unified Lyceum in Greece during their secondary education. The Unified Lyceum consists of three classes (one class per year) and, along with the Technical Vocational Schools, comprises postcompulsory secondary education in Greece. During secondary education, the majority of the participants (73 percent) attended the technological course cycle and the rest (27 percent) attended the practical course cycle. During their last year of secondary education, the participants from the technological course cycle attended an algorithms-related course (2 hours per week). The only overlap between the topics covered in the course "Introduction to Informatics and Telecommunications" and the courses attended by the participants from the technological course cycle was the topic of algorithms.
4.3 Procedure
In order to investigate the effectiveness of the open learner model in the context of the specific course, educational material in the form of individual activities was developed. This material exploits the learning design of the SCALE environment and can be used by the teacher as laboratory-based exercises or as homework, and/or by the student as a means to deepen his/her knowledge of the underlying topics or prepare him/herself for the corresponding university courses [ 37].
During the first lecture of the course the two responsible teachers presented an outline of the covered topics. Following that, one of the teachers presented the SCALE environment, the developed educational material, and the open learner model. Results (see Section 4.4.3) were obtained from system logs and questionnaires.
The eight week empirical study consisted of the following phases:

    1. Pretest (first week)—lasted 1.5 hours. All students participating in the empirical study took the preachievement test.

    2. Working out activities—lasted 7 weeks (first week-seventh week). The participating students worked out the activities embedded in the SCALE environment. They were advised to a) access the environment and work out the corresponding activities on a weekly basis, following the material of the lectures, and b) use OLM_SCALE. The estimated weekly time that students had to work with SCALE was 2 hours. At the end of the third week, a log file analysis was performed in order to reveal a) the students who were engaged/disengaged in working out the activities and b) the students who used/did not use the open learner model. The students who did not use OLM_SCALE were prompted (via e-mail) to do so. The same log file analysis was repeated at the end of the seventh week of the empirical study.

    3. Posttest (eighth week)—lasted 3 hours. All students took the postachievement test (course's final exam).

    4. Filling in the questionnaire (eighth week)—lasted 30 minutes. The participating students were asked to express their opinion concerning the open learner model.

4.4 Task and Materials
All students attended the weekly lectures and studied the relevant educational material (course book and lecture notes), in order to prepare themselves for the final exams. The lecture notes and supplementary material (e.g., announcements concerning the course and answers to questions posted during the lecture) were delivered to students through the course management system ( http://eclass.di.uoa.gr).
Moreover, educational material in the form of individual activities was developed and delivered through the SCALE environment, covering the following topics:

    1. data storage,

    2. data manipulation,

    3. operating systems,

    4. networking and Internet, and

    5. algorithms.

Each activity consisted of one or more subactivities, and each subactivity of one or more question items. The activity/subactivity addressed learning outcomes of the comprehension and/or application levels. The question items were:

    1. multiple choice questions with one correct answer,

    2. multiple choice questions with more than one correct answer,

    3. matching questions,

    4. two-tier questions, where the second tier explores the students' reasons for the choice made in the first tier [ 34], [ 33], and

    5. open answer questions (assessed by the teacher).

The activities under consideration cover all difficulty levels, provide multiple and different kinds of feedback, and are automatically assessed by the system (except for the open answer questions).

4.4.1 Pretest and Posttest

During the first week of the course, all students participated in the pretest in order to identify their prior knowledge (10 multiple choice and five open answer questions). Each question was scored out of 10 points. Two indicative items, from the topics of operating systems and algorithms, respectively, were: 1) Do you agree with the following statement? "A process in the ready state goes to the running state as soon as it becomes a job and the CPU is able to execute it." Justify your answer. 2) Using the bubble sort algorithm, manually sort the following list and show your work in each pass using the table: [14, 7, 23, 31, 40, 56, 78, 9, 2].
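For readers who want to check the algorithms item, the following short Python sketch prints the list after each bubble sort pass, i.e., the work the students were asked to show (the code itself is ours, not part of the pretest).

```python
def bubble_sort_passes(data):
    """Print the list after each bubble sort pass."""
    a = list(data)
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        print(f"pass {i + 1}: {a}")
        if not swapped:  # already sorted; no further passes needed
            break
    return a

bubble_sort_passes([14, 7, 23, 31, 40, 56, 78, 9, 2])
# pass 1: [7, 14, 23, 31, 40, 56, 9, 2, 78], and so on.
```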

During the last week of the course, all students participated in the course final exam (posttest). The posttest aimed to reveal differences in students' conceptions relative to the pretest, after their involvement with the weekly lectures, the course educational material, and the SCALE environment. The students had to answer the same questions they had worked out in the pretest. Both the pretest and the posttest were evaluated by the two course teachers on a 10-point scale (1-10) per question. The final score of each question was the mean of the two evaluators' scores.


4.4.2 Data Collection

In order to investigate the research questions, quantitative data were collected in the form of scores awarded by the two evaluators when grading students' pretests and posttests. Moreover, qualitative and quantitative data were obtained through the analysis of 1) the log files automatically created by SCALE regarding students' exploitation of the available feedback components and 2) the questionnaires filled in by the students.
4.4.3 Data Analysis

Achievement measures. For each student, the following data were obtained: 1) one pretest mean score calculated from the scores of the 15 pretest questions and 2) one posttest mean score calculated from the scores of the 15 posttest questions. In order to identify significant differences between pretest and posttest performance, we conducted a one-way ANOVA with the pretest and posttest mean scores as dependent variables.

Moreover, with the objective of investigating students' exploitation of SCALE's facilities and, in particular, of OLM_SCALE, we analyzed the SCALE log files in order to assign each student to a category according to the extent to which she/he used the open learner model. Following that, in order to reveal the disengaged students, we identified the different sequences of actions that students performed in order to answer a question correctly after having initially submitted a wrong answer. Using two-step cluster analysis, we assigned each student to a category according to the sequence of actions she/he mostly performed. Finally, we conducted a one-way ANOVA on the students' posttest performance (dependent variable) in order to investigate significant differences between these categories.
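A minimal sketch of such a one-way ANOVA is shown below. The group sizes follow Section 4.4.3 (see the Results), but the score values are synthetic stand-ins generated here; the real analysis used each student's posttest mean score over the 15 questions.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Synthetic scores for four groups of 64, 37, 12, and 13 students;
# the group means are invented for illustration.
groups = [rng.normal(loc=m, scale=1.0, size=n)
          for m, n in [(7.0, 64), (6.6, 37), (5.0, 12), (5.4, 13)]]

f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> means differ somewhere
```

With real data, one would pass each group's actual posttest mean scores instead of the synthetic arrays.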

Questionnaire. The evaluation questionnaire, filled in by the participating students, consisted of Likert-scale questions asking students to express their opinion about the usefulness, adequacy, and usability of OLM_SCALE (15 items; an indicative item is "It is useful to monitor my learning progress through the Learner Model option"). Students' answers could vary from 1 to 5 (1 indicates "I strongly disagree," while 5 indicates "I strongly agree"). Additionally, students were given the option to comment on each of these questions, as well as to make suggestions for the improvement of the open learner model. For the presentation of the quantitative data, we used means, standard deviations, and percentages of students. The percentage of students appearing in the Results section below corresponds to the students who expressed agreement (a response of 4, "I agree," or 5, "I strongly agree").
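The agreement percentages reported below can be reproduced with a one-line computation; this sketch assumes responses are stored as integers 1-5.

```python
def percent_agreement(responses):
    """Percentage of Likert responses expressing agreement (4 or 5)."""
    return 100 * sum(r >= 4 for r in responses) / len(responses)

print(percent_agreement([5, 4, 3, 4, 2, 5]))  # 66.66... percent agree
```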

Results.

First Research Question. Can OLM_SCALE stimulate students to work in an effective and engaged way?

To determine whether OLM_SCALE stimulates students to work in an effective and engaged way, we performed a log file analysis (during the third week of the empirical study) in order to reveal the disengaged students. This analysis was based on the actions that students mostly performed before reaching the correct answers of a subactivity after initially submitting wrong answers.

In the SCALE environment, whenever a student submits a wrong answer, she/he has the possibility to identify and correct his/her errors (e.g., by receiving tutoring feedback components or by restudying the corresponding topic of the course book) and then resubmit the answer.

The log file analysis revealed that some students had extremely short resubmission times (i.e., the time elapsed between the initial wrong submission and the final correct submission) and a considerably high average number of subactivity elaboration attempts.

In the SCALE environment, whenever a student submits his/her answers to the question items of a subactivity, the subactivity's elaboration-attempt count is increased by one. The combination of a subactivity's difficulty level and a high average number of elaboration attempts indicates that the student might be trying to guess the correct answers by selecting random answers (for subactivities containing multiple choice and/or matching question items). For example, if a subactivity contains two question items with four possible answers each, and the number of elaboration attempts is high (e.g., greater than five), then the student might be trying to reach the correct answer by blind guessing. Moreover, if a student reaches the correct answer by receiving only the "correct answer" feedback component, without first receiving any of the available tutoring feedback components, this suggests that his/her working behavior might not be the proper one.
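The "greater than five" heuristic can be motivated with a small calculation. Assuming, purely for illustration, independent uniform guessing over a subactivity with two multiple choice items of four options each:

```latex
P(\text{both items correct on one attempt}) = \left(\tfrac{1}{4}\right)^{2} = \tfrac{1}{16},
\qquad
\mathbb{E}[\text{attempts until correct}] = 16 .
```

Under this assumption, a blind guesser needs 16 attempts on average, so an attempt count well above what one or two thoughtful tries would produce is consistent with guessing; the appropriate threshold for a given subactivity depends on its difficulty and the number of answer options.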

In accordance with the works of Cocea and Weibelzahl [18] and Beck [4], we presume that these students were rather disengaged while working in the environment and were only trying to guess the correct answers, in contrast with the rest of the students, who either restudied the question, consulted the relevant educational material, or received relevant tutoring feedback components before resubmitting their answers.

In order to identify the presumed disengaged students, we calculated 1) the time elapsed between the initial submission of a wrong answer and the final correct submission to the same question and 2) the estimated time for random submission of an answer to the specific question. For example, if the estimated time for random submission for a specific activity was 20 seconds and a student reached the correct answer in less than 20 seconds (after initially submitting a wrong answer), then this attempt is considered a blind guess, suggesting the student was rather disengaged when answering the specific question. The estimated time for random resubmission was calculated from data produced by the two course teachers, who deliberately submitted blind guesses to each activity's questions as quickly as possible until they reached the correct answer. Comparing these times (the actual resubmission time and the estimated time for random resubmission), we divided the students into two subgroups:

    Disengaged. Students who resubmitted answers very rapidly (in less than the estimated time).

    Engaged. Students who resubmitted answers after a considerable time interval (equal to or longer than the estimated time) or after receiving tutoring feedback components.

The next step was to classify each student as engaged or disengaged, according to his/her resubmission times, by using two-step cluster analysis.
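A rough sketch of this classification step is given below. The paper used two-step cluster analysis (an SPSS procedure); here k-means with two clusters is used as a stand-in, and the input format, a per-student list of (actual resubmission time, estimated random-guess time) pairs, is an assumption of ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def guess_rate(pairs):
    """Fraction of resubmissions faster than the estimated blind-guess time."""
    return float(np.mean([actual < estimated for actual, estimated in pairs]))

def split_engaged(resubmits):
    """resubmits: {student_id: [(actual_seconds, estimated_random_seconds), ...]}"""
    ids = list(resubmits)
    rates = np.array([[guess_rate(resubmits[s])] for s in ids])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(rates)
    # Call the cluster with the higher mean guess rate "disengaged".
    high = int(rates[labels == 1].mean() > rates[labels == 0].mean())
    return {s: ("disengaged" if l == high else "engaged")
            for s, l in zip(ids, labels)}
```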

Moreover, we classified each student according to the extent to which she/he used OLM_SCALE. More specifically, we calculated for each student the rate $Mu = T_M / T_T$ (Model Usage), where $T_T$ represents the total time spent working in the SCALE environment up to the end of the third week of the empirical study and $T_M$ represents the time spent using the open learner model during the same period.

Because the students' $Mu$ rates followed a normal distribution (a Kolmogorov-Smirnov fit test was applied), we divided the students into two subgroups (model users and nonmodel users) by the following rule:

    Nonmodel Users (NMU). Students whose $Mu$ rate had up to 30 percent probability to occur (i.e., fell in the lower part of the distribution).

    Model Users (MU). Students whose $Mu$ rate had more than 30 percent probability to occur.

The 30 percent probability boundary between model users and nonmodel users was chosen somewhat arbitrarily, by dividing the students' normal distribution into thirds (each having approximately 33.3 percent probability of occurrence). In order to decrease the number of groups that would emerge from those three initial groups and to simplify the corresponding mathematics, we decided to characterize the students in the lower third of the distribution as nonmodel users (i.e., 74 students). The rest of the students (in the two upper thirds of the distribution) were characterized as model users (i.e., 80 students). Our future plans include testing other boundaries in order to compare the results.
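The sketch below reproduces this split under stated assumptions: scipy's one-sample Kolmogorov-Smirnov test stands in for the "fit test" (the paper does not name its implementation), and the boundary is placed at the lower third of the fitted normal, matching the description above.

```python
import numpy as np
from scipy import stats

def split_model_users(t_model, t_total):
    """Label students as model users (MU) or nonmodel users (NMU).

    t_model, t_total: per-student seconds spent in OLM_SCALE / in SCALE overall.
    """
    mu = np.asarray(t_model) / np.asarray(t_total)
    # Normality check; the paper reports a Kolmogorov-Smirnov fit test.
    z = (mu - mu.mean()) / mu.std(ddof=1)
    ks_stat, p = stats.kstest(z, "norm")
    assert p > 0.05, "Mu rates not compatible with a normal distribution"
    # Lower third of the fitted normal -> nonmodel users, per the paper's split.
    cutoff = stats.norm.ppf(1 / 3, loc=mu.mean(), scale=mu.std(ddof=1))
    return np.where(mu < cutoff, "NMU", "MU")
```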

The results of the students' classification according to their way of working (engaged-disengaged) and according to the extent they used the open learner model are presented in Table 2.

Table 2. Classification of Students According to Their Way of Working (Engaged-Disengaged) and to the Extent They Used the Open Learner Model (Nonmodel Users-Model Users) at the End of the Third Week of the Empirical Study


As can be seen in Table 2, a great percentage of the disengaged students were nonmodel users (80.8 percent) and a great percentage of the engaged students were model users (81.5 percent). This was an indication that the exploitation of the facilities offered by OLM_SCALE can stimulate students to work in an engaged way. In order to verify this assumption, we encouraged the students who did not use OLM_SCALE to do so by informing them (via e-mail) that using OLM_SCALE might enhance and support their learning. More specifically, these students were encouraged to access OLM_SCALE, to make the information held in it available to their peers (i.e., to open their model), and to choose the peers whose models they would like to inspect. At the end of the seventh week of the empirical study, we repeated the aforementioned log file analysis in order to investigate whether the third week's e-mail intervention had led students toward a more engaged way of working. The analysis revealed that several changes of student type had occurred. More specifically, we registered the students' changes of type. The results are shown in Table 3.

Table 3. Registered Changes of Students' Types (Third versus Seventh Week of the Empirical Study)—Ranked According to Occurrences


As can be seen in Table 3, 60 of the 74 initially nonmodel users (81 percent; see Table 2) responded positively to our suggestion to use OLM_SCALE (Table 3, Groups 2, 3, 5, 9). More specifically, 37 of the 59 initially disengaged nonmodel users (62.7 percent) not only became model users, but also managed to improve their way of working toward a more engaged way (Table 3, Group 2). Thirteen of the 59 initially disengaged nonmodel users (22 percent) responded positively to our suggestion to use OLM_SCALE but continued to work in a disengaged way (Table 3, Group 3). Seven of the 59 initially disengaged nonmodel users (11.8 percent) avoided using OLM_SCALE and continued working in a disengaged way (Table 3, Group 6). Finally, two of the 59 initially disengaged nonmodel users (3.3 percent) avoided using OLM_SCALE but managed to improve their way of working toward a more engaged way (Table 3, Group 11).

The majority (97.5 percent) of the initially model users (80 students; see Table 2) continued to use OLM_SCALE until the end of the empirical study (78 students; see Table 3, Groups 1, 4, 8). The reader may notice that almost all students initially of type E_MU (engaged model users) remained in the same category throughout the duration of the empirical study.

Moreover, we examined the performance differences (pretest versus posttest) for the groups that contained more than 10 students (i.e., four groups: Group 1 (64 students), Group 2 (37 students), Group 3 (12 students), and Group 4 (13 students)). No significant difference was found in the one-way ANOVA between the four groups on pretest performance (see Table 4, pretest column). As can be seen in Table 4, the four groups were initially equivalent in their pretest performance ( $p>0.05$ ).

Table 4. Evaluation of the Pretest and Posttest—between Groups (Group 1, Group 2, Group 3, Group 4) One-Way ANOVA


As can be seen in Table 4 (posttest column), the results of the one-way ANOVA revealed that the mean posttest performances are not equal across the groups and that at least one group mean is significantly different from at least one other group mean ( $p<0.05$ ). In other words, the fact that the significance value of the F test is less than 0.05 suggests that the mean posttest performances of the four groups differ in some way. In order to determine which groups differ and which do not, we performed LSD multiple comparison tests on the four groups (see Table 5).

Table 5. One-Way ANOVA Multiple Comparison between Groups (Group 1, Group 2, Group 3, Group 4)


As can be seen in Table 5, the mean of Group 1 is 0.39 points higher than the mean of Group 2. Since the significance level (0.07) is larger than the required 0.05 alpha level, we conclude that the difference in posttest performance between Group 1 and Group 2 is not significant. This suggests that the students of Group 2, although initially disengaged and nonmodel users, 1) managed to improve their way of working toward a more engaged way, 2) became model users, and 3) improved their posttest performance at the same rate as the students who throughout the run of the empirical study were model users and worked in an engaged way (i.e., the students of Group 1).
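For completeness, here is a sketch of Fisher's LSD procedure as pairwise t-tests sharing the ANOVA's pooled error term. This is our own illustrative implementation; the paper's analysis was presumably run in a statistics package.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def lsd_pairs(groups):
    """Fisher's LSD: pairwise comparisons using the ANOVA's pooled error.

    groups: {label: list of scores}; returns {(a, b): (mean_diff, p_value)}.
    """
    n_total = sum(len(g) for g in groups.values())
    df_error = n_total - len(groups)
    # Pooled within-group mean square error from the one-way ANOVA.
    mse = sum(((np.asarray(g) - np.mean(g)) ** 2).sum()
              for g in groups.values()) / df_error
    results = {}
    for a, b in combinations(groups, 2):
        ga, gb = np.asarray(groups[a]), np.asarray(groups[b])
        diff = ga.mean() - gb.mean()
        se = np.sqrt(mse * (1 / len(ga) + 1 / len(gb)))
        p = 2 * stats.t.sf(abs(diff / se), df_error)
        results[(a, b)] = (round(diff, 2), round(p, 4))
    return results
```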

On the other hand, the means of Group 3 and Group 4 are significantly lower than the means of the other groups. Specifically, the students of Group 3 scored 2.02 and 1.62 points lower than the students of Group 1 and Group 2, respectively. The students of Group 4 scored 1.65 and 1.26 points lower than the students of Group 1 and Group 2, respectively. This suggests that the students of Group 3 and Group 4 remained disengaged while working in the environment and, although they used the open learner model, could not improve their posttest performance at the same rate as the students of Group 1 and Group 2. This could be explained by the fact that these students tried to elaborate on the requested activities either by blind guessing the correct answers or by trying to find the correct answers through OLM_SCALE.

Second Research Question. What are the students' opinions about the effectiveness of OLM_SCALE in supporting the learning process in the context of an introductory Informatics and Telecommunications course?

In order to explore students' opinions toward OLM_SCALE, we examined students' responses to the questionnaire (see Section 4.4.3). Table 6 presents students' responses to the question items relevant to this study.

Table 6. Students' Opinion (Percentage of Students) toward OLM_SCALE and the Usefulness/Usability of the Facilities Provided


A considerable number of students stated that it is useful to monitor their learning progress through OLM_SCALE (Table 6, item Q1), but they seemed reluctant to make the information held in their model visible to their costudents (Table 6, item Q2). Indicative comments for the specific question were "I don't feel comfortable opening the results of my work to my costudents" and "I had no previous knowledge about the covered topics. I was afraid of having poor results when working out the activities." On the other hand, many students liked to monitor their friends'/costudents' learning process through the learner model (Table 6, item Q3). An indicative comment for the specific question is "I feel comfortable having the possibility to check my costudents' learning process, so that I can adjust my own working pace in the SCALE environment." Finally, the majority of the students liked having at their disposal a rich set of indicators concerning not only their own working progress but also that of their friends and costudents (Table 6, items Q4-Q7).

5. Discussion
The SCALE environment is a web-based, adaptive, activity-oriented learning environment that aims to support the learning process by engaging students in working out individual or collaborative activities. While working out the activities, students may have access to feedback tailored to their own preferences and are informed about the correctness or incorrectness of their answers through the assessment process, done either automatically by the system or by the instructor through the SCALE authoring environment. The open learner model maintained in SCALE (OLM_SCALE) includes indicators that aim not only to externalize students' progress and working behavior but also to support their learning process at the awareness and metacognitive levels. Specifically, the open learner model contains and represents information about:

    1. the student's performance level at the activity, subactivity, and question-item levels (metacognitive),

    2. the minimum, maximum, and average performance level, calculated from all the students enrolled in the specific subject matter (metacognitive),

    3. the activity/subactivity status (i.e., whether the student has submitted his/her answer to the activity/subactivity under consideration) (awareness),

    4. the elaboration attempts (awareness), and

    5. received feedback components (awareness).

SCALE was used in the context of the introductory computer science course in the Department of Informatics and Telecommunications at the University of Athens. The empirical study that was conducted showed that the exploitation of OLM_SCALE can guide online students who become disengaged toward reengagement. More specifically, although 38 percent of the participating students initially did not use OLM_SCALE and worked in a rather disengaged way, 63 percent of these students not only managed to improve their way of working toward a more engaged way after they were prompted to use OLM_SCALE, but also improved their posttest performance at the same rate as the students who throughout the run of the empirical study were model users and worked in an engaged way. This suggests that noninvasive interactions based on the principles of the open learner model can help students focus reflection on their learning process and coax them toward reengagement. It seems that including performance and progress charts in the open learner model and encouraging reflection on the students' working behavior can effectively lead disengaged students to work in an engaged way. In the frame of this empirical study, no further tips or encouragement were given to the participating students.

Our results are in line with those of Arroyo et al. [1] in that disengaged students became engaged by accessing the open learner model. But our work goes one step further, showing that the additional tips (which may require skillful log file analysis) and encouragement reported in the work of Arroyo et al. [1] are not necessary; students may think about their interaction and reflect both on their own working behavior and on that of their costudents and friends. The log file analysis performed in the frame of the empirical study presented in Section 4 was conducted in order to investigate whether OLM_SCALE can stimulate students to work in an effective and engaged way. Our results show that simply through the exploitation of the facilities offered by OLM_SCALE, and without performing any log file analysis, disengaged students can be guided into reengagement.

Twenty-two percent of the students who initially did not use OLM_SCALE and worked in a rather disengaged way, but responded positively to our suggestion and did use OLM_SCALE, continued to work in a rather disengaged way. These students improved their posttest performance at a significantly lower rate than the students who either were model users throughout the run of the empirical study or became model users after our suggestion. Moreover, 12 percent of the students who initially did not use OLM_SCALE and worked in a rather disengaged way chose not to use OLM_SCALE even after our suggestion and continued working in a rather disengaged way until the end of the empirical study. Finally, 3 percent of the students who initially did not use OLM_SCALE and worked in a rather disengaged way, and who did not respond positively to our suggestion to use OLM_SCALE, nevertheless managed to work in an engaged way by the end of the empirical study.
A considerable percentage of the students (43 percent) chose by themselves to use OLM_SCALE from the beginning of the study and were found to work in a rather engaged way. Ninety-seven percent of these students continued to use OLM_SCALE and to work in a rather engaged way until the end of the empirical study. The remaining 3 percent of these students, although they continued to use OLM_SCALE, were found to work in a rather disengaged way at the end of the empirical study.
6. Conclusion and Future Work
The work presented in this paper investigates the use of the open learner model as a means for the detection and guidance of online students who become disengaged. This investigation was accomplished through the exploitation of the open learner model (OLM_SCALE) maintained in the SCALE environment. The results of the corresponding empirical study suggest that the exploitation of the facilities offered by OLM_SCALE can help students focus reflection on their learning process and coax them toward reengagement. The participating students expressed their satisfaction with the SCALE environment and, in particular, characterized the open learner model maintained in SCALE as a valuable and supportive means for learning.
A limitation of the current study is that, although the SCALE environment supports the elaboration of collaborative activities, only individual activities and the corresponding indicators were used. We plan to supplement the material with collaborative activities and to extend the open learner model with the corresponding indicators, aiming to promote the cultivation of collaborative learning skills.
In the current study, the indicators used in the open learner model aim to support the learning process at the awareness and metacognitive levels and are presented in a specific form. Open issues/questions that need to be examined include: 1) whether extending the facilities offered by the open learner model with indicators that aim to support the learning process at the guiding level may enhance students' learning, and 2) whether multiple/alternative presentations of the learner model content may result in higher usage of its facilities.

    The authors are with the Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Panepistimiopolis, Ilissia, Athens 15784, Greece. E-mail: {iliasver, lilag, rgog, gregor}@di.uoa.gr.

Manuscript received 8 Feb. 2010; revised 29 Oct. 2010; accepted 25 Jan. 2011; published online 30 Mar. 2011.

For information on obtaining reprints of this article, please send e-mail to: lt@computer.org, and reference IEEECS Log Number TLT-2010-02-0014.

Digital Object Identifier 10.1109/TLT.2011.20.

REFERENCES



Ilias Verginis received the DiplIng degree from the Department of Informatics, Technical University of Vienna, in 1995, and the MSc degree from the National and Kapodistrian University of Athens, Faculty of Nursing, in 1999. He is currently working toward the PhD degree in computer science at the Department of Informatics and Telecommunications, University of Athens, and is a teacher of informatics in secondary education. His current research interests include the areas of adaptive educational hypermedia systems and user modeling techniques. He has four publications in international journals, four papers in proceedings of international conferences, and a contributed chapter in an international book.



Evangelia Gouli received the BA degree in mathematics from the Department of Mathematics, National and Kapodistrian University of Athens, in 1990, the MSc degree in computer studies from the University of Essex, United Kingdom, in 1992, and the PhD degree from the Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, in 2007. She is a mathematics and computer science teacher in secondary education and an adjunct lecturer in the Department of Sciences of Preprimary Education and of Educational Design at the University of the Aegean. Her research work focuses on the areas of didactics of informatics, adaptive learning environments, web-based educational systems, collaborative learning, distance learning, blended learning, computer science education, student modeling, and student assessment. She has six publications in international journals, two contributed chapters in international books, 31 papers in proceedings of international conferences, and more than 120 citations to her research work.



Agoritsa Gogoulou received the BA degree in computer science from the University of Crete in 1992 and the MSc and PhD degrees from the Department of Informatics and Telecommunications of the University of Athens in 2002 and 2008, respectively. She is a teacher of informatics in secondary education and an adjunct lecturer at the Department of Primary Education at the University of Athens. She is a research fellow and member of the Education and Language Technology Group, Department of Informatics and Telecommunications, University of Athens. Her research work focuses on the areas of didactics of informatics, CSCL, adaptive learning environments, web-based education, educational assessment, and learner modeling. She has participated in 10 National and European Union projects. She has seven publications in international journals, 27 papers in proceedings of international conferences, and more than 50 citations to her research work.



Maria Grigoriadou received the BA degree in physics from the University of Athens in 1968 and the DEA and doctorate degrees from the University of Paris VII in 1972 and 1975, respectively. She is now a professor in Education and Language Technology and head of the Education and Language Technology Group in the Department of Informatics and Telecommunications, University of Athens. Her current research interests concern the areas of adaptive learning environments, web-based education, ITS, educational software, natural language processing tools, and computer science education. She has won seven awards, has participated in 30 national and European Union projects, in 15 of which she was the project manager and/or senior scientist, and has four invited talks to her credit. She has served as a program committee member of several international conferences and coorganizer of two international workshops. She has 41 publications in international journals, 11 contributions in international book chapters, 135 in proceedings of international conferences, and more than 800 citations to her research work. She is a member of the AACE, IADIS, EDEN, Kaleidoscope, LeMoRe, and the IEEE.