Issue No. 2 - April-June 2009 (vol. 2)
pp: 157-166
Published by the IEEE Computer Society
Leena Razzaq , Worcester Polytechnic Institute, Worcester
Jozsef Patvarczki , Worcester Polytechnic Institute, Worcester
Shane F. Almeida , Worcester Polytechnic Institute, Worcester
Manasi Vartak , Worcester Polytechnic Institute, Worcester
Mingyu Feng , Worcester Polytechnic Institute, Worcester
Neil T. Heffernan , Worcester Polytechnic Institute, Worcester
Kenneth R. Koedinger , Carnegie Mellon University, Pittsburgh
ABSTRACT
Content creation is a large component of the cost of creating educational software. Estimates are that approximately 200 hours of development time are required for every hour of instruction. We present an authoring tool designed to reduce this cost as it helps to refine and maintain content. The ASSISTment Builder is a tool designed to effectively create, edit, test, and deploy tutor content. The Web-based interface simplifies the process of tutor construction to allow users with little or no programming experience to develop content. We show the effectiveness of our Builder at reducing the cost of content creation to 40 hours for every hour of instruction. We describe new features that work toward supporting the life cycle of ITS content creation through maintaining and improving content as it is being used by students. The Variabilization feature allows the user to reuse tutoring content across similar problems. The Student Comments feature provides a way to maintain and improve content based on feedback from users. The Most Common Wrong Answer feature provides a way to refine remediation based on the users' answers. This paper describes our attempt to support the life cycle of content creation.
Introduction
Although intelligent tutors have been shown to produce significant learning gains in students [ 1], [ 8], few intelligent tutoring systems (ITSs) have become commercially successful. The high cost of building intelligent tutors may contribute to their scarcity, and a significant part of that cost concerns content creation. Murray [ 13] asked why there are not more ITSs and proposed that a major part of the problem was that there were few useful tools to support ITS creation. In 2003, Murray et al. [ 14] reviewed 28 authoring systems for learning technologies. Unfortunately, they found that very few authoring systems were of "release quality," let alone commercially available. Two systems that seem to have "left the lab" are worth mentioning: ASPIRE [ 10], an authoring tool for Constraint-Based Tutors [ 11], and Carnegie Learning's authoring tool for Cognitive Tutors [ 3], which focuses on providing a graphical user interface for writing production rules. Writing production rules is naturally a difficult software engineering task, as flow of control is hard to follow in production systems.
After reviewing many authoring tools, Murray [ 13] said, "A very rough estimate of 300 hours of development time per hour of online instruction is commonly used for the development time of traditional computer-assisted instruction (CAI)." Building intelligent tutoring systems is generally agreed to be much harder; Anderson et al. [ 2] suggested a ratio of development time to instruction time of at least 200:1 to build the Cognitive Tutor.
We hope to lower the skill level needed to author tutoring system content to the point where normal classroom teachers can author their own content. Our approach is to allow users to create example-tracing tutors [ 7] via the Web to reduce the amount of expertise and time it takes to create an intelligent tutor, thus reducing the cost. The goal is to allow both educators and researchers to create tutors without even basic knowledge of how to program a computer. Toward this end, we have developed the ASSISTment System, a Web-based authoring, tutoring, and reporting system.
Worcester Polytechnic Institute (WPI) and Carnegie Mellon University (CMU) were funded by the Office of Naval Research (which funded much of the CMU effort to build Cognitive Tutors) to explore ways to reduce the cost associated with creating cognitive model-based tutors used in tutoring systems [ 7]. In the past, ITS content has been authored by programmers who need PhD-level experience in AI computer programming as well as a background in cognitive psychology. The attempt to build tools that open the door to nonprogrammers led to Cognitive Tutor Authoring Tools (CTATs) [ 1] which the last two authors of this paper had a hand in creating.
The ASSISTment System emerged from CTAT and shares some common features, with the ASSISTment System's main advantage of being completely Web-based.
Over time, tutoring content may grow and become difficult to maintain. The ASSISTment System contains tutoring for over 3,000 problems and is growing every day as teachers and researchers regularly build content. As a result, quality control can become a problem. We attempted to address this problem by adding features to help maintain and refine content as it is being used by students, supporting the life cycle of content creation.
While template-based authoring has been done in the past [ 16], we believe that the ASSISTment System has some novel features. In this paper, we describe the ASSISTment Builder, which is used to author math tutoring content, and present our estimate of content development time per hour of instruction time. We also describe our efforts to incorporate variabilization into the Builder. With our server-based system, we are attempting to support the whole life cycle of content creation, which includes error correction and debugging as well. We present our work toward easing the maintenance, debugging, and refining of content.
2. The ASSISTment System
The ASSISTment System is joint research conducted by Worcester Polytechnic Institute and Carnegie Mellon University and is funded by grants from the US Department of Education, the National Science Foundation, and the Office of Naval Research. The ASSISTment System's goal is to provide cognitive-based assessment of students while providing tutoring content to students.
The ASSISTment System aims to assist students in learning the different skills needed for the Massachusetts Comprehensive Assessment System (MCAS) test (or other state tests) while at the same time assessing student knowledge to provide teachers with fine-grained assessment of their students; it assists while it assesses. The system assists students in learning different skills through the use of scaffolding questions, hints, and messages for incorrect answers (also known as buggy messages) [ 19]. Assessment of student performance is provided to teachers through real-time reports based on statistical analysis. Using the Web-based ASSISTment System is free and only requires registration on our Website; no software needs to be installed. Our system is primarily used by middle- and high-school teachers throughout Massachusetts who are preparing students for the MCAS tests. Currently, we have over 3,000 students and 50 teachers using our system as part of their regular math classes. We have had over 30 teachers use the system to create content.
Cognitive Tutor [ 2] and the ASSISTment System are built for different anticipated classroom use. Cognitive Tutor students are intended to use the tutor two class periods a week, proceeding at their own rate and letting the mastery learning algorithm advance them through the curriculum. Some students will make steady progress, while others will be stuck on early units. There is value in this, in that it allows students to proceed at their own pace. One downside from the teachers' perspective could be that they might want their whole class to do the same material on the same day so they can assess their students. ASSISTments were created with this classroom use in mind: the idea is that teachers use the system once every two weeks as part of their normal classroom instruction, more as a formative assessment system and less as the primary means of assessing students. Cognitive Tutor advances students only after they have mastered all of the skills in a unit, although we know that some teachers use features to automatically advance students to later lessons, for instance, to make sure all the students get some practice on quadratics.
We think that no one system is "the answer" but that the two systems have different strengths and weaknesses. If a student uses the computer less often, there comes a point where the Cognitive Tutor may fall behind what the student actually knows and seem, to teachers and students, to move along too slowly. On the other hand, ASSISTments do not automatically offer mastery learning, so if students struggle, the system does not automatically adjust; it is assumed that the teacher will decide whether a student needs to go back and look at a topic again.
We are attempting to support the full life cycle of content authoring with the tools available in the ASSISTment System. Teachers can create problems with tutoring, map each question to the skills required to solve them, bundle problems together in sequences that students work on, view reports on students' work, and use tools to maintain and refine their content over time.
2.1 Structure of an ASSISTment
Koedinger et al. [ 7] introduced example-tracing tutors, which mimic cognitive tutors but are limited to the scope of a single problem. The ASSISTment System uses a further simplified example-tracing tutor, called an ASSISTment, in which only a linear progression through a problem is supported, which makes content creation easier and more accessible to a general audience.
An ASSISTment consists of a single main problem, or what we call the original question. For any given problem, assistance to students is available either in the form of a hint sequence or scaffolding questions. Hints are messages that provide insights and suggestions for solving a specific problem, and each hint sequence ends with a bottom-out hint that gives the student the answer. Scaffolding questions are designed to address specific skills needed to solve the original question. Students must answer each scaffolding question in order to proceed to the next one. When students finish all of the scaffolding questions, they may be presented with the original question again to finish the problem. Each scaffolding question also has a hint sequence to help students answer the question if they need extra help. Additionally, messages called buggy messages are shown to students if certain anticipated incorrect answers are selected or entered. For problems without scaffolding, a student will remain in a problem until it is answered correctly and can ask for hints, which are presented one at a time. If scaffolding is available, the student will be programmatically advanced to the first scaffolding question in the event of an incorrect answer on the original question.
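To make this structure concrete, the following is a minimal sketch in TypeScript, using illustrative names that are not the system's actual implementation: an original question, an optional linear list of scaffolding questions, ordered hints ending in a bottom-out hint, and buggy messages keyed by anticipated wrong answers.

// Illustrative sketch only; field and type names are assumptions, not the
// ASSISTment System's real data model.
interface Question {
  text: string;
  correctAnswers: string[];           // accepted answers for this question
  hints: string[];                    // shown one at a time; the last is the bottom-out hint
  buggyMessages: Map<string, string>; // anticipated wrong answer -> feedback message
}

interface Assistment {
  originalQuestion: Question;
  scaffolds: Question[];              // linear sequence; empty if the item is hint-only
}

// A student who answers the original question incorrectly is advanced to the
// first scaffold (if any); otherwise they stay on the question and may ask for hints.
function nextStep(item: Assistment, answer: string): "correct" | "scaffold" | "retry" {
  if (item.originalQuestion.correctAnswers.includes(answer)) return "correct";
  return item.scaffolds.length > 0 ? "scaffold" : "retry";
}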
Hints, scaffolds, and buggy messages together help create ASSISTments that are structurally simple but can address complex student behavior. The structure and the supporting interface used to build ASSISTments are simple enough that users with little or no computer science or cognitive psychology background can use them easily. Fig. 1 shows an ASSISTment being built on the left and what the student sees on the right. Content authors can easily enter question text, hints, and buggy messages by clicking on the appropriate field and typing; formatting tools are also provided for bolding, italicizing, etc. Images and animations can also be uploaded in any of these fields.


Fig. 1. The Builder and associated student screen.




The Builder also enables scaffolding within scaffold questions, although this feature has not often been used in our existing content. In the past, the Builder allowed different lines of scaffolds for different wrong answers, but we found that this was seldom used and seemed to complicate the interface, making the tool harder to learn. We removed support for different lines of scaffolding for wrong answers but plan to make it available in an expert mode in the future. In creating an environment that is easy for content creators to use, we realize that there is a trade-off between ease of use and having a more flexible and complicated ASSISTment structure. However, we think that the functionality we do provide is sufficient for the purposes of most content authors.

2.1.1 Skill Mapping
We assume that students may know certain skills, and rather than slowing them down by going through all of the scaffolding first, ASSISTments allow students to try to answer questions without showing every step. This differs from Cognitive Tutors [ 2] and Andes [ 20], which both ask the students to fill in many different steps in a typical problem. We prefer our scaffolding pattern as it means that students get through items that they know faster and spend more time on items they need help on. It is not unusual for a single Cognitive Tutor Algebra word problem to take 10 minutes to solve, while filling in a table of possibly dozens of substeps, including defining a variable, writing an equation, filling in known values, etc. We are sure that, in circumstances where the student does not know these skills, this is very useful. However, if the student already knows most of the steps, this may not be pedagogically useful.

The ASSISTment Builder also supports the mapping of knowledge components, which are organized into sets known as transfer models. We use knowledge components to map certain skills to specific problems to indicate that a problem requires knowledge of that skill. Mapping between skills and problems allows our reporting system to track student knowledge over time using longitudinal data analysis techniques [ 4].

In April 2005, our subject matter expert helped us to make up knowledge components and tag all of the existing eighth grade MCAS items with these knowledge components in a 7-hour-long "coding session." Content authors who are building eighth grade items can then tag their problems in the Builder with one of the knowledge components for eighth grade. Tagging an item with a knowledge component typically takes 2-3 minutes. The cost of building a transfer model can be high initially, but the cost of tagging items is low.

We currently have more than 20 transfer models available in the system with up to 300 knowledge components each. See [ 18] for more information about how we constructed our transfer models. Content authors can map skills to problems and scaffolding questions as they are building content. The Builder will automatically map problems to any skills that its scaffolding questions are marked with.
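As an illustration of the automatic mapping just described, the following hedged sketch (hypothetical type and function names, not the Builder's actual code) computes a problem's effective skill tags as the union of its own tags and the tags on its scaffolding questions.

// Hypothetical sketch of propagating scaffold skill tags up to the problem.
type KnowledgeComponent = string;            // e.g., "Pythagorean-Theorem"

interface TaggedQuestion {
  skills: Set<KnowledgeComponent>;
}

interface TaggedProblem {
  ownSkills: Set<KnowledgeComponent>;
  scaffolds: TaggedQuestion[];
}

function effectiveSkills(problem: TaggedProblem): Set<KnowledgeComponent> {
  const all = new Set(problem.ownSkills);
  for (const scaffold of problem.scaffolds) {
    scaffold.skills.forEach(s => all.add(s));  // scaffold tags also mark the problem
  }
  return all;
}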

2.2 Problem Sequences
Problems can be arranged in problem sequences in the system. The sequence is composed of one or more sections, with each section containing problems or other sections. This recursive structure allows for a rich hierarchy of different types of sections and problems.
The section component, an abstraction for a particular ordering of problems, has been extended to implement our current section types and allows for new types to be added in the future. Currently, our section types include "Linear" (problems or subsections are presented in linear order), "Random" (problems or subsections are presented in a pseudorandom order), and "Choose Condition" (a single problem or subsection is selected pseudorandomly from a list, the others are ignored).
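The following sketch, using assumed type names rather than the system's actual implementation, illustrates the recursive section structure and how the three section types could be flattened into the order a student sees; the small usage example at the end mirrors the experiment layout discussed next.

// A section is either a single problem or a typed container of child sections.
type Section =
  | { kind: "problem"; id: number }
  | { kind: "linear"; children: Section[] }             // children presented in order
  | { kind: "random"; children: Section[] }             // children presented in pseudorandom order
  | { kind: "choose-condition"; children: Section[] };  // one child chosen at random, rest ignored

function flatten(section: Section): number[] {
  switch (section.kind) {
    case "problem":
      return [section.id];
    case "linear":
      return section.children.flatMap(flatten);
    case "random": {
      // Fisher-Yates shuffle of the children before flattening
      const shuffled = [...section.children];
      for (let i = shuffled.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
      }
      return shuffled.flatMap(flatten);
    }
    case "choose-condition": {
      const pick = section.children[Math.floor(Math.random() * section.children.length)];
      return flatten(pick);
    }
  }
}

// A sequence like the experiment in Fig. 2: pretest, one randomly chosen
// condition, and a posttest, all presented in linear order.
const experimentSequence: Section = {
  kind: "linear",
  children: [
    { kind: "problem", id: 1 },                        // pretest
    { kind: "choose-condition", children: [
        { kind: "problem", id: 2 },                    // condition 1
        { kind: "problem", id: 3 } ] },                // condition 2
    { kind: "problem", id: 4 },                        // posttest
  ],
};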
We are interested in using the ASSISTment System to find the best ways to tutor students, and being able to easily build problem sequences helps us run randomized controlled experiments. Fig. 2 shows a problem sequence that has been arranged to run an experiment that compares giving students scaffolding questions to allowing them to ask for hints. (This is similar to an experiment described in [ 17].) Three main sections are presented in linear order: a pretest section, an experiment section, and a posttest section. Within the experiment section, there are two conditions, and students will randomly be presented with one of them.


Fig. 2. A problem sequence arranged to conduct an experiment.




2.3 Teacher Reports
The various reports that are available on students' work are valuable tools for teachers. Teachers can see how their students are doing on individual problems or on complete assignments. They can also see how their students are performing on each skill. These reports allow teachers to determine where students are having difficulties, and they can adapt their instruction to the data found in the reports. For instance, Fig. 3 shows an item report, which shows teachers how students are doing on individual problems. Teachers can tell at a glance which students are asking for too many bottom-out hints (these cells are colored yellow). Teachers can also see what students answered for each question, whether the answer was correct, what percent of the class answered correctly, and each student's percent correct for the whole problem set.
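The following is a hedged sketch, with hypothetical field names, of the kind of aggregation that could lie behind such a report: per-problem percent correct for the class, plus the set of students who reached the bottom-out hint (the cells highlighted in the real report).

// Illustrative report aggregation; not the actual reporting implementation.
interface Attempt {
  studentId: string;
  problemId: number;
  correct: boolean;
  sawBottomOutHint: boolean;
}

function percentCorrect(attempts: Attempt[], problemId: number): number {
  const relevant = attempts.filter(a => a.problemId === problemId);
  if (relevant.length === 0) return 0;
  return (100 * relevant.filter(a => a.correct).length) / relevant.length;
}

function studentsNeedingAttention(attempts: Attempt[]): Set<string> {
  // students who reached the bottom-out hint on any problem
  return new Set(attempts.filter(a => a.sawBottomOutHint).map(a => a.studentId));
}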


Fig. 3. An item report tells teachers how students are doing on individual problems.




2.4 Cost-Effective Content Creation
The ASSISTment Builder's interface, shown in Fig. 1, uses common Web technologies such as HTML and JavaScript, allowing it to be used on most modern browsers. The Builder allows a user to create example-tracing tutors composed of an original question and scaffolding questions. In the next section, we evaluate this approach in terms of usability and decreased creation time of content.

2.4.1 Methodology
We wished to create new 10th grade math tutoring content in addition to our existing eighth grade math content. In September 2006, a group of nine WPI undergraduate students, most of whom had no computer programming experience, began to create 10th grade math content as part of an undergraduate project focused on relating science and technology to society. Their goal was to create as much 10th grade content as possible for this system.

All content was first approved by the project's subject matter expert, an experienced math teacher. We also gave the content authors a one-hour tutorial on using the ASSISTment Builder in which they were trained to create scaffolding questions, hints, and buggy messages; creating images and animations was also demonstrated.

We augmented the Builder to track how long it takes authors to create an ASSISTment. This ignores the time it takes authors to plan the ASSISTment and work with their subject matter expert, as well as any time spent making images and animated GIFs. All of this time can be substantial, so we cannot claim to have tracked all time associated with creating content.

Once we know how many ASSISTments authors have created, we can estimate the amount of content tutoring time created by using the previously established number that students spend about 2 minutes per ASSISTment [ 5]. This number is averaged from data from thousands of students. This will give us a ratio that we can compare against the literature suggesting a 200:1 ratio [ 2].


2.4.2 Results
The nine undergraduate content authors worked on their project over three seven-week terms. During the first term, Term A, the authors created 121 ASSISTments with no assistance from the ASSISTment team other than meeting with their subject matter expert to review the pedagogy. Since we know from prior studies [ 5] that students being tutored by the ASSISTment system spend an average of 2 minutes per ASSISTment, the content authors created 242 minutes, or a little over 4 hours, of content. The log files were analyzed to determine that authors spent 79 minutes (standard deviation = 30 minutes), on average, to create an ASSISTment. In the second seven weeks, Term B, the authors created an additional 115 ASSISTments at a rate of 55 minutes per ASSISTment. This increased rate of creation was statistically significant (p < 0.01), suggesting that the authors were becoming faster at creating content. Looking for other learning curves, we noticed that in Term A, each ASSISTment was edited, on average, over the space of four days, while in Term B, the content authors were editing an ASSISTment over the space of only three days on average. This rate was statistically significantly faster than in Term A. Table 1 shows these results.

Table 1. Experiment Results


It appears that we have created a method for creating intelligent tutoring content much more cost-effectively. We did this by building a tool that reduces both the skills needed to create content and the time needed to do so. This produced a ratio of development time to online instruction time of about 40:1, and the development time decreases slightly as authors spend more time creating content. Determining whether the ASSISTments created by our undergraduate content authors produce significant learning is work in progress. However, our subject matter expert was satisfied that the content created was of good quality.
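As a rough check of this figure under the assumptions above: at the Term A rate of roughly 79 minutes of authoring per ASSISTment and about 2 minutes of student instruction per ASSISTment, the ratio is 79/2, or approximately 40:1; at the Term B rate of 55 minutes, it falls to roughly 27:1.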

3. Variabilization
An important limitation of the example-tracing tutor framework used by the present ASSISTment system is the inability of example-tracing tutors to generalize over similar problems [ 7]. A direct result of this drawback is that a separate example-tracing tutor must be created for each individual problem regardless of similarities in tutoring content. This process is not only tedious and time-consuming, but it also increases the opportunity for errors on the part of content creators. In our present system, about 140 (out of approximately 2,000) commonly used ASSISTments are "morphs": ASSISTments that have been generated by subtly modifying (e.g., changing numerical quantities in) existing ASSISTments.
Pavlik and Anderson [ 15] have reported that learners, particularly beginners, need practice at closely spaced intervals, while McCandliss et al. [ 9] and others claim that beginners benefit from practice on closely related problems. Applying these results to a tutoring system requires a significant body of content addressing the same skill sets. However, the time and effort required to generate morphs has been an important limitation on the amount of content created in the ASSISTment system. Through the addition of the variabilization feature—use of variables to create parameterized templates of ASSISTments—to the ASSISTment builder, we seek to extend our content-building tools to facilitate the reuse of tutoring content across similar problems.
3.1 Implementation
The variabilization feature of the ASSISTment builder enables the creation of parameterized template ASSISTments. Variables are used as parameters in the template ASSISTment and are evaluated while creating instances of the template ASSISTment—ASSISTments where variables and their functions are assigned values.
Our current implementation of variabilization associates variables with individual ASSISTments. Since an ASSISTment is made up of the main problem, scaffold problems, answers, hints, and buggy messages, this implementation allows broad use of variables. Each variable associated with an ASSISTment has a name and one or more values. These values may be numerical or may include text related to the problem statement. Depending on the degree of flexibility required, variable values may also use mathematical functions, such as random number generation or more complex arithmetic.
We also provide the option of defining relationships between variables in two ways. The first way is to define values of variables in terms of variables that have already been defined. If variables x and y have already been defined, then we can define a new variable z to be equal to a function involving x and y, for instance, x*y. The other way to define a relationship is to create what are called sets of variables. Values of variables in a set are picked together when they are evaluated. For example, in a Pythagorean Theorem problem, by placing the lengths of the three sides of a right triangle in a set, we can associate particular value combinations such as 3-4-5 or 5-12-13 to represent the side lengths of right triangles.
We now give an example of the process of generating a template-variabilized ASSISTment and then creating instances of it. The number of possible values for the variables dictates the number of instances of an ASSISTment that can be generated. The first step toward creating a template-variabilized ASSISTment from an existing ASSISTment is determining the possible variables in the problem.
After identifying possible variables, these variables are created through the variables widget and used throughout the ASSISTment. A variable has a unique name and one or more values associated with it. A special syntax of the form %v{variable-name} is used to refer to variables throughout the Builder environment. Functions of these variables can be used in any part of the ASSISTment, including the problem body, by using the syntax %v{function()}. For example, %v{sqrt(a^2 + b^2)} could be used to calculate the length of the hypotenuse of a right triangle. Additional variables, such as delimiters and pronouns, can be introduced to make the problem statement grammatically correct.
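The following sketch illustrates the idea of variabilization using the %v{...} reference syntax described above; the data, helper names, and the simple substitution function are illustrative assumptions (this sketch handles only plain variable references, not function expressions such as sqrt()).

// Hypothetical expansion of a template ASSISTment: values in a set are picked
// together and substituted for %v{...} references in the problem text.
const tripleSet = [
  { a: 3, b: 4, c: 5 },      // one Pythagorean triple "set" value
  { a: 5, b: 12, c: 13 },    // another; a, b, and c are always chosen together
];

const template =
  "The legs of a right triangle are %v{a} units and %v{b} units long. " +
  "How long is the hypotenuse? (Answer: %v{c} units)";

function instantiate(templateText: string, values: Record<string, number>): string {
  // replace each %v{name} reference with the chosen value
  return templateText.replace(/%v\{(\w+)\}/g, (_match, name) => String(values[name]));
}

// One instance is generated per value tuple in the set.
const instances = tripleSet.map(values => instantiate(template, values));
// instances[0] begins: "The legs of a right triangle are 3 units and 4 units long. ..."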
Fig. 4 shows an existing ASSISTment addressing the Pythagorean Theorem with candidates for variables marked. This ASSISTment is commonly encountered by students using our system; it contains 13 hints, eight buggy messages, one main problem, and four scaffold problems.


Fig. 4. A variabilized ASSISTment on the Pythagorean Theorem. Variables have been introduced for various parts of the problem including numerical values and parts of the problem statement.




Generation of variables in the system is simple and follows the existing format of answers and hints. Maintaining consistency with other elements of the Builder tools minimizes the learning time for content creators. In the Pythagorean Theorem ASSISTment (shown in Fig. 4), we can make use of the set feature of variables to make sure that the correct values of the three sides of the triangle are picked together.
Once variables have been generated and introduced into problems, scaffold questions, answers, hints, and buggy messages as required, it is possible to create multiple instances of this ASSISTment using the Create button.
The number of generated ASSISTments depends on the number of values specified in the sets. Our system performs content validation to check that variables have been correctly generated and used, and alerts the content creator to any mistakes. The main advantage of variabilization lies in the fact that once a template-variabilized ASSISTment is created, new ASSISTments, including their scaffolds, answers, hints, and buggy messages, can be generated instantly.
Our preliminary studies of variabilization, comparing the time required to generate five morphs using traditional morphing techniques (e.g., copy and paste) as opposed to generating five morphs using variabilization, indicate that in the former case, the average time required to create one morph is 20.18 (std 9.05) minutes, while in the latter case, this time is 7.76 minutes (std 0.56). Disregarding the ordering effect introduced due to repeated exposure to the same ASSISTment, this indicates a speedup by a factor of 2.6. Further studies are being done to assess the impact that variabilization can have in reducing content creation time. It is important to note that speedup heavily depends on the number of ASSISTments generated since creating one template-variabilized ASSISTment requires 38.8 (std 2.78) minutes, on average, as opposed to 20.18 (std 9.05) minutes for a morphed ASSISTment. However, the variabilized ASSISTment can be used to produce multiple instances of the ASSISTment, while the morph is essentially a single ASSISTment.
4. Refining and Maintaining Content
The ASSISTment project is also interested in easing the maintenance of content in the system. Because of the large number of content developers and teachers creating content and the large amount of content currently stored in the ASSISTment system, maintenance and quality assurance become more difficult.
4.1 Maintaining Content through Student Comments
We have implemented a way to find and correct errors in our content by allowing users to comment on issues. As seen in Fig. 5, students using the system can comment on issues they find as they are solving problems.


Fig. 5. Students can comment on spelling mistakes, math errors, or confusing wording.




Content creators can see a list of comments and address problems that have been pointed out by users.
We assigned an undergraduate student to address the issues found in comments. He reported working on these issues over five weeks, approximately 8 hours a week, scanning through the comments made since the system was implemented. There were a total of 2,453 comments; the student went through 216 of them during this time, and 85 ASSISTments were modified to address issues brought up by students.
This means that about 45 percent of the comments that the undergraduate student reviewed were important enough that he decided to take action. We originally thought that many students would not take commenting seriously and that the percentage of non-actionable comments would be closer to 95 percent, so we were pleased with this relatively high number of useful comments.
Given that the undergraduate student worked 8 hours a week addressing comments over the five-week period, and he estimates that 80 percent of that time was spent editing ASSISTments, this amounts to roughly 32 hours of editing. Since he edited a total of 102 ASSISTments (including problems brought up by professors) over that period, editing an ASSISTment took a little under 20 minutes on average.
Many comments were disregarded because they either repeated earlier comments (ranging from a couple of repeats to 20 occurrences of the same issue) or had nothing to do with the purpose of the commenting system.
During his analysis, the undergraduate student categorized the comments as shown in Table 2.

Table 2. Categorization of Comments on Issues with ASSISTment Content


When starting to edit an ASSISTment because of a comment, it was useful to find other comments related to that problem that might lead to further corrections.
In addition, there was one special type of comment that pointed out visual problems arising from missing HTML code (included under migration issues). These indicated strange text behavior (e.g., words unexpectedly italicized, bolded, or colored) caused by unclosed HTML tags or too many line breaks.
In a nutshell, we believe that this account demonstrates the value of the commenting system in maintaining and improving a large body of content such as that in the ASSISTment system.
4.2 Refining Remediation
There is a large literature on student misconceptions, and ITS developers spend large amounts of time developing buggy libraries [ 21] to address common student errors, which requires expert domain knowledge as well as cognitive science expertise. We were interested in finding areas where students seemed to have common misconceptions that we had inadvertently neglected to address with buggy messages.
If a large percentage of students answered a particular problem with the same incorrect answer, we could determine that a buggy message was needed to address this common misconception. In this way, we are able to refine our buggy messages over time. Fig. 6 shows a screenshot of a feature we constructed to find and show the most common incorrect answers. In this screenshot, it is apparent that the most common incorrect answer is 5, entered by 20 percent of students. We can easily address this by adding a buggy message, as shown in Fig. 6.
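As an illustration of the idea (not the actual implementation behind Fig. 6), the following sketch computes the most common wrong answer for a problem and the percentage of students who gave it.

// Hypothetical aggregation of logged responses for one problem.
function mostCommonWrongAnswer(
  responses: { answer: string; correct: boolean }[]
): { answer: string; percentOfStudents: number } | null {
  const counts = new Map<string, number>();
  for (const r of responses) {
    if (!r.correct) counts.set(r.answer, (counts.get(r.answer) ?? 0) + 1);
  }
  let best: string | null = null;
  let bestCount = 0;
  for (const [answer, count] of counts) {
    if (count > bestCount) { best = answer; bestCount = count; }
  }
  if (best === null) return null;   // no wrong answers logged yet
  return { answer: best, percentOfStudents: (100 * bestCount) / responses.length };
}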


Fig. 6. Common wrong answers for problems are shown to help with remediation.




5. Conclusions and Contributions
In this paper, we have presented a description of our authoring tool that grew out of the CTAT [ 7] authoring tool. When CTAT was initially designed (by the last two authors of this paper as well as Vincent Aleven), it was mainly thought of as a tool to author cognitive rules. CTAT supports the authoring of both example-tracing tutors, which do not require computer programming but are problem-specific, and cognitive tutors, which require AI programming to build a cognitive model of student problem solving but support tutoring across a range of problems. Writing rules is time-intensive. CTAT allowed authors to first demonstrate the actions that the model was supposed to be able to "model-trace" with CTAT's Behavior Recorder. This enabled users to author a tutor by demonstration, without programming.
It turned out that the demonstrations that CTAT recorded sometimes seemed like good tutors on their own, and that we might never have to write rules for the actions. The CTAT example-tracing tutors mimic a cognitive tutor in that they can give buggy messages and hint messages. When funding for ASSISTments was given by the US Department of Education, it made sense to create a new version of a simplified CTAT, which we call the ASSISTment Builder. The Builder is a simplification of CTAT's example-tracing tutors in that it no longer supports the writing of production rules at all and only allows a single directed line of reasoning. Is this a good design decision? We are not sure. There are many things ASSISTments are not good for (such as telling which solution strategy a student used), but the data presented in this paper suggest that they are much easier to build than cognitive tutors. They both take less time to build and require a lower threshold of entry: learning to be a rule-based programmer is very hard, and the skill set is not common, as very few professional programmers have ever written a rule-based program (e.g., in a language like JESS, http://www.jessrules.com/jess/).
What don't we know that we would like to know? It would be nice to do an experiment that pitted the CTAT rule-based tutors against ASSISTments, gave both teams an equal amount of money, and saw which produces better tutoring. By better tutoring, we mean which performs better on a standard pretest/posttest type of analysis to see whether students learn more from either system. We assume that the rule-based cognitive tutor would probably lead to better learning, but it will cost more to get the same amount of content built. How much better does the system have to be to justify the cost? There are several works where researchers built two different systems in order to compare them [ 6], [ 12]. For instance, in Kodaganallur's work [ 6], the researchers built a model-tracing tutor and a constraint-based tutor and expressed the opinion that the constraint-based tutor was easier to build but would not be as effective at increasing learning. However, they did not collect student data to substantiate the claim of better learning from the model-tracing tutor. We need more studies like this to help figure out whether example-tracing tutors/ASSISTments are very different from model-tracing tutors in terms of increasing student learning. The obvious problem is that few researchers have the time to build two different tutoring systems.
There is clearly a trade-off between the complexity of what a tool can express and the amount of time it takes to learn to use the tool. Very simple Web-based answering systems (like www.studyisland.com) sit at the easy-to-use end, in that they only allow simple question-answer drill-type activities; imagine that end is on the left. At the other extreme, on the far right, are Cognitive Tutors, which are very hard to learn to create content for but offer greater flexibility in creating different types of tutors. Where do we think ASSISTments sit on this continuum? We think that ASSISTments is very close to the Web-based drill-type systems but just to the right. We think that CTAT-created example-tracing tutors sit a little bit to the right of ASSISTments but still clearly on the left end of the scale.
Where do other authoring tools sit on this spectrum? Carnegie Learning researchers Blessing et al. are putting a nice GUI onto the tools for creating rule-based tutors [ 3], which probably places their tool just to the left of rule-based tutors. It is much harder to place other authoring tools onto this spectrum, but we guess that ASPIRE [ 10], a system to build constraint-based tutors, sits just to the left of Blessing's tool, based on the assumption that constraint-based tutors are easier to create than cognitive rule-based tutors but still require some programming.
We think that there is a huge open middle ground in this spectrum that might be very productive for others to look at. The difference is what level of programming is required by the user. Maybe it is possible to come up with a programming language simple enough for most authors that gives some reasonable amount of flexibility so that a broader range of tutors could be built that would be better for student learning.
In summary, we think that some of the good aspects of the ASSISTment Builder and associated authoring tools include: 1) they are completely Web-based and simple enough for teachers to create content themselves; 2) they capture some of the aspects of Cognitive Tutors (i.e., buggy messages, hint messages, etc.) but at less cost to the author; and 3) they support the full life cycle of tutor creation and maintenance with tools that show when buggy messages need to be added, tools that gather feedback from users, and, of course, reports for teachers. We make no claim that this is the optimal set of features, only that it represents what we think is a reasonable complexity versus ease-of-use trade-off.

Acknowledgments

The authors would like to thank all of the people associated with creating the ASSISTment system listed at www.ASSISTment.org, including investigators Kenneth Koedinger and Brian Junker at Carnegie Mellon. They would also like to acknowledge funding from the US Department of Education, the National Science Foundation, the US Office of Naval Research, and the Spencer Foundation. All of the opinions expressed in this paper are solely those of the authors and not those of our funding organizations.

    L. Razzaq, J. Patvarczki, S.F. Almeida, M. Vartak, M. Feng, and N.T. Heffernan are with the Computer Science Department, Worcester Polytechnic Institute, 100 Institute Road, Worcester, MA 01609.

    E-mail: {leenar, patvarcz, almeida, mvartak, mfeng, nth}@wpi.edu.

    K.R. Koedinger is with the Carnegie Mellon University, Pittsburgh, PA 15213-3891. E-mail: koedinger@cmu.edu.

Manuscript received 28 Dec. 2008; revised 20 Mar. 2009; accepted 23 Apr. 2009; published online 7 May 2009.

For information on obtaining reprints of this article, please send e-mail to: lt@computer.org, and reference IEEECS Log Number TLTSI-2008-12-0131.

Digital Object Identifier no. 10.1109/TLT.2009.23.

References



Leena Razzaq received the MS degree in computer science from Worcester Polytechnic Institute. She is currently working toward the PhD degree in computer science at the same university. She is interested in intelligent tutoring systems, human-computer interaction, and user modeling. She is a member of the ASSISTment Project under the role of content director where she has been in charge of authoring tutoring content. She spends a large amount of time in middle schools in the Worcester area, helping teachers to use the system in their classrooms, and running randomized controlled studies to determine the best tutoring practices. Her research is focused on studying how different tutoring strategies in intelligent tutoring systems affect students of varying abilities and how to adapt tutoring systems to individual students.



Jozsef Patvarczki received the BS degree from Budapest Tech, Hungary, and the MS degree from the University of Applied Sciences, Germany. He is currently working toward the PhD degree in computer science at Worcester Polytechnic Institute. His primary interests lie in the areas of scalability, load-balancing, networks, intelligent tutoring systems, and grid computing, particularly load modeling and performance tuning. He has also worked in the areas of database layout advisors and Web-based applications. His research also contributed to the infrastructure design of the ASSISTment system and produced a robust 24/7 system.



Shane F. Almeida received the BS degree in computer science and the MS degrees in computer science and electrical and computer engineering from Worcester Polytechnic Institute. He was previously a researcher and a lead developer for the ASSISTment Project. While a graduate student, he received funding from the US National Science Foundation and the US Department of Education. He now develops software for wireless controller technology and products for mobile devices while continuing to consult for the ASSISTment Project.



Manasi Vartak is an undergraduate student at Worcester Polytechnic Institute majoring in computer science and mathematics. She plans to graduate in 2010. She worked on a semester-long independent study project involving the ASSISTment System. As a part of this project, she added a "template" feature to the ASSISTment System which enables the rapid generation of isomorphic content while also increasing flexibility by allowing the system to interface with third party software.



Mingyu Feng received the BS and MS degrees in computer science from Tianjin University, China. She is currently working toward the PhD degree in computer science at Worcester Polytechnic Institute. Her primary interests lie in the areas of intelligent tutoring systems, particularly student modeling and educational data mining. She has also worked in the areas of cognitive modeling and psychometrics. Her research has contributed to the design and evaluation of educational software, has developed computing techniques to address problems in user learning, and has produced basic results on tracking student learning of mathematical skills.



Neil T. Heffernan received the BS degree from Amherst College, Massachusetts. He then taught middle school in inner city Baltimore, Maryland, for two years, after which he received the PhD degree in computer science from Carnegie Mellon University, building intelligent tutoring systems. He currently works with teams of researchers, graduate students, and teachers to build the ASSISTment System, a Web-based intelligent tutor that is used by more than 3,000 students as part of their normal mathematics classes. He has more than 50 peer-reviewed publications and has received more than $8 million in funding on more than a dozen different grants from the US National Science Foundation, the US Department of Education, the US Army, the Spencer Foundation, the Massachusetts Technology Transfer Center, and the US Office of Naval Research.



Kenneth R. Koedinger received the MS degree in computer science from the University of Wisconsin in 1986 and the PhD degree in psychology from Carnegie Mellon University (CMU) in 1990. He is a professor of human-computer interaction and psychology at Carnegie Mellon University. He has authored more than 190 papers and has won more than 16 major grants. He is a cofounder of Carnegie Learning, a company marketing advanced educational technology, and directs the Pittsburgh Science of Learning Center (see LearnLab.org).