Design of Teacher Assistance Tools in an Exploratory Learning Environment for Algebraic Generalization
3.2.1 Rhythm Detection

In the early stages of the development of their thinking about generalization, students may find it difficult to perceive the structure of models in their minds. One strategy that teachers can exploit is observing the way in which students describe patterns, or how they construct them using tiles. Students sometimes have an implicit structure in mind, and this is evident in their actions even if they cannot make it explicit. The teacher can at this point encourage students to focus on this implicit structure, thus providing scaffolding toward students making their structure explicit and understanding how it relates to the generality of a pattern [18].
To support the teacher in this aim, one of the modules of the eGeneraliser is responsible for detecting “rhythm” in the way that students place tiles onto the canvas of the eXpresser. When a repetitive sequence of tile placements is detected in the actions of the student, the system can use this to provide feedback suggesting that the student create a building block from the repeated sequence and construct their model by repeating that building block, rather than placing single tiles one at a time. At the same time, an instance of the corresponding indicator is inferred by the eGeneraliser (this is the seventh indicator in Table 4 of the Appendix, available in the online supplemental material).
To detect these regularities in students' actions, the tile placements made by a student are converted internally by the system into a sequence of positions on the canvas. This sequence is analyzed using two sliding windows, each containing a subsequence of the whole sequence. As the windows traverse the sequence, the similarity of their contents is computed at each pair of positions using a string similarity metric. Using a similarity metric, rather than an exact comparison, allows for small differences between consecutive repetitions of the same structure (for example, to accommodate students' “slips” in using the eXpresser). Windows whose contents recur many times with high similarity in the subsequent parts of the sequence are selected as possible indications of rhythm in the actions of the student. We refer the interested reader to [17] for more details of the process.
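As a minimal sketch of how such sliding-window rhythm detection might work: the window size, similarity threshold, and encoding of placements as relative displacements below are illustrative assumptions, not the actual eGeneraliser implementation, and `difflib.SequenceMatcher` stands in for whichever string similarity metric the system uses.

```python
from difflib import SequenceMatcher

def detect_rhythm(placements, window=3, threshold=0.8, min_repeats=2):
    """Detect repeated subsequences ("rhythm") in a list of (x, y) tile
    placements, given in the order the student placed them."""
    # Encode each step as a relative displacement, so repetitions of the
    # same building block align regardless of where they occur on canvas.
    moves = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(placements, placements[1:])]

    candidates = []
    for start in range(len(moves) - window + 1):
        ref = moves[start:start + window]
        repeats = 0
        pos = start + window
        # Slide a second window over the subsequent moves and count
        # consecutive near-identical repetitions of the reference window.
        while pos + window <= len(moves):
            other = moves[pos:pos + window]
            sim = SequenceMatcher(None, ref, other).ratio()
            if sim >= threshold:  # similarity, not equality: tolerate slips
                repeats += 1
                pos += window
            else:
                break
        if repeats >= min_repeats:
            candidates.append((start, window, repeats))
    return candidates
```

For example, a student building a 4 x 3 rectangle column by column produces the displacement pattern (0, 1), (0, 1), (1, -2) three times over, which the detector reports as a repeated window starting at position 0.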
3.2.2 Apparent Solution on Canvas

One important challenge when providing feedback to students is understanding whether the student's current construction is a valid solution. The answer to this question has ramifications for the system's entire strategy for providing support, and many other considerations depend on it: Is the student's construction general or not? How does it relate to the student's final expression? Are the local expressions correct (i.e., the rules for how to color individual patterns within the overall model)? Have they been combined correctly to obtain the global expression (i.e., the final expression for coloring the whole model)?
Comparing two constructions made up of square tiles is relatively easy, but there are several difficulties in our case. The first stems from the fact that the construction of patterns using the eXpresser is highly exploratory: a given task model can generally be constructed in a large number of different ways, e.g., using large or small patterns, with or without overlaps between patterns, on different parts of the canvas, etc. The second arises from the dynamicity of task solutions. Students are expected to make constructions that “animate,” i.e., that generalize correctly for any values of the task variables; but our studies have shown that many students are content to make just one instance of the model, expecting the system to do the rest of the work for them. Detecting that they have “finished” their construction therefore has two aspects: first, the system needs to detect that they have created a correct solution; and second, it needs to evaluate the generality of that solution.
Regarding the first of these aspects, a module of the eGeneraliser is responsible for detecting constructions that have the same appearance as a valid solution. In this context, having “the same appearance” means looking the same from the point of view of the student, regardless of internal structure or actual tile-by-tile equality. For example, students will perceive a “stepping stones” model with five red tiles as looking the “same” as a model with four red tiles (see Figs. 1a and 1b). Our algorithm constructs a “mask” from each of the known solutions to the task (we recall that known solutions are identified using the Task Design Tool). This mask is projected onto the student's construction to see if there is a match for some value of the task variables. If a match is found, then the indicator Apparent Solution on Canvas is turned “on”; otherwise, it is turned “off” (this is the first indicator in Table 7 of the Appendix, available in the online supplemental material).
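The mask-projection idea can be illustrated with a small sketch. The set-of-coordinates representation, the translation-based matching, and the `stepping_stones` mask below are assumptions for illustration only; the real system matches against the task models defined in the Task Design Tool.

```python
def normalise(tiles):
    """Translate a set of (x, y) tiles so its top-left corner sits at the
    origin, making the comparison independent of where on the canvas the
    student built the model."""
    mx = min(x for x, _ in tiles)
    my = min(y for _, y in tiles)
    return {(x - mx, y - my) for x, y in tiles}

# Hypothetical mask for a "stepping stones"-like task: n red tiles in a
# row, one empty cell between consecutive tiles.
def stepping_stones(n):
    return {(2 * i, 0) for i in range(n)}

def apparent_solution_on_canvas(student_tiles, solution_mask,
                                values=range(1, 20)):
    """Project the solution mask onto the student's construction for each
    sampled value of the task variable; the indicator is "on" if some
    value makes the two look the same from the student's point of view."""
    student = normalise(student_tiles)
    return any(normalise(solution_mask(n)) == student for n in values)
```

A construction of three correctly spaced tiles anywhere on the canvas matches the mask with the task variable set to 3, whereas two wrongly spaced tiles match no value at all.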
Another intelligent module is responsible for evaluating whether the construction built by the student is general, meaning that it is a correct solution for any value of the task variables and not only for some, i.e., that it cannot be “messed up” by changing the variables (cf. the concept of “messing up” in dynamic geometry learning environments [19]). The algorithm used is again based on superposition of masks onto the student's construction for different values of the task variables. The number of values that need to be tested depends on the type of the task, e.g., for a linear task that has just one task variable, only two values need to be tested. If a match is found for all the sampled values, the student's construction qualifies as “unmessable.” If the student checks the generality of an “unmessable” model by using the animation features of the eXpresser to explore several possible values for the task variables, the indicator Unmessable Model Animated is turned “on”; otherwise, it remains “off” (this is the fourth indicator in Table 7 of the Appendix, available in the online supplemental material).
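A minimal sketch of this generality check, under the assumption (hypothetical, standing in for eXpresser's internal model representation) that both the student's construction and the solution mask can be queried as callables yielding the tile set produced for a given value of the task variable:

```python
def normalise(tiles):
    """Translate a tile set so its top-left corner is at the origin."""
    mx = min(x for x, _ in tiles)
    my = min(y for _, y in tiles)
    return {(x - mx, y - my) for x, y in tiles}

def is_unmessable(student_model, solution_mask, values):
    """The student's construction qualifies as "unmessable" if it still
    matches the solution mask for every sampled value of the task
    variable, i.e., changing the variable cannot break it."""
    return all(normalise(student_model(n)) == normalise(solution_mask(n))
               for n in values)

# Illustrative models for a linear task (a row of n tiles), where
# sampling two variable values suffices:
general_row = lambda n: {(i, 0) for i in range(n)}    # properly generalized
hardcoded_row = lambda n: {(i, 0) for i in range(4)}  # one fixed instance
```

The hard-coded instance matches the mask only when the variable happens to equal 4, so sampling a second value exposes it as “messable,” while the general model matches for every sampled value.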
1. define and view “higher level,” derived indicators from the current set of “low-level” indicators, for example, occurrences of sequences of indicators;
2. see how many times a particular indicator has occurred, for example, the level of achievement of each task goal over the whole class, so that the teacher can reinforce in the next lesson those aspects of the task that students found difficult.
S. Gutierrez-Santos, D. Pearce-Lazard, and A. Poulovassilis are with the London Knowledge Lab, Birkbeck, University of London, 23-29 Emerald St., WC1N 3QS London, United Kingdom.
E-mail: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org.
E. Geraniou is with the Institute of Education, University of London, 20 Bedford Way, WC1H 0AL London, United Kingdom.
Manuscript received 4 Apr. 2011; revised 8 Aug. 2011; accepted 30 Sept. 2011; published online 11 Sept. 2012.
For information on obtaining reprints of this article, please send e-mail to: email@example.com, and reference IEEECS Log Number TLT-2011-04-0044.
Digital Object Identifier no. 10.1109/TLT.2012.19.
1. All students' names (both screenshot and text) have been changed.
Sergio Gutierrez-Santos received the telecommunications engineering degree in 2002 and the PhD degree in computer science in 2007 from University Carlos III of Madrid. Since then, he has been with the London Knowledge Lab. His research interests center on artificial intelligence (especially emergent technologies) and its application to problems in teaching and learning.
Eirini Geraniou received the MSc degree in mathematics from the University of Crete and the PhD degree in mathematics education from the University of Warwick. She is a lecturer at the Institute of Education, University of London. Her research interests include educational design of material and activities for mathematics, teaching and learning mathematics with ICT, algebraic ways of thinking, students' motivation in learning mathematics, and advanced mathematical thinking.
Darren Pearce-Lazard received the MA degree in mathematics from Cambridge University and the PhD degree in computer science from the University of Sussex. He is interested in collaborative learning and task state-space navigation, especially in combination with sophisticated approaches to collaboration. Currently, he is a visiting research fellow at the Ideas Lab at the University of Sussex.
Alexandra Poulovassilis received the MA degree in mathematics from Cambridge University and the MSc and PhD degrees in computer science from Birkbeck. Her research interests center on information management, integration, and personalization. Since 2003, she has been the codirector of the London Knowledge Lab, a multidisciplinary research institution which aims to explore the future of knowledge and learning with digital technologies.