Issue No.02 - April-June (2011 vol.4)
pp: 138-148
Published by the IEEE Computer Society
T Augustin , Inst. of Biomedicine & Health Sci., Joanneum Res., Graz, Austria
C Hockemeyer , Dept. of Psychol., Univ. of Graz, Graz, Austria
M Kickmeier-Rust , Dept. of Psychol., Univ. of Graz, Graz, Austria
D Albert , Dept. of Psychol., Univ. of Graz, Graz, Austria
ABSTRACT
The assessment of knowledge and learning progress in the context of game-based learning requires novel, noninvasive, and embedded approaches. In the present paper, we introduce a mathematical framework which relates the (problem solution) behavior of a learner in the game context to the learner's available and lacking competencies. We argue that a problem situation and its status at a certain point in time can be described by a set of game props and their current properties or states. In the course of the game, the learner can perform different actions to modify the props and, consequently, change the problem situation. Each action is evaluated with respect to its correctness or appropriateness for accomplishing a given task which, in turn, enables conclusions about the competence state of the learner. This assessment procedure serves as the basis for adaptive interventions, for instance, by providing the learner with guidance or feedback.
Introduction
It is quite evident that digital learning games are an emerging educational technology. The idea is to harness the rich and appealing nature of modern computer games and their immense intrinsic motivational potential for educational purposes [ 1]. The vision that at least a small portion of the time spent playing computer games can be used for learning is fascinating and desirable and, consequently, there is a rapidly increasing body of research and development in this field. The core strength of game-based learning is that such games—in a very natural way—are capable of making learning and knowledge appealing and important to the learner. Moreover, learning games serve the educational needs of the “Nintendo generation” and the “digital natives,” who grew up with “twitch speed” computer games, MTV, action movies, and the Internet [ 2]. Authors like Mark Prensky argue that this context has emphasized certain cognitive aspects and deemphasized others and that, as a result, the demands on education have changed [ 2]. Although there is an ongoing debate about such ideas, computer games can be considered powerful tools that children and adolescents are familiar with.
We nevertheless have to note that the educational potential of computer games depends on the learner's motivation to play and, therefore, also to learn. A number of authors have done pioneering work on selecting game genres and game design for successful learning games (e.g., [ 3], [ 4], [ 5]). Additionally, it is of vital importance to tailor the concrete game play and gaming experience to the individual learners and to provide them with didactically meaningful and individualized guidance and support. Moreover, it is necessary to find an appropriate balance between gaming and learning and, maybe more importantly, between the challenges posed by the game and the abilities of the learner. The underlying idea is the following: If the game play is too easy and the player gets bored, or if the challenges are too difficult to accomplish, the player will soon quit playing. This idea is common to most entertainment games. In a learning game, however, we need to establish the same principle from a learning perspective. Ideally, the increase in challenge should match individual abilities and individual learning progress. Essentially, this idea matches the foundations of adaptive or intelligent tutoring systems (cf. [ 6]).
An intelligent adaptation to preferences, motivational and emotional states, to learning progress, learning objectives and interests, and, above all, to the learner's abilities is crucial for educational effectiveness and for retaining the user's motivation to play and to learn. This adaptation is not trivial, however. It requires a subtle balance between the challenges posed by the game and the abilities of the learner. Unfortunately, such a balance is very fragile and, due to the coexistence of gaming and learning aspects, likely more complex than adaptation and personalization in conventional educational settings.
Research on adaptive and intelligent tutoring has largely focused on adaptive presentation and adaptive navigation support [ 7]. The data for adaptation are most often gathered by querying the learner, asking for preferences, or administering typical test items to assess the user's knowledge and learning progress. This strategy is not feasible in an immersive learning game, however. In contrast to conventional adaptive tutoring and knowledge testing, adaptive knowledge assessment within such games is massively restricted by the game play, the game's narrative, and the game's progress. Typical methods of knowledge assessment would abruptly and seriously disrupt immersion and, consequently, the gaming and learning process. What is required is an assessment procedure that is strictly embedded in the game's story and covered by game play activities. In some sense, this is similar to work in the area of embedded assessment and student tracking [ 8]. Those methods and theories, however, are predominantly driven by human (i.e., the teacher's) interpretation and evaluation of knowledge, understanding, and learning progress. In the present paper, we introduce a method for a “machine-driven” assessment of knowledge and learning progress in a noninvasive and embedded way. The core idea is to avoid any queries or interruptions and instead to monitor and interpret the learner's behavior in gaming situations. Subsequently, psychopedagogical interventions (e.g., providing the learner with appropriate guidance, feedback, encouragement, or hints) can be triggered on the basis of probabilistic conclusions drawn by the system.
The present approach has been developed in two projects focusing on game-based learning: ELEKTRA ( www.elektra-project.org) and 80Days ( www.eightydays.eu). Both projects have the ambitious and visionary goal to utilize the advantages of computer games and their design fundamentals for educational purposes, and to address the disadvantages of game-based learning as far as possible. An interdisciplinary group of European partners contributes to the development of a sound methodology for designing educational games and to the development of a comprehensive game demonstrator based on a state-of-the-art 3D adventure game. To illustrate our approach, we refer to a concrete example from the ELEKTRA demonstrator game, which is a typical first person adventure game (cf. Fig. 1). The aim is to save Lisa and her uncle Leo, a researcher, who have been kidnapped by the evil Black Galileans. During this journey, the learner needs to acquire specific concepts from an eighth grade physics course. Learning occurs in different ways, ranging from hearing or reading to freely experimenting. After finding a magic hour glass, the learner is accompanied by the ghost of Galileo Galilei ( Fig. 1a), who is the learner's (hidden) teacher.


Fig. 1. Screenshots from the ELEKTRA demonstrator game on physics. (a) The ghost of Galileo Galilei, who is the learner's (hidden) teacher; (b) the slope device.




To learn about the straight propagation of light, for instance, the learner experiments with a torch and blinds on a table in the basement of uncle Leo's villa, or with a device that lets balls of different materials roll down a slope ( Fig. 1b). These experiments are important for understanding that light propagates in a straight line, as opposed to the curved trajectories of other objects. This, in turn, is important for the game play, because to continue in the game, the learner has to unlock a door by exactly hitting a small light sensor with a laser beam. The experimenting is accompanied and observed by Galileo who, if necessary, can also provide feedback or guidance. The goal of using the slope device is to make the various balls (of wood, plastic, hollow or solid iron, etc.) fall into a given hole. As shown in Fig. 1b, the learner can adjust a magnet and a fan to alter the trajectories of the balls. In contrast, a laser beam cannot be influenced by such external forces.
The important point is that by continuously interpreting the learner's actions in terms of his or her knowledge, the system gathers information about the learner's learning progress. If, as an example, the learner continuously tries to affect the trajectory of a plastic ball by increasing the magnetic force, the system eventually concludes that the learner lacks the knowledge that plastic is not affected by magnetic force. If, at the same time, the fan is adjusted properly, the system can also conclude that the related knowledge is available. Since a single observation usually cannot provide reliable evidence, we rely on a probabilistic approach: with each observation, we update the probabilities that certain knowledge is available.
Although this is a rather simple example, it illustrates the underlying ideas of our assessment method. The following parts of the manuscript will specify these ideas in more detail, including the basic definitions and the mathematical formalism of the assessment procedure. Subsequently, we will illustrate the approach with an exact walk-through, using the well-known and suitably limited problem space of the Tower of Hanoi. Finally, we will conclude with a discussion of more realistic examples and give an outlook on future work.
2. Basic Definitions
Imagine a certain situation in a digital learning game with an educational intention, for example the aforementioned slope device or the experimenting with a torch and blinds. To describe such a virtual scenario in a formal way, let $O$ be the set of (educationally relevant) and manipulable objects, no matter if torch, fan, slider control, or even space ship. These objects can be used to define all possible gaming situations a user can be confronted with. For simplicity of notation, we assume that $O =\{o_1, \ldots, o_N\}$ . Furthermore, for $1\le n\le N$ , let ${\cal S}_n$ be the set of all possible “states” of the $n$ th object $o_n$ . These states can be of quite different character, like for instance, a multidimensional vector describing position and orientation of an object in the (virtual) space, or simply two values (“on” and “off”) for a switch. This allows us to describe each gaming situation as an $N$ -tuple $(c_1,\ldots,c_N)$ , where $c_i\in {\cal S}_i$ for all $1\le i\le N$ . For the sake of simplicity, let us agree on the following convention: If $c_n=\emptyset$ , then the $n$ th object does not appear in the situation. If, on the other hand, $c_n\ne \emptyset$ , then the $n$ th object $o_n$ appears in the gaming situation and can be manipulated by the user. It is important to note here that not every $N$ -tuple in ${\cal S}_1\times \cdots \times {\cal S}_N$ represents a meaningful gaming situation. If, for instance, the aim of the game is to teach optics (e.g., by conducting virtual experiments like, for instance, in the ELEKTRA demonstrator game), then it would be pointless to present two blinds, a mounting rail, and a screen, but no torch. In order to exclude such meaningless situations, let ${\cal S}\subset {\cal S}_1\times \cdots \times {\cal S}_N$ be the set of all “meaningful” gaming situations.
Furthermore, a problem situation or task is assumed to be a tuple $(i,{\cal T})\in {\cal S}\times 2^{{\cal S}}$ with the following interpretations in mind: 1) $i\in {\cal S}$ is the initial state a user is confronted with; and 2) ${\cal T}\subset {\cal S}$ is the set of solution states. If one of the solution states in ${\cal T}$ is accomplished by the user, then the task is completed successfully. Finally, the set of different problem situations (tasks) is denoted by ${\cal P}$ .
To master a certain problem situation, a person can perform different actions to modify the gaming situation. In the ELEKTRA blinds problem, for example, the learner might turn on the torch, vary the torch's orientation, or move a blind. With the slope device, they can change the position of the sliders for the strength of the fan or the magnet (see Fig. 1b). Additionally, we assume that any problem can be solved in a finite number of steps. Note that, within the context of game-based learning, this assumption is quite plausible since, in general, a person can only proceed to the next level of difficulty if all the problems of a given level have been solved successfully. This general idea is formalized as follows: Let ${\cal A}$ be a nonempty set of actions a user can perform. The element $q\in {\cal A}$ stands for the action that quits the current task. Furthermore, let $R\subset {\cal S}\times {\cal A}$ denote a “compatibility relation” with the following interpretation in mind: $(s,a)\in R$ if and only if action $a$ is performable in the gaming situation $s$ . Finally, let $f:R\rightarrow {\cal S}$ be a “transition function” in the following sense: If a user performs action $a$ in the gaming situation $s$ , then the gaming situation $f(s,a)$ results. In the following, let us consider a fixed problem situation $(i,{\cal T})\in {\cal P}$ . Then, a finite sequence


$$X(i,{\cal T}):=\langle (s_1,a_1),(s_2,a_2),\ldots,(s_n,a_n)\rangle$$

is called a solution process for problem $(i,{\cal T})$ if and only if the following conditions are satisfied:

    1. $s_1=i$ ;

    2. $(s_t,a_t)\in R$ for all $t=1,\ldots,n$ ;

    3. $f(s_t,a_t)=s_{t+1}$ for all $t=1,\ldots,n-1$ ;

    4. $s_t\not\in {\cal T}$ , for all $1\le t\le n$ ;

    5. $a_t\ne q$ , for all $1\le t\le n-1$ .

Note that, if $f(s_n,a_n)\in {\cal T}$ , then the task is completed successfully. If, on the other hand, $a_n=q$ , then the user quits the task before completion.
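The definitions above translate directly into simple data structures. The following is a minimal sketch, assuming an illustrative Python encoding in which a gaming situation is a tuple of object states (with None standing in for the empty state), actions are strings, and the compatibility relation $R$ and transition function $f$ are supplied as callables; none of these identifiers stem from the original formalism.

```python
from typing import Callable, Optional, Sequence, Set, Tuple

Situation = Tuple[Optional[str], ...]      # one entry per object o_1, ..., o_N (None = absent)
Action = str
QUIT: Action = "quit"                      # the distinguished action q

def is_solution_process(
    steps: Sequence[Tuple[Situation, Action]],
    initial: Situation,
    targets: Set[Situation],                               # the solution set T
    performable: Callable[[Situation, Action], bool],      # (s, a) in R?
    transition: Callable[[Situation, Action], Situation],  # the transition function f
) -> bool:
    """Check conditions 1-5 for a solution process X(i, T)."""
    if not steps or steps[0][0] != initial:                # condition 1: s_1 = i
        return False
    for t, (s, a) in enumerate(steps):
        if not performable(s, a):                          # condition 2: (s_t, a_t) in R
            return False
        if s in targets:                                   # condition 4: s_t is not in T
            return False
        if a == QUIT and t < len(steps) - 1:               # condition 5: only a_n may be q
            return False
        if t < len(steps) - 1 and transition(s, a) != steps[t + 1][0]:
            return False                                   # condition 3: f(s_t, a_t) = s_{t+1}
    return True
```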
3. The Updating Rule
To interpret the learner's actions in terms of his or her knowledge, we have to link the observed actions to the underlying skills. There are many definitions of the terms “knowledge,” “competence,” or “skill.” As a working definition, we refer to “skill” as an atomic and well-defined entity of knowledge or ability, which ideally can be formalized as a proposition of two related concepts. This view is in accordance with definitions articulated by [ 9] and [ 10]. Although it is a rather low-level and limited view, it is suitable, even necessary, for the aim of a fine-grained knowledge assessment. This, in turn, is a prerequisite for providing the learner with fine-grained guidance and feedback.
To proceed successfully through the game, we assume that a set $E$ of skills (in the following also referred to as elementary competencies) is required. Following the assumptions of Knowledge Space Theory (e.g., [ 11], [ 12], or [ 13]) and Competence-based Knowledge Space Theory (e.g., [ 9], [ 14], [ 15], [ 16], or [ 17]), we assume prerequisite relations between the skills in $E$ . This means that some skills are prerequisites to other skills. To give a very simple example, to “know that the distance between blind aperture and light source is in inverse proportion to the diameter of the resulting light spot,” it is necessary to “know what light is” or to “know what an inverse proportion is.” For a more realistic example, which is adopted from the 80Days project on game-based learning in geography, see Fig. 5b.
According to these relationships, not every collection of skills provides a well-defined “competence state.” For this reason, let ${\cal C}$ be a family of subsets of $E$ , containing at least $E$ , and the empty set $\emptyset$ . The elements in ${\cal C}$ are referred to as competence states, and the tuple $(E,{\cal C})$ is denoted as competence structure (cf. [ 17], [ 18]).
According to Competence-based Knowledge Space Theory, we assume that at a certain point in time, any person is in a specific, yet not directly observable, competence state in ${\cal C}$ . The present approach attempts to specify the learner's competence state by interpreting his or her actions. Formally, we assume that for any problem solution process $X(i,{\cal T})$ , there exists a conditional probability distribution


$$L(\bullet \vert X(i,{\cal T})):{\cal C}\rightarrow [0, 1],$$

with the following interpretation in mind: $L(C\vert X(i,{\cal T}))$ is the conditional probability that a person has all the elementary competencies in $C$ but none of the competencies in $E\setminus C$ , given that the solution process $X(i,{\cal T})$ has been observed.
In the following, let us consider a fixed problem situation $(i,{\cal T})$ , and an associated problem solution process $X(i,{\cal T})$ . Then, without loss of generality, we can assume that, for a fixed number $n\in \mathbb{N}$ ,


$$X(i,{\cal T}) =\langle (s_1,a_1),(s_2,a_2),\ldots,(s_n,a_n)\rangle,$$

with $f(s_n,a_n)\not\in {\cal T}$ . Furthermore, let us assume that, subsequent to action $a_n$ , the user performs action $a_{n+1}$ in the gaming situation $s_{n+1}=f(s_n,a_n)$ . If the resulting problem solution process is denoted as $X^{\prime }(i,{\cal T})$ ,


$$X^{\prime }(i,{\cal T}) :=\langle (s_1,a_1),(s_2,a_2),\ldots,(s_n,a_n),(s_{n+1},a_{n+1})\rangle,$$

then we are confronted with the problem of computing the conditional probability distribution $L(\bullet \vert X^{\prime }(i,{\cal T}))$ from the given probability distribution $L(\bullet \vert X(i,{\cal T}))$ and the observation that the user performed action $a_{n+1}$ in situation $s_{n+1}$ .
To solve this problem, the multiplicative updating rule by Falmagne and Doignon [ 19] is adapted to our needs:

    1. If action $a_{n+1}$ in situation $s_{n+1}$ provides evidence in favor of the elementary competency $c$ , then increase the probability of all competence states in ${\cal C}$ containing $c$ , and decrease the probability of all competence states not containing $c$ .

    2. If action $a_{n+1}$ in situation $s_{n+1}$ provides evidence against the elementary competency $c$ , then decrease the probability of all competence states in ${\cal C}$ containing $c$ , and increase the probability of all competence states not containing $c$ .

This means that we have to assign to each possible action an interpretation with respect to required and/or missing skills, i.e., if a learner performs a certain action we can assume that s/he has certain required skills while some others are apparently missing (otherwise we would have observed a better suited action). In the ELEKTRA slope device, e.g., observing a learner selecting full fan speed for a plastic ball rolling down the slope would show that this learner knows that the plastic ball's flight is influenced by the fan. However, s/he apparently does not know that a plastic ball is very light and, therefore, a much lower fan speed is needed.
To formalize this general idea, let $E(i,{\cal T})\subset E$ 1 denote those elementary competencies in $E$ which are necessary to understand and solve the problem situation $(i,{\cal T})$ . Furthermore, let us assume two “skill assignment” functions $\sigma^{(i,{\cal T})}:R\rightarrow 2^{E(i,{\cal T})}$ and $\rho^{(i,{\cal T})}:R\rightarrow 2^{E(i,{\cal T})}$ with the following interpretations in mind: If action $a$ is performed in the gaming situation $s$ , then we can surmise that the user has all the elementary competencies in $\sigma^{(i,{\cal T})}(s,a)\equiv \sigma (s,a)$ (“supported skills”), but none of the competencies in $\rho^{(i,{\cal T})}(s,a)\equiv \rho (s,a)$ (“unsupported skills”). Furthermore, to compute $L(\bullet \vert X^{\prime }(i,{\cal T}))$ from the given probability distribution $L(\bullet \vert X(i,{\cal T}))$ , let us fix two parameters $\zeta_0$ and $\zeta_1$ with $\zeta_0>1$ and $\zeta_1>1$ . Then, for a competence state $C\in {\cal C}$ , let


$$L(C\vert X^{\prime }(i,{\cal T})):={\zeta^{(i,{\cal T})}(C)L(C\vert X(i,{\cal T}))\over \sum_{C^{\prime }\in {\cal C}}\zeta^{(i,{\cal T})}(C^{\prime })L(C^{\prime }\vert X(i,{\cal T}))},$$

(1)


with the parameter function $\zeta^{(i,{\cal T})}(C)\equiv \zeta (C)$ defined as


$$\zeta^{(i,{\cal T})}(C):=\prod_{c\in C\cap \sigma (s_{n+1},a_{n+1})}\zeta_0 \prod_{c\in (E(i,{\cal T})\setminus C)\cap \rho (s_{n+1},a_{n+1})}\zeta_1.$$

(2)


It is important to note here that $\zeta^{(i,{\cal T})}(C)$ is set to 1 if $C\cap \sigma (s_{n+1},a_{n+1})=\emptyset$ and $(E(i,{\cal T})\setminus C)\cap \rho (s_{n+1},a_{n+1})=\emptyset$ . Furthermore, if $X^{\prime }(i,{\cal T})=\langle (s_1,a_1)\rangle$ , then we postulate that


$$L(C\vert X^{\prime }(i,{\cal T})):={\zeta^{(i,{\cal T})}(C)L(C\vert (i,{\cal T}))\over \sum_{C^{\prime }\in {\cal C}}\zeta^{(i,{\cal T})}(C^{\prime })L(C^{\prime }\vert (i,{\cal T}))},$$

(3)


where $L(\bullet \vert (i,{\cal T}))$ is the initial distribution at the beginning of the task, that is, before action $a_1$ is observed. Note that, at the beginning of the game (i.e., prior to the first problem situation), either the initial distribution is estimated from an entry test, or the competence states are assumed to be uniformly distributed:


$$L(C\vert (i,{\cal T}))={1\over \vert {\cal C}\vert },\quad \forall C\in {\cal C}.$$

Alternatively, let us assume that the user has already solved some of the problem situations in ${\cal P}$ . If the final problem solution process is denoted as $X(i,{\cal T})$ , then the conditional probability distribution $L(\bullet \vert X(i,{\cal T}))$ is used as initial distribution for the next problem situation $(i^{\prime },{\cal T}^{\prime })$ .
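As an illustration of how the update in (1)-(3) might be realized, the following sketch assumes that competence states are represented as frozensets of skill labels and that the distribution $L$ is a dictionary mapping each state to its probability; the skill assignments $\sigma(s,a)$ and $\rho(s,a)$ are passed in as plain sets. All identifiers are illustrative, not taken from the projects' implementations.

```python
from typing import Dict, FrozenSet, Set

State = FrozenSet[str]

def update(L: Dict[State, float], E_iT: Set[str], supported: Set[str],
           unsupported: Set[str], zeta0: float = 2.0, zeta1: float = 2.0) -> Dict[State, float]:
    """One update step according to (1), with zeta(C) defined as in (2)."""
    def zeta(C: State) -> float:
        # zeta0 for every supported skill the state contains, zeta1 for every
        # unsupported skill it lacks (within the relevant skill set E(i, T)).
        return zeta0 ** len(C & supported) * zeta1 ** len((E_iT - C) & unsupported)
    weighted = {C: zeta(C) * p for C, p in L.items()}
    norm = sum(weighted.values())                 # the denominator of (1)
    return {C: w / norm for C, w in weighted.items()}

# Example: a uniform initial distribution over four competence states, updated
# after an action whose observed behavior supports the skills c2 and c3.
states = [frozenset(), frozenset({"c2"}), frozenset({"c2", "c3"}),
          frozenset({"c2", "c3", "c4"})]
L0 = {C: 1 / len(states) for C in states}
L1 = update(L0, E_iT={"c2", "c3"}, supported={"c2", "c3"}, unsupported=set())
```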
4. Partitioning Competence Structures
In realistic applications, we are confronted with the problem that, in general, the number of competence states is huge and that, consequently, the probability updates (in the sense of (1) and (3)) cannot be realized in real time, as is necessary to keep the game experience undisturbed. If, as an example, we have a relatively small set $E$ of 50 skills, then the resulting competence structure $(E,{\cal C})$ can have up to


$$\vert 2^E\vert = 2^{50}=1.1259 \times 10^{15}$$

different competence states. If, in order to consider a more realistic example, the number of elementary competencies in $E$ is doubled, then there can be up to


$$\vert 2^E\vert = 2^{100}=1.267651 \times 10^{30}$$

competence states, which is far beyond the number of stars in the observable universe (about 3 to $7\times 10^{22}$ ). In order to reduce the computational demand of the updating process, the present paper proposes a modified update by restricting the underlying competence structure $(E,{\cal C})$ to a smaller set of elementary competencies. To this end, let us consider the set $E(i,{\cal T})$ of those elementary competencies in $E$ which are necessary to understand and solve the problem situation $(i,{\cal T})$ (cf. Section 3). Then, a trivial consideration shows that the competence states in ${\cal C}$ can be restricted to the elementary competencies in $E(i,{\cal T})$ : If we define


$${\cal C}(i,{\cal T}):=\{C\cap E(i,{\cal T}):C\in {\cal C}\},$$

then the tuple $(E(i,{\cal T}),{\cal C}(i,{\cal T}))$ is referred to as $(i,{\cal T})$ -restricted competence structure. Furthermore, in order to restrict the conditional probability distributions $L(\bullet \vert X(i,{\cal T}))$ and $L(\bullet \vert (i,{\cal T}))$ to the $(i,{\cal T})$ -restricted competence structure ${\cal C}(i,{\cal T})$ , let


$$[C]:=\{C^{\prime }\in {\cal C}:C^{\prime }\cap E(i,{\cal T}) = C\cap E(i,{\cal T})\}, \quad C\in {\cal C}.$$

Then, a mathematical argument shows that


$$L^{(r)}(C\cap E(i,{\cal T})\vert X(i,{\cal T})):=\sum_{C^{\prime }\in [C]}L(C^{\prime }\vert X(i,{\cal T})),\quad C\in {\cal C},$$

(4)


and


$$L^{(r)}(C\cap E(i,{\cal T})\vert (i,{\cal T})):=\sum_{C^{\prime }\in [C]}L(C^{\prime }\vert (i,{\cal T})),\quad C\in {\cal C},$$

(5)


are probability distributions on ${\cal C}(i,{\cal T})$ . For a formal proof, see [ 20].
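The restriction in (4) and (5) can be sketched as follows, under the same frozenset representation as before: the probability mass of all competence states that agree on $E(i,{\cal T})$ is collected on a single restricted state.

```python
from collections import defaultdict
from typing import Dict, FrozenSet, Set

State = FrozenSet[str]

def restrict(L: Dict[State, float], E_iT: Set[str]) -> Dict[State, float]:
    """Project a distribution on C onto the (i, T)-restricted structure C(i, T)."""
    L_r: Dict[State, float] = defaultdict(float)
    for C, p in L.items():
        L_r[frozenset(C & E_iT)] += p   # sum over the equivalence class [C]
    return dict(L_r)

# Example: restricting a uniform distribution over the four states
# {}, {c2}, {c2, c3}, {c2, c3, c4} to E(i, T) = {c2, c3} merges the two largest states.
states = [frozenset(), frozenset({"c2"}), frozenset({"c2", "c3"}),
          frozenset({"c2", "c3", "c4"})]
L0 = {C: 0.25 for C in states}
L0_r = restrict(L0, {"c2", "c3"})   # mass 0.25, 0.25, and 0.5, respectively
```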
Furthermore, by applying the multiplicative updating rule (1) to the restricted probability distribution $L^{(r)}(\bullet \vert X(i,{\cal T}))$ , we obtain that


$$\eqalign{L^{(r)}(K\vert X^{\prime }(i,{\cal T}))&={\zeta^{(i,{\cal T})}(K)L^{(r)}(K\vert X(i,{\cal T}))\over \sum_{K^{\prime }\in {\cal C}(i,{\cal T})}\zeta^{(i,{\cal T})}(K^{\prime })L^{(r)}(K^{\prime }\vert X(i,{\cal T}))},\cr&\quad K\in {\cal C}(i,{\cal T}),}$$

(6)


with the parameter function $\zeta^{(i,{\cal T})}(K)\equiv \zeta (K)$ defined according to (2). It is noteworthy and of central importance for the rest of this paper that an update of the (unrestricted) probability distribution $L$ can be accomplished by updating the restricted distribution $L^{(r)}$ . The following theorem specifies the relationship between the (unrestricted) distribution $L$ and its restricted counterpart $L^{(r)}$ :

Theorem 1. For every competence state $C\in {\cal C}$ , and every problem solution process $X(i,{\cal T})$ ,



$$L(C\vert X(i,{\cal T}))= {L^{(r)}(C\cap E(i,{\cal T})\vert X(i,{\cal T}))\over L^{(r)}(C\cap E(i,{\cal T})\vert (i,{\cal T}))} L(C\vert (i,{\cal T})).$$

(7)


For a mathematical proof of Theorem 1, see [ 20]. It is important to note that the initial distributions $L(\bullet \vert (i,{\cal T}))$ and $L^{(r)}(\bullet \vert (i,{\cal T}))$ are given by definition (cf. Section 3). Furthermore, we have to note that, in general, the restricted competence structure ${\cal C}(i,{\cal T})$ is much smaller than the original structure ${\cal C}$ . If, for instance, $\vert E(i,{\cal T})\vert =10$ , then the number of competence states in ${\cal C}(i,{\cal T})$ is bounded by


$$\vert 2^{E(i,{\cal T})}\vert =2^{10}=1,\!024.$$

Even if the number of elementary competencies in $E(i,{\cal T})$ is increased to 15, then the maximum number of competence states is relatively small (as compared to the original competence structure):


$$\vert 2^{E(i,{\cal T})}\vert = 2^{15}=32,\!768.$$

This shows that, in general, an update of the restricted probability distribution


$$L^{(r)}(\bullet \vert X(i,{\cal T})):{\cal C}(i,{\cal T})\rightarrow [0, 1]$$

is less CPU-intensive than a (direct) update of the unrestricted distribution


$$L(\bullet \vert X(i,{\cal T})):{\cal C}\rightarrow [0, 1].$$

Thus, in order to reduce the computational load of the updating process, the following strategy is advisable (a code sketch follows the list):

    1. Within a given problem situation $(i,{\cal T})$ , we update the restricted probability distribution $L^{(r)}$ according to (6).

    2. The unrestricted distribution $L$ is updated only after task completion. This is done according to Theorem 1.

    3. The final distribution $L(\bullet \vert X(i,{\cal T}))$ is used as initial distribution for the next problem situation $(i^{\prime },{\cal T}^{\prime })$ .
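The following sketch puts the three steps together, again under the illustrative frozenset representation used above; `restrict` and `update` repeat the sketches of (4)/(5) and (6) so that the block is self-contained, and the final dictionary comprehension realizes the lift of Theorem 1, i.e., (7).

```python
from collections import defaultdict
from typing import Dict, FrozenSet, Iterable, Set, Tuple

State = FrozenSet[str]

def restrict(L: Dict[State, float], E_iT: Set[str]) -> Dict[State, float]:
    """(4)/(5): project a distribution on C onto the (i, T)-restricted structure."""
    L_r: Dict[State, float] = defaultdict(float)
    for C, p in L.items():
        L_r[frozenset(C & E_iT)] += p
    return dict(L_r)

def update(L: Dict[State, float], E_iT: Set[str], supported: Set[str],
           unsupported: Set[str], zeta0: float = 2.0, zeta1: float = 2.0) -> Dict[State, float]:
    """(6): one multiplicative update of the restricted distribution."""
    def zeta(C: State) -> float:
        return zeta0 ** len(C & supported) * zeta1 ** len((E_iT - C) & unsupported)
    weighted = {C: zeta(C) * p for C, p in L.items()}
    norm = sum(weighted.values())
    return {C: w / norm for C, w in weighted.items()}

def process_task(L_init: Dict[State, float], E_iT: Set[str],
                 observations: Iterable[Tuple[Set[str], Set[str]]]) -> Dict[State, float]:
    """Steps 1 and 2: update L^(r) within the task, then lift via Theorem 1, (7)."""
    L_r_init = restrict(L_init, E_iT)
    L_r = dict(L_r_init)
    for supported, unsupported in observations:            # step 1
        L_r = update(L_r, E_iT, supported, unsupported)
    return {C: L_r[frozenset(C & E_iT)] / L_r_init[frozenset(C & E_iT)] * p
            for C, p in L_init.items()}                    # step 2 (Theorem 1)

# Step 3: the returned distribution serves as the initial distribution for the
# next problem situation (i', T').
```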

5. The Adaptive Problem Selection Process
Let us assume that a user has successfully completed a given problem situation $(i,{\cal T})$ . Furthermore, let $X(i,{\cal T})$ denote the observed problem solution process. Then, the problem is to tailor the next problem situation to the competence state of the user. To this end, let us assume an “assignment-function” $\alpha :{\cal C}\rightarrow 2^{{\cal P}}$ with the following interpretation in mind: If a person is in competence state $C$ , then one of the problem situations in $\alpha (C)\subset {\cal P}$ is presented to the user. In order to ensure that there is an adequate problem situation for every competence state, we assume that for every $C\in {\cal C}$ , $\alpha (C)\ne \emptyset$ .
A problem arises from the fact that the given probability distribution $L(\bullet \vert X(i,{\cal T}))$ provides only probabilistic information on the user's competence state. To solve this problem, we combine the probability distribution $L(\bullet \vert X(i,{\cal T}))$ with the (deterministic) assignment function $\alpha$ in a straightforward way:

    1. Use $L(\bullet \vert X(i,{\cal T}))$ to select a competence state in ${\cal C}$ .

    2. If, in the first step, the competence state $C$ is selected, then one of the problem situations in $\alpha (C)$ is presented to the user. For the sake of variety, the problem situations are selected randomly from $\alpha (C)$ .

Finally, if at the outset, no information on the competence state of a user is available, then one of the problem situations in ${\cal P}$ is chosen at random. If, on the other hand, an initial distribution on ${\cal C}$ is derived from an entry test, then the initial distribution is combined with the deterministic assignment function $\alpha$ as stated above.
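A possible realization of this two-step selection is sketched below, assuming the assignment function $\alpha$ is given as a dictionary from competence states to nonempty lists of problem identifiers; the sampling calls are one straightforward choice, not the only one.

```python
import random
from typing import Dict, FrozenSet, List

State = FrozenSet[str]

def select_next_problem(L: Dict[State, float], alpha: Dict[State, List[str]]) -> str:
    """Sample a competence state from L, then a problem situation from alpha."""
    states = list(L.keys())
    # Step 1: draw a competence state according to L(. | X(i, T)).
    C = random.choices(states, weights=[L[s] for s in states], k=1)[0]
    # Step 2: pick one of the admissible problem situations in alpha(C) at random.
    return random.choice(alpha[C])

# Illustrative assignment function (anticipating the Tower of Hanoi example in Section 6).
alpha = {
    frozenset(): ["d2"],
    frozenset({"c2"}): ["d2", "d3"],
    frozenset({"c2", "c3"}): ["d3", "d4"],
    frozenset({"c2", "c3", "c4"}): ["d4"],
}
```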
The following section provides a hypothetical application to the Tower of Hanoi, which illustrates the basic concepts of the presented model. It is important to note, however, that this example is not representative of realistic applications in that the number of competencies and problem situations is quite small. Despite its limited applicability to realistic digital games, it has the didactic advantage that all the computations can easily be carried out, even without any programming skills, while a real example from a game-based learning application like ELEKTRA would be too large to be presented in such detail in a paper.
6. The Tower of Hanoi: A Walk-Through Demonstration
To provide a walk-through demonstration of the outlined approach, we refer to the problem-oriented game of the Tower of Hanoi, invented by the French mathematician Edouard Lucas in 1883. We exemplify our approach with the simple problem scenario of the Tower of Hanoi for several reasons: On the one hand, this “game” offers an illustrative field of application since it provides a well-known and easy-to-understand problem scenario. On the other hand, the Tower of Hanoi perfectly matches the nature of many game-based problem scenarios and can easily be extended toward more complex and difficult settings. Finally, this type of game captures the very nature of our approach, that is, melding competence structures and problem spaces to enable the system to assess and—to a certain degree—understand a learner's behavior in the game.
The Tower of Hanoi consists of three pegs in a row, and a stack of disks of differing size. At the start, all the disks are on the first peg, from the largest disk at the bottom to the smallest disk at the top. A player is only allowed to move one disk at a time from one peg to another, and at no time may a larger disk be placed on a smaller disk. The aim of the puzzle is to move the disks from the starting peg 1 to the destination peg 3. In the following, we concentrate on three different puzzles, referred to as $d_2$ (two-disk puzzle), $d_3$ (three-disk puzzle), and $d_4$ (four-disk puzzle), respectively. Consequently, ${\cal P}=\{d_2,d_3,d_4\}$ .
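To connect the puzzle to the formalism of Section 2, the following sketch assumes that a gaming situation is encoded as a tuple of three pegs, each a tuple of disk sizes from bottom to top; the compatibility relation and transition function then correspond to the legality and the effect of a single move. The encoding is purely illustrative.

```python
from typing import Tuple

Situation = Tuple[Tuple[int, ...], Tuple[int, ...], Tuple[int, ...]]
Move = Tuple[int, int]                     # (source peg index, destination peg index)

def initial_situation(n_disks: int) -> Situation:
    """The initial state i: all disks on peg 1 (index 0), largest at the bottom."""
    return (tuple(range(n_disks, 0, -1)), (), ())

def is_performable(s: Situation, move: Move) -> bool:
    """Compatibility relation R: the source peg is nonempty and its top disk
    is smaller than the top disk of the destination peg (if any)."""
    src, dst = move
    if not s[src]:
        return False
    return not s[dst] or s[src][-1] < s[dst][-1]

def transition(s: Situation, move: Move) -> Situation:
    """Transition function f: move the top disk from the source to the destination peg."""
    src, dst = move
    pegs = [list(p) for p in s]
    pegs[dst].append(pegs[src].pop())
    return (tuple(pegs[0]), tuple(pegs[1]), tuple(pegs[2]))

def is_solved(s: Situation, n_disks: int) -> bool:
    """Solution set T: all disks stacked on peg 3 (index 2)."""
    return s[2] == tuple(range(n_disks, 0, -1))
```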
In diagnosing the competence state of a person, we assume that a person can have one or more of the following elementary competencies in $E:=\{c_2,c_3,c_4\}$ :

    $c_2$ : ability to solve problem $d_2$ ;

    $c_3$ : ability to reduce problem $d_3$ to problem $d_2$ ;

    $c_4$ : ability to reduce problem $d_4$ to problem $d_3$ .

To demonstrate the basic concepts of the presented model, we assume that the ability to solve problem $d_2$ (i.e., the elementary competency $c_2$ ) can be surmised from the ability to reduce problem $d_3$ to problem $d_2$ (i.e., the elementary competency $c_3$ ). Similarly, we assume that the elementary competency $c_3$ can be surmised from the elementary competency $c_4$ . By these assumptions, we arrive at the following four competence states: $\emptyset$ , $\{c_2\}$ , $\{c_2,c_3\}$ , and $\{c_2,c_3,c_4\}$ (cf. Fig. 2).
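These four states can also be obtained mechanically from the prerequisite relation, assuming (as argued above) that a competence state containing a skill must also contain all of that skill's prerequisites; the pair encoding of the prerequisites below is purely illustrative.

```python
from itertools import combinations

E = {"c2", "c3", "c4"}
# (prerequisite, dependent) pairs: c2 is required for c3, and c3 is required for c4.
prerequisites = {("c2", "c3"), ("c3", "c4")}

def is_closed(state: frozenset) -> bool:
    """A state is admissible if every contained skill has its prerequisites contained too."""
    return all(pre in state for pre, dep in prerequisites if dep in state)

def powerset(skills):
    items = list(skills)
    return (frozenset(c) for r in range(len(items) + 1) for c in combinations(items, r))

competence_states = [C for C in powerset(E) if is_closed(C)]
# Yields exactly the four states of Fig. 2: {}, {c2}, {c2, c3}, {c2, c3, c4}.
```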


Fig. 2. Graphical representation of the competence structure $(E,{\cal C})$ .




To make sure that the selected problem situations are tailored to the competence state of a user, we specify the following assignment function $\alpha :{\cal C}\rightarrow 2^{{\cal P}}$ :


$$\eqalign{\alpha (\emptyset ) &:=\{d_2\},\ \alpha (\{c_2\} ):=\{d_2,d_3\},\ \cr\alpha (\{c_2,c_3\} ) &:=\{d_3,d_4\},\ \alpha (\{c_2,c_3,c_4\} ):=\{d_4\}.}$$

Note that, by these definitions, a user is either confronted with the most complex problem situation he or she is capable of solving, or the next more complicated one. Additionally, let us assume a player who is in the (unknown) competence state $\{c_2,c_3\}$ .
Since, at the outset, no information on the user's competence state is available, a problem situation is chosen at random. Assume that, at first, the player is confronted with the three-disk puzzle: $(i_1,{\cal T}_1)=d_3$ . Then, $E(i_1,{\cal T}_1)=\{c_2,c_3\}$ and


$${\cal C}(i_1,{\cal T}_1)=\{\emptyset,\{c_2\},\{c_2,c_3\} \} .$$

Furthermore, the initial distribution $L(\bullet \vert (i_1,{\cal T}_1))$ is assumed to be the uniform distribution:


$$L(C\vert (i_1,{\cal T}_1))={1\over 4}, \quad \forall C\in {\cal C}.$$

Then, according to (5), the restricted initial distribution on ${\cal C}(i_1,{\cal T}_1)$ is equal to


$$\eqalign{&L^{(r)}(\emptyset \vert (i_1,{\cal T}_1))=L^{(r)}(\{c_2\} \vert (i_1,{\cal T}_1))={1\over 4}, \cr&\quad L^{(r)}(\{c_2,c_3\} \vert (i_1,{\cal T}_1))={1\over 2}.}$$

In order to demonstrate the multiplicative updating rule, let us assume that the player solves the three-disk puzzle in seven steps (the minimal number of steps to solve $d_3$ ) (cf. Fig. 3).


Fig. 3. The solution behavior of a fictitious player dealing with the three-disk puzzle $d_3$ .




Action $a_1$ provides evidence in favor of the elementary competency $c_3$ . Furthermore, according to the competence structure $(E,{\cal C})$ , the elementary competency $c_2$ is a prerequisite for $c_3$ , that is, a person who has the elementary competency $c_3$ has also the elementary competency $c_2$ . Thus, we conclude that $\sigma^{(i_1,{\cal T}_1)}(s_1,a_1)\equiv \sigma (s_1,a_1)=\{c_2, c_3\}$ and $\rho^{(i_1,{\cal T}_1)}(s_1,a_1)\equiv \rho (s_1,a_1)=\emptyset$ . Finally, to update the restricted initial distribution $L^{(r)}(\bullet \vert (i_1,{\cal T}_1))$ , let $\zeta_0=\zeta_1:=2$ . Then, according to (2),


$$\zeta (C) = \prod_{c\in C\cap \sigma (s_{1},a_{1})}\zeta_0 \prod_{c\in (E(i_1,{\cal T}_1)\setminus C)\cap \rho (s_{1},a_{1})}\zeta_1 = \prod_{c\in C\cap \{c_2,c_3\} }\zeta_0,$$

and consequently,


$$\zeta (\emptyset )=1,\quad \zeta (\{c_2\} )=2, \quad \zeta (\{c_2,c_3\} )=4.$$

Thus, (6) yields


$$\eqalign{L^{(r)}(C\vert \langle (s_1,a_1)\rangle )&={\zeta (C)L^{(r)}(C\vert (i_1,{\cal T}_1))\over \sum_{C^{\prime }\in {\cal C}(i_1,{\cal T}_1)}\zeta (C^{\prime })L^{(r)}(C^{\prime }\vert (i_1,{\cal T}_1))} \cr&={\zeta (C)L^{(r)}(C\vert (i_1,{\cal T}_1))\over {11\over 4}},}$$

which provides the following update of the initial distribution $L^{(r)}(\bullet \vert (i_1,{\cal T}_1))$ :


$$\eqalign{&L^{(r)}(\emptyset \vert \langle (s_1,a_1)\rangle ) = {1\over 11},\ L^{(r)}(\{c_2\} \vert \langle (s_1,a_1)\rangle ) = {2\over 11},\cr&\quad L^{(r)}(\{c_2,c_3\} \vert \langle (s_1,a_1)\rangle ) = {8\over 11}.}$$
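These values can be reproduced with exact rational arithmetic; the sketch below reuses the frozenset representation of the earlier examples and is, of course, only an illustration.

```python
from fractions import Fraction

zeta0 = zeta1 = 2
E_iT = {"c2", "c3"}
sigma = {"c2", "c3"}                       # supported skills for action a_1
rho: set = set()                           # no unsupported skills

L_r = {                                    # restricted initial distribution, cf. (5)
    frozenset(): Fraction(1, 4),
    frozenset({"c2"}): Fraction(1, 4),
    frozenset({"c2", "c3"}): Fraction(1, 2),
}

def zeta(C):
    return zeta0 ** len(C & sigma) * zeta1 ** len((E_iT - C) & rho)

norm = sum(zeta(C) * p for C, p in L_r.items())            # 11/4
L_r_a1 = {C: zeta(C) * p / norm for C, p in L_r.items()}
# -> 1/11 for the empty state, 2/11 for {c2}, and 8/11 for {c2, c3}, as above.
```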

Similarly, Action $a_2$ provides evidence in favor of the elementary competencies $c_2$ and $c_3$ . Consequently, $\sigma (s_2,a_2)=\{c_2,c_3\}$ and $\rho (s_2,a_2)=\emptyset$ , which shows that the restricted probability distribution $L^{(r)}(\bullet \vert \langle (s_1,a_1)\rangle )$ can be updated as follows:


$$L^{(r)}(C\vert \langle (s_1,a_1),(s_2,a_2)\rangle ) = {\zeta (C)L^{(r)}(C\vert \langle (s_1,a_1)\rangle )\over {37\over 11} },$$

(cf. (6)). Furthermore, if the multiplicative updating rule is consecutively applied to Actions $a_1$ to $a_7$ , then the following conditional probabilities result:


$$\eqalign{L^{(r)}(\emptyset \vert \langle (s_1,a_1),\ldots,(s_7,a_7)\rangle ) & = {1\over 4,\!225},\cr L^{(r)}(\{c_2\} \vert \langle (s_1,a_1),\ldots,(s_7,a_7)\rangle ) & = {128\over 4,\!225},\cr L^{(r)}(\{c_2,c_3\} \vert \langle (s_1,a_1),\ldots,(s_7,a_7)\rangle ) & = {4,\!096\over 4,\!225} .}$$

Finally, in order to tailor the next problem situation to the competence state of the user, we have to specify the unrestricted distribution $L(\bullet \vert \langle (s_1,a_1),\ldots,(s_7,a_7)\rangle )$ . This can be done according to Theorem 1:


$$\eqalign{L(\emptyset \vert \langle (s_1,a_1),\ldots,(s_7,a_7)\rangle ) & = {1\over 4,\!225},\cr L(\{c_2\} \vert \langle (s_1,a_1),\ldots,(s_7,a_7)\rangle ) & = {128\over 4,\!225},\cr L(\{c_2,c_3\} \vert \langle (s_1,a_1),\ldots,(s_7,a_7)\rangle ) & = {2,\!048\over 4,\!225},\cr L(\{c_2,c_3,c_4\} \vert \langle (s_1,a_1),\ldots,(s_7,a_7)\rangle ) & = {2,\!048\over 4,\!225} .}$$

If we assume that, based on these probabilities, the competence state $\{c_2,c_3,c_4\}$ is selected (cf. Section 5), then the player is confronted with the four-disk puzzle $d_4$ next: $(i_2,{\cal T}_2)= d_4$ . Note that, by definition, $\alpha (\{c_2,c_3,c_4\} )=\{d_4\}$ . Since $E(i_2,{\cal T}_2)=\{c_2,c_3,c_4\}$ , the restricted competence structure ${\cal C}(i_2,{\cal T}_2)$ is equal to the original structure: ${\cal C}(i_2,{\cal T}_2)={\cal C}$ .
In contrast to the previously discussed three-disk problem, let us now assume that the player is on the wrong track (cf. Fig. 4):


Fig. 4. The solution behavior of a fictitious player dealing with the four-disk puzzle $d_4$ .




Action $a_8$ indicates that the user is incapable of reducing the four-disk problem $d_4$ to the three-disk problem $d_3$ . Consequently, $\rho (s_8,a_8)=\{c_4\}$ , and $\sigma (s_8,a_8)=\emptyset$ . Therefore, the definition of $\zeta (C)$ yields


$$\zeta (C) = \prod_{c\in C\cap \emptyset }\zeta_0 \prod_{c\in (\{c_2,c_3,c_4\} \setminus C)\cap \{c_4\} } \zeta_1 = \prod_{c\in (\{c_2,c_3,c_4\} \setminus C)\cap \{c_4\} }\zeta_1,$$

and


$$\zeta (\emptyset )=\zeta (\{c_2\} )=\zeta (\{c_2,c_3\} )=2,\quad \zeta (\{c_2,c_3,c_4\} )=1,$$

(cf. (2)). Thus, by taking into account that the initial distribution $L(\bullet \vert (i_2,{\cal T}_2))$ is equal to $L(\bullet \vert \langle (s_1, a_1),\ldots ,(s_7,a_7)\rangle )$ , the multiplicative updating rule yields


$$\eqalign{& L(\emptyset \vert \langle (s_8,a_8)\rangle )={2\over 6,\!402},\quad L(\{c_2\} \vert \langle (s_8,a_8)\rangle )={256\over 6,\!402},\cr&\quad L(\{c_2,c_3\} \vert \langle (s_8,a_8)\rangle )={4,\!096\over 6,\!402},\cr\noalign{\vskip3pt}&\quad L(\{c_2,c_3,c_4\} \vert \langle (s_8,a_8)\rangle ) ={2,\!048\over 6,\!402}, }$$

(cf. (3)). Finally, let us consider Action $a_9$ : similar to Action $a_8$ , we conclude that $\sigma (s_9,a_9)=\emptyset$ and $\rho (s_9,a_9)=\{c_4\}$ . Consequently, we obtain strong evidence that the learner is in the competence state $\{c_2,c_3\}$ (cf. (1)):


$$\eqalign{ &L(\emptyset \vert \langle (s_8,a_8),(s_9,a_9)\rangle )={4\over 10,\!756} =0.0004,\cr\noalign{\vskip3pt} &L(\{c_2\} \vert \langle (s_8,a_8),(s_9,a_9)\rangle )={512\over 10,\!756} =0.0476,\cr\noalign{\vskip3pt} &L(\{c_2,c_3\} \vert \langle (s_8,a_8),(s_9,a_9)\rangle )={8,\!192\over 10,\!756} =0.7616,\cr\noalign{\vskip3pt} &L(\{c_2,c_3,c_4\} \vert \langle (s_8,a_8),(s_9,a_9)\rangle )={2,\!048\over 10,\!756} =0.1904.}$$
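As before, these values can be checked with exact rational arithmetic; the sketch assumes, as stated above, that actions $a_8$ and $a_9$ both yield $\sigma=\emptyset$ and $\rho=\{c_4\}$ .

```python
from fractions import Fraction

E_iT = {"c2", "c3", "c4"}
zeta0 = zeta1 = 2

L = {                                      # initial distribution L(. | (i_2, T_2))
    frozenset(): Fraction(1, 4225),
    frozenset({"c2"}): Fraction(128, 4225),
    frozenset({"c2", "c3"}): Fraction(2048, 4225),
    frozenset({"c2", "c3", "c4"}): Fraction(2048, 4225),
}

def update(L, sigma, rho):
    def zeta(C):
        return zeta0 ** len(C & sigma) * zeta1 ** len((E_iT - C) & rho)
    norm = sum(zeta(C) * p for C, p in L.items())
    return {C: zeta(C) * p / norm for C, p in L.items()}

L_a8 = update(L, set(), {"c4"})            # 2/6402, 256/6402, 4096/6402, 2048/6402
L_a9 = update(L_a8, set(), {"c4"})         # 4/10756, 512/10756, 8192/10756, 2048/10756
```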

7. Conclusion and Outlook
The present paper provides a theoretical model to assess knowledge and learning progress in an embedded, noninvasive way. The approach was developed in the context of digital learning games, within which conventional assessment procedures are not appropriate or possible. The outlined approach is based on a mathematical framework describing a person's problem-solving behavior in an explorative and problem-oriented gaming situation.
A gaming situation is assumed to be a tuple $(i,{\cal T})$ , where $i$ is the initial state a user is confronted with, and ${\cal T}$ is the set of possible solution states. If one of the solution states in ${\cal T}$ is accomplished by the user, then the task is completed successfully. One might, as an example, think of a virtual room with a table, a torch, blinds, a screen, and a mounting rail in it. The learner's task might be to reduce the torch's light cone into a narrow beam of light on the screen, using the mounting rail and the two blinds. To master a problem situation $(i,{\cal T})$ , a person can perform different actions to modify the gaming situation. The learner might, for instance, turn on the torch, vary the torch's orientation, or move a blind. The central idea of the presented model is to interpret these actions regarding their correctness or appropriateness for accomplishing the task (e.g., narrowing the light cone). To formalize this general idea, we introduced the conditional probability $L(C\vert X(i,{\cal T}))$ that a person is in the competence state $C\in {\cal C}$ given that the solution process $X(i,{\cal T})$ has been observed.
Additionally, we adapted the multiplicative updating rule [ 19] to our needs: Let us assume that for a fixed solution process $X(i,{\cal T}):=\langle (s_1,a_1),(s_2,a_2),\ldots,(s_n,a_n)\rangle$ , the conditional probability distribution $L(\bullet \vert X(i,{\cal T}))$ is given. Then, the multiplicative updating rule formalizes the following intuitive idea: If action $a_{n+1}$ in gaming situation $s_{n+1}:=f(s_n,a_n)$ provides evidence in favor of the elementary competency $c$ , then increase the probability of all competence states containing $c$ , and decrease the probability of all competence states not containing $c$ . Similarly, if action $a_{n+1}$ in gaming situation $s_{n+1}$ provides evidence against the elementary competency $c$ , then decrease the probability of all competence states containing $c$ , and increase the probability of all competence states not containing $c$ . For a formalization of this general idea see (1). It is important to note here that a single observation might not be considered “meaningful.” However, with an increasing number of observations, and therefore also an increasing number of probability updates, the picture is gradually becoming clearer.
In realistic applications, we might be confronted with very large competence structures with millions of competence states, which might lead to the problem that the probability updates cannot be realized at runtime. To address this problem, we have introduced restricted probability distributions defined on a restricted set of elementary competencies (cf. (4) and (5)). The main result of the paper shows that an update of the (unrestricted) probability distribution $L$ can be accomplished by updating the restricted analogue $L^{(r)}$ (cf. Theorem 1). Consequently, in order to reduce the computational load of the updating process, the following strategies are advisable: 1) Within a given problem situation $(i,{\cal T})$ , the restricted probability distribution $L^{(r)}$ is updated according to (6). 2) The unrestricted probabilities are updated only after task completion. This is done according to Theorem 1. 3) The final probability distribution $L(\bullet \vert X(i,{\cal T}))$ is used as initial distribution for the next problem situation $(i^{\prime },{\cal T}^{\prime })$ .
A major advantage of the outlined approach is that it can be easily integrated in educational applications like, for instance, digital learning games, for which conventional knowledge testing is not suitable or possible. The mathematical framework of our model enables the system to monitor the learner's behavior and draw probabilistic conclusions about the user's competence state.
The above-mentioned examples—the slope device or the Tower of Hanoi—might be considered simplifications. However, we want to emphasize that these examples are typical of the explorative, problem-oriented character of (learning) games. It is important to note that our approach is not restricted to these examples, but is universally applicable to all kinds of actions or observable behavior. In principle, the approach can be applied to all kinds of (educational) games that, on the one hand, have an identifiable goal and, on the other hand, allow quantifying progress toward this goal. Of course, games that are based on the solution of more or less complex problems, which, in turn, enables the formalization of nontrivial problem spaces, are particularly suitable for the presented approach. Examples might be task-oriented adventure games, simulation games, or strategic games. In the 80Days demonstrator game, for instance, the learner flies a space ship over the Earth, aiming to discover certain places (cf. Fig. 5). In this example, the flying directions provide information about the learner's geography skills. A concrete application beyond the gaming context, namely a training based on a haptic simulator device in the medical domain (MedCAP; www.medcap.eu), was described by [ 21].


Fig. 5. (a) A screenshot from the 80Days demonstrator game on geography. The learner's task is to discover major European capitals by flying with a space ship. (b) The prerequisite structure for the underlying skill set.




The outlined approach was developed in the area of game-based learning and is a core component of microadaptivity, that is, the system's capability to interpret a learner's actions with respect to his or her knowledge and to respond adaptively in real time and on an individual basis [ 1]. The aim is to provide the learner with appropriate and tailored guidance, support, information, and feedback. The concept was developed in the context of the ELEKTRA project and is taken up and extended in its direct successor, 80Days. While ELEKTRA predominantly focused on assessment and interventions in an educational sense, 80Days incorporates broader aspects such as assessment and adaptation in terms of motivational states as well as aspects of adaptive, interactive storytelling. In the context of both projects, we developed demonstrator applications (cf. [ 22], [ 23]), which were empirically evaluated. First results indicate the usefulness and supportive quality of the microadaptive approach in general, as well as the validity of the noninvasive assessment in comparison to conventional knowledge tests [ 24]. The conceptual work, however, continues and will extend the “multidimensional vector approach” presented in this paper.
Future work will focus, in the first instance, on simulation studies examining the precision and efficiency of the multiplicative updating rule. Moreover, we will address the problem of further reducing the computational demands of the updating process, as well as the integration of further “assessment axes,” such as motivational or emotional states of the learner. Finally, enabling the computer system to autonomously draw meaningful conclusions from a learner's problem-solving behavior in the virtual environment of games requires a categorization of actions, problem states, or competence states. Although this categorization limits the system's understanding of what is potentially going on, it does not limit the degrees of freedom in terms of gaming and learning. Future work will address this limitation by taking up ideas of emergent game design. 2

Acknowledgments

The authors are grateful to Eric Hamilton, Wolfgang Nejdl, and three anonymous reviewers for helpful comments on earlier drafts of this paper. The research leading to these results has received funding from the European Community's Sixth and Seventh Framework Programmes (FP6/2003-2006 and FP7/2007-2013) under Grant Agreement No. IST-027986 (ELEKTRA project) and ICT-215918 (80Days project).

    T. Augustin is with Joanneum Research, Institute of Biomedicine and Health Sciences, Elisabethstrasse 11a, 8010 Graz, Austria.

    E-mail: thomas.augustin@joanneum.at.

    C. Hockemeyer, M.D. Kickmeier-Rust, and D. Albert are with the Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria. E-mail: cord.hockemeyer@uni-graz.at.

Manuscript received 15 Jan. 2009; revised 1 Sept. 2009; accepted 27 July 2010; published online 12 Aug. 2010.

For information on obtaining reprints of this article, please send e-mail to: lt@computer.org, and reference IEEECS Log Number TLT-2009-01-0006.

Digital Object Identifier no. 10.1109/TLT.2010.21.

1. In real applications, it is important to specify the problem situation $(i,{\cal T})$ in such a way that the “restricted” skill set $E(i,{\cal T})$ is much smaller than the overall skill set $E$ (cf. Section 4).

2. Emergent behavior occurs due to a nontrivial interaction of system components with each other and with the player, which gives rise to behavior that was not specifically intended by the developer [ 25].

References



Thomas Augustin received a diploma in mathematics and the PhD degree in psychology from the University of Regensburg, Germany, in 1999 and 2002, respectively. In 2002, he started working as a university assistant in the Department of Psychology, University of Graz, Austria. Currently, he is working at the Institute of Biomedicine and Health Sciences at Joanneum Research, Graz. His main research interests are mathematical models of human behavior, experimental psychology, and applied statistics.



Cord Hockemeyer received a diploma in computer science from the Braunschweig University of Technology, Germany, in 1993. He is currently working as project manager and researcher for the Department of Psychology at the University of Graz, Austria, and for the Knowledge Management Institute at the Graz University of Technology. His research focus is on efficient procedures for personalized training and assessment of competences. He is a member of the IEEE, the IEEE Computer Society, and the IEEE Education Society.



Michael D. Kickmeier-Rust received a diploma in psychology from the University of Graz, Austria, in 2005. Currently, he is working as a project coordinator in the Department of Psychology at the University of Graz. His research and development focus is on personalization in technology-enhanced learning, especially in the area of game-based learning.



Dietrich Albert received a diploma in psychology from the University of Göttingen, Germany, in 1966 and the DSc degree in psychology from the University of Marburg, Germany, in 1972. Since 1993, he has been working as a professor of psychology and head of the Cognitive Science Section in the Department of Psychology at the University of Graz, Austria. Since 2008, he has also worked as a key researcher in the Know-Center, Austria's competence center for knowledge management, and since 2009, as a senior scientist in the Knowledge Management Institute at the Graz University of Technology. His current focus in research and development is on knowledge and competence structures, their applications, and empirical research.