Designing for physical interaction with digital technology is an increasingly possible but also challenging task. With the advent of new technologies, such as biosensors worn on your body, interactive clothes, or wearable "computers" such as mobiles equipped with accelerometers, a whole space of possibilities for gesture-based, physical and body-based interaction is opened up. But how can we empower users through these technologies? Can they be used to create new learning technologies where our corporeal bodies are part of the learning process and perhaps even constitute the "knowledge"?
The field of human-computer interaction (HCI) was initially focused on designing for work tasks, measuring efficiency in terms of task completion time. The field attracted cognitive psychologists and computer scientists. Once computers became networked, a second wave of HCI research focused on collaboration tools—though still with a focus on work situations. To deal with the complexities of collaboration, sociologists and ethnographers were consulted, providing richer descriptions of what people do when they work together. Given the boom of social communication applications in Web 2.0, computer games, and various domestic technologies for entertainment and multimedia, we now see a third wave of HCI research with a quite different focus [7]. The goal of this new movement is to design for experiential values rather than efficiency, for entertainment and fun rather than work. This has brought a whole new dimension to the field and an outreach to groups such as graphics, game, or toy designers, and artists of all kinds. HCI researchers now have to deal with highly elusive, subjective, and holistic qualities of interaction—qualities that are hard to design for, but also hard to validate through traditional measurements. How can you, for example, measure the tenderness of a touch?
Outside the HCI field, a wave of research in neurology, medicine, psychology, and sociology has rehabilitated emotion processes from being cast as innate, ancient processes that make us less efficient, more error-prone, and, in general, get in the way of rational thinking. The new insights in this field show that emotional processes are instead the basis for rational decision making [9], for being and surviving in the world, and crucial in social processes [28]. Emotion is seen as an embodied process, involving both body and mind. In a sense, emotion processes have to play the role of bridging the dualist gap between mind and body. This new perspective on emotion processes has been picked up by, among others, artificial intelligence researchers, who use it both to make machines more intelligent and to try to recognize human emotion.
It is at the intersection of these three trends (new wearable technologies, third-wave HCI, and the new role of emotion processes) that the vision presented here, for new learning technologies interacting with/through body, mind, and emotion, is placed. A particular theoretical perspective on how to understand emotion and body will be advocated—one that sees people as meaning-making, intelligent, active coconstructors of meaning, emotional processes, and bodily and social practices. Rather than seeing bodies as instruments to reinforce learning that first and foremost is aimed at residing in some part of our minds, we advocate a position where mind and body are seen as an integrated whole.
The advocated theoretical position will then be translated into an interactional design approach that aims to empower users to express themselves with/through technology (references). This will be contrasted with a more reductionist design position that attempts to "measure" people and their experiences. Our view of emotions emphasizes their active production in the world [5], [6], and, importantly, the significant role the physical body has in this production [9], [10]. We propose that there is a range of experiences involving our corporeal bodies that may be noninstrumental and nongoal oriented, but still crucial to our being in the world. We end the paper by sketching a vision for how learning technologies could pick up on how to design for knowing and being in the world through body, mind, and emotion.
But first, let us present three systems and user studies of those systems to provide a feel for what kinds of bodily, emotional, and aesthetic experiences we may want to design for.
2. eMoto, Affector, and Affective Diary
2.1 eMoto: A Communication Service
The first example deals with personal communication in general and communication of emotions in particular in a mobile setting. It is an extended SMS service for the mobile phone named eMoto. It was designed from an interactional view on communication between friends, where users learn about each other's emotional expressions step by step as their friendships and use of eMoto develop. In short, eMoto lets users send text messages between mobile phones, but in addition to text, the messages also have colorful and animated shapes in the background (see examples in Fig. 1). The user writes the text message and then chooses which expression to have in the background from a big palette of expressions mapped on a circle. The expressions are designed to convey emotional content along two axes: arousal and valence. For example, aggressive expressions have high arousal and negative valence and are portrayed as sharp, edgy shapes, in strong red colors, with quick, sharp animated movements. Calm expressions have low arousal and positive valence, portrayed as slow, billowing movements of big, connected shapes in calm blue-green colors.
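To make the two-axis palette concrete, the following is a minimal sketch of how a point on the valence/arousal circle could be mapped to rendering parameters. The function name, formulas, and color blend are invented for illustration; the real eMoto palette was hand-designed rather than computed.

```python
# Hypothetical sketch of eMoto's two-axis expression mapping.
# All names and formulas below are assumptions, not the system's code.

def expression_params(valence: float, arousal: float) -> dict:
    """Map a point on the valence/arousal circle (both in [-1, 1])
    to rough rendering parameters for the animated background."""
    if not (-1 <= valence <= 1 and -1 <= arousal <= 1):
        raise ValueError("valence and arousal must lie in [-1, 1]")
    # High arousal + negative valence -> sharp red shapes, fast motion;
    # low arousal + positive valence -> billowing blue-green, slow motion.
    warmth = (1 - valence) / 2          # 1.0 = red end, 0.0 = blue-green end
    return {
        "color": (warmth, 1 - warmth * 0.5, 1 - warmth),  # crude RGB blend
        "edginess": max(0.0, (arousal + 1) / 2),          # sharp vs. round shapes
        "speed": (arousal + 1) / 2,                       # animation tempo
    }
```

Under this toy mapping, the aggressive corner of the circle (negative valence, high arousal) yields a red, edgy, fast expression, and the calm corner a blue-green, round, slow one.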
Fig. 1. eMoto.
To move around in the circle, the user performs a set of gestures using the stylus pen (that comes with some mobile phones), which we had extended with sensors that could pick up pressure and shaking movements. Users are not limited to any specific set of gestures but are free to adapt their gesturing style according to their personal preferences; see [50]. Pressure and shaking movements can act as a basis for most emotional gestures people make, a basis that allows users to build their own gestures on top of these general characteristics.
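The idea of building on pressure and shaking as general gesture characteristics can be sketched as follows. The feature extraction below is an illustrative assumption, not eMoto's published implementation:

```python
# Illustrative sketch only: eMoto's actual gesture analysis is not
# reproduced here; feature names and weights are assumptions.

def gesture_arousal(pressure_samples, accel_samples):
    """Estimate gesture arousal in [0, 1] from stylus pressure samples
    and shaking (accelerometer magnitude) samples, each in [0, 1]."""
    if not pressure_samples or not accel_samples:
        return 0.0
    mean_pressure = sum(pressure_samples) / len(pressure_samples)
    # Shaking energy: mean absolute change between consecutive samples.
    shake = sum(abs(b - a) for a, b in zip(accel_samples, accel_samples[1:]))
    shake_energy = shake / max(1, len(accel_samples) - 1)
    # Equal weighting of pressure and shaking is an arbitrary design choice.
    return min(1.0, 0.5 * mean_pressure + 0.5 * shake_energy)
```

A hard squeeze with energetic shaking would read as high arousal, while a light, still grip reads as low arousal, leaving the shape of the movement itself open for users to inscribe meaning into.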
Studies of eMoto showed that the circle was not used in a simplistic one-emotion-one-expression manner, mapping emotions directly to what you are experiencing at the time of sending an emoto [50]. Instead, the graphical expressions were appropriated and used innovatively to convey mixed emotions, empathy, irony, expectations of future experiences, the surrounding environment (expressing the darkness of the night), and, in general, a mixture of the users' total embodied experiences of life and, in particular, their friendship. The "language" of colors, shapes, and animations, juxtaposed against the text of the message, was open-ended enough for our users to understand and to express themselves and their personality with. There was enough expressivity in the colors, shapes, and animations to convey meaning, but at the same time, their interpretation was open enough to allow our participants to convey a whole range of messages. We look upon the colors, shapes, and animations as an open "surface" that users may ascribe meaning to.
Just to provide one example of how eMoto was used, consider the messages in Fig. 2. In the first message, Agnes expresses her love to her boyfriend. This is probably not news to her boyfriend—no new information is conveyed—she just feels the urge to express it. The background she has picked for her message comes from a part of the circle that we had intended to be somewhere in between angry and happy, but Agnes interprets it in her own way:
" This looks almost angry, but it is not, really. It is like, but oh...[...] It looks somewhat edgy, but at the same time, the way it is feels like it could be some kinds of warm streams with like love like this. But, it is somewhat too edgy for me to interpret it as love. I would have had somewhat more round, soft, maybe somewhat pulsating or something [...] It was this one that I felt fitted best to it."
Thus, she puts her own interpretation of what the love between her and her boyfriend looks like into this picture, even if it does not perfectly depict their love to her. When Mona communicated her love to her boyfriend (second eMoto message in Fig. 2), she instead used her favorite color, green, to express herself:
" Green is my favorite color and my boyfriend knows that, so this is why it is green because he knows that I think that green is a lovely color, just as lovely as he is."
Both these messages are examples of the need to express those intimate "I am still here for you" messages in styles that make sense to the sender and receiver.
Fig. 2. eMoto-messages sent to boyfriends.
From the five participants' usage, we also noted that emotions were not singular states that existed within one person alone, states that could be packaged up and delivered as the next information package to their friends. Instead, the emotion process permeates the total situation, changing and drifting as a process between communicating friends. The results told us something about how emotional communication is more than transferring "information plus emotion" from one person to another.
As one of the users in the study expressed it:
" I leave out things I think are implicit due to the color...the advantage is that you don't have to write as much, it is like a body language. Like when you meet someone you don't say "I'm sulky" or something like that, because that shows, I don't need to say that. And, it's the same here, but here it's color."
To make it absolutely clear: eMoto does not extract emotional information from users, but lets users directly express emotions to the system, a process over which they have total control. They can, for example, express emotions that they are not feeling through shaking and squeezing the sensors of the stylus pen in different ways. While this may seem like lying, it is in fact crucial in any communication situation in order to make human relations work. It is a social responsibility [1]. However, the idea of eMoto is to make the gestures reinforce whatever emotion the user expresses by reacting to the expressive gestures performed. Hence, in the end, users may come to experience the emotion that they are expressing physically through shaking and pressing the extended stylus, as was expressed by the partner of one of our study participants:
" When she was happy she showed that with her whole body. Not only her arm was shaking but her whole body. Meanwhile, a huge smile appeared on her lips."
Again, while a strong pressure and energetic shaking was intended to mean a strong negative emotion, rendering strong red colors and angular shapes with jerky movements, users could change the shape of their movement to express something else and inscribe meaning into the open "surface" of the movement itself.
How much a user is willing to reveal to a friend through eMoto is something that the two friends negotiate and decide between themselves in a moment-to-moment fashion. A system that would automatically reveal one user's emotional state to the other would certainly overstep those boundaries sometimes (and sometimes not). It is not a once-and-for-all given state of the friendship between the two users.
From studying the messages sent between these five friends, we learned a lot more than what our users' emotional experiences had been during those two weeks. It became, for example, obvious who nourished the relationship(s) and who did not. The communication patterns between friends have to be handled delicately in order to not overstep the invisible borderline to where you feel intruded upon or neglected by the other. eMoto sometimes put a spotlight on those processes, perhaps making those imbalances more visible than otherwise.
2.2 Affector: An Awareness System
Similar findings were obtained in the study and design process for the Affector system built by Sengers et al. [45]. The researchers used themselves as designers, users, and evaluators of the system—a process they name autobiographical design.
Affector is a distorted video window connecting the neighboring offices of two friends (and colleagues). A camera located under the video screen captures video as well as "filter" information such as light levels, color, and movement. This filter information distorts the captured images of the friends, which are then projected in the window of the neighboring office. The friends determine among themselves what information is used as a filter, and which kinds of distortion, in order to convey a sense of each other's mood. The distortions shown in Fig. 3 are based on visual algorithms created by Eunyoung "Elie" Shin, Rev Guron, and Phoebe Sengers (one of the researchers in the project), and take ambient sensor data as input. The mapping from sensors (e.g., light levels in the office) to effectors (specific sets of distortions) is accomplished through a set of rules (or filters) defined by the office occupants themselves. These rules select and combine visual distortions based on ambient information. Users of the system select and refine the rules until they seem, to them, to be accurately readable as expressing their friend's mood. The system is always on, which means that it continuously conveys the presence and mood of the other person.
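The user-defined rule mechanism described above can be sketched roughly as follows. The rule format, sensor names, and distortion names are invented for illustration and are not taken from Affector's actual code:

```python
# Schematic reading of Affector's rule idea: occupants map ambient
# sensor readings to named distortions. All names here are invented.

def choose_distortions(sensors: dict, rules: list) -> list:
    """Return the distortions whose conditions hold for this frame.

    Each rule is a (condition, distortion_name) pair, where condition
    is a predicate over the sensor dictionary.
    """
    return [name for condition, name in rules if condition(sensors)]

# Example rules an office occupant might define and refine over time:
rules = [
    (lambda s: s["light"] < 0.2, "darken"),    # dim office -> moody filter
    (lambda s: s["movement"] > 0.7, "smear"),  # lots of motion -> smeared frames
    (lambda s: s["light"] >= 0.2 and s["movement"] <= 0.7, "soft_blur"),
]
```

The point of the design, as described above, is that the occupants themselves iterate on such rules until the resulting distortions feel readable to the friend on the other side.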
Fig. 3. Samples of Affector Output.
While the designers originally intended for this to communicate the emotional moods of the two participants to one another, it turned out that what was needed, and what they ended up designing throughout the two-year process, was to communicate something else. It became a tool for companionable awareness of the other in an aesthetically pleasing and creative way. It was not a simple identification of the partner's emotional mood, but a complex reading of what was going on in the other person's office: highlighting bodily movements, figuring out how this related to what they already knew about each other's work life, and interpreting this.
The distortions of the video became the "surface" that was open enough to invite creative use, and allowed the two participants to put meaning to the expressions based perhaps not only on the visual expression, but also on all the other knowledge they had of each other's work life. Pressing deadlines, late night work, getting papers accepted, or knowledge of each other's private life was mixed into their interpretation and meaning-making processes in using Affector.
2.3 Affective Diary: A Personal Logging System
The third example deals with personal logs in general and in our case a diary in particular. An ordinary paper-based diary provides a useful means to express inner thoughts and record experiences of past events. It also provides a resource for reflection. We wanted to create a diary that would draw upon sensor data picked up from users' bodies, allowing users to go back in time and see their own physical and emotional reactions. With an informational view on how to save memorabilia from users' daily emotional and bodily experiences, we might have ended up with a tool that would have classified users' emotions, placed them along a timeline, telling the user what she had been experiencing during the day: "At 14.38 on Wednesday you were happy at level 0.9."
But similar to the design of the eMoto system, we instead wanted to empower the diary writers to make sense of the scraps and bits of data collected from their life.
In Affective Diary, we wanted to explore reflection that goes beyond purely intellectual experiences and aids users in remembering, and reflecting on, their embodied emotional experiences [48]. The aim was to provide users with material working as a bridge to their everyday experiences.
In short, Affective Diary works as follows: As a person starts her day, she puts on the body sensor armband. During the day, the system collects time-stamped sensor data picking up movement and arousal. At the same time, the system logs various activities on the mobile phone: text messages sent and received, photographs taken, and the presence of other Bluetooth devices nearby. Once the person is back at home, she can transfer the logged data into her Affective Diary. The collected sensor data, as shown in Fig. 4, are presented as somewhat abstract, ambiguously shaped, and colored characters placed along a timeline.
Fig. 4. Affective diary.
Movement activity, as registered by a pedometer in the sensor armband, is represented by how upright the character is. Arousal is represented by the color of the character. Arousal is computed from a GSR (Galvanic Skin Response) measurement, which measures how well the skin conducts electricity—the more we sweat, the more electricity the skin conducts. It is known that aspects of GSR are related to emotional arousal. The mobile phone data appear above the characters in the diary, at the times they were captured. To help users reflect on their activities and physical reactions, the user can scribble diary notes onto the diary or manipulate the photographs and other data.
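The two visual mappings just described, posture from the pedometer and color from GSR arousal, can be sketched as follows. The scaling constant and color thresholds are assumptions for illustration, not the system's published values:

```python
# Sketch of Affective Diary's character mapping: posture from step
# count, color from normalized GSR arousal. Constants are assumptions.

def diary_character(steps_per_min: float, gsr_arousal: float) -> dict:
    """Return posture (0 = slumped, 1 = upright) and a color name for
    one time slot. gsr_arousal is assumed normalized to [0, 1]."""
    posture = min(1.0, steps_per_min / 100.0)  # ~100 steps/min = fully upright
    if gsr_arousal > 0.66:
        color = "red"       # high arousal (e.g., an agitated meeting)
    elif gsr_arousal > 0.33:
        color = "lilac"
    else:
        color = "blue"      # calm, low-energy shapes
    return {"posture": posture, "color": color}
```

Crucially, as the surrounding text stresses, such characters are presented as ambiguous surfaces for the user's own interpretation, not as diagnoses of what she felt.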
An in-depth study with four users indicates that users were able to make sense of the diary material and relate it to different events in their life [48]. There was also evidence that they were able to recognize their bodily experiences through seeing the representation in the diary. By recognizing and reliving some experiences (and, on occasion and somewhat paradoxically, by not recognizing their own bodily reactions), they sometimes even learned something about themselves that they did not know before. Two of our participants went even further and started to reflect on aspects of their lives they wanted to change, and then used Affective Diary to try to change their own behavior patterns. In this way, it became a learning tool, not because the system told them what to do, but because of their own reflection.
By using the diary, Erica, one of our participants, discovered that certain events affected her mood, e.g., a meeting with her boss that made her very agitated; see Fig. 5. This was mirrored by the shape of the character in the diary, and she could see that this mood persisted for a long time after the meeting. She says:
" We had a discussion about having vacation in July, although I really didn't want to have vacation then because I had nothing to do. That made me a little annoyed."
When Erica became aware of this, she used Affective Diary to change her own behavior in stressful situations and even to monitor how well she was doing. For instance, on midsummer's eve, a holiday which usually made her very stressed, she decided to take it easy. For that day and night, the diary showed blue, low-energy shapes, which she interpreted as having succeeded in staying calm and just enjoying the day.
Fig. 5. Erica's meeting with her boss. (a) The hour between 3 and 4 pm, when the meeting that agitated Erica happened (lilac and red characters). (b) The hour between 4 and 5 pm, where Erica is still agitated (red and lilac characters)—until toward the end of the hour (yellowish character).
In Affective Diary, the colored characters representing users' arousal and movement provide users a means to remember previous experiences, but their interpretation is not once and for all given. The colors are, again, "inscribable" surfaces where users can put their own meaning-making. As they can also scribble their own notes on top of the materials, they can put meaning into the patterns they discover.
Something we initially had not anticipated seeing so strongly in these reflective processes was the extent to which the diary influenced learning and changes in behavior. This occurred especially when our participants had used Affective Diary several times and could look back, consider, and compare interpretations of different events. Ulrica, for example, worked through and reflected on her social and close relationships using the Affective Diary over the course of the study. Looking through her diary, she came to associate emotionally upsetting situations with figures that were colored blue and thus calm. She found, however, that a few hours after an event the figures would change color. This she associated with her usual coping mechanism, jogging. On one occasion during the interviews, for example, Ulrica reasoned about her calmness as her son was telling her that he was moving to France:
" And then, I become like this kind of. I am sort of both happy and sad in some way. I like him and therefore it is sad that we see each other so little, I think. Then [at this time], I cannot really show it. Or there is no reason really, since there is nothing wrong about anything, it is just kind of sad."
Reflecting on the Diary's content, Ulrica expresses surprise that she was able to see a pattern emerge:
" But that it shows me how I work. That I... [In fact] I get quite surprised by that. By the fact that I can see this so clearly here [points to the figures on the screen]. Or that is how I interpret it anyway. That I'm not, that I am not so emotionally engaged in, eh... when I interact with people. That I am [emotionally engaged] only when I am alone, kind of."
She continues to say:
" I have gone back several times to the earlier days, and when I... read the figures I think that it sort of confirms what I have talked about now, what I said about how I function."
For Ulrica, then, her reflections using the diary provided an explanation of why people sometimes misunderstood her and her emotional reactions. Further, it led her to conclude that she should let more of her inner feelings be expressed in the moment. In short, Ulrica used the diary to reflect on her past actions and, as a consequence, to decide to change some of her behaviors; a process of reflection, learning, and change appeared to result from using the diary.
2.4 Themes and Lessons Learned
Across these three example systems and their user studies, we can note how the themes from the introduction above recur.
All three make use of sensor technologies as a means to capture something other than what we normally express through written text. All three aim to create the kinds of experiences explored in third-wave HCI—that is, aesthetics of interaction, non-task-oriented communication, and designing for experiences created in the interaction with others and with the system.
All three also address emotion as a process embodied in the interaction, touching on and relating to physical, bodily processes. None of the systems tries to represent these emotion processes inside the system or to diagnose users' emotions based on their facial expressions or some other human emotion expression. Instead, they build upon the users' own capabilities as meaning-making, intelligent, active coconstructors of meaning, emotional processes, and bodily and social practices. In that sense, they are nonreductionist.
The user studies of the three systems all show that emotion is not and cannot be isolated from all other kinds of communication. The use mixes emotional communication with, for example, general awareness of the other's activities, communication of information, work-related issues, or bodily awareness. Bodily movement as the basis for communication and meaning-creation is crucial in all three.
An important lesson from these designs is that they have all left space, or "inscribable surfaces," open for users to fill with content [21]. If users recognize themselves or others through the activities they perform at the interface—if these look familiar to the user through the social or bodily practice they convey—users can learn how to appropriate these open surfaces. The activities of others need to be visible, and users should be allowed to shape what can be expressed over time. Making their activities visible should not be taken to mean that all the gory details of what is going on inside the application need to be shown to the user. What we mean is that the representation needs to be carefully chosen to make it transparent vis-à-vis its inner workings and its relationship to the physical and social context of use [21]. The mapping from gesture to color and animation in eMoto, the mapping from sensor input to video distortion in Affector, and the mapping from movement and arousal to the colorful characters in Affective Diary need to be understandable and clear to the user. Their shape and form need to remind our users of their own bodily and social practices.
Other systems that exhibit some of these properties are, for example, the VIO system by Kaye [27], which leaves meaning-making entirely in the hands of its users, or the feather, shaker, and scent systems by Strong and Gaver [47], where communication between the two participants is based on shaking, blowing, or sending a scent to one another.
Let us now move to some of the background to why the three systems we described here were designed the way they were.
There has been a wave of new research on emotion in areas as diverse as psychology, neurology, medicine, and sociology. Neurologists have studied how the brain works and how emotion processes are a key part of cognition. Emotion processes sit in the middle of most processing, running from frontal-lobe processing in the brain, via the brain stem, to the body and back [9]. Bodily movements and emotion processes are tightly coupled. As discussed by Sheets-Johnstone, there is "a generative as well as expressive relationship between movement and emotion" [46]. Certain movements will generate emotion processes, and vice versa. But emotions are not hard-wired processes in our brains; they are changeable regulating processes for our social selves. As such, they are constructed in dialogue between ourselves and the culture and social settings we live in [28], [32], [33]. Emotion is a social and dynamic communication mechanism. We learn how and when certain emotions are appropriate, and we learn the appropriate expressions of emotions for different cultures, contexts, and situations. The way we make sense of emotions is a combination of the experiential processes in our bodies and how emotions arise and are expressed in specific situations in the world, in interaction with others, colored by cultural practices that we have learned.
Lutz, for example, shows how a particular form of anger, named song by the people of the south Pacific atoll Ifaluk, serves a very social role in their society [32], [33]. Song is, according to Lutz, "justifiable anger" and is used with kids and with those who are subordinate to you, to teach them appropriate behavior when they, e.g., fail to do their fair share of the communal meal, fail to pay respect to elders, or act socially inappropriately.
In ethnography, the work by Katz [28] provides us with a rich account of how people, individually and in groups, actively produce emotion as part of their social practices. When he, for example, discusses anger among car drivers in Los Angeles, he shows how anger is produced as a consequence of a loss of embodiment with the car (as part of our body), the road, and the general experience of traveling. He connects the social situation on the road, the lack of communicative possibilities between cars and their drivers, our prejudices about others' driving skills related to their cultural background or ethnicity, and so on, and shows how all of it comes together to explain why anger is produced when, for example, we are cut off by another car. He even sees anger as a graceful way to regain embodiment after, e.g., having been cut off by another car.
3.1 A Holistic Perspective
While we have so far, in a sense, separated out emotion processes from other aspects of being in the world, there are those who posit that we need to take a holistic approach to understanding emotion. Emotion processes are part of our social ways of being in the world; they dye our dreams, hopes, and experiences of the world. If we aim to design for emotion, we need to place it in the larger picture of experiences if we are going to address aspects of aesthetic experiences in our design processes [36], [11].
Dewey, for example, distinguishes aesthetic experiences from other aspects of our life by placing them between two extremes on a scale. At one end of that scale, in everyday life, there are many experiences where we just drift and experience an unorganized flow of events; at the other end, we experience events that do have a clear beginning and end but that only mechanically connect the events with one another. Aesthetic experiences exist between those extremes. They have a beginning and an end; they can be uniquely named afterwards (e.g., "when I first heard jazz at the Village Vanguard"), but in addition, the experience has a unity: there is a single quality that pervades the entire experience:
"An experience has a unity that gives it its name, that meal, that storm, that rupture of a friendship. The existence of this unity is constituted by a single quality that pervades the entire experience in spite of the variation of its constituent parts." ([11, pp. 36-57])
In Dewey's perspective, emotion is:
"the moving and cementing force. It selects what is congruous and dyes what is selected with its color, thereby giving qualitative unity to materials externally disparate and dissimilar. It thus provides unity in and through the varied parts of an experience." ([11, p. 44])
However, emotions are not static but change over time with the experience itself, just as a dramatic experience does.
"Joy, sorrow, hope, fear, anger, and curiosity are treated as if each in itself were a sort of entity that enters full-made upon the scene, an entity that may last a long time or a short time, but whose duration, whose growth and career, is irrelevant to its nature. In fact, emotions are qualities, when they are significant, of a complex experience that moves and changes." ([11, p. 43])
While an emotion process is not enough to create an aesthetic experience, emotions will be part of the experience and inseparable from the intellectual and bodily experiences.
In such a holistic perspective, it does not make sense to talk about emotion processes as something separate from our embodied experience of being in the world. This is, in turn, very much in line with our argument above on how the three different systems were picked up and used as embodied emotional interactive systems.
3.2 Emotion in HCI: Design Approaches
3.2.1 Affective Computing
As mentioned above, the idea that human rational thinking depends on emotional processing was picked up by the artificial intelligence field. Picard wrote a groundbreaking book named Affective Computing that has had a major effect on both the AI and HCI fields. Her idea, in short, was that it should be possible to create machines that relate to, arise from, or deliberately influence emotion or other affective phenomena. The roots of affective computing come from neurology, medicine, and psychology. It implements a biologistic perspective on emotion processes in the brain, the body, and interaction with others and machines.
In HCI, several different approaches have been formed to address emotional experiences, body, and aesthetics in interaction. There is the affective computing stance by Picard [41] and others, hedonistic usability by Hassenzahl [17], designing for visceral experiences by Norman [40], and the interactional approach by Boehner et al. [5], [6], [23]. To give the reader an understanding of the interactional approach to designing for embodied emotional experiences that we advocate here, let us give a brief account of the affective computing and hedonistic usability approaches to show the difference in approach and philosophy.
The most discussed and widespread approach in the design of affective computing applications is to construct an individual cognitive model of affect from first principles and implement it in a system that attempts to recognize users' emotional states by measuring the signs and signals we emit in face, body, voice, and skin, or through what we say about the emotional processes going on inside. Emotions, or affect, are seen as identifiable states. Based on the recognized emotional state of the user, the aim is to achieve as life-like or human-like an interaction as possible, seamlessly adapting to the user's emotional state and influencing it through the use of various affective expressions. This model has its limitations, both in the simplification of human emotion it requires in order to model it, and in the difficulty of inferring end-users' emotional states from interpreting their signs and signals. This said, it still provides a very interesting way of exploring intelligence, both in machines and in people.
Examples of affective computing systems directed at the learning field include Kort et al.'s work on affective learning. It is well known that students' results can be improved with the right encouragement and support [29]. Kort et al. propose an emotion model, built on Russell's circumplex model of affect, relating phases of learning to emotions [42]. The idea is to build a learning companion that keeps track of what emotional state the student is in and from that decides what help she needs.
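To make the affective computing style of design concrete, the following is a minimal sketch of a learning companion of the kind just described: a detected emotional state is placed on Russell's circumplex (valence x arousal) and a tutoring response is chosen per quadrant. The quadrant labels, thresholds, and responses here are invented for illustration; they are not taken from Kort et al. or Russell.

```python
# Hypothetical learning-companion sketch: map a (valence, arousal)
# reading onto a circumplex quadrant, then pick a canned response.
# Labels and responses are illustrative assumptions, not the cited model.

def quadrant(valence: float, arousal: float) -> str:
    """Map valence and arousal, each in [-1, 1], to a quadrant label."""
    if valence >= 0:
        return "engaged" if arousal >= 0 else "satisfied"
    return "frustrated" if arousal >= 0 else "bored"

RESPONSES = {
    "engaged": "stay out of the way; offer harder material",
    "satisfied": "consolidate; suggest the next topic",
    "frustrated": "offer a hint or break the task into steps",
    "bored": "raise the challenge or switch activity",
}

def companion_response(valence: float, arousal: float) -> str:
    return RESPONSES[quadrant(valence, arousal)]
```

The sketch also makes the limitation discussed above visible: the companion acts on a single inferred state, with no room for the learner's own interpretation.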
Another application in the learning area from Picard's group is a Leap chair equipped with pressure sensors [39]. The chair classifies nine postures a student can take. The postures are related to affective states associated with the student's interest level. Similar to the other learning system, this system also proactively decides what the learner needs.
3.2.2 Hedonistic Usability
Hassenzahl has picked up on the usability tradition and aims to add what he names "hedonistic usability" criteria to usability criteria, methods, and design requirements [17]. His position is that apart from pragmatic qualities of interaction, such as being able to make a phone call, write a paper, or set up a Web page, users also look for hedonic qualities:
" hedonic quality refers to the product's perceived ability to support the achievement of "be-goals," such as "being competent," "being related to others," "being special."" ([ 17])
By formulating and making such goals explicit in a design process, the system may address other user needs than only those related to the system's functionality. However, the main bulk of work in this strand is directed at usability evaluation methods for already designed systems, including evaluation of such hedonic qualities. Experiences of interaction are typically broken down into a set of Likert-scale questions where users grade software along dimensions such as competence, autonomy, or relatedness. The ultimate goal is always to design for a positive product experience, not for expressive power in both negative and positive dimensions.
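The scoring step in such an evaluation can be sketched as follows: each user rates a set of items on a 7-point Likert scale, the items are grouped into dimensions, and each dimension is summarized by its mean. The dimension names are taken from the text above; the item groupings and data are invented for illustration and are not Hassenzahl's actual instrument.

```python
# Minimal sketch of Likert-scale scoring for hedonic-quality dimensions.
# Each response maps a dimension name to a 1-7 rating; dimensions are
# summarized by their mean across users. Data here is illustrative.

from statistics import mean

def score_dimensions(responses: list[dict[str, int]]) -> dict[str, float]:
    """Average each rated dimension (items on a 1-7 scale) across users."""
    dims = responses[0].keys()
    return {d: mean(r[d] for r in responses) for d in dims}

ratings = [
    {"competence": 6, "autonomy": 5, "relatedness": 4},
    {"competence": 4, "autonomy": 5, "relatedness": 6},
]
```

Reducing an experience to such per-dimension averages is precisely what the reductionism critique below targets.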
This direction of work has been accused of reductionism [6], [52] since it reduces experience to a set of variables that can be measured.
3.2.3 The Interactional Approach
An interactional view sees emotions as constructed in interaction, where the system supports people in understanding and experiencing their own emotions [5], [23]. An interactional perspective on design will not aim to detect a singular account of the "right" or "true" emotion of the user and tell them about it, but rather make emotional experiences available for reflection. That is, it aims to create a representation that incorporates people's everyday experiences so that they can later reflect on them. Users' own, richer interpretations guarantee a more "true" account of what they are experiencing.
According to Boehner et al. [5] (with two small modifications by Höök et al. [23]), the interactional approach to design:
1. recognizes affect as an embodied social, bodily, and cultural product,
2. relies on and supports interpretive flexibility,
3. is nonreductionist,
4. supports an expanded range of communication acts,
5. focuses on people using systems to experience and understand emotions, and
6. focuses on designing systems that stimulate reflection on and awareness of affect.
Affector, eMoto, and Affective Diary, discussed in previous sections, are all examples of design from an interactional perspective on emotion. An interactional approach to design tries to avoid reducing human experience to a set of measurements or inferences made by the system to interpret users' emotional states. While the interaction with the system should not be awkward, the actual experiences sought might not only be positive ones. eMoto may allow you to express negative feelings about others. Affector may communicate your negative mood. Affective Diary might make negative patterns in your own behavior painfully visible to you. An interactional approach is interested in the full range of human experience possible in the world [36].
An important aspect of the three systems above was their interaction with the actual bodies of the users. In eMoto, gesturing with the stylus is meant both to be expressive and to evoke processes in the body, influencing and being influenced by the emotional experience. In Affector, the physical presence of the other, picked up by camera and sensors, conveys a rich picture of the other, allowing the two users to share and influence each other's mood in a companionable way. Finally, in Affective Diary, reliving bodily experiences, through being reminded of when they happened and what else was going on at that moment in time, may allow users to reflect on them.
But, designing for bodily experiences is not trivial. Let us provide some background to how we can view the body from an interactive, constructive point of view, before we turn to a discussion on the different experiential, bodily, qualities explored by us and other researchers and the design methods used to reach them.
4.1 Embodiment and the Corporeal Body
When Merleau-Ponty writes about the body, he begins by stating that the body is not an object [37]. It is instead the condition and context through which I am in the world. Our bodily experiences are integral to how we come to interpret and thus make sense of the world. This premise draws heavily on the notion of embodiment. Playing a central role in phenomenology, embodiment offers a way of explaining how we create meaning from our interactions with the everyday world we inhabit. Our experience of the world depends on our human bodies, not only in a strict physical, biological way, through our experiential body, but also through our cultural bodies. He attempts to get away from the perspective of the doctrine that treats:
"perception as a simple result of the action of external things on our body as well as against those which insist on the autonomy of consciousness. These philosophies commonly forget—in favor of a pure exteriority or of a pure interiority—the insertion of the mind in corporeality, the ambiguous relation with our body and, correlatively, with perceived things." ([37, pp. 3-4])
Feminists have attempted to deal with the actual physical body in more concrete terms, highlighting in particular the differences between male and female bodies. Grosz, for example, makes an interesting journey through the various philosophies of the last century, such as Freud's psychoanalysis and phenomenology, showing that most of them speak, in a sense, only vaguely about the actual corporeal body [13]. As a feminist, she sees very little of the female body in them, but instead, if anything, a normal, male body in the theories on, e.g., perception. Grosz makes the case that female bodies are different from male bodies—both corporeally and through their "cultural completion":
"[..] as an essential internal condition of human bodies, a consequence of perhaps their organic openness to cultural completion, bodies must take the social order as their productive nucleus. Part of their own 'nature' is an organic or ontological 'incompleteness' or lack of finality, an amenability to social completion, social ordering and organization." ([13], p. xi)
This perspective rhymes well with Merleau-Ponty's experiential and cultural bodies mentioned above, even if he, according to Grosz, never really dealt with the fact that some bodies are different from the male body—both corporeally and in terms of their cultural completion.
Relevant to our investigation here is Grosz's emphasis on bodily completion by culture or practice. This is where our designs of digital tools come into play. Through new tools, we are in fact interfering with users' practices, with the social ordering and organization. Our bodies are shaped by the tools we surround ourselves with—not only in a metaphorical or "cultural body" sense but also in a concrete corporeal sense. The tools we have make us experience the world in certain ways; they make our muscles be used in certain ways, and they stimulate our nervous system in certain ways. Just as dancers, riders, or runners will shape their bodies into certain forms, making them sensitive to balance, position, and rhythm, computer gamers or office workers will shape their bodies to fit gaming or desktop activities.
4.2 Body in HCI: Design Approaches
The actual corporeal human body and its experiences in interaction with machines have, for the most part, been treated in HCI from the perspective of body as extension of mind, or body as something that needs to be trimmed and controlled. The body has been seen as subordinate to mind, as an instrument or object, passively receiving signs and signals but not actively taking part in producing them (cf. Gibson's view of perception as active seeking). Some even claim that the technologies we have produced treat our bodies really badly [31]:
"Electronics, robotics, and spintronics invade and transform the body and, as a consequence of this, the body becomes an object and loses its remaining personal characteristics, those characteristics that might make us consider it as the sacred guardian of our identity."
Even if we hold the position that extending our bodies with technology might have many benefits, relieving our bodies from pain, creating interesting experiences, or making us healthy, it is obvious that many systems set "goals" and "tasks" for our bodies to fulfill no matter how we feel about it (an example of such goal-setting for the body, even if well-meaning, is UbiFit Garden [8]).
Similar to how we arrived at the interactional approach to designing for emotional processes, we want to present a design stance that involves the corporeal body in a coconstructed, embodied sense. Let us briefly introduce how the body has been seen in HCI before we introduce our own design stance.
4.2.1 Ergonomics
In ergonomics (preceding HCI [14]), the actual physical body is the core focus. The body has been measured and designed for in spaces such as airplane cockpits, cars, or nuclear plant control rooms. As pointed out by Harper et al. [16], the perspective taken is one where humans are seen as part of a machine. The pilots, car drivers, and factory workers are part of a larger machinery. They must be trained to follow certain routines automatically, as if they are one part of the machine. The machinery must be fine-tuned so that human error is minimized, and this can only be done by designing the machinery to fit meticulous measurements of our physical capacity. In those situations, we actually do want to see our bodies as machines, able to follow routines and act in error-free ways in the spur of the moment [16]. But just as we could have different perspectives on emotion processes, we can have different perspectives on the purpose and experience of using our bodies. It may be that when we drive a car, we want to be part of the car's machinery, but on another level, beyond the mechanistic routine tasks we can make our bodies perform, driving a car is also, on and off, a corporeal experience—sometimes dull, sometimes pleasurable, or even exhilarating. In those situations, we may want to see ourselves as something other than machines built in wetware.
In ergonomics, and when we address usability in HCI, we mostly assume the body to be passive—the interface sends signals to the human body, which the passive body receives. But as Merleau-Ponty [37] argued so successfully, the body actively gives form and sense to its own component parts and to its relations with objects in the world—the body is not passive.
4.2.2 Cyborgs
Another position sometimes taken in HCI is that of cyborgs. A cyborg consists of both artificial and natural systems or, to phrase it differently, of both a human body and designed tools that extend our capacity. In its simplest form, the extension can be the stick that a blind man uses to find his way. The stick becomes a part of how he feels the world, an embodied part of his own body. But framing tools as part of our cyborg existence goes beyond this one-way extension of our bodies. The cyborg concept comes with various ethical and moral implications when we regard how the technical tools we extend our bodies with in turn speak back to us. The positive side of being a cyborg is, in some sense, that we can free ourselves from our bodies—as discussed by the feminist Donna Haraway in her cyborg feminist writings [15]. In a sense, the focus in this movement is on extending the mind, freeing us from our corporeal reality.
4.2.3 Reuniting Virtual and Real
While this body-less cyborg being on the Internet was much discussed in the beginning of the virtual reality era, the pendulum has now swung back, and most regard it as bad behavior not to connect your real identity to your virtual identity. In addition, more and more technologies are tying reality and virtuality more strongly together, bringing our physical selves into the virtual spaces. For example, in the computer games area, we have new interaction devices, such as the Wii, the fake guitars in Guitar Hero, or mobiles, connecting more strongly with our physical selves.
A new games field is that of pervasive games, games that are played in town, using technologies such as RFID tags, mobiles, GPS, or Bluetooth to exploit the real world and bystanders as part of the game world [26]. The currently best-known virtual world, Second Life, is playfully connected to the real world in various ways, for example, mirroring various institutions in the real world as virtual ones.
But this drive to unite the real and virtual worlds does not only concern games and virtual worlds but also, for example, communication tools. There are mobile communication tools that add contextual information on position or who else is around [18].
4.2.4 Third Wave
As mentioned above, in the "third wave" of HCI, we try to figure out how to design for experiences beyond those of task completion, efficiency, and tool-based perspectives. This includes designing for bodily experiences. So far, when it comes to involving bodies and designing for bodily experiences, the focus has mainly been on sports and games (e.g., from early work [25] to current [44]). The aim is to design for experiential qualities such as flow, immersion, or "game play." But there is also a growing body of designs aimed at other experiences. One example is Moen's Body Bug—a wire that you wrap around your body, along which a "bug" registers your actions and climbs up and down [38]. The bug is a simple robot, moving along the wire. When you strap the wire around your body and start making movements, the bug will move along the wire, in a sense mirroring your movements, see Fig. 6. The bug makes you want to "dance." The sought experiential quality is that of enjoying your own body movement, as we do when we dance.
Fig. 6. Interacting with Body Bug.
Using movement and body in interaction can lead to a whole range of experiential qualities of the interaction, such as affective loops [ 22] or supple interaction [ 24]. The system eMoto exemplifies an affective loop experience: By performing motions that resonate with aspects of those involved in an emotional experience, users get affected by the interaction. But, we can also imagine qualities such as mindfulness or the simple joy of movement as in Moen's work. To reach designs in which such qualities arise, designers and researchers have repeatedly reported that as designers, we need to experience our own bodies in the design process [ 19]. This, in turn, requires new methods in the design process.
4.3 Design Methods
In order to get at the felt life during the design process, the designers of Affector decided to make themselves researchers, designers, as well as evaluators of the system [45]. By living with your system, both during the design process and with the finished system, designers get an empathic, embodied, corporeal experience of its interactions.
The design groups at Eindhoven University and Philips Design have, through a range of applications, explored enchantment, movement, and expressiveness [12], [19]. They hold a phenomenological perspective and see embodiment and interaction such that "not only task-oriented meaning but also aesthetic meaning arises in physical engagement." When designing technologies that involve human movement, they posit that the designers themselves have to move and experience the interaction in an embodied sense in order to be able to design for it.
An interesting approach to explorations of bodily experiences is offered by Schiphorst [43] who, borrowing from acting methods (e.g., [4]), proposes methods such as moving very, very slowly in order to listen to your own bodily state in interaction, or attaching users to each other with Velcro and then asking them to move and interact together in order to explore extensions of the body and their meaning in terms of privacy.
Evaluation of use qualities may be more effective when we give users room to reflect upon their experience with a system in rich ways. This was done, for example, in the study of eMoto, where the users helped interpret their own data [50]. The users in this study also recruited a close friend or partner (spectator) who provided input on how the system was understood and used.
While we cannot cover all the research in this area, it should be clear that we must find theories and methods that allow us to talk about muscles, the nervous system, the brain, and the signs and signals of emotional processes. But our ways of describing these experiences should not be solely in terms of biologistic processes, once and for all shaped by evolution and deterministically given, providing for reductionist views on the body—but instead as experiences that are part of our being in the world. Researchers in this area also agree that our bodily ways of being in the world are shaped by the cultural tools we surround ourselves with—not only metaphorically, but corporeally, changing our physical selves. There is a fear amongst designers and researchers that unless we are careful about human values and ethics, our technologies might exploit the body and our ways of being in the world in negative ways.
5. Bridging to Experiences and to Learning?
In a sense, the interest in emotional experiences served as a bridge for the whole field of HCI to turn from symbolic, analytical ways of doing task analysis and designing for efficient ways of supporting tasks, to caring more about experiences in general. It has also, to some researchers in HCI, served as a bridge to start addressing our physical, corporeal bodies in interaction and to attempt to bridge the dualism chasm.
This has, in turn, created a huge space of opportunities for design that puts our bodily ways of being in the world first and attempts to address our corporeal experiences. It is, perhaps, in this light we should put the three systems eMoto, Affector, and Affective Diary above. While each of these systems has its deficiencies, none of them tries to reduce human experience to something that can be measured and modeled and then packaged as a piece of information to be sent to others. They are "nonreductionist" [23]. The experience of using them emotionally and corporeally is shaped by the participants. In a sense, this becomes the "participatory design" movement of the third wave of HCI [21].
We might expect that researchers in learning technologies will be influenced by this turn to experiences and bodily and emotional interaction. Affective computing has already had an impact on the learning technologies field,3 but perhaps mainly along the affective computing strands. A main difference between affective computing and the interactional approach presented here is that the latter constitutes a turn toward the users' own reflection and meaning-making processes, rather than diagnosing and reacting to their emotional or bodily processes. eMoto, Affector, and Affective Diary all spur reflective learning processes, with a focus on learning more about yourself or your friends on a personal, emotional level. The Affective Diary, in particular, emphasizes metacognitive learning.
Within the learning technologies field, there are many researchers who advocate a similar reflective stance when it comes to learning in general. For example, the researchers in the LeMoRe group4 aim to give learners access to the so-called "learner model" in adaptive educational systems in order to promote their own reflections on their knowledge.
But perhaps more akin to the reflective, constructive learning stance taken here are those learning technologies that emphasize how learning is a constructive process that takes place in collaborative and physical settings. Some have explored tangible interaction or mobile location-aware technologies as a means to make learning more concrete and physical and to enable social performance in groups [35], [53]. Fernaeus and Tholander have designed a tangible environment where children create their own games by moving cards programmed with behaviors on a large mat on the floor. Benford et al. implemented a collaborative location-based educational game called Savannah, in which children learn about the ecology of the African savannah, especially about lion behavior [3]. Groups of six children at a time role-play being lions by exploring a virtual savannah that appears to be overlaid on an empty school playing field, an open grassy area. Zufferey et al. have created a tangible environment for learning the logistics of warehouse planning and maintenance [54]. By moving small physical wooden shelves, metallic pillars, and cardboard docks, the students organize a warehouse that is simultaneously mirrored in an augmented reality. This allows for a physical, embodied, reflective process where students and teachers get involved in a dialogue on how to organize the warehouse with the tangible model as a shared resource.
Bachour et al. have also tried to address metalearning strategies in a physical, embodied way. An example is Reflect, a system that mirrors how much you speak versus how much your colearners get to talk, by lighting up LEDs embedded in a table [2]. As speaking about a subject helps us learn it, the idea is that all the learners in a group need a chance to talk about the subject they are trying to learn.
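The core computation behind a Reflect-style table can be sketched as follows: accumulate per-speaker talk time and convert each speaker's share of the conversation into a number of lit LEDs. The event format and the number of LEDs per speaker are assumptions made for illustration; they are not taken from Bachour et al.

```python
# Sketch of a Reflect-style mirror: per-speaker talk time is summed
# and each speaker's share of the conversation is shown as lit LEDs.
# LEDS_PER_SPEAKER and the (speaker, seconds) event format are assumed.

LEDS_PER_SPEAKER = 10

def talk_shares(events: list[tuple[str, float]]) -> dict[str, float]:
    """events: (speaker, seconds) pairs -> fraction of total talk time."""
    totals: dict[str, float] = {}
    for speaker, seconds in events:
        totals[speaker] = totals.get(speaker, 0.0) + seconds
    grand = sum(totals.values())
    return {s: t / grand for s, t in totals.items()}

def leds_lit(events: list[tuple[str, float]]) -> dict[str, int]:
    return {s: round(share * LEDS_PER_SPEAKER)
            for s, share in talk_shares(events).items()}
```

Note that, in the interactional spirit, such a display only mirrors behavior back to the group; it does not diagnose or intervene.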
In all of these examples, the learners' physical presence is key in the interaction, and the learning or constructive/creative process is embedded in the overall interaction with the system. The learning does not happen in a piecemeal, spoon-by-spoon manner; instead, the system leaves "surfaces" open for users to appropriate for their joint learning processes—between learners or together with their teachers.
In summary, in my vision for (some) learning technologies, we would take care to leave "surfaces" open for learners to appropriate for their own learning purposes, we would build systems where learners can recognize themselves and their ways of being in the world socially, emotionally, or bodily through the interface, and we would avoid a reductionist stance toward people and learning and instead work from a constructive, holistic stance. Inspired by the developments in HCI, turning toward experiences, learning technologies would not be designed solely to promote learning goals in an instrumental, efficiency-oriented way, but would instead address us as sensual, aesthetically creative beings.
6. Worthwhile Learning Experiences?
My primary field is not learning technologies, and my knowledge of learning theories is limited. My suspicion is that there are plenty of theories speaking about the importance of involving our whole being when learning—especially for small children. I also know that theories of distributed and situated cognition, which have influenced learning technologies, do emphasize how learning is part of being in the world, utilizing the world as it presents itself to us, uniquely in each new encounter [20], [30]. But my suspicion is that the body is still treated mainly as an instrument for reinforcing learning that is first and foremost meant to reside in an abstract form in some part of our minds (e.g., as in [51]).
My small contribution to this special issue on visions for learning technologies is therefore simply to remind ourselves that learning takes place in the world, that this world is inhabited by people who do in fact have bodies, and that those bodies cannot be separated from their minds, their perception, or their ways of being in the world socially, culturally, or politically. Also, learning both takes place in the world and is enacted in the world—including that "flesh" or "corps propre" [37]. For a long time, we have focused our efforts toward learning of the abstract, of the rules, the grammar, rather than the use of knowledge "in situ." The corporeal body is notably absent from most learning technologies.
As indicated by the design examples, I view our bodies as key in being in the world, in creating for experiences. Our bodies are not instruments or objects through which we communicate information. Communication is embodied—it involves our whole bodies. I have also tried to indicate that there are many different kinds of bodily experiences we can envision designing for—mindfulness, affective loops, excitement, slow inwards listening, flow, reflection, or immersion. Some of them will perhaps not make the learning experience more efficient [34], but is that really the point anyway? Efficiency is not the ultimate goal of our existence, after all. Perhaps we should rather ask ourselves whether learning technologies that involve our whole selves, including our bodies, can create worthwhile experiences.
This work was completed while the author was a visiting researcher at Microsoft Research Ltd., on temporary leave from the Mobile Life Centre at Stockholm University. The work presented here builds upon a number of previous publications with other authors, including [21], [22], [23], [48], [50]. The author wishes to thank Richard Harper and the anonymous reviewers for comments on the manuscript.
• The author is with the Mobile Life Centre, Department of Computer and Systems Sciences, Stockholm University, Forum 1000, SE-164 40 Kista, Sweden. E-mail: firstname.lastname@example.org.
Manuscript received 28 Nov. 2008; accepted 29 Dec. 2008; published online 12 Jan. 2009.
For information on obtaining reprints of this article, please send e-mail to: email@example.com, and reference IEEECS Log Number TLT-2008-11-0102.
Digital Object Identifier no. 10.1109/TLT.2009.3.
1. While some might claim that using a keyboard and mouse will also involve muscular movement, all of these systems relate to noninstrumental, nonsymbolic gestures and movements, related to emotional expressions [ 46], which is different from the instrumental, goal-oriented movements we perform when typing on a keyboard.
3. See, e.g., workshops on affect and learning at ITS 2008 and AIED 2007.
received the PhD degree in 1996. She became chair of human-computer interaction at Stockholm University in 2003. She has been employed at the Swedish Institute of Computer Science since 1990. Currently, she is working as a professor at Stockholm University and as head of the Mobile Life Centre. She has published more than 50 articles in well-renowned journals and conferences. She is known for her work on social navigation, mobile services, and affective interaction.