Issue No.03 - July-September (2010 vol.3)
pp: 203-213
Published by the IEEE Computer Society
Khaled Bachour, École Polytechnique Fédérale de Lausanne, Lausanne
Frédéric Kaplan, École Polytechnique Fédérale de Lausanne, Lausanne
Pierre Dillenbourg, École Polytechnique Fédérale de Lausanne, Lausanne
ABSTRACT
We describe an interactive table designed to support face-to-face collaborative learning. The table, Reflect, addresses the issue of unbalanced participation during group discussions. By displaying on its surface a shared visualization of member participation, Reflect is meant to encourage participants to avoid the extremes of over- and underparticipation. We report on a user study that validates some of our hypotheses on the effect the table has on its users. Namely, we show that Reflect leads to more balanced collaboration, but only under certain conditions. We also show that the table affects over- and underparticipators differently.
Introduction
In situations of face-to-face collaborative learning, unbalanced participation can lead to undesirable results. Members of a group who do not participate in the group process show lower learning outcomes, and the remaining, active members can lose motivation [ 1], [ 2], [ 3]. One way to overcome this effect is to encourage members of a group to participate in a more balanced manner. We attempt to achieve this by indicating to individual members their level of participation on a shared display. We embed this display in an interactive table, as shown in Fig. 1, that allows users to interact with each other in as natural a manner as possible while giving them feedback on their behavior. This semiambient display has the property of being in the background of the collaboration process while remaining visible in a central position of the shared workspace. The implemented system does not attempt to directly influence learning outcomes, but rather to promote intermediate processes or interactions that are shown to be predictive of positive learning gains.


Fig. 1. The current design of Reflect with color-coded circles around each speaker position indicating how much the person has spoken.




The remainder of this paper is divided as follows: We first position our work with respect to past research by motivating the current work with established notions from Computer-Supported Collaborative Learning (CSCL) research (Section 2), then by comparing it to similar existing systems (Section 3). In Section 4, we describe the system objectives and design, followed by a detailed account of the user study conducted to evaluate the system (Section 5) and the results obtained (Section 6). We conclude with discussions and future work in Sections 7 and 8, respectively.
2. Motivation
Research on collaborative learning has evolved over the past few decades from observing collaboration with the intent of determining its benefits over individual learning to research aimed at manipulating collaborative processes in ways that foster better learning outcomes. Building on the notion that collaborative learning can be more effective than individual learning, but only under certain conditions [ 4], researchers in CSCL have been exploring collaborative learning contexts (group size, age, gender, etc.) in an attempt to identify those that lead to better learning gains and to develop tools that further improve learning outcomes. This proved to be a daunting task, as the parameters were many and interacted with each other in complex ways. The “interactions paradigm” [ 5] then proposed a shift of focus in CSCL research: Rather than attempting to discover conditions under which collaboration is beneficial, one could attempt to discover which types of interaction occurring within collaboration lead to better learning outcomes and try to elicit these types of interactions. As shown in Fig. 2, the paradigm breaks down the complex question 1) under what learning conditions is collaborative learning beneficial? into two separate questions: 2) what types of interactions lead to better learning outcomes? and 3) how can these types of interactions be elicited within specific learning contexts?


Fig. 2. The interactions paradigm suggests an alternate path to studying the outcomes of collaborative learning.




Researchers in collaborative learning have indeed observed that certain types of interactions are predictive of learning. In particular, students who engaged in elaborated explanation [ 6], argumentation [ 7], mutual regulation [ 8], conflict resolution [ 9] as well as seeking and providing help [ 10] exhibited higher learning gains. We note that these types of interactions share a common theme: They are all based on active participation in the form of verbalization. Verbalization itself becomes a necessary, though not sufficient, predictor of a large class of interactions that, in turn, are predictive of higher learning outcomes.
However, in the context of collaborative learning, one cannot make the assumption that the more an individual speaks, the more the group learns. After all, given the generally exclusive nature of conversational turn-taking [ 11], the more one member of a group speaks, the less the others will. Therefore, when looking at learning gains for the group, one must look beyond the notion that more verbalization leads to better learning.
Cohen [ 1] describes some criteria for group productivity, without which group learners might benefit less than individual learners. Among these, lack of equity in participation is presented as an obstacle to effective learning in a group. Salomon and Globerson also describe the debilitating effects of unbalanced participation [ 3]. They describe two types of effects: the “free-rider” effect, in which an overparticipating member can cause other members to expend less effort on the common task, and the “sucker” effect, in which underparticipating members can lead the more active members to lose motivation in the task in order to avoid being taken advantage of. In either case, group productivity decreases.
Cohen also suggests that the difference in participation is not necessarily related to participants' abilities or their expertise, but rather to their perceived status which can come from any number of stimuli including age, gender, social status, or race of the participant. In some cases, perceived popularity or attractiveness of individuals can lead to more active participation on their part [ 1], [ 12]. Moreover, it was shown that the amount of one group member's participation in itself can lead to that member being perceived as having a higher status, thereby leading to even more unbalanced participation [ 13].
Unbalanced participation in group learning can thus be seen as a deterrent for effective learning. There is a need then to encourage members to participate in a more balanced manner.
To illustrate the need for balancing participation, we present a small study conducted with eight subjects divided into two groups of four. We gave the subjects a task in which they were asked to rank, individually at first, and then, in group, a list of 15 objects in order of their importance for survival in the desert. They were given 10 minutes to complete the task individually. They were then asked to discuss the problem for 30 minutes and come up with a single ranking that they all agree upon.
We measured the individual members' participation in terms of their total talking time during the group discussion phase. In both groups, one member clearly dominated the discussion, as can be seen in Fig. 3. It is important to note that in both cases, the individual rankings made by the dominating speakers before the start of the discussion were, according to experts, relatively poor when compared to some of the original decisions made by members of their group. This indicates that the dominant speakers did not have more expertise on the topic of discussion than other members. Interestingly, in both situations, most participants, including both of the dominating members, were not aware that the conversations they had were not balanced. Moreover, when asked, they were not able to determine which member did, in fact, dominate the meeting.


Fig. 3. Participation of group members in the choice shift task.




We drew two conclusions from the study. The first is a confirmation that differences in participation are not necessarily attributable to differences in expertise, whereby the more expert peer would participate more. The second, more surprising, is that it is not always obvious to members of a group which member spoke more than the others, even when one speaker dominated the conversation significantly.
In conclusion, the technology we present here aims at balancing group participation in terms of verbalization. Although the ultimate aim of the system is to improve learning outcomes of collaboration, its direct aim is to balance participation, and thus, its design, evaluation, and analysis are made with that direct aim in mind.
3. Related Work
Jermann et al. [ 14] describe three types of computer support for collaborative learning. These vary depending on their level of active involvement in the regulation process. Coaching systems observe and interpret the collaborative setting and provide advice to the learners. Less active are metacognitive tools that summarize to the users, via a set of key indicators, the state of the interactions taking place without giving advice on how to interpret or act on these indicators. Finally, mirroring tools simply reflect to the users their basic actions by informing them what each member of the group has done. By increasing their awareness of what they are doing, mirroring tools help members maintain a common representation of what is taking place in the collaborative process. The system we propose here is of the mirroring type. It displays to the users a basic representation of the actions they have taken, namely, the amount of speech they have produced, without offering advice or interpretation on the state of the interaction.
Researchers in the field of Human-Computer Interaction have already done some work on influencing group conversation with mirroring displays. Most prominently, DiMicco et al. have explored the effect of such visualizations on speaker behavior [ 15], [ 16]. They have studied both the effects of having this information displayed in real time as the conversation takes place and of having this information displayed between meetings as a replay tool. Their system, Second Messenger, showed promising results for mirroring displays. The replay tool had a significant effect on speaker behavior after it was displayed. Overparticipators spoke less and underparticipators spoke more. This desired effect was not completely achieved when only the real-time tool was used. By displaying information in real time, Second Messenger pushed overparticipators to reduce their levels of participation but the effect was not as strong for underparticipators.
Other researchers have also studied the effects of these visualizations. Bergstrom and Karahalios implemented two systems, the Conversation Clock [ 17], [ 18] and Conversation Votes [ 19]. In both systems, a visualization representing the current conversation is projected onto some shared surface. The Clock shows which member of the group spoke at each time and allows the users to get a snapshot of the conversation history every time they look at the surface. Conversation Votes goes further and allows members of the group to anonymously “vote” indicating to the table whether or not they agree with what is being said. This information is visualized onto the table along with the speaking patterns of the users. The authors reported varying reactions to the visualizations especially in terms of reactions to long-term and short-term history, as well as changes in behavior among above and below average speakers.
Our work follows a similar approach of displaying information about speaker participation to group members. Our originality comes from achieving similar benefits in terms of balanced speakers while retaining as much of the natural behavior of group members as possible. The display becomes embedded into everyday furniture, and with the use of directional microphone arrays, we eliminate the need for lapel microphones or headsets. The result is a regular table augmented with a semiambient real-time feedback of a conversation taking place around it.
This notion of embedding computing functionality in real-world objects is developing as a research trend on its own. The name “roomware” has been given to this type of device and has been described as an “umbrella” framework for four fields: ubiquitous computing, computer-supported collaborative work, augmented reality, and architecture [ 20]. Countless devices have been developed that satisfy the criterion of roomware: real-world objects with embedded computing. Lamps, clocks, tables, walls, and floors have been augmented with computational functionality ranging from simple single-purpose devices such as a clock to improve location awareness for family members [ 21] to elaborate multipurpose table surfaces for supporting collaboration [ 22].
The purpose of our work, namely, influencing group behavior in order to foster interactions that improve learning outcomes, falls within the scope of CSCL. Our method, however, falls within the realm of roomware with the specific purpose of augmenting collaborative spaces by embedding within the physical table a tool that helps increase awareness of member participation.
4. Description of the System
Reflect is an interactive table designed to address the issue of unbalanced participation. We first describe the conceptual design in relation to its objective. We then detail the physical design, which we motivate with some constraints that we imposed on the system.
4.1 Design Objectives
The aim of Reflect is to function as a mirroring tool for collaborative groups. The term mirroring tool refers to the informative, rather than normative, nature of the system [ 14]. Mirrors do not tell their users what they are doing right and what they are doing wrong, in the same way that a bathroom mirror does not tell a user if their hair looks good or not. It simply shows them a reflection of their current state and leaves it for the users themselves to decide what, if anything, needs to be changed. In the same manner, Reflect is not meant to judge the quality of the interaction, nor is it meant to actively pursue a more balanced collaboration on the part of its users. Its role in that respect is to inform the users of the current state of the conversation, and it is up to the users to decide what needs to be done. There are instances where one speaker is expected or even required to participate more than the others, for example, if that speaker is the expert on the subject of discussion. Our system will thus remain neutral in terms of its judgment of the situation and its role will be strictly informative rather than normative. We cannot deny, however, that by making available information on participation levels, we are potentially inducing an implicit norm among at least some members of the group that participation levels need to be monitored, and therefore, controlled.
4.2 Design Requirements
We required our interactive table to abide by certain principles that we found important for a system meant to follow the disappearing computer paradigm of ubiquitous computing.
Regardless of its embedded functionality, the table was required to retain its initial purpose, namely, serving as a table before being a display. Having a conversation or a work meeting around the table should involve minimal behavioral change from the natural use of a regular table. In other words, users should be able to use the table in the same way they would use a regular table, without having to worry about attaching peripherals or other accessories to their bodies. In addition, the surface of the table should remain a working surface. Users should be able to place their notes, laptops, and their coffee mugs on the table.
The table should remain unobtrusive and should not take too much attention away from the task the users are performing. The information it provides to its users is meant to be minimal and require very little cognitive effort to understand. It is thus important that the table not draw a lot of attention to itself and away from the real task at hand.
Despite the unobtrusive criterion, the table must nonetheless be visible and should not be so discreet that it is ignored completely. The information should be prominently displayed in a shared location and should be within the peripheral vision of the users, i.e., the part of the users' field of vision that is not the focus of their attention, but of which they have at least a minimal awareness.
A balanced trade-off between the unobtrusive and visible criteria forms what we refer to as the semiambient nature of the table.
4.3 Physical Design
With the requirements above in mind, Reflect was designed as an interactive table for four people. In its center, three microphones, forming what is referred to as a microphone array, allow the system to detect which participant is speaking at each point in time. This is done by selectively filtering the sounds coming from different directions around the array and converting them into separate channels that can be listened to individually. This process, performed by a special-purpose system developed by Illusonic [ 23], is called beamforming. It permits the table to determine the direction the sound is coming from, and hence the current speaker, reliably and without requiring the users to carry any wearable artifacts such as microphones or other sensors. When overlap in speech occurs, the system registers only the loudest of the overlapping speakers. Users can thus simply sit at the table and begin their collaboration without the need to log in or to use equipment not generally required around a normal table. A sturdy glass surface permits the table top to be used as a regular working surface.
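To make the speaker-attribution step concrete, the following minimal sketch in Python (variable names are our own; this is not the Illusonic system itself) shows one way to assign each short audio frame to the loudest active seat, reproducing the behavior described above when speech overlaps.

    # Minimal sketch: given one beamformed audio channel per seat, attribute each
    # frame to the loudest active seat, or to no one when the room is silent.
    import numpy as np

    def attribute_frames(channels: np.ndarray, frame_len: int, threshold: float):
        """channels: array of shape (n_seats, n_samples), one beamformed signal per seat.
        Returns, for each frame, the index of the loudest seat, or -1 for silence."""
        n_seats, n_samples = channels.shape
        n_frames = n_samples // frame_len
        speakers = np.full(n_frames, -1)
        for f in range(n_frames):
            frame = channels[:, f * frame_len:(f + 1) * frame_len]
            energy = (frame ** 2).mean(axis=1)      # short-time energy per seat
            if energy.max() > threshold:            # ignore silent frames
                speakers[f] = int(energy.argmax())  # loudest seat wins on overlap
        return speakers

    # Per-seat talking time then follows directly, e.g.:
    # talk_time[i] = (speakers == i).sum() * frame_len / sample_rate

Accumulating such frame labels over the course of the discussion yields the per-speaker talking times that drive the visualization.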
The display of the table is a matrix of $8 \times 16$ multicolor Light-Emitting Diodes (LEDs) that lie beneath a frosted glass surface. The LEDs are individually addressable and form a very low-resolution screen. This choice of display is mainly motivated by the unobtrusive criterion: the information displayed on the surface of the table should not be so complex that it requires significant attention from the users. Having the display at the center of the table, covering most of its surface, makes the information difficult to miss. The bright light of the LEDs also helps the information retain its visibility even in well-lit rooms.
Though it is easy to see why this design satisfies the serving-as-a-table criterion, it is less obvious that the resulting table would be unobtrusive and visible. We return to this question later in the paper when we describe the results of the user study.
4.4 Visualization
Given the input of the beamforming microphone array and with the LED matrix as output, we were free to design a wide range of visualizations. Notably, the territorial display, as shown in Fig. 1, visualizes the conversation with four “territories” of lit LEDs, one around each speaker. The territories have different colors for different speakers, and they grow in size according to the speakers' levels of participation up to the point where one speaker's territory may begin to expand into the others' territories.
Another visualization that was implemented is shown in Fig. 4. We refer to it as the column visualization; it shows the participation levels of speakers as columns of LEDs, colored differently for each user. The more a user speaks, the more LEDs in his or her column light up. The result is a simple visualization that makes it very easy for users to compare their participation levels, and may even encourage them to do so.
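As an illustration of how such a mapping could work, the short sketch below converts per-user talking times into a number of lit LEDs per column. The scaling rule (an equal share lights half of a column) and the assumption of eight LEDs per column are ours; the exact mapping used by Reflect is not specified here.

    # Illustrative sketch (assumptions ours): map each user's share of the total
    # speaking time to a number of lit LEDs in his or her column.
    N_ROWS = 8  # LEDs assumed available per column on the 8 x 16 matrix

    def column_heights(talk_times, n_rows=N_ROWS):
        """talk_times: per-user speaking times in seconds.
        Returns the number of LEDs to light in each user's column."""
        total = sum(talk_times)
        if total == 0:
            return [0] * len(talk_times)
        n_users = len(talk_times)
        # A user holding exactly an equal share (1/n_users) lights half the column.
        return [min(n_rows, round(n_rows * (t / total) * n_users / 2))
                for t in talk_times]

    print(column_heights([120, 60, 40, 20]))  # -> [8, 4, 3, 1]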


Fig. 4. Four subjects taking part in the experiment in the speaker-based condition. Four labeled columns of LEDs can be seen on the table, indicating to the users their participation levels.




Though our initial favorite was the territorial display, the column visualization was the one chosen for the user study for reasons that we will make clear later on.
5. User Study
In order to evaluate the effect Reflect has on collaborative work, we conducted a user study with the aim of validating two hypotheses:

    H1. Individuals are more aware of their own and their partners' levels of participation when using Reflect. By validating this hypothesis, we would be able to conclude that the information displayed on the table is seen and assimilated into the user's mental model of the conversation taking place.

    H2. Groups that are shown their levels of participation on Reflect are more balanced than those that are not. By validating this hypothesis, we would conclude that the information displayed on the table is used by the participants as a tool to reduce over- or underparticipation.

5.1 Description of the Experiment
Groups of four students were randomly selected from a pool of bachelor students who had volunteered for the experiments. The study included 18 groups (72 subjects: 44 male and 28 female). All-male, all-female, and mixed groups were used. Subjects were paid 50 Swiss francs (around 45 US dollars) for their two-hour involvement in the experiment. The groups were asked to solve a murder mystery task offered to us by Stasser and Stewart [ 24]. The task materials were translated into French and adapted for groups of four. In this task, each subject was given a copy of investigation logs that included maps, interviews, and a snippet of a news article. They were asked to accuse one of three suspects of having committed the murder. Each individual version of the investigation logs contained certain important pieces of information that were not available in the others. This ensured that all subjects were required to participate in the discussion in order to gather all the necessary information. This type of task, referred to as a hidden profile task, is often used in experiments involving group decision-making and information pooling [ 24].
5.2 Experimental Conditions
We used two experimental conditions that were identical except for the content of the information displayed on the surface of the table. In the first condition, the students were shown their levels of participation, i.e., how much time each student talked. This condition will be referred to as the speaker-based condition. In the second, they were shown the focus of the discussion, i.e., how much time was spent discussing the case of each of the three suspects in the murder mystery. This condition will be referred to as the topic-based condition.
We note here that we are not particularly interested in observing the effect of a topic-based visualization on the behavior of groups. Displaying information about topic balance serves the purpose of providing a situation against which we can compare the effect of the speaker-based visualization. To that effect, the topic-based condition could have been replaced with a condition in which no visualization is displayed at all. However, we introduced the topic-based visualization rather than no visualization in order to counter the effects of novelty and potential distraction that the speaker-based visualization would have had compared to a condition where no visualization at all is presented.
In both conditions, the column visualization was used. In fact, the choice of visualization was motivated by the need for a single visualization that could be used in both conditions. Although the territorial display may have been more suitable for displaying speaker levels, it is not at all suited for displaying the time spent on each topic since, unlike the speakers, the different topics do not have a meaningful spatial position that would justify the location of their corresponding territories. This was not a problem for the column visualization, as columns are spatially neutral. By labeling the columns with white stickers placed at both ends of the table, we were able to assign any kind of information to each column. Both conditions were thus made as similar as possible to one another, with the exception of what information is displayed on the surface of the table.
Participation levels were detected automatically by the table. The subject of discussion was determined using the “Wizard of Oz” technique, i.e., with a human listening to the conversation as it took place and remotely signaling the topic of discussion to the table system.
A third neutral condition, in which no information is displayed on the table, was not included in the design of the study as it would have been quite costly and the benefits of having such a condition were not compelling enough.
5.3 Experimental Procedure
The students were first asked to read the investigation logs individually for 30 minutes, during which the table was used as a simple timer that kept the students informed of the time remaining. The students were allowed to annotate their copies of the logs and were told that they would keep the copies with them during the discussion. At that point, the students were not yet informed that their copies of the investigation logs contained information that was not available to others.
The students were then given 60 minutes to reach consensus on a suspect. In order to start the discussion, the students were asked to come up with possible means, motive, and opportunity for committing the crime for each suspect. They were informed that, in order to accuse a suspect, they must be convinced that he had all of these three elements against him and the other two suspects were missing at least one of the elements. The students were then made aware that they may possess unique information that is not available to others. In addition, they were told that they were not permitted to give their copy of the investigation logs to another participant and each participant was only allowed to read from his or her own copy. Finally, the visualizations were explained to the students, but no mention was made of the theoretical benefit of a balanced discussion either in terms of participation or subject focus.
5.4 Data Collection
During their discussion, the students were filmed and their voices were recorded using the built-in microphones of the table. Logs of participation levels and the time spent discussing each suspect were generated and saved. At the end of each experiment, the subjects were asked to fill in a postexperiment questionnaire that contained 19 questions mostly about the experience they had during the experiment and included four open questions. The questionnaire also asked the users to estimate the amount of time each group member spoke as well as the amount of time they spent discussing each suspect.
6. Results
One group was excluded from the analysis of the logs because an unintentional error led to the loss of its recordings and logs. Its questionnaires were unaffected and were included in analyses related purely to questionnaire responses.
6.1 The Visible and Unobtrusive Criteria
Recall from the description of the design of the table that compliance with two of its design requirements, the visible and unobtrusive criteria, remained to be verified. We address this issue here.
The postexperiment questionnaire included some questions meant to get a sense of how subjects perceived the table. Some of the questions and their answers will shed some light on this issue. When asked “Did you look at the table?” the vast majority of the subjects in both conditions said they looked at the table either “sometimes” or “often,” as shown in Fig. 5.


Fig. 5. Responses to the question “Did you look at the table?” by condition.




In terms of the intrusiveness of the table, 86 percent of participants said they were not bothered by the table and 60 percent said they were not distracted by it. These responses vary across conditions, as shown in Fig. 6. Note that in the speaker-based condition, which is the condition of primary interest to the study, only 25 percent reported being distracted by the display.


Fig. 6. Percentage of subjects who answered “yes” to the questions “Did the display on the table bother you?” and “Did the display on the table distract you?” across conditions.




A minority of 15 percent reported feeling “uncomfortable with seeing their participation levels displayed for all to see.” Finally, when asked if they would like to use such a table for other meetings, 66 percent answered “yes” in the speaker-based condition, whereas only 25 percent answered “yes” in the topic-based condition.
We can thus conclude that the table design seemed to satisfy its visibility criterion, in that its visualization was looked at most of the time. The subjects also seemed comfortable with the table showing their levels of participation, enough to want to use it in the future. Few reported being bothered by it, but a quarter of the users were distracted. These results indicate that the table satisfies its unobtrusive criterion to a large extent, but there is nonetheless room for improvement.
6.2 General Effect on Balancing Participation
For measuring the effect of the table on balancing participation levels, we compared how balanced groups were in the speaker-based condition versus the topic-based condition. We measured balance as the difference between perfectly balanced participation (i.e., taking up 25 percent of the total speaking time of the group) and each user's participation level.
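Written with notation introduced here for clarity, if $s_i$ is the total talking time of participant $i$ in a group of four, the participation share and the imbalance score we compare are

$$p_i = \frac{s_i}{\sum_{j=1}^{4} s_j} \times 100, \qquad b_i = \left| p_i - 25 \right|,$$

where $b_i$ is expressed in percentage points and smaller values indicate more balanced participation.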
We started by comparing the means of individual user balance across conditions using an independent samples t-test. We found no significant difference between how balanced users were in the speaker-based condition and the topic-based condition ( $m_s = 7.29, m_t = 8.1, t[62] = -0.59, p > 0.1$ ).
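For reference, a comparison of this kind can be reproduced with a standard independent-samples t-test; the short sketch below (using scipy, with hypothetical variable names) mirrors this analysis step.

    # Sketch of the analysis above: compare per-user imbalance scores b_i between
    # the speaker-based and topic-based conditions with an independent-samples t-test.
    from scipy import stats

    def compare_conditions(speaker_scores, topic_scores):
        """Each argument: list of per-user imbalance scores, in percentage points."""
        t, p = stats.ttest_ind(speaker_scores, topic_scores)
        return t, p

    # Example usage with hypothetical data:
    # t, p = compare_conditions(b_speaker, b_topic)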
We then took a closer look at the result and made the following observation. In the postexperiment questionnaire, the subjects were asked the question: “Do you think it is important for members of the group to participate in a more-or-less balanced manner?” We looked again at the effect of the table on the group members' ability to balance their participation, excluding participants in both conditions who answered “no” to this question (36 percent of the participants in the study). As we mentioned earlier, Reflect is not designed as a tool for enforcing group balance, but rather for supporting it by improving participant awareness. The intention to participate in a balanced manner must thus come from the users themselves, and when this intention is absent, any balancing behavior the user exhibits would likely be coincidental.
With the remaining participants (46 subjects), i.e., those who claimed that balance in participation is important, we compared the means of their balance scores across the two conditions and obtained a statistically significant difference ( $m_s = 5.0, m_t = 8.5, t[38] = 2.18, p < 0.05$ ). In other words, participants who had their participation levels shown to them during the task were significantly more balanced than those who had information about topic focus displayed. This result can be seen in Fig. 7.


Fig. 7. Boxplot showing difference between balance in participation across the two conditions for subjects who claimed to believe participation balance is important.




6.3 Effect on Over- and Underparticipators
We studied the effect of the different visualizations on a specific subgroup of participants, namely, the extreme participators: those who overparticipated and those who underparticipated. We were interested in seeing how, over time, these extreme participators modify their behavior. The objective here is to see whether spending time around the table would eventually lead to a change in behavior. For that, we divided the 60-minute logs into two equal parts of 30 minutes each. We computed the relative participation of each participant during each of the two halves. We then determined which participants were extreme participators during the first half hour, and examined how their participation levels changed during the second half hour.
In line with the method used by DiMicco et al. to determine extreme participators [ 15], we defined overparticipators as those who spoke more than the mean participation level (25 percent) plus the standard deviation of participation levels among all participants. A similar definition was used for underparticipators. We ended up with 10 overparticipators and 10 underparticipators, divided equally across the conditions.
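A minimal sketch of this criterion (variable names ours) is shown below.

    # Sketch of the extreme-participator criterion described above: a participant is
    # an overparticipator if his or her share exceeds the mean share plus one standard
    # deviation; underparticipators are defined symmetrically.
    import numpy as np

    def extreme_participators(shares):
        """shares: per-participant speaking shares in percent, pooled over all groups."""
        shares = np.asarray(shares, dtype=float)
        upper = shares.mean() + shares.std()
        lower = shares.mean() - shares.std()
        over = np.where(shares > upper)[0]   # indices of overparticipators
        under = np.where(shares < lower)[0]  # indices of underparticipators
        return over, under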
We observed that, on average, during the first half hour, overparticipators in the speaker-based condition spoke less than overparticipators in the topic-based condition, though the effect was not significant. More interestingly, in the second half hour, overparticipators in the speaker-based condition spoke less than they did during the first half hour, while in the topic-based condition, they spoke even more. When comparing the second half-hour participation levels of overparticipators across conditions, we observed a significant difference between the conditions ( $m_s = 37.1, m_t = 47.6, t[8] = -3.97, p < 0.01$ ). The effect is similar for underparticipators. During the first half hour, underparticipators spoke more in the speaker-based condition than in the topic-based condition, and in the second half hour, they increased their participation in the speaker-based condition and reduced it even further in the topic-based condition. However, when comparing the second half-hour participation levels across conditions, the difference is not significant ( $m_s = 11.5, m_t = 6.1, t[8] = 1.304, p > 0.1$ ). These results, as illustrated in Fig. 8, are similar to the findings of DiMicco et al. [ 15].


Fig. 8. Change in participation levels of extreme participators in both conditions. With the speaker-based visualization, overparticipators reduce their levels of participation and underparticipators increase theirs. In the topic-based condition, both types of extreme participators move in the direction of further imbalance.




Though some of these results do not show a statistically significant effect, which is possibly related to the small number of extreme participators, they do show a trend indicating that the table has the desired effect on participation levels.
6.4 Effect on Individual Awareness
We measured the effect the table has on the subjects' ability to estimate both the speaker levels of all participants and the time spent on each topic of discussion (i.e., the suspects). We wanted to evaluate how much users are aware of the information displayed on the surface of the table. The subjects were thus asked, as part of the postexperiment questionnaire, to estimate for each member of the group, including themselves, the relative level of participation. Note that the visualization on the table was switched off just before the participants were informed that the task was over, and the questionnaire was handed out about a minute afterward. We computed the estimation error of each participant as the sum of differences between their estimate of how much each subject spoke and the actual percentage of time that subject spoke.
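One plausible formalization of this error measure (notation ours; we assume absolute differences are summed) is

$$e_k = \sum_{i=1}^{4} \left| \hat{p}_{k,i} - p_i \right|,$$

where $\hat{p}_{k,i}$ is participant $k$'s estimate of participant $i$'s share of the speaking time and $p_i$ is the share actually measured by the table.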
For all estimations made, the participants were significantly better at estimating the information in the condition where that information was displayed to them. In other words, when estimating speaker levels, the average error made by the users was significantly lower in the speaker-based condition than in the topic-based condition ( $m_s = 4.0, m_t = 5.8, t[62] = -3.3, p < 0.01$ ), and when estimating time spent on each suspect, the average error was significantly lower in the topic-based condition than in the speaker-based condition ( $m_s = 5.8, m_t = 4.3, t = 2.4, p < 0.05$ ). These results are summarized in Fig. 9.


Fig. 9. Error levels while estimating speaker levels and time spent on different suspects across both conditions.




6.5 Effect on Topic Balance
In addition to the effect of the table on group balance in terms of participation levels, we also investigated the effect on balance in topic discussion for the topic-based condition. There are, of course, some conceptual differences between topic balance and participation balance. Unlike participation levels where each member of the group is primarily responsible for his or her own level of participation, no single member is responsible for how much time is spent on each topic. In addition, changes in topic occur much less frequently than changes in speaker, especially near the beginning of the discussion. When the group begins discussing one suspect, they tend to stick to that suspect for a long time before moving to a next one. Finally, the nature of the task does not necessitate that suspects are discussed equally. Some details of the murder mystery require more in-depth discussion than others.
That said, we report that no significant difference occurred in terms of topic balance across the conditions ( $m_s = 5.7, m_t = 6.1, t[49] = -0.24, p > 0.1$ ). The time spent on individual suspects in the experiments varied greatly among groups. Not surprisingly, a large number of participants (70 percent) felt that it is not “important to spend more-or-less the same amount of time discussing the case of each suspect.” In the case of participation levels, we were able to put aside subjects who felt that speaker balance is unimportant. However, we cannot do so here since, as stated before, topic balance is not determined by individual users, but by the group as a whole.
6.6 Qualitative Findings
In order to better understand the effect of the table on our subjects, we present here a brief summary of a qualitative analysis done with one of the groups that took part in our experiment. A more detailed breakdown of this case can be found in our previous work [ 25]. These subjects solved the task in the speaker-based condition. We focus on this group in particular because of the differing effects the table had on its individual members.
For our analysis, we considered the subjects' responses to two of the open questions in the postexperiment questionnaire:

    1. Can you indicate one or more occasions where the visual display influenced your behavior?

    2. Can you indicate one or more occasions where the visual display had a negative impact on the collaboration?

Fig. 10 shows the rate of participation of each member in this group over time.


Fig. 10. Rate of participation of the members of one group, i.e., the amount of speech produced by each member over a given window of time. Three of the speakers' participation rates clearly converge, whereas one speaker remains virtually silent.




Some observations were made about this group discussion.

    1. Participant C, whose rate of participation started low but increased to match that of B and D, responded to the second question by saying that when she noticed that her LEDs weren't lit, she got “frustrated.”

    2. Participant D also exhibits balancing behavior by reducing her level of participation to match two of her group members. In her questionnaire, she explicitly noted that she “tried not to surpass the speaking time of [Participant B]” and sometimes, she “refrained from talking to avoid having a lot more lights than the others.”

    3. Participant A, on the other hand, participated very little at the start, and even less in the second half of the discussion. He reported that he rarely looked at the table and did not feel it is important for members of the group to participate equally. Note that the three other participants reported that they looked at the table either sometimes or often, and all three felt that it was important for members of the group to participate equally.

This case study provides further insight into the potential balancing effect this table can have on group discussion as well as the lack of effect it can have on some individuals. It also highlights the informative and nonnormative role the table has in this kind of setting.
7. Discussion and Limitations
The results of the experiment allow us to draw some conclusions about the effect of a device such as Reflect on group behavior. We summarize the main findings here.
7.1 Validation of Hypotheses
Our first hypothesis is validated: Users are more aware of their participation levels when using the table in speaker-based mode. The significant difference we found when comparing errors in estimating participation levels indicates that the use of the table increased user awareness of these levels. This, of course, does not imply that the users directly used the display of the table to learn these levels. It is also possible that by simply knowing that this information was displayed, the users became more conscious of how much they and others were participating. On the other hand, with 88 percent of the users reporting that they looked at the table at least sometimes (96 percent in the speaker-based condition), it seems safe to claim that the information displayed on the table did indeed increase awareness of participation levels among the members of the group.
The second hypothesis is only partially validated: Users who were shown their participation levels were more balanced than those who were not. Though this turned out to be true in general, it is only statistically significant when considering users who claimed to believe that it is important to participate in a balanced manner. Given the informative, rather than normative, nature of the table, this is not surprising. The table does not raise a red flag when a participant speaks too much or too little, thereby prompting them to balance their behavior. If a user speaks too much and believes that it is acceptable to do so for whatever reason, being made aware of their overparticipation will not push them to reduce their levels of speech.
Our results also showed a significant difference in the second half-hour participation of overparticipators across conditions. Underparticipators also increased their participation in the speaker-based condition and decreased it further in the topic-based condition, though the difference was not statistically significant. In both cases, however, the trend is clear: Extreme participators are pushed in the right direction by having the participation levels displayed. However, given the small number of extreme participators, this result is only partially conclusive, and further investigation is needed to establish whether the effect is truly present.
7.2 Limitations of the Study
As a first study, this experiment tried to understand the effect Reflect has on small groups. Due to the laboratory nature of this study, the subjects used the table for short periods of time, and only once. They were working with people they did not know beforehand and would likely never meet afterward. This limits our ability to generalize the results to possible real-world uses of the table. For example, if a group of four people who work together on a daily basis have regular meetings around such a table, what will the effect be? Will they eventually lose interest in the feedback provided by the table and start ignoring it? Or will they learn to build a sense of trust with the table as an objective observer and rely on it for guidance? These questions cannot be answered by our one-hour experiments. In the concluding section of this paper, we describe another study currently being prepared that will address these questions.
The study also did not address the question of group performance in terms of learning benefits. However, as discussed in Section 2, the technology was evaluated with respect to its direct goal of balancing participation. The effect of the technology in terms of its ultimate goal of improving learning gains is yet to be addressed. We content ourselves in this paper with the expected theoretical benefit on learning gains, given the observed effect on group balance.
8. Conclusions and Future Work
We presented an interactive table, Reflect, that is designed to support collaboration between small groups. Reflect listens to the conversation taking place around it and displays information on its surface about the levels of participation of the speakers. We conducted a study that shows that the table does indeed increase awareness of group members about their participation levels. It also, under certain conditions, leads group members to participate in a more balanced manner. We observed a stronger effect of overparticipators reducing their participation than underparticipators increasing theirs.
To further understand the effect of the table, we will soon conduct a real-world study where four prototypes of the table will be placed in four different workplaces for a period of several months. We will observe the effect the table has on groups of people after long-term regular use.
Our ultimate goal will be to address the question of how group members are participating, instead of simply how much. Recall from Section 2 that verbalization in and of itself is not a predictor of learning gains; it is rather a manifestation of certain types of interaction that are predictors of better learning outcomes. We are thus currently exploring the use of pitch and other prosodic features of the voice in order to attribute to each speaker not only a participation level, but also a manner of participation and possibly even a role. By knowing which members of the group are engaging in interactions that foster learning (rather than which group members are simply speaking), the table might be able to provide more meaningful feedback to the group. The current state of the art indicates that a lot can be told about the outcome of an interaction simply by observing basic vocal features [ 26]. We aim to incorporate this type of vocal analysis in future versions of Reflect.
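As a purely illustrative example of the kind of prosodic feature extraction involved (this is not the pipeline used by Reflect), the sketch below estimates the mean fundamental frequency of a single recorded speaker turn with the librosa library; the function name and file path are hypothetical.

    # Illustrative sketch only: extract one basic prosodic feature (mean voiced F0)
    # from a recorded speaker turn using librosa's pYIN pitch tracker.
    import librosa
    import numpy as np

    def mean_pitch(turn_wav_path):
        """Return the mean voiced fundamental frequency (Hz) of one speaker turn,
        or None if no voiced frames are found."""
        y, sr = librosa.load(turn_wav_path, sr=None)
        f0, voiced_flag, _ = librosa.pyin(y,
                                          fmin=librosa.note_to_hz('C2'),
                                          fmax=librosa.note_to_hz('C7'),
                                          sr=sr)
        voiced = f0[voiced_flag]
        return float(np.nanmean(voiced)) if voiced.size else None

Aggregating such features per speaker and per turn would be one way to move from measuring how much each member speaks toward characterizing how they speak.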

Acknowledgments

The authors would like to thank the Swiss Federal Institute of Technology, Lausanne, for funding the work presented here. In addition, they would like to thank Christof Faller, René Beuchat, and Martino D'Ésposito as well as Quentin Bonnard and Asheesh Gulati for their help in developing the table. They also thank Jean-Baptiste Haué and Guillaume Raymondon, who were involved in the development of initial prototypes based upon which Reflect was built. Finally, they would like to extend their gratitude to Garold Stasser who generously provided them with the murder mystery task used in their experiments.

    The authors are with Swiss Federal Institute of Technology, EPFL-CRAFT, Rolex Learning Center, Station 20, CH-1015 Lausanne, Switzerland.

    E-mail: {khaled.bachour, frederic.kaplan, pierre.dillenbourg}@epfl.ch.

Manuscript received 27 Aug. 2009; revised 29 Jan. 2010; accepted 8 Apr. 2010; published online 15 July 2010.

For information on obtaining reprints of this article, please send e-mail to: lt@computer.org, and reference IEEECS Log Number TLT-2009-08-0136.

Digital Object Identifier no. 10.1109/TLT.2010.18.

References



Khaled Bachour received the master's degree in computer science from the American University of Beirut, Lebanon. He is currently working toward the PhD degree at the School of Information and Communication Sciences, Swiss Federal Institute of Technology in Lausanne (EPFL). His current work, under the supervision of Professor Pierre Dillenbourg and Dr. Frederic Kaplan, is on the development and evaluation of Reflect, an interactive table for supporting casual collaborative learning, at EPFL's Center for Research and Support for Training and Its Technologies (CRAFT).



Frédéric Kaplan received the graduate degree in engineering from the Ecole Nationale Supérieure des Télécommunications, Paris, and the PhD degree in artificial intelligence from the University Paris VI. He worked 10 years at the Sony Computer Science Laboratory in Paris. He is a researcher with CRAFT at the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland. In recent years, he has been conducting research in various areas of human-computer interaction including several interactive furniture projects, design of robotic objects, gesture-based interfaces, and paper computing.



Pierre Dillenbourg received the graduate degree in educational science from the University of Mons, Belgium, and the PhD degree in computer science from the University of Lancaster, United Kingdom, in the field of educational applications of artificial intelligence. He is a professor of computer science at the Swiss Federal Institute of Technology in Lausanne (EPFL). He started to conduct research in learning technologies in 1984. He has been involved in the CSCL community since the first meeting in 1989 and has been the president of the International Society for the Learning Sciences. His recent work covers various domains of CSCL, ranging from the design and experimentation of collaboration scripts and interactive furniture to more cognitive projects on dual eye tracking and mutual modeling.