A Mobile Mixed-Reality Environment for Children's Storytelling Using a Handheld Projector and a Robot
In GENTORO, children can express their story in an immersive environment where physical and virtual spaces are integrated.
Because children can use a robot that behaves in a physical space as a character in their story, they can express the story as if they were producing a film or a Tokusatsu (special effects movie) [30].
The mobility of handheld projectors and robots enhances the children's embodied participation in their storytelling activities.
Desktop-based storytelling support systems: a typical feature of systems in this category is that children create a story and make story characters play in the virtual world. Some systems enable children to interact with characters shown on a fixed monitor, using conventional input devices such as mice and keyboards or novel input devices.
Physical-space-based storytelling support systems: a typical feature of systems in this category is that children play the roles of characters in their story in a physical space. They can interact with other children or artifacts in an immersive environment enhanced by mixed-reality technologies.
In these systems or toys, children arrange special physical papers or tiles on which scenes are drawn so that a robot can follow a designated path across them. One advantage of using a projector is that animated, dynamic scenes can be expressed easily, which is difficult with physical papers or tiles. Moreover, papers and tiles may become dirty or broken, which is avoided when projected scenes are used.
Using a projector is a common method for implementing AR techniques. These techniques are used for collaborative work and learning applications in immersive environments and have been confirmed to be useful for raising participants' motivation (e.g., [26], [37]). In this study, the inclusion of a mobile projector for robot control is expected to further enhance the children's commitment to and embodied participation in their storytelling activities.
Issues related to story rendering processes.
- Scene drawing module: What functions are necessary for supporting children's scene drawing? What should the user interface of the software look like so that they can easily use it?
- Event definition module: What kinds of events for the robot should be prepared? How can children define an event in an intuitive manner?
- Simulation module: Is the simulation module useful for story expression in a physical space? What kinds of simulation parameters are required?
Issues related to story expression processes.
- Handheld projector: Is it acceptable for children to manipulate a robot by using a handheld projector? What kinds of problems may occur while using the projector?
- Information display: Is it reasonable to visualize information such as words used by characters in the scene via a projector or should they be provided aurally via audio speakers?
- Scene control: How can children change the scene in their story?
3.4.1 Overview Pilot Study 1 was conducted to make explicit the requirements for supporting children's story expression processes. The following main issues were investigated:
Problems or difficulties when children use a handheld projector for controlling a robot, and solutions.
The merits and demerits of multirobot control.
Before the pilot study, several functions of the CoGAME system were extended. These include multirobot control and the visual representation of interactions between the robots. For example, when two robots approached each other, the robots started to rotate, or displayed expressions such as “ouch” or “hello, where are you going?” in a balloon on the projected image.
The pilot study was carried out during a single day in November 2007 at a public elementary school near the authors' laboratory (Kashiwa, Chiba prefecture, Japan). Twenty-four schoolchildren (fourth to sixth grade, aged 9-12) voluntarily participated during after-school hours. The children were divided into eight groups of three and asked to control two robots (a turtle and a rabbit) using handheld projectors, as shown in Fig. 3. The mobile equipment used by the children comprised a small, lightweight LED projector (Mitsubishi LVP-PK, weight 0.5 kg) of reasonable resolution (800 × 600) and brightness, a lightweight USB camera (Logicool Qcam for Notebook Pro, weight 0.04 kg), and a mobile PC with sufficient computational power (Sony VAIO VGN-UX90PS, weight 0.52 kg). Each child held the projector in his/her hands and carried the PC in a shoulder bag, as shown in Fig. 3. In the pilot study, the children in each group first manipulated a single robot separately using their own projectors, but occasionally one of them manipulated two robots simultaneously using his/her own projector. The study lasted about 150 minutes, with each group using the system for 15 to 20 minutes.
3.4.2 Results The major findings of the study obtained via questionnaires and video analyses were as follows:
1. Twenty-two children (92 percent) could easily understand how to manipulate the robot and could control its movement as they intended.
2. Via the questionnaires, 18 children (75 percent) were found to have a strong interest in the interactions between two robots. Four groups (50 percent) repeatedly improvised expressions for the robots when they approached each other (e.g., “hey, nice to meet you again”) and started their rotating movement (e.g., “oh, you are too strong, help me”).
3. Two children (8 percent) in two groups could not control the robot stably because they had difficulty in holding the handheld projector.
4. The system sometimes failed to recognize robots via the camera, which made robot control by the children impossible. In this case, the children could not understand what had happened and why they could not control the robots.
5. Transfer of robot control from one projector to another did not always work smoothly, and this often irritated the children.
6. When short children (in many cases, the younger children) used the system, they had to control the robot via a small projected image because of the short distance between the projector and the screen (the floor). This was a problem because text and visual objects shown in the projected image were too small to be recognized.
7. The display of a robot's words in a balloon was legible if there were only a few letters (less than 10 Japanese letters). Otherwise, the small size of the letters made it difficult to read the words. In addition, it was almost impossible for the children to keep a constant distance between their projector and the floor while manipulating the robot, which meant that with no autofocus function in the projector the projected image often became blurred.
Findings 1 and 2 represented positive results for GENTORO. In particular, Finding 2 indicated that the physical interactions between multiple robots could inspire children's storytelling. However, the remaining findings were negative results. For Finding 3, it was necessary for the projector to be designed so that children could hold it easily. Because the projector used in the pilot study was not as light and small as a cellular phone, which even children can hold one-handed, the hardware must be designed so that the projector can be held easily with both hands.
Finding 4 made it clear that improvements in robot recognition, and in the information displayed when recognition fails, were critical. The system used in the pilot study recognized the position and orientation of each robot by identifying the distinct blinking patterns of three infrared LEDs mounted on its surface. If the camera attached to a projector is located directly above the robot, it can capture all LED signals and the robot is completely recognized. However, in a multirobot situation, with the robots moving away from each other, some robots frequently went beyond the camera's field of view. In that case, to control the robots, the children had to step away from them and tilt the camera so that all the robots were within its field of view, which could lead to robot recognition failure. This means that, in that implementation of the system, simultaneous multirobot control was not always stable, and visualizing interactions between multiple robots, which engaged the children (Finding 2), did not always work properly.
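The recognition scheme described above identifies each robot by its LED blinking pattern, but leaves the pose computation implicit. As one plausible sketch of how a robot's 2D position and heading could be derived once the three LEDs have been located in the camera image (the LED layout, coordinate convention, and function name are assumptions for illustration, not details from the paper):

```python
import math

def estimate_pose(front, rear_left, rear_right):
    """Estimate a robot's 2D pose from the image positions of three LEDs,
    assuming one LED at the front and two at the rear of the robot.

    Inputs are (x, y) image coordinates; returns the centroid position and
    the heading angle (radians) from the rear midpoint toward the front LED.
    """
    # Position: centroid of the three LED image points.
    cx = (front[0] + rear_left[0] + rear_right[0]) / 3.0
    cy = (front[1] + rear_left[1] + rear_right[1]) / 3.0
    # Orientation: direction from the midpoint of the rear LEDs to the front LED.
    mx = (rear_left[0] + rear_right[0]) / 2.0
    my = (rear_left[1] + rear_right[1]) / 2.0
    heading = math.atan2(front[1] - my, front[0] - mx)
    return (cx, cy), heading
```

This assumes the camera looks straight down at the floor, which matches the paper's observation that recognition works best when the camera is directly above the robot; a tilted camera would require an additional perspective correction.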
There are several technical approaches to this problem, such as using a highly functional camera with a wide-angle lens instead of the small lightweight USB monocular camera, or using visual markers, similar to ARToolKitPlus markers [33], which show higher recognition rates than infrared LEDs. However, the authors took neither approach: in the former, a highly functional camera was expensive at the time of this study, and attaching it to a projector would make the projector heavier; in the latter, the markers would be too obtrusive [34] and might distract children from their storytelling activities. Our primary goals are to clarify the effects of using a handheld projector and a robot for children's storytelling and to investigate the design of a storytelling support tool that is acceptable to children. Therefore, the design decision was made to use a lightweight camera for identifying a single robot, postponing the issue of simultaneous multirobot identification.
There were several reasons for Finding 5, including the robot recognition problem described above and hand jitter that made the recognition of the robot unstable when it was at the edge of the projected image. Different control transfer methods should be investigated.
For Finding 6, it is possible to make a projected image larger by using a wide-angle lens or mirrors to increase the distance to the floor from the projector. However, using a wide-angle lens requires nonlinear real-time image calibrations that involve considerable computational cost. Adding mirrors leads not only to difficult optical problems (e.g., optical axis adjustment) but also to a heavier projector. Video analyses of the pilot study indicated that tall children (in many cases sixth graders) could hold the projector sufficiently high to show a large projected image. Therefore, it was decided to restrict the target users to sixth graders in the absence of improved hardware components for the projector.
Finding 7 indicated that the visual representation of the characters' words or the narration as text in a projected image was not ideal, and other methods, such as auditory representation, should therefore be considered. To provide an autofocus function for sharpening the projected image, one solution is to measure the distance to the floor via external sensors such as ultrasonic sensors. However, a preliminary experiment showed that the measurement was not always accurate and that it was difficult to adjust the focus automatically and precisely. Adding sensors would also make the projector heavier. Therefore, it was decided not to add a sensor for distance measurement and to omit an autofocus function.
3.5.1 Overview For Pilot Study 2, software modules to support story rendering processes were implemented. These modules included a scene drawing module, a robot path setting module, and a story simulation module. The children used a tablet PC (HP tc4200) for scene drawing tasks. They then set the path for robot movement in each scene; the direction of movement was specified by a pen stroke on the PC. When the children had completed the scene drawing and the robot path setting, they conducted story simulations to check the appearance of the rendering of their story on the tablet PC, as shown in Fig. 5. During the simulations, each scene automatically changed to the next at a specified time. The duration of each scene was decided by the time needed for the children to speak the characters' words and the narration of the scene. The time for each scene was also used to control the speed of the robot during story expression in the physical space.
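How a scene's duration translates into a robot speed is not spelled out in the paper. A minimal sketch of one natural calculation, assuming the pen-stroke path is stored as a polyline and the speed is simply the path length divided by the scene duration (the function names and representation are illustrative):

```python
import math

def path_length(points):
    """Total length of a robot path given as a polyline of (x, y) points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def robot_speed(points, scene_duration):
    """Speed the robot needs in order to traverse the whole path within the
    scene's duration (units per second, for whatever units the path uses)."""
    return path_length(points) / scene_duration
```

For example, a 5-unit path in a 10-second scene yields a speed of 0.5 units per second; a longer narration (a longer scene duration) would slow the robot down correspondingly.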
Pilot Study 2 was carried out during a single day in March 2008 at the same elementary school as Pilot Study 1. Seven children (sixth graders aged 12, four boys, and three girls) who had participated in Pilot Study 1 volunteered. The children were divided into two groups of three and four. For this study, the children first spent about one hour on their story design, discussing the theme of their story and its script. The children then spent about one hour using the software modules for their story rendering. Finally, they spent 30 minutes expressing their story three to four times. The children spoke the words of the characters, narrated their story, and manipulated a robot by handheld projectors.
3.5.2 Results From Pilot Study 2, the following issues emerged via video analyses:
The children were not confused by the use of the scene drawing module. However, they did not favor the module for the drawing task. As the resolution of the tablet PC was low compared with pen and paper, and the children were trying to draw finely decorated objects, they were not satisfied with their drawings and tried to draw scenes repeatedly.
The children could easily set a path for the robot in each scene by using the robot path-setting module.
The story simulation module was effective, with the children trying to check several things such as the movement of the robot and the tempo of their story. The children specified the duration of each scene and used the simulation module for the rehearsal of their story expression tasks.
In the story expression tasks, each scene changed to the next automatically after the specified duration, and the active projector (the projector showing the projected scene for controlling the robot) switched from one child's projector to another accordingly. However, the automatic scene transfer did not always occur as intended: the timing of the scene changes was not sufficiently synchronized with the children's speech, because the children often spoke the characters' words and the narration faster or slower than expected.
Observations of the study and post-experimental interviews with the children and schoolteachers elicited the following findings:
The children put physical objects (their belongings such as erasers and pencils) on the floor and then expressed their story. These objects were used either as landmarks to lead the robot or as meaningful scene objects mentioned in their story.
The children spent longer than expected on their story design and the rendering processes. In particular, they had difficulty in deciding on their story theme because, individually, they had different ideas and had to spend a long time making the ideas converge to an agreeable theme. Children tend to lose their concentration and interest if much time is needed to complete a task. Therefore, it is better to ask children to select their story theme from some alternatives, rather than ask them to invent it by themselves. Similarly, the number of scenes drawn by the children should be limited.
During the expression of their story, two children in the group worked on the manipulation of the robot using the handheld projector, and the others spoke the characters' words and the narration. It was found that speaking a character's words and the narration while confirming the moves of the robot and considering the timing of scene changes was too difficult for one child alone. During the story expression tasks, each child concentrated on his/her own task and did not pay full attention to the others' tasks. Therefore, to make the story expression successful, the number of children in a group should be increased, and each child's task load should be reduced.
Story rendering function.
- Instead of using the scene drawing module on a tablet PC, children draw scenes with paper and colored magic markers. Scenes drawn on the paper are scanned and used by the other modules in the story rendering processes.
Story expression function.
- A handheld projector must be designed so that children can easily and stably hold it with both hands.
- Scene changes during story expression tasks must be conducted manually. A scene control device must be introduced that enables a child to start, finish, and change scenes intuitively.
- To let the children know if a robot is successfully recognized by a camera attached to a projector, and to improve its recognition, a visual indicator must be shown via the projector.
Because of the size and weight of currently available handheld projectors, the target users in elementary school should be sixth graders.
The number of children in a group should be five or more. For a group of five children, two children should work on the manipulation of the robot via the handheld projector, one child on speaking the characters' words, one child on the narration, and one child as the “director” who gives directions to the other children to enable coordinated and synchronized behavior. This child manipulates a scene control device, as described below.
5.3.1 Episode Analysis To investigate how GENTORO affected the children's stories, one group's discourse during the story design processes in the first and second trials was analyzed. The story theme of the group was “friendship.” In the first trial, they devised the idea of using a pit to represent a character's comeback from his life crisis. In the second trial, they were inspired by the movement of the robot in their story expression process and devised a different idea, namely dropping physical objects for the robot (the left picture in Fig. 1). In the following transcripts, phrases and sentences in square brackets have been added by the authors to clarify the meaning of the discourse.
In the first trial:
Girl1: Crisis of the turtle? OK, crisis… well, how about a pit
[for expressing the crisis].
Girl2: Wow, good idea!
Boy2: But we cannot create a pit [in the field].
Girl3: Listen! How about [using] black paper cut in a circle
and putting the turtle on it. It looks like a pit, doesn't it!
Boy1: It's not really a pit!
Girl1: How about [using] LEGO blocks?
Boy1: But the turtle first has to climb up the blocks.
Boy2: Let me go and check [to find suitable physical objects
on the table over there].
Boy2: Can we create a pit [by using the blocks that
I brought]?
Boy1: No way!
In the second trial:
Girl1: Hmm… what should we do [to represent the crisis]?
Boy1: Well, how about dropping something onto a moving [robot]?
All: Sounds good!
Boy1: Wait a minute! Let me try once!
[Two boys brought the robot and three girls brought
various physical objects. Then they tried each object by
dropping it onto the robot.]
Girl2: Looks good if fruit drops [onto the robot]!
Girl3: Yes, I agree. We can use apples, bananas, and more…
Girl2: So we have to draw the scene again!
From this discourse, it can be seen that, via the story expression tasks in the physical space, the children tried to make the scenes of their story more dynamic by moving not only the robot but also other physical objects. The resulting story indicates that the children could use their imaginations and express ideas in a creative manner.
5.3.2 Evaluations Using Creative Product Semantic Scale (CPSS) Overview. To clarify quantitatively the effects of GENTORO in supporting children's creative storytelling, videos of their stories captured by the experimenters were evaluated using the CPSS method [5]. CPSS asks nonexpert evaluators to evaluate products in three dimensions, namely “Novelty,” “Resolution,” and “Elaboration and Synthesis.” The original CPSS includes 55 subscale items, each represented by an adjective, and asks evaluators to rate each item on a 7-point Likert scale. As this is too time-consuming and burdensome for evaluators, a simplified version of CPSS [35] that uses 15 of the 55 subscale items is often employed. A further simplified CPSS method using six subscale items to evaluate children's creativity is discussed in [29]. In this study, following the method proposed by White and Smith [35], 15 of the 55 subscale items were selected to reduce the evaluators' workload while retaining the meanings of the three dimensions proposed in the original CPSS as much as possible. The selected subscale items were “novel,” “unusual,” “unique,” “original,” and “fresh” from the “Novelty” dimension; “logical,” “makes sense,” “relevant,” “appropriate,” and “adequate” from the “Resolution” dimension; and “skillful,” “well-made,” “well-crafted,” “meticulous,” and “careful” from the “Elaboration and Synthesis” dimension.
In this evaluation, two stories created by each group of children in the first and second trials were compared. Children changed their story in the second trial after expressing their story in the first trial. Therefore, the purpose of this evaluation was to clarify how experiences of physical and embodied interaction (collaborative story expressions using a robot, artifacts, and handheld projectors in a mobile setting) affected the children's story.
Evaluators are required to be familiar with the corresponding domain [3]. Therefore, six graduate students (male, aged 25-30) who had been studying educational technologies for at least two years and had been involved in educational practices through interactions with children in elementary schools were recruited.
Each evaluator was first asked to watch a one-minute example story on a laptop PC and was instructed about the task, which was to rate the stories on the 15 subscale items. The evaluators then received 10 video clips comprising the five pairs of stories from the first and second trials by each group of children. The duration of each video was less than three minutes. To prevent order effects, the experimenters selected a pair of stories and instructed each evaluator to rate the two stories in a random order. When the evaluators had rated a pair of stories, they wrote brief comments to explain their ratings. The evaluation lasted about 70 minutes.
5.3.3 Result To confirm the internal reliability and consistency of the evaluators, Cronbach's alpha coefficient [2] was calculated for each subscale item. One guideline threshold for reliability and consistency is 0.7, and all the values shown in Table 2 are greater than this threshold. Therefore, the evaluators' internal reliability and consistency can be considered satisfactory.
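The paper cites [2] for Cronbach's alpha but does not show the computation. A minimal sketch of the standard formula, assuming the scores are arranged as a subjects-by-raters matrix (the function name and data layout are assumptions):

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for a subjects-by-raters matrix of scores.

    ratings[i][j] is the score given by rater j to subject i.
    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals),
    where k is the number of raters.
    """
    k = len(ratings[0])  # number of raters

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    rater_vars = [var([row[j] for row in ratings]) for j in range(k)]
    total_var = var([sum(row) for row in ratings])
    return k / (k - 1) * (1 - sum(rater_vars) / total_var)
```

When raters agree perfectly (e.g., identical score columns), alpha is 1.0; values above the 0.7 guideline mentioned in the text indicate acceptable consistency.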
Fig. 10 shows the evaluators' rating results in terms of the 15 subscale items. As shown in this figure, the average scores of the stories in the second trial are all higher than those in the first trial. In the within-subject tests for each subscale item, 13 of the 15 items showed significant differences, with the stories in the second trial rated as more “novel,” “unusual,” “unique,” “original,” “logical,” “makes sense,” “appropriate,” “adequate,” “skillful,” “well-made,” “well-crafted,” “meticulous,” and “careful.” The evaluators explained their reasons for rating a story as “novel,” “unusual,” “unique,” and “original,” such as the inclusion of an unexpected plot, the usage of physical objects, or the rendition (e.g., singing a song). The evaluators also commented that stories in the second trial were more “logical” and “makes sense” because they were more understandable than those in the first trial. One evaluator mentioned that stories in the second trial were not boring because they were more “meticulous.” Evaluators rating a story as more “skillful” explained that the children could smoothly manipulate the robot using the handheld projector and change scenes, which might be because the children were more familiar with GENTORO in the second trial.
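The specific within-subject test is not named in the text; a common choice for paired ratings of this kind is a paired t-test over each evaluator's scores for the first-trial and second-trial stories. A sketch under that assumption (the function name and data layout are illustrative, not the paper's procedure):

```python
import math

def paired_t(first, second):
    """Paired t statistic for a within-subject comparison of two conditions,
    e.g., one subscale item rated by the same evaluators for the first-trial
    and second-trial stories. Positive t means `second` is rated higher.
    """
    diffs = [b - a for a, b in zip(first, second)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))  # compare against t with n - 1 df
```

The resulting statistic would be compared against the t distribution with n - 1 degrees of freedom to judge significance for each subscale item.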
Because of the size and performance of current hardware devices, the target children must be of a certain height. For Japanese children, this means sixth graders or older. However, this situation may change with the recent development of small-scale projector technologies, as discussed in Section 3.2.
To avoid losing children's concentration by taking too much time, it is better to limit the number of story scenes (four-scene stories were designed in this study).
To enhance embodied participation, individual children should be assigned tasks with moderate difficulty levels, which promote their collaboration and coordination.
The appearance of a robot may affect the children's interest and engagement level. In this study, the “cute” appearance of the turtle robot attracted interest, particularly among the girls. However, cultural differences should be considered in discussing this issue.
The author is with the Department of Electrical Engineering and Information Systems, Graduate School of Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
Manuscript received 18 Mar. 2010; revised 24 June 2010; accepted 24 Sept. 2010; published online 29 Mar. 2011.
For information on obtaining reprints of this article, please send e-mail to: email@example.com, and reference IEEECS Log Number TLT-2010-03-0026.
Digital Object Identifier no. 10.1109/TLT.2011.13.
Masanori Sugimoto received the BEng and MEng degrees from the Department of Aeronautics and Astronautics and the DrEng degree from the Interdisciplinary Course on Advanced Science and Technology, University of Tokyo, Japan, in 1990, 1992, and 1995, respectively. Currently, he is an associate professor in the Department of Electrical Engineering and Information Systems, Graduate School of Engineering, University of Tokyo. His research interests include human-computer interaction, mobile and ubiquitous computing, human-robot interaction, acoustic imaging, entertainment computing, and mixed reality. He is a member of the IEEE.