Vol. 5, No. 2, 2012, pp. 170-176
Published by the IEEE Computer Society
K. K. Patel , Sch. of ICT, Ahmedabad Univ., Ahmedabad, India
S. Vij , Dept. of CE-IT-MCA, SVIT, Vadodara, India
ABSTRACT
The inability to navigate independently and interact with the wider world is one of the most significant handicaps caused by blindness, second only to the inability to communicate through reading and writing. Visually impaired people (VIP) encounter many difficulties when they need to visit new and unknown places, and current speech or haptics technology does not provide a good solution. Our approach is to use a treadmill-style locomotion interface, the unconstrained walking plane (UWP), to allow a richer and more immersive form of virtual environment (VE) exploration, enabling VIP to create cognitive maps efficiently and thereby enhance their mobility. An experimental study is reported that tests the design of the UWP for both straight walking and turning motions. Two groups of participants, blind-folded sighted and blind, learned a spatial layout in the VE using two exploration modes: guided (training phase) and unguided (testing phase). Spatial layout knowledge was assessed by asking participants to perform an object-localization task and a target-object task. Our results showed a significant decrease in the time taken, help requested, subjective workload, and errors in a post-training trial as compared to a partial-training trial. The UWP was found to significantly improve interaction with the VE and the learning of spatial information.
Introduction
Spatial information is not fully available to visually impaired people (VIP), causing difficulties in their mobility in new and unfamiliar locations, because they are handicapped in generating efficient mental maps of such spaces. Consequently, many (more than 30 percent) of the VIP do not ambulate independently outdoors [1], [2]. Mental map generation, being a subconscious process, is facilitated by repeated visits to the new space. Thus, a number of researchers have focused on using technology to simulate visits to a new space for cognitive map formation. Although isolated solutions have been attempted, to the best of our knowledge no integrated solution for nonvisual spatial learning (NSL) by VIP is available. Moreover, most of the simulated systems are far removed from reality. Advanced computer technology, including speech processing, computer haptics, and virtual reality (VR), provides various options for the design and implementation of a wide variety of multimodal applications. Virtual reality provides for the creation of simulated objects and events with which people can interact. It thus allows users to interact with a simulated environment either through standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, or an omnidirectional treadmill.
The absence of the visual channel in visually impaired people is compensated by the other sensory channels, enabling them to engage in a range of activities in a simulated environment. In this paper, we propose and describe the design of a locomotion interface to the virtual environment for acquiring spatial knowledge and thereby structuring spatial cognitive maps of an area. The virtual environment provides spatial information to VIP through audio cues and instructions and prepares them for independent travel. The locomotion interface is used to simulate walking from one location to another. The device needs to be of limited size, allow a user to walk on it, and provide the sensation of walking on an unconstrained plane.
The Unconstrained Walking Plane (UWP) [3] has three major parts: 1) a motor-less treadmill, 2) a mechanical rotating base, and 3) a block containing a servo motor and gearbox. The device is interfaced with the computer-simulated virtual environment through an 89C2051 microcontroller-based control circuit. The virtual environment has been developed in Java using the JDK 1.5 API. Further details of the device can be found in [3].
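To make the interfacing concrete, the following is a minimal Java sketch of how a VE could send turn and stop commands to a microcontroller-based control circuit over a serial link. It is not the authors' implementation: the command byte values, the class and method names, and the use of a plain OutputStream in place of a real serial port are assumptions made for illustration.

```java
// Minimal sketch (not the authors' code) of the VE-to-control-circuit link.
// The wire protocol is assumed; the serial port is abstracted as an
// OutputStream so the sketch stays self-contained and runnable.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class UwpController {

    // Hypothetical single-byte commands understood by the control circuit.
    public static final byte CMD_STOP       = 0x00;
    public static final byte CMD_TURN_LEFT  = 0x01;  // rotate base 90 deg counterclockwise
    public static final byte CMD_TURN_RIGHT = 0x02;  // rotate base 90 deg clockwise

    private final OutputStream link;  // e.g., a serial-port output stream

    public UwpController(OutputStream link) {
        this.link = link;
    }

    /** Ask the rotating base to execute a 90-degree turn. */
    public void turn(boolean clockwise) throws IOException {
        link.write(clockwise ? CMD_TURN_RIGHT : CMD_TURN_LEFT);
        link.flush();
    }

    /** Emergency stop, mirroring the hardware stop switch. */
    public void stop() throws IOException {
        link.write(CMD_STOP);
        link.flush();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the real serial-port stream during a dry run.
        ByteArrayOutputStream fakePort = new ByteArrayOutputStream();
        UwpController uwp = new UwpController(fakePort);
        uwp.turn(true);
        uwp.stop();
        System.out.println("bytes sent: " + fakePort.size());
    }
}
```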
The following safety issues were considered during the design phase of the device:

    Feeling of stability while walking on the UWP and when the device is taking a turn.

    Any malfunctioning of the device, such as the device not stopping after turning 90 degrees.

    The possibility of the participant receiving an electric shock.

The above safety issues have been addressed as follows:

    Side handles are provided to ensure the stability of participants.

    An emergency stop switch is provided at a convenient location to stop the device in case of any suspected malfunctioning; in addition, the observer can switch off the power supply in an emergency.

    Proper earthing and a fuse are provided for electrical safety.

In addition to these measures, initial hand-holding of the participants created a sense of confidence and safety during experimentation. The advantages of our proposed device are as follows:

    It solves the instability problem during walking by providing supporting rods. The limited width of the treadmill, along with the side supports, gives a feeling of safety and eliminates any fear of falling off the device.

    No special training is required to walk on it, as walking on the device is a natural process.

    The device's acceptability is expected to be high due to the feeling of safety along with the feeling of natural walking on the device. This results in the formation of mental maps without any hindrance.

    It is a lightweight device that is simple to operate and maintain.

Section 2 of the paper presents a review of the related literature. Section 3 describes the planning and procedure for the experiments; Section 4 presents the results; and Section 5 concludes the paper and presents directions for future research.
2. Review of Related Work
2.1 Spatial Learning
In recent years, a plethora of assistive navigation technologies have been designed to maintain and enhance the independence of visually impaired people. The VE has been a popular paradigm in simulation-based training, games, and the entertainment industry [4]. It has also been used in rehabilitation and learning environments for people with disabilities (e.g., physical, mental, and learning disabilities) [5], [6]. Recent technological advances, particularly in haptic interface technology, enable blind individuals to expand their knowledge by using artificially created reality through haptic and audio feedback. Research on the use of haptic devices by people who are blind for the construction of cognitive maps includes the works by Lahav and Mioduser [2] and Semwal and Evans-Kamp [7]. The use of audio interfaces by VIP for the construction of cognitive maps includes the Audio-Tactile BATS [8] and the modeling of audio-based virtual environments for children with visual disabilities [9]. The use of audio-haptic interfaces by VIP for the construction of cognitive maps includes haptic and vocal navigation software (e.g., Virtual Sea for blind sailors [10] and Haptics Soundscapes [11]). Although audio and haptic interfaces have been studied for NSL, nothing is known about the use of a locomotion interface for supporting NSL.
2.2 Locomotion Interface
A good number of devices have been developed over the last two decades to integrate locomotion interfaces with VEs. We have categorized the most common VE locomotion approaches as follows:

    Treadmill-style interfaces [12], [13], [14], [15].

    Pedaling devices (such as bicycles or unicycles) [16].

    Walking-in-place devices [17].

    The motion foot pad [18].

    Actuated shoes [19].

    The string walker [20].

    Finger walking-in-place devices [21].

Generally, a locomotion interface is designed to cancel the user's self-motion in place, allowing the user to go anywhere in a large virtual space on foot. For example, a treadmill can cancel the user's motion by moving its belt in the opposite direction. Its main advantage is that it does not require the user to wear any devices, as is required by some other locomotion interfaces.
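The cancellation principle can be stated in a few lines of code. The sketch below is illustrative only; the variable names, the control-loop period, and the assumed speed measurement are not taken from the paper.

```java
// Minimal sketch of the cancellation principle: the belt is commanded to move
// at the walker's speed in the opposite direction, so the walker stays in
// place while the virtual position advances. All values are illustrative.
public class BeltCancellation {

    /** Belt velocity (m/s) that cancels the measured walking velocity. */
    static double beltVelocity(double walkerVelocity) {
        return -walkerVelocity;
    }

    public static void main(String[] args) {
        double walkerVelocity = 1.2;   // assumed measured forward speed, m/s
        double dt = 0.05;              // assumed control-loop period, s
        double virtualPosition = 0.0;  // position in the VE, m

        for (int step = 0; step < 100; step++) {
            double belt = beltVelocity(walkerVelocity);
            // The walker's net displacement on the device is ~0;
            // the VE position advances by the walking speed instead.
            virtualPosition += walkerVelocity * dt;
            assert Math.abs(walkerVelocity + belt) < 1e-9;
        }
        System.out.printf("virtual distance covered: %.2f m%n", virtualPosition);
    }
}
```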
3. Planning and Procedure for Experiment
The experiment was conducted to examine whether the participants were able to create cognitive maps of the so-called survey-map type by exploring the VE, and to evaluate the practical effectiveness of this newly developed aid.
3.1 Participants
Fourteen volunteers were recruited as test participants for this research. All participants were in the age group of 17-35 years, were unfamiliar with the place to be visited, and self-reported normal spatial learning ability. They were divided into two groups: blind-folded sighted (eight participants) and blind (five congenitally blind and one late blind). Both groups learned to form cognitive maps from VE exploration.
3.2 Experimental Apparatus
We have developed the UWP for VE exploration. The schematic view and mechanical structure of the UWP are shown in Fig. 2. It consists of a motor-less treadmill resting on a mechanical rotating base. The experimental software runs on a laptop with a 2 GHz Intel Core 2 Duo processor, 2 GB RAM, and a 15” monitor. It was developed in Java using the JDK 1.5 API.
We developed a computer-simulated virtual environment based on the ground floor of our institute (as shown in Fig. 3), which has three corridors, eight landmarks (such as the Faculty room, Auditorium, Library, Class room-1, etc.), and one main entrance. The system lets the participant form cognitive maps of unknown areas by exploring the VE using the UWP (as shown in Fig. 1). It can be considered an application of the “learning-by-exploring” principle for the acquisition of spatial knowledge and thereby the formation of cognitive maps using a VE. It guides the VIP through speech, describing the surroundings (e.g., “on your left there is the Library and on your right there is Class room-2”) and giving directions, including early information about turnings and crossings (e.g., “after around 20 steps, you need to take a right turn”). Additionally, the occurrence of various events (e.g., arrival at a junction or at an object of interest, such as “you are near your destination”) is signaled by sound through speakers or headphones.
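As an illustration of the event-to-cue mapping described above, the following Java sketch maps symbolic VE events to spoken messages. The event identifiers, cue texts, and the console stand-in for the speech/audio output are assumptions; the actual speech generation used by the system is not described here.

```java
// Minimal sketch (assumed structure, not the authors' code) of mapping VE
// events such as "junction ahead" or "destination reached" to spoken cues.
// The cue is printed here so the example runs on a stock JDK.
import java.util.HashMap;
import java.util.Map;

public class AudioCueGuide {

    /** Pluggable output: console here, a TTS engine or audio clip player in practice. */
    interface CuePlayer {
        void play(String message);
    }

    private final Map<String, String> cues = new HashMap<String, String>();
    private final CuePlayer player;

    public AudioCueGuide(CuePlayer player) {
        this.player = player;
        // Illustrative cue texts modelled on the examples in the paper.
        cues.put("APPROACH_TURN", "After around 20 steps, you need to take a right turn.");
        cues.put("LANDMARK", "On your left there is the Library; on your right, Class room-2.");
        cues.put("NEAR_DESTINATION", "You are near your destination.");
    }

    /** Called by the VE whenever a navigation event occurs. */
    public void onEvent(String eventId) {
        String message = cues.get(eventId);
        if (message != null) {
            player.play(message);
        }
    }

    public static void main(String[] args) {
        AudioCueGuide guide = new AudioCueGuide(new CuePlayer() {
            public void play(String message) {
                System.out.println("[cue] " + message);
            }
        });
        guide.onEvent("LANDMARK");
        guide.onEvent("APPROACH_TURN");
        guide.onEvent("NEAR_DESTINATION");
    }
}
```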


Fig. 1. Spatial learning by VE exploration using UWP by participant.






Fig. 2. Unconstrained walking plane. There are three major parts: (a) A motor-less treadmill. (b) Mechanical rotating base. (c) Block containing Servo motor and gearbox to rotate the mechanical base.




3.3 Research Instruments
Nine main instruments served the study; the last five instruments were developed for the collection of quantitative and qualitative data. The research instruments were:

    The Unknown Target Space—the space to be explored as a virtual space in the VE (see Fig. 3). It is a 2300-square-foot building with one entrance, eight landmarks, and three corridors.

    Exploration Task—each participant was asked individually to explore the virtual building and to complete the given task. The task was repeated four times, taking a maximum of 5 minutes for each trial. The first two repetitions are considered partial-learning rounds; the next two repetitions of the exploration task led to significant learning. The trials started with the experimenter informing the participants that they would be asked 1) to describe the building and its components, 2) to locate five landmarks specified by the experimenter, and 3) to perform the target-object task at the end of their exploration.

    Object-Localization Task—participants were asked to locate five particular objects within 5 minutes. For this task, participants were provided contextual help only. In case of confusion, a participant could get help from the system by incurring a penalty. Similarly, when a participant made a mistake, the system warned them and provided help.

    Target-Object Task—participants were asked to perform a target-object task, namely “to go to the computer lab starting from the main entrance.” Participants were asked to perform this task using contextual help only. In case of confusion, a participant could get help from the system by incurring a penalty. Similarly, when a participant made a mistake, the system warned them.

    Questionnaire—the questionnaire comprised eight questions concerning the participants' views and feedback about the UWP and the system. The participants were given this questionnaire at the end of the last trial.

    Interview—the participants were asked to give a verbal description of the unknown environment and were asked about their experience and views on the study.

    Observations—a cell-phone video camera was used to record the participants' exploration. Participants' navigation process and audio remarks in the VE were recorded during the tasks. The information from these recordings was combined with the computer log.

    Computer log—the log enabled the researcher to analyze users' learning and exploration process in the VE. Participants' VE navigation trajectories, distances traversed, time durations, and breaks taken were stored in a database (a minimal sketch of such a log record appears after this list).

    Evaluation schemes—these served the researcher's analysis of the participants' mobility skills and their acquaintance process with the new space.
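The sketch below illustrates the kind of per-trial record the computer log could hold (trajectory samples, distance, duration, breaks, help requests). The field names and the in-memory list standing in for the database are assumptions made for illustration.

```java
// Minimal sketch of a per-trial log record; not the authors' schema.
import java.util.ArrayList;
import java.util.List;

public class TrialLog {
    final String participantId;
    final int trialNumber;
    final List<double[]> trajectory = new ArrayList<double[]>(); // {x, y} samples
    double distanceTraversedFeet;
    long durationMillis;
    int breaksTaken;
    int helpRequests;

    TrialLog(String participantId, int trialNumber) {
        this.participantId = participantId;
        this.trialNumber = trialNumber;
    }

    void addSample(double x, double y) {
        trajectory.add(new double[] { x, y });
    }

    public static void main(String[] args) {
        List<TrialLog> database = new ArrayList<TrialLog>(); // stand-in for the real database
        TrialLog log = new TrialLog("BFS-01", 1);
        log.addSample(0, 0);
        log.addSample(0, 25);
        log.distanceTraversedFeet = 25;
        log.durationMillis = 18000;
        log.breaksTaken = 1;
        log.helpRequests = 2;
        database.add(log);
        System.out.println("records stored: " + database.size());
    }
}
```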

3.4 Procedure
All participants carried out the specified tasks and were observed individually. The study was carried out in five stages:

    1. Familiarization with the VE features and operation of the UWP.

    2. Participants' exploration of the unknown virtual space using the UWP.

    3. Performing object-localization task (the participants were asked to locate five landmarks as asked by the experimenter).

    4. Participants were asked to perform the target-object task (participants were asked to go to a particular landmark).

    5. Participants were asked to answer the questionnaire and give a verbal description of the environment.

In the last four stages, i.e., 2 to 5, all participants' performances were video recorded.


Fig. 3. Screen shot of computer-simulated environments.




In the first stage, i.e., the familiarization stage, participants spent a few minutes using the system in a simple virtual environment. The duration of this practice session was typically about 3 minutes. It helped the participants familiarize themselves with the UWP and the system before the trials began. The goal of this stage was familiarization only, not to give participants enough time to achieve competence.
After the familiarization stage, the following three tasks were given to participants:

    Exploration task: participants were asked to explore the VE and to complete the given task. Each participant repeated the task four times, taking a maximum of 5 minutes for each trial. Participants navigated the virtual space using the first mode of navigation, i.e., they were provided both contextual cues and system help. The testing task (i.e., the target-object task) was carried out after the second and fourth trials of the exploration task. Data collected during the testing task after the second trial are termed partial learning, while data collected after the fourth trial are termed postlearning.

    Object-localization task: the participants were asked to locate five landmarks specified by the experimenter. This task took a maximum of 5 minutes.

    Target-object task: the participants were asked to complete the following task: “Go to the Computer Laboratory starting from the Main Entrance.” The maximum time allotted for this task was 5 minutes.

Participants performed the object-localization task and the target-object task using the second mode of navigation, i.e., without system help. In case of confusion, a participant could get help from the system by incurring a penalty. Similarly, when a participant made a mistake, the system warned them. A minimal sketch of this help-with-penalty mechanic is given below.
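The sketch below illustrates the help-with-penalty mechanic used in the unguided mode. The penalty value, class name, and warning text are illustrative assumptions; the paper does not specify how the penalty is quantified.

```java
// Minimal sketch (assumed scoring, not specified in the paper): every help
// request is granted but counted against the trial, and mistakes trigger a
// warning from the system.
public class HelpPenaltyTracker {
    private int helpRequests = 0;
    private int warnings = 0;
    private final int penaltyPerHelp;

    public HelpPenaltyTracker(int penaltyPerHelp) {
        this.penaltyPerHelp = penaltyPerHelp;
    }

    /** Called when the participant asks the system for help. */
    public String requestHelp(String hint) {
        helpRequests++;
        return hint;  // help is granted, but counted against the score
    }

    /** Called when the system detects a wrong turn or missed landmark. */
    public void recordMistake() {
        warnings++;
        System.out.println("Warning: you seem to have taken a wrong turn.");
    }

    public int penalty() {
        return helpRequests * penaltyPerHelp;
    }

    public static void main(String[] args) {
        HelpPenaltyTracker tracker = new HelpPenaltyTracker(5); // 5 points per help (illustrative)
        tracker.requestHelp("The Library is on your left.");
        tracker.recordMistake();
        System.out.println("help taken: 1, penalty: " + tracker.penalty());
    }
}
```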
3.5 Statistical Analysis
The independent variables used for the analysis included 1) trial number, 2) mode of virtual navigation, and 3) the type of participant (blind-folded sighted and blind). The dependent variables fell into two categories:

    Number of objects located and identified correctly, and

    1) time taken, 2) number of times help was taken, and 3) number of pauses taken to complete the task of traversing a 300-foot length of the specified route. A t-test was used to analyze the experimental data with the level of significance ($\alpha$) taken as 0.05. The feedback from the participants was also analyzed using a t-test.

4. Results
4.1 Hypothesis 1
Null hypothesis (Ho): UWP does not contribute significantly to spatial learning of VIP.
Alternate hypothesis (Ha): UWP significantly contributes to spatial learning of VIP.
The quality of spatial learning is judged through four parameters as listed in Table 1. The null hypothesis (Ho) effectively implies that there is no significant difference in the mean values of the four parameters between partial-training samples and posttraining samples.

Table 1. T-Test of Spatial Learning Performance Parameters


BFS—blind-folded sighted, BL—blind participants, dof—degree of freedom, SD—standard deviation, T5, T7—computed t-values.

The paired-samples t-test was used to analyze the statistical significance of posttraining gains in the above-mentioned parameters for both types of participants. Since the degree of freedom (dof) is 7 in the BFS case, the t-test table value for ${\rm dof} = 7$ and $\alpha = 0.05$ is compared with the computed t-value (shown as T7). Similarly, the degree of freedom (dof) is 5 in the BL case, and the t-test table value for ${\rm dof} = 5$ and $\alpha = 0.05$ is compared with the computed t-value (shown as T5).
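For reference, the paired-samples t statistic underlying these comparisons has the standard textbook form, where $d_i$ is the partial-training minus posttraining difference for participant $i$ and $n$ is the group size:

```latex
\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i, \qquad
s_d = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} \left(d_i - \bar{d}\right)^2}, \qquad
t = \frac{\bar{d}}{s_d/\sqrt{n}}, \qquad
\mathrm{dof} = n - 1 .
```

With $n = 8$ (BFS) this gives ${\rm dof} = 7$ (T7), and with $n = 6$ (BL) it gives ${\rm dof} = 5$ (T5), matching the values quoted below.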
A significant posttraining difference was found, for both types of participants, in each of the following parameters:

    1. ${\rm T}7 = 6.25 > 2.365 ({\rm t}_{0.05})$ for BFS and ${\rm T}5 = 5.35 > 2.571 ({\rm t}_{0.05})$ for BL in time taken.

    2. ${\rm T}7 = 11.65 > 2.365 ({\rm t}_{0.05})$ for BFS and ${\rm T}5 = 14.03 > 2.571 ({\rm t}_{0.05})$ for BL in the number of times help taken.

    3. ${\rm T}7 = 14.14 > 2.365 ({\rm t}_{0.05})$ for BFS and ${\rm T}5 = 13.06 > 2.571 ({\rm t}_{0.05})$ for BL in the number of times help provided.

    4. ${\rm T}7 = 19.18 > 2.365 ({\rm t}_{0.05})$ for BFS and ${\rm T}5 = 20.04 > 2.571 ({\rm t}_{0.05})$ for BL in the number of pauses taken to complete the task.

Since the calculated values are more than t-test table values for all the parameters, the null hypothesis is rejected. Thus, the posttraining spatial learning performance improves significantly for both types of participants as depicted in Figs. 4 and 5. In other words, we can say with 95 percent confidence that the UWP significantly contributes to spatial learning of VIP through the construction of a cognitive map of an unknown space that would lead to enhanced mobility of VIP.


Fig. 4. Partial-post VE exploration difference (for BFS).






Fig. 5. Partial-post VE exploration difference (for BL).




4.2 Hypothesis 2
Null hypothesis (Ho): There is no significant difference in posttraining navigation performance using UWP and in real environment by BFS.
Alternate hypothesis (Ha): There is a significant difference in posttraining navigation performance using UWP and in real environment by BFS.
The quality of spatial learning is judged through four parameters as listed in Table 2. The null hypothesis (Ho) effectively implies that there is no significant difference in the mean values of the four parameters between posttraining navigation performance samples using UWP (refer Table 1) and in real environment (refer Table 2) by BFS.

Table 2. T-Test of Navigation Performance Parameters (Real Environment)


BFS—blind-folded sighted, dof—degree of freedom, SD—standard deviation, T7—computed t-value.

The paired-samples t-test was used to analyze the statistical significance of the difference between posttraining navigation performance using the UWP and in the real environment by BFS for the above-mentioned parameters. Since the degree of freedom (dof) is 7 in the BFS case, the t-test table value for ${\rm dof} = 7$ and $\alpha = 0.05$ (i.e., 2.365) is compared with the computed t-value (shown as T7).
Since the calculated values are less than the t-test table value (i.e., 2.365) for all four parameters, the null hypothesis is accepted. Thus, there is no significant difference between posttraining navigation performance using the UWP and in the real environment for BFS.
4.3 Hypothesis 3
Null Hypothesis (Ho): The object-localization task performance is not affected by the type of blindness.
Alternate Hypothesis (Ha): The type of blindness does affect object-localization task performance.
The object-localization task is judged through the number of objects located, as listed in Table 3. The null hypothesis (Ho) effectively implies that there is no significant difference in the mean value of the parameter between BFS samples and BL samples.

Table 3. T-Test of Object-Localization Task Measures


As per the t-test on these data, there is a 95 percent confidence level (5 percent significance level) that the population mean lies between 3.90 (i.e., four landmarks) and 5.09 (i.e., five landmarks) for BFS and between 3.61 (i.e., four landmarks) and 5.05 (i.e., five landmarks) for BL. That is, the participants were able to successfully localize at least four landmarks out of five after the training.
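These ranges correspond to the standard 95 percent confidence interval for a mean (textbook formula; the sample means and standard deviations themselves are reported in Table 3), with sample mean $\bar{x}$, standard deviation $s$, and group size $n$:

```latex
\bar{x} \;\pm\; t_{0.05,\,n-1}\,\frac{s}{\sqrt{n}}
```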
Also, no significant difference (${\rm T}12 = 0.11 < 1.78\;({\rm t}_{0.05})$) was found between the BFS and BL groups concerning the characteristics of the landmark-localization process.
Since the calculated values are less than the t-test table values for the two types of participants, the null hypothesis is accepted. In other words, we can say with 95 percent confidence that, irrespective of the type of participant, the system provided the same possibility for a user to recognize or localize the landmarks correctly.
4.4 Hypothesis 4
Null Hypothesis (Ho): UWP provides overall satisfaction to both types of participants for nonvisual spatial learning.
Alternate Hypothesis (Ha): UWP does not provide overall satisfaction to both types of participants for nonvisual spatial learning.
The participants' responses to the questionnaire are presented in Table 4 and depicted in Figs. 6 and 7.

Table 4. Feedback Statistics




Fig. 6. Feedback by BFS.






Fig. 7. Feedback by BL.




We applied a t-test to the above data. As per the t-test, there is a 95 percent confidence level (5 percent significance level) that the population mean ranges approximately between four (representing “Agree”) and five (representing “Strongly agree”) for all eight parameters, for both BFS and BL. These statistics show that the population means for all the parameters lie between four and five, i.e., all the participants agree and some strongly agree on all eight parameters. Thus, the overall satisfaction level is high for both types of participants.
A t-test was used to analyze whether there is a significant difference between the two types of participants. Since the degree of freedom (dof) is $(8+6)-2 = 12$ in this case, the t-test table value for ${\rm dof} = 12$ is compared with the computed t-value (shown as T12).
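For reference, a degree of freedom of $n_1 + n_2 - 2$ corresponds to the standard pooled two-sample t statistic, shown here in its textbook form under the usual equal-variance assumption (the per-question statistics themselves are in Table 4):

```latex
s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}, \qquad
t = \frac{\bar{x}_1 - \bar{x}_2}{s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}, \qquad
\mathrm{dof} = n_1 + n_2 - 2 = 12 .
```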
No significant difference was found for any of the following parameters (see Table 4):

    1. ${\rm T}12 = 1.34 < 1.78\;({\rm t}_{0.05})$ for sense of naturalness.

    2. ${\rm T}12 = 0.55 < 1.78\;({\rm t}_{0.05})$ for safe walking.

    3. ${\rm T}12 = 0 < 1.78\;({\rm t}_{0.05})$ for control of the device.

    4. ${\rm T}12 = 0.42 < 1.78\;({\rm t}_{0.05})$ for the sense of presence.

    5. ${\rm T}12 = 0.83 < 1.78\;({\rm t}_{0.05})$ for overall satisfaction.

    6. ${\rm T}12 = 1.62 < 1.78\;({\rm t}_{0.05})$ for effort level.

    7. ${\rm T}12 = 0.15 < 1.78\;({\rm t}_{0.05})$ for difficulty level.

    8. ${\rm T}12 = 0.19 < 1.78\;({\rm t}_{0.05})$ for enjoyment.

We observe that the calculated values are less than the t-test table values for all parameters. Thus, there is no significant difference in overall satisfaction between the two types of participants, and we have already shown above that the overall satisfaction level is high for the participants. Thus, we can say with 95 percent confidence that the null hypothesis is accepted.
To sum up, participants can create cognitive maps of an area and localize the landmarks, and there is a significant improvement in the spatial learning of the blind participants after training on our system.
General comments and feedback:
The general comments and feedback received from the participants are given below:
“The virtual movements did not become natural until 3-4 trials.”
“The exploration got easier progressively each time.”
“I found it somewhat difficult to explore. As I explored, I got better.”
Although there was general satisfaction among the participants, some comments indicated scope for further improvement of the device. Such comments are given below:
“I had difficulty making immediate turns in the virtual environment.”
“Virtual walking through keyboard needs more efforts than real walking.”
5. Conclusion
We have integrated the novel treadmill-style locomotion interface, the unconstrained walking plane (UWP), with a virtual environment (VE) to enable nonvisual spatial learning (NSL). The UWP allows users to navigate the VE as they walk on the device. The motivation for using the UWP was its potential to provide a near-natural feeling of real walking, leading to NSL and the effective development of cognitive maps of unknown locations. The results reveal that the participants benefited from the training, i.e., there were significant improvements in their posttraining navigation performance. The experimental results and participants' feedback have conclusively indicated that the UWP is very effective for unattended spatial learning and thereby for enhancing the mobility skills of VIP. Its simplicity of design, coupled with the supervised multimodal training facility, makes it an effective device for virtual walking simulation and thereby for spatial learning. The results match our expectation that the UWP supports perceptual rather than memory-based processing, considerably reducing the demands of learning spatial layouts without visual information. One known limitation of our device is its inability to simulate movement on slopes and highly zigzag paths.

Acknowledgments

This work was funded in part by the Computer Society of India (CSI), India (Order No.: 1-14/2010-02 dated 29/03/2010 of Education Directorate, CSI, Chennai, India). The authors would like to express their gratitude to CSI for providing them with a research project grant. Special thanks to their students and colleagues for their support during experimental study.

    K.K. Patel is with the School of ICT, Ahmedabad University, 7, Yogiraj Bungalows, B/h Annapurna Restaurant, Jashodanagar, Ahmedabad 382 445, Gujarat, India. E-mail: kkpatel7@gmail.com.

    S. Vij is with the Department of CE-IT-MCA, SVIT, 18, JMK Apts., HT Road, Subhanpura, Vadodara 390 023, Gujarat, India.

    E-mail: vijsanjay@gmail.com.

Manuscript received 1 Feb. 2011; revised 13 May 2011; accepted 11 Sept. 2011; published 5 Dec. 2011.

For information on obtaining reprints of this article, please send e-mail to: lt@computer.org, and reference IEEECS Log Number TLT-2011-02-0010.

Digital Object Identifier no. 10.1109/TLT.2011.29.

References



Kanubhai K. Patel is working toward the PhD degree in the Faculty of Technology at Dharmsinh Desai University, Nadiad. He received the MCA degree from Gujarat Vidyapith, Ahmedabad, in June 1997. He is an assistant professor at the School of ICT of Ahmedabad University, Ahmedabad, India. He was previously a faculty member at Gujarat University, Ahmedabad, India. His research interests include assistive technology, spatial cognition, human-computer interaction, and virtual learning environments. He has authored more than 15 publications, including refereed journal papers and three book chapters. He has also authored a book, Data Structures: Theory and Problems. He is a reviewer for several peer-reviewed journals.



Sanjaykumar Vij received the PhD degree from IIT, Mumbai in 1974. He is currently the director in the Department of CE-IT-MCA, Sardar Vallabhbhai Patel Institute of Technology (SVIT), Vasad, India. His research interests include text mining, knowledge management, and NLP. He has authored more than 20 publications, including more than seven refereed journal papers and two book chapters. He is a member of the academic council, board of studies, and school research committee at Gujarat Technological University, Ahmedabad, India. He is a registered PhD guide with Dharmsinh Desai University, Nadiad, India. He was on a panel of experts/advisors at GSLET and GPSC. He was the chairman of the Computer Society of India, Vadodara Chapter.