2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2014)
Sept. 10, 2014 to Sept. 12, 2014
Nicolas H. Lehment, Institute for Human-Machine Communication, Technische Universität München
Daniel Merget, Institute for Human-Machine Communication, Technische Universität München
Gerhard Rigoll, Institute for Human-Machine Communication, Technische Universität München
This paper presents an AR videoconferencing approach that merges two remote rooms into a shared workspace. Such bilateral AR telepresence inherently suffers from breaks in immersion stemming from the different physical layouts of the participating spaces. As a remedy, we develop an automatic alignment scheme which ensures that participants share a maximum of common features in their physical surroundings. The system optimizes the alignment with regard to initial user position, free shared floor space, camera positioning and other factors. Thus we can reduce discrepancies between different room and furniture layouts without actually modifying the rooms themselves. We describe and discuss our alignment scheme and present an exemplary implementation on real-world datasets.
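The abstract does not specify the optimization itself, but the core idea of aligning two rooms so that their free floor space overlaps maximally can be illustrated with a toy sketch. The grid representation, the 90-degree rotation steps, and the function name below are assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def best_alignment(room_a, room_b, angles=(0, 90, 180, 270)):
    """Toy stand-in for the paper's alignment optimization (assumed form):
    brute-force search over a 2D rigid transform (rotation in 90-degree
    steps plus integer translation) that maximizes the number of floor
    cells that are free (value 1) in both occupancy grids at once."""
    best_score, best_pose = -1, None
    h, w = room_a.shape
    for angle in angles:
        rb = np.rot90(room_b, k=angle // 90)
        rh, rw = rb.shape
        for dy in range(-rh + 1, h):
            for dx in range(-rw + 1, w):
                # Clip the shifted copy of room_b against room_a's extent.
                ay0, ay1 = max(0, dy), min(h, dy + rh)
                ax0, ax1 = max(0, dx), min(w, dx + rw)
                if ay0 >= ay1 or ax0 >= ax1:
                    continue
                # Count cells free in both rooms under this pose.
                overlap = int(np.sum(room_a[ay0:ay1, ax0:ax1] *
                                     rb[ay0 - dy:ay1 - dy, ax0 - dx:ax1 - dx]))
                if overlap > best_score:
                    best_score, best_pose = overlap, (angle, dy, dx)
    return best_score, best_pose

# Two 3x3 rooms whose free 2x2 regions sit in opposite corners.
room_a = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
room_b = np.array([[0, 0, 0], [0, 1, 1], [0, 1, 1]])
score, pose = best_alignment(room_a, room_b)
```

The real system would additionally weight terms such as initial user position and camera placement in the score; the grid product here captures only the shared-floor-space term.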
Cameras, Three-dimensional displays, Avatars, Observability, Optimization, Teleconferencing, Computational modeling
N. H. Lehment, D. Merget and G. Rigoll, "Creating automatically aligned consensus realities for AR videoconferencing," 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, Germany, 2014, pp. 201-206.