NOVEMBER/DECEMBER 2003 (Vol. 23, No. 6) pp. 20-21
0272-1716/03/$31.00 © 2003 IEEE
Published by the IEEE Computer Society
Guest Editors' Introduction: 3D Reconstruction and Visualization
We have entered an era where the acquisition of 3D data is ubiquitous, continuous, and massive. These data come from multiple sources including high-resolution, geo-corrected imagery from aerial photography and satellites; ground-based close-up images of buildings and urban features; 3D point clouds from airborne laser range-finding systems, such as Lidar; imagery from synthetic aperture radar; and other sources. Now researchers are developing mobile geo-located systems—some containing calibrated laser range finders and cameras—that can collect ground-level detail at unprecedented resolution. In the future, mobile individuals or robots will be equipped with cameras and other sensors and the computational power to pervasively collect and organize geo-located data.
To make these data truly useful, they must be used to model the real world, and the resulting model must then be available for interactive exploration and analysis. However, the modeling is not straightforward: almost all collected data have holes (due to obstructions or poor acquisition conditions), and no single acquisition mode is likely to produce complete models. The overall modeling problem is thus one of fusing multisource data consistently and accurately.
As acquisition modes are automated and models are produced, there will be an exponential explosion in the amount of data available for analysis and exploration. The models will ultimately include buildings and everything associated with the environment, such as trees, shrubs, lampposts, sidewalks, streets, and so on. The interiors of buildings will be fair game; databases and interactive techniques should ultimately allow you to go inside or outside at will. We must develop data organizations that efficiently handle all these aspects and that scale to cover whole cities with tens of thousands of buildings and an uncountable number of other structures. Since the automated acquisition mechanisms will permit repeated collection over time, both the models and the database should be dynamic. The main mode of exploration for this massive collection will be through interactive visualization. The database must support interactive visualization, both in terms of hierarchical structure and multiresolution models. Further, we need novel graphical methods due to the extreme scale of the data.
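To make the hierarchical, multiresolution requirement concrete, here is a minimal sketch of a quadtree-based level-of-detail selection over a city extent. The `QuadNode` structure, the angular threshold `tau`, and the size-over-distance error metric are illustrative assumptions, not the design of any specific system described in this issue.

```python
from dataclasses import dataclass, field

@dataclass
class QuadNode:
    # Axis-aligned square cell of the city extent: center, half-size, depth.
    x: float
    y: float
    half: float
    level: int
    children: list = field(default_factory=list)

def build_quadtree(x, y, half, level, max_level):
    """Recursively subdivide the extent into a quadtree of detail levels."""
    node = QuadNode(x, y, half, level)
    if level < max_level:
        h = half / 2.0
        for dx in (-h, h):
            for dy in (-h, h):
                node.children.append(
                    build_quadtree(x + dx, y + dy, h, level + 1, max_level))
    return node

def select_lod(node, vx, vy, tau, out=None):
    """Collect the coarsest nodes whose size/distance ratio is below tau.

    A cell is refined only when it subtends a large angle from the viewer
    at (vx, vy), so distant city blocks render at coarse resolution while
    nearby ones refine toward full detail.
    """
    if out is None:
        out = []
    dist = max(((node.x - vx) ** 2 + (node.y - vy) ** 2) ** 0.5, 1e-9)
    if not node.children or node.half / dist < tau:
        out.append(node)
    else:
        for child in node.children:
            select_lod(child, vx, vy, tau, out)
    return out
```

The same distance-based refinement test, applied per frame, is what lets a hierarchy of this kind serve both as a database index and as a rendering front end.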
Based on these needs and issues, we formulated this special issue, which provides a sampling of the latest and most interesting research in several of these areas.
About the Articles
The first article, "Sea of Images: A Dense Sampling Approach for Rendering Large Indoor Environments" by Aliaga et al., describes an image-based approach to provide interactive and photorealistic walk-throughs of complex indoor environments. The authors capture a dense sampling of viewpoints in a large static environment using omnidirectional images. The images are stored in a multiresolution hierarchy and then fetched in real time during the interactive walk-through. A feature-based warping algorithm permits rendering of novel images for new viewpoints. The authors demonstrate photorealistic walk-throughs reproducing specular reflections and occlusion effects at 20 to 30 frames per second.
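The real-time fetch step above can be illustrated with a brute-force nearest-viewpoint lookup; the function name and the flat 2D viewpoint list are hypothetical simplifications — the actual system uses a multiresolution hierarchy and feature-based warping, which this sketch omits.

```python
import heapq

def k_nearest_viewpoints(viewpoints, query, k=3):
    """Return the k captured viewpoints closest to a novel query position.

    viewpoints: list of (x, y) capture positions of omnidirectional images.
    A production system would use a spatial index and prefetch image data
    from disk; brute force suffices to illustrate the lookup.
    """
    def dist_sq(p):
        return (p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2
    return heapq.nsmallest(k, viewpoints, key=dist_sq)
```

The images retrieved this way would then be warped to synthesize the view at the query position.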
In "New Methods for Digital Modeling of Historic Sites," Allen et al. discuss an approach for building 3D models using range image segmentation and feature extraction algorithms. The approach automatically computes pairwise registration and builds a topological graph for the 3D model. The authors extend the same approach to automated texture mapping of the model. They demonstrate the approach by reconstructing a model of the Cathedral Saint-Pierre in Beauvais, France.
In the next article, "Automatic Generation of High-Quality Building Models from Lidar Data," Rottensteiner tackles the problem of generating building models from Lidar point clouds. The method first automatically detects building regions, finds roofs, and groups roof planes to create polyhedral building models. The whole model is then improved by using all available sensor information and, if possible, geometric constraints. Preliminary results for the city of Vienna show that the approach gives good results and has significant potential.
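Grouping roof planes presupposes fitting a plane to a patch of Lidar points. The sketch below shows one standard way to do this — a least-squares fit of z = ax + by + c via the normal equations — purely as illustration; the function name and the non-vertical-roof parameterization are assumptions, not Rottensteiner's actual method.

```python
def fit_roof_plane(points):
    """Least-squares fit of z = a*x + b*y + c to a patch of Lidar points.

    Accumulates the 3x3 normal equations and solves them by Gaussian
    elimination with partial pivoting. Assumes the roof facet is not
    vertical, so the z = f(x, y) parameterization is valid.
    """
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    n = float(len(points))
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxz += x * z; syz += y * z; sz += z
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [sxz, syz, sz]
    # Forward elimination with partial pivoting on the 3x3 system.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    # Back substitution.
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return tuple(coeffs)  # (a, b, c)
```

Once facets are fit this way, adjacent planes can be intersected and grouped into the polyhedral building models the article describes.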
In "Constructing 3D City Models by Merging Aerial and Ground Views," Früh and Zakhor attack the problem of merging and registering results from high-resolution ground-based and aerial sensors. The ground-based results come from a laser scanner and digital camera mounted on a truck while the airborne results come from aerial imagery and Lidar data. An automated procedure registers the models from these sources with respect to one another, removes redundant parts, and fills gaps. The result is an accurate and high-resolution integrated model both at street level and in overhead views. The authors apply the approach to modeling the downtown area of Berkeley, California.
Finally, the issue ends with a survey, "Approaches to Large-Scale Urban Modeling" by Hu, You, and Neumann. The survey summarizes the state of the art in sensing technologies and reconstruction techniques and briefly discusses visualization.
There were 21 articles and one survey article submitted for this special issue. We were surprised at the number of papers submitted and pleased with their quality. However, this made selecting the four articles to appear in the special issue a difficult task. Thus we were grateful to the IEEE CG&A Associate EIC Markus Gross for accepting an additional article for a future issue. The number of articles submitted and their quality indicate the vitality of this area. We expect to see and hear a good deal more. What issues might be addressed in the future?
In addition to the topics covered by the articles appearing in this issue, several other topics will undoubtedly be of importance. These include the problems of scale and database management when the amount and variety of data becomes very large (and constantly changing), which will certainly happen as acquisition becomes more common and automated. Then there is the problem of interactive visualization for these collections of models of unprecedented scale and detail. This is both a problem of fast multiresolution rendering and of efficient access to the database. Although a good start has been made in this issue's articles, the major problem of incomplete models and how to fill in the gaps still exists. Finally, there is the question of the full use of these models, which will certainly extend beyond mere interactive navigation and will eventually include real-time manipulation of the models in a variety of applications.
We recruited a rather large, international set of reviewers who are recognized in their fields. We thank them for taking time from their invariably busy schedules and for providing comments that were careful, thoughtful, and timely. We also thank the IEEE CG&A staff for helping us manage and organize this issue.
Bill Ribarsky is principal research scientist and leader of the Virtual Worlds Lab (which comprises the Data Visualization Group and the Virtual Environments Group) in the Graphics, Visualization, and Usability Center, College of Computing, at the Georgia Institute of Technology. His research interests include 3D multimodal interaction, usability of novel user interfaces, virtual environments, 3D modeling and rendering, and navigation and analysis of large-scale information spaces using interactive visualization and hierarchical representations. Ribarsky received a PhD in physics from the University of Cincinnati. He is the former chair and a current director of the IEEE Technical Committee on Visualization and Graphics. He also chairs the steering committees for the IEEE Visualization Conference and the IEEE Virtual Reality Conference, and is an associate editor of IEEE Transactions on Visualization and Computer Graphics.
Holly Rushmeier is a research staff member at the IBM T.J. Watson Research Center. Her research interests include data visualization, rendering algorithms, and acquisition of input data for computer graphics image synthesis. Her most recent work has been in the area of acquiring 3D models of high visual quality for cultural heritage applications, including projects in Italy and Egypt. Rushmeier received a BS, MS, and PhD in mechanical engineering from Cornell University. In 1990, she was selected as a US National Science Foundation Presidential Young Investigator. She has served as papers chair or cochair for the ACM Siggraph conference, the IEEE Visualization conference, and the Eurographics Rendering Workshop. From 1996 to 1999, she was editor in chief of ACM Transactions on Graphics. She is a member of the IEEE Computer Graphics and Applications editorial board.