Issue No. 06 - November/December (2003 vol. 23)
"I have always been connecting art and automated 3D imagery in my mind," said Ioannis Stamos, assistant professor in the department of computer science at The City University of New York's Hunter College. "Early on in my career I attended an exhibition of the painter Piet Mondrian at the Museum of Modern Art in New York. The geometric figures in Mondrian's paintings and the regular geometry I was involved with had something in common. I never pursued this direction though."
Interestingly enough, he says he's never used his research in 3D visualization and reconstruction to create images purely for artistic purposes. "When I started presenting my segmentation images at conferences and talks, people seemed to appreciate an artistic flavor in them," he explained. "This was never my intention though."
Stamos wrote his undergraduate thesis in the area of computer vision and image processing in the School of Engineering at the University of Patras, Greece. He graduated with the highest GPA in his class. He first encountered a 3D range scanner while working as a research scientist in the electrical engineering department of the Catholic University of Leuven. "I was fascinated," he said. "I was able to use, for the first time, a range scanner and acquire 3D models of small objects."
After receiving his MS and MPhil in computer science at Columbia University, Stamos entered the PhD program in that school's robotics lab under the supervision of Professor Peter K. Allen. There, a new scanner enabled them to launch a project of acquiring models of urban environments.
"I guess the fact that we were situated in a major urban university at a major urban center affected our decisions," he said. "We were surrounded by beautiful buildings, we had an amazing 3D scanner, and we had already started developing the technology. That is why we decided to push the technology to its limits."
Hunter College offered him a job and there he found the perfect place to continue his research. "I had the strong support of the dean and of the computer science department, and I was still in a major urban place where I could continue acquiring interesting 3D images. I was also able to continue my collaboration with Columbia University and start a collaboration with Hunter's geography department that maintains a large database of aerial imagery of the New York City area. Also I had access to a large pool of PhD students since I was still affiliated with the Graduate Center of the City University of New York."
Stamos uses a Cyrax laser range scanner to acquire 3D range scans of large-scale urban scenes, usually buildings. The scanner senses a 3D scene and outputs a cloud of 3D points that capture the scene. At one million points per scan, the quality of the acquired point sets is quite high. Stamos developed segmentation algorithms for automatic extraction of planar surfaces and 3D lines from each individual scan; these reduce the data's complexity and provide the features (lines and planes) used for registration.
He noted that the registration problem involves placing all scans in the same coordinate system, much like mosaicking a set of 2D photographs. After registration and modeling, the system maps color photographs onto the geometry. The cover image and all the images shown in this article are from the segmentation and registration part of the system.
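At its core, placing two scans in a common coordinate system means finding the rigid motion (a rotation and a translation) that aligns matched features between them. The sketch below is not Stamos's algorithm; it is a minimal, hypothetical illustration using matched 3D points and the standard least-squares rigid fit (the Kabsch/Procrustes method):

```python
import numpy as np

def rigid_registration(src, dst):
    """Find rotation R and translation t minimizing ||(R @ src.T).T + t - dst||
    over corresponding 3D points (Kabsch / orthogonal Procrustes).
    src, dst: (N, 3) arrays of matched points from two scans."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # SVD of the cross-covariance matrix yields the optimal rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical example: a second scan is the first scan rotated 30 degrees
# about the vertical axis and shifted; registration recovers that motion.
rng = np.random.default_rng(0)
scan_a = rng.uniform(-5, 5, size=(100, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([2.0, -1.0, 0.5])
scan_b = scan_a @ R_true.T + t_true

R, t = rigid_registration(scan_a, scan_b)
aligned = scan_a @ R.T + t
print(np.allclose(aligned, scan_b))  # True
```

In practice feature correspondences between scans are unknown and noisy, which is why the extracted lines and planes matter: they give stable features to match before any such fit can be computed.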
The cover image and Figure 1 are from the Cathedral Saint-Pierre in Beauvais, France. These images show more than 10 scans placed onto the same coordinate system. (See the article on page 32 for further discussion.)
Figures 2a and 2b are of the Casa Italiana building at Columbia University. Figure 2a is the northeast view of the building and Figure 2b is the northwest view. "What we see in these images," Stamos said, "are 'segmented' 3D point sets. We run a segmentation algorithm that classifies 3D points that lie on the same connected planar surface. Each different surface is rendered with a different color for clarity. The algorithm picks a random color for every different planar surface. The large planar walls are rendered with different colors. Also small bricks (see the corner of the building) have been extracted as distinct surfaces."
However, Stamos never assigns red to his output surfaces. Whatever appears in the image as red is an area where he couldn't fit a plane. "This happens often in the interior of the windows," Stamos explained. "The laser beam that our sensor emits into the scene is not reflected by the glass, and gets into the interior of the building. Due to that discontinuity, it is hard to fit a surface around the points in these areas. Also, objects like flags (bottom left part of Figure 2a) or vegetation cannot fit the description of a plane, and are rendered as red." (Note the predominance of red in Figure 1.)
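The behavior Stamos describes can be illustrated with a toy plane-fit test: fit a least-squares plane to a patch of points, give the patch a random color if the residual is small, and reserve red for patches where no plane fits. All names and thresholds below are hypothetical, not his implementation:

```python
import numpy as np

RED = (255, 0, 0)  # reserved for regions where no plane fits

def fit_plane(points):
    """Least-squares plane through a patch of 3D points.
    Returns (normal, centroid, rms_residual)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # plane normal; that singular value measures out-of-plane spread.
    _, s, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]
    rms = np.sqrt(s[-1] ** 2 / len(points))
    return normal, centroid, rms

def color_patch(points, rng, tol=0.02):
    """Random color if the patch is planar within tol; red otherwise."""
    _, _, rms = fit_plane(points)
    if rms > tol:
        return RED
    color = tuple(int(c) for c in rng.integers(0, 255, size=3))
    return color if color != RED else (254, 0, 0)  # never emit pure red

rng = np.random.default_rng(1)
# A flat wall patch: z = 0 plus millimeter-scale sensor noise.
wall = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 0.001, 200)]
# A non-planar patch, like foliage or a flag: random 3D scatter.
foliage = rng.uniform(0, 1, (200, 3))

print(color_patch(wall, rng) == RED)     # False: gets a random color
print(color_patch(foliage, rng) == RED)  # True: rendered red
```

Real segmentation must also enforce connectivity (grouping only neighboring points into the same surface), which is what distinguishes, say, two parallel walls that lie in the same mathematical plane.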
Stamos' future projects include exploring the acquisition and visualization of large-scale 3D models of an entire city, an adventure that will succeed if he can acquire, process, and visualize large parts of a major urban center quickly and accurately. "This is my major focus at this point. We need to improve our automated registration algorithms, implement more efficient modeling algorithms, and attack the neglected problem of sensor planning. The combination of purely geometric with image-based rendering methods is a second avenue of exploration."
He noted that urban planning, architecture, construction, entertainment (cinematography and games), archeology, and disaster recovery are all disciplines that his research could affect.