Issue No. 5 - September/October 2010 (vol. 30)
pp. 20-31
Gerwin de Haan , Delft University of Technology
Huib Piguillet , Delft University of Technology
Frits Post , Delft University of Technology
ABSTRACT
User interfaces for video surveillance networks are traditionally based on arrays of video displays, maps, and indirect controls. To address such interfaces' usability limitations when the number of cameras increases, some video surveillance systems aim to improve context awareness. However, interactive spatial navigation is still difficult: unconstrained, free 3D control is too complex, and a predefined camera path limits flexibility. Especially for live tracking of complex events across cameras, operators must make navigation decisions quickly and accurately on the basis of the actual situation. The novel spatial-navigation interface described in this article facilitates such video surveillance tasks. Users navigate directly in the visible video via the mouse, which lets them maintain attention on the action instead of on an external navigation interface. While users track the action with the mouse, interactive 3D widgets overlaid on the video provide visual updates regarding available camera transitions. Optimized visual transitions between individual videos ensure context awareness while users focus on the action. Several surveillance datasets, along with results from a pilot user evaluation, demonstrate this interface's effectiveness.
INDEX TERMS
computer graphics, graphics and multimedia, video surveillance, spatial navigation, context aware, user interfaces, video displays
CITATION
Gerwin de Haan, Huib Piguillet, Frits Post, "Spatial Navigation for Context-Aware Video Surveillance", IEEE Computer Graphics and Applications, vol.30, no. 5, pp. 20-31, September/October 2010, doi:10.1109/MCG.2010.64
REFERENCES
1. G. de Haan et al., "Egocentric Navigation for Video Surveillance in 3D Virtual Environments," Proc. IEEE Symp. 3D User Interfaces (3DUI 09), IEEE Press, 2009, pp. 103–110.
2. P.W. Thorndyke and B. Hayes-Roth, "Differences in Spatial Knowledge Acquired from Maps and Navigation," Cognitive Psychology, vol. 14, no. 4, 1982, pp. 560–589.
3. T. Igarashi and K. Hinckley, "Speed-Dependent Automatic Zooming for Browsing Large Documents," Proc. 13th Ann. ACM Symp. User Interface Software and Technology (UIST 00), ACM Press, 2000, pp. 139–148.
4. J.J. van Wijk and W.A.A. Nuij, "Smooth and Efficient Zooming and Panning," Proc. 9th Ann. IEEE Symp. Information Visualization (InfoVis 03), IEEE CS Press, 2003, pp. 15–22.
5. D.S. Tan, G.G. Robertson, and M. Czerwinski, "Exploring 3D Navigation: Combining Speed-Coupled Flying with Orbiting," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 01), ACM Press, 2001, pp. 418–425.
6. J.D. Mackinlay, S.K. Card, and G.G. Robertson, "Rapid Controlled Movement through a Virtual 3D Workspace," Proc. Siggraph, ACM Press, 1990, pp. 171–176.
7. M. Nienhaus and J. Döllner, "Dynamic Glyphs—Depicting Dynamics in Images of 3D Scenes," Proc. 3rd Int'l Symp. Smart Graphics (SG 03), LNCS 2733, Springer, 2003, pp. 102–111.
8. A. Girgensohn et al., "Effects of Presenting Geographic Context on Tracking Activity between Cameras," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 07), ACM Press, 2007, pp. 1167–1176.
9. H.M. Dee and S.A. Velastin, "How Close Are We to Solving the Problem of Automated Visual Surveillance? A Review of Real-World Surveillance, Scientific Progress and Evaluative Mechanisms," Machine Vision Applications, vol. 19, nos. 5–6, 2008, pp. 329–343.