Issue No. 12 - Dec. 2011 (vol. 17)
pp. 1737-1746
Cheuk Yiu Ip , University of Maryland
Amitabh Varshney , University of Maryland
ABSTRACT
The field of visualization has addressed navigation of very large datasets, usually meshes and volumes. Significantly less attention has been devoted to the issues surrounding navigation of very large images. In the last few years the explosive growth in the resolution of camera sensors and robotic image acquisition techniques has widened the gap between the display and image resolutions to three orders of magnitude or more. This paper presents the first steps towards navigation of very large images, particularly landscape images, from an interactive visualization perspective. The grand challenge in navigation of very large images is identifying regions of potential interest. In this paper we outline a three-step approach. In the first step we use multi-scale saliency to narrow down the potential areas of interest. In the second step we outline a method based on statistical signatures to further cull out regions of high conformity. In the final step we allow a user to interactively identify the exceptional regions of high interest that merit further attention. We show that our approach of progressive elicitation is fast and allows rapid identification of regions of interest. Unlike previous work in this area, our approach is scalable and computationally reasonable on very large images. We validate the results of our approach by comparing them to user-tagged regions of interest on several very large landscape images from the Internet.
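The abstract outlines a three-step pipeline: multi-scale saliency to narrow the search, statistical signatures to cull conforming regions, and interactive selection of the exceptional regions that remain. The Python sketch below is only an illustration of that flow under assumed stand-ins: center-surround Gaussian saliency in place of the paper's multi-scale saliency, and per-tile intensity histograms in place of its statistical signatures. The function names multiscale_saliency, candidate_tiles, and rank_by_conformity are hypothetical and do not reflect the authors' implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_saliency(gray, scales=(2, 4, 8)):
        # Center-surround saliency: absolute difference between a finer and a
        # coarser Gaussian-blurred copy, accumulated over several scales.
        sal = np.zeros_like(gray, dtype=float)
        for s in scales:
            center = gaussian_filter(gray, sigma=s)
            surround = gaussian_filter(gray, sigma=4 * s)
            sal += np.abs(center - surround)
        return sal / sal.max()

    def candidate_tiles(gray, sal, tile=256, keep=0.05):
        # Step 1: keep only the most salient tiles (a small fraction of the image).
        h, w = gray.shape
        scored = []
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                scored.append((float(sal[y:y + tile, x:x + tile].mean()), y, x))
        scored.sort(reverse=True)
        return scored[:max(1, int(len(scored) * keep))]

    def rank_by_conformity(gray, tiles, tile=256, bins=32):
        # Step 2: describe each tile by an intensity histogram (a simple stand-in
        # for a statistical signature) and rank tiles by distance from the mean
        # signature; a large distance means low conformity, i.e. a likely anomaly.
        sigs = []
        for _, y, x in tiles:
            hist, _ = np.histogram(gray[y:y + tile, x:x + tile],
                                   bins=bins, range=(0.0, 1.0), density=True)
            sigs.append(hist)
        sigs = np.asarray(sigs)
        dists = np.linalg.norm(sigs - sigs.mean(axis=0), axis=1)
        order = np.argsort(-dists)
        return [(float(dists[i]), tiles[i][1], tiles[i][2]) for i in order]

    # Step 3 is interactive: hand the ranked tiles to a viewer so the user can
    # inspect the least-conforming regions first.
    gray = np.random.rand(2048, 2048)   # placeholder for a large grayscale image in [0, 1]
    ranked = rank_by_conformity(gray, candidate_tiles(gray, multiscale_saliency(gray)))
    print(ranked[:5])

In this arrangement, the third step is simply the order in which tiles reach an interactive viewer: the user examines the least-conforming candidates first, mirroring the progressive elicitation described in the abstract.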
INDEX TERMS
Image Saliency, Very Large Scale Images, Scene Perception, Interactive Visualization, Anomaly Detection, Guided Interaction.
CITATION
Cheuk Yiu Ip, Amitabh Varshney, "Saliency-Assisted Navigation of Very Large Landscape Images", IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 1737-1746, Dec. 2011, doi:10.1109/TVCG.2011.231