Issue No. 03 - May/June 2010 (vol. 16)
pp. 455-467
Emmanuel Pietriga , INRIA Saclay and Université Paris-Sud, CNRS, Orsay
Olivier Bau , INRIA Saclay and Université Paris-Sud, CNRS, Orsay
Caroline Appert , INRIA Saclay and Université Paris-Sud, CNRS, Orsay
ABSTRACT
Focus+context interaction techniques based on the metaphor of lenses are used to navigate and interact with objects in large information spaces. They provide in-place magnification of a region of the display without requiring users to zoom into the representation and consequently lose context. To avoid occluding its immediate surroundings, the magnified region is often integrated into the context through smooth transitions based on spatial distortion. Such lenses have been developed for various types of representations, using techniques often tightly coupled with the underlying graphics framework. We describe a representation-independent solution that can be implemented with minimal effort in different graphics frameworks, ranging from 3D graphics to rich multiscale 2D graphics combining text, bitmaps, and vector graphics. Our solution is not limited to spatial distortion: it provides a unified model for defining new focus+context interaction techniques based on lenses whose transition is a combination of dynamic displacement and compositing functions. We present the results of a series of user evaluations showing that one such new lens, the speed-coupled blending lens, significantly outperforms all others.
INDEX TERMS
Graphical user interfaces, visualization techniques and methodologies, interaction techniques, evaluation/methodology.
CITATION
Emmanuel Pietriga, Olivier Bau, Caroline Appert, "Representation-Independent In-Place Magnification with Sigma Lenses", IEEE Transactions on Visualization & Computer Graphics, vol. 16, no. 3, pp. 455-467, May/June 2010, doi:10.1109/TVCG.2009.98