Issue No. 12 - Dec. 2012 (vol. 18)
pp. 2689-2698
Bongshin Lee, Microsoft Research
Petra Isenberg, INRIA
Nathalie Henry Riche, Microsoft Research
Sheelagh Carpendale, University of Calgary
The importance of interaction to Information Visualization (InfoVis) and, in particular, of the interplay between interactivity and cognition is widely recognized [12,15,32,55,70]. This interplay, combined with the demands of increasingly large and complex datasets, is driving the growing significance of interaction in InfoVis. In parallel, interaction technologies have advanced rapidly on many fronts. However, InfoVis interactions have yet to take full advantage of these new possibilities, as they still largely rely on the traditional desktop, mouse, and keyboard setup of WIMP (Windows, Icons, Menus, and a Pointer) interfaces. In this paper, we reflect more broadly on the role of more “natural” interactions for InfoVis and identify opportunities for future research. We discuss and relate general HCI interaction models to existing InfoVis interaction classifications by looking at interactions from a novel angle, taking into account the entire spectrum of interactions. Our discussion of InfoVis-specific interaction design considerations helps us identify a series of underexplored attributes of interaction that can lead to new, more “natural,” interaction techniques for InfoVis.
Data visualization, Instruments, Information analysis, Human computer interaction, Taxonomy, User interfaces, NUI (Natural User Interface), Design considerations, interaction, post-WIMP
Bongshin Lee, Petra Isenberg, Nathalie Henry Riche, Sheelagh Carpendale, "Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions", IEEE Transactions on Visualization & Computer Graphics, vol. 18, no. 12, pp. 2689-2698, Dec. 2012, doi:10.1109/TVCG.2012.204
[1] E. W. Anderson, K. C. Potter, L. E. Matzen, J. F. Shepherd, G. A. Preston, and C. T. Silva, A User Study of Visualization Effectiveness Using EEG and Cognitive Load, Computer Graphics Forum (Proc. EuroVis), vol. 30, no. 3, pp. 791-800, 2011.
[2] Anoto - Start, http:/
[3] Apple - iPhone 4S - Ask Siri to help you get things done.
[4] R. Ball and C. North, Realizing Embodied Interaction for Visual Analytics Through Large Displays, Computers & Graphics, vol. 31, no. 3, pp. 380-400, 2007.
[5] D. Baur, F. Seiffert, M. Sedlmair, and S. Boring, The Streams of Our Lives: Visualizing Listening Histories in Context, IEEE TVCG (Proc. InfoVis), vol. 16, no. 6, pp. 1119-1128, 2010.
[6] M. Beaudouin-Lafon, Instrumental Interaction: An Interaction Model for Designing Post-WIMP User Interfaces, Proc. CHI, pp. 446-453, 2000.
[7] M. Beaudouin-Lafon, Designing Interaction, Not Interfaces, Proc. AVI, pp. 15-22, 2004.
[8] J. Bellegarda, Statistical Language Model Adaptation: Review and Perspectives, Speech Communication, vol. 42, pp. 93-108, 2004.
[9] S. Boring, D. Baur, A. Butz, S. Gustafson, and P. Baudisch, Touch Projector: Mobile Interaction Through Video, Proc. CHI, pp. 2287-2296, 2010.
[10] P. Brandl, C. Richter, and M. Haller, NiCEBook - Supporting Natural Note Taking, Proc. CHI, pp. 599-608, 2010.
[11] J. Browne, B. Lee, S. Carpendale, N. Riche, and T. Sherwood, Data Analysis on Interactive Whiteboards through Sketch-based Interaction, Proc. ITS, pp. 154-157, 2011.
[12] S. Card, J. Mackinlay, and B. Shneiderman, Readings in Information Visualization: Using Vision to Think, Morgan Kaufmann, 1999.
[13] S. Carpendale, A Framework for Elastic Presentation Space, PhD Dissertation, Dept. of Computer Science, Simon Fraser University, Vancouver, Canada, 1999.
[14] W. O. Chao, T. Munzner, and M. van de Panne, Poster: Rapid Pen-Centric Authoring of Improvisational Visualizations with NapkinVis, Posters Compendium InfoVis, 2010.
[15] E. H. Chi and J. Riedl, An Operator Interaction Framework for Visualization Systems, Proc. InfoVis, pp. 63-70, 1998.
[16] K. Cox, R. E. Grinter, S. L. Hibino, L. J. Jagadeesan, and D. Mantilla, A Multi-Modal Natural Language Interface to an Information Visualization Environment, J Speech Technology, vol. 4, pp. 297-314, 2001.
[17] A. van Dam, Post-WIMP user interfaces, CACM, vol. 40, no. 2, pp. 63-67, 1997.
[18] Data Science Revealed: A Data-Driven Glimpse into the Burgeoning New Field, http://www.emc.com/collateral/about/news/emc-data-science-study-wp.pdf.
[19] P. Dourish, Where The Action Is: The Foundations of Embodied Interaction, MIT Press, 2001.
[20] T. Dwyer, B. Lee, D. Fisher, K. Inkpen, P. Isenberg, G. Robertson, and C. North, Understanding Multi-touch Manipulation for Surface Computing, IEEE TVCG (Proc. InfoVis), vol. 25, no. 19, pp. 961-968, 2009.
[21] N. Elmqvist, A. Vande Moere, H.-C. Jetter, D. Cernea, H. Reiterer, and T. J. Jankun-Kelly, Fluid Interaction for Information Visualization, Information Visualization, vol. 10, pp. 327-340, 2011.
[22] M. Frisch, J. Heydekorn, and R. Dachselt, Investigating Multi-Touch and Pen Gestures for Diagram Editing on Interactive Surfaces, Proc. ITS, pp. 149-156, 2009.
[23] D. Frohlich, Direct Manipulation and Other Lessons. In M. Helander, T. K. Landauer, and P. V. Prabhu, editors, Handbook of Human-Computer Interaction, pp. 463-488, 1997.
[24] D. M. Frohlich, The History and Future of Direct Manipulation, Behaviour & Information Technology, vol. 12, no. 6, pp. 315-329, 1993.
[25] M. J. F. Gales, Maximum Likelihood Linear Transformations for HMM-Based Speech Recognition, Computer Speech & Language, vol. 12, pp. 75-98, 1998.
[26] G. Goth, Brave NUI world, CACM, vol. 54, no. 12, pp. 14-16, 2011.
[27] L. Grammel, M. Tory, and M. A. Storey, How Information Visualization Novices Construct Visualizations, IEEE TVCG (Proc. InfoVis), vol. 16, no. 6, pp. 943-952, 2010.
[28] S. Greenberg, N. Marquardt, T. Ballendat, R. Diaz-Marino, and M. Wang, Proxemic Interactions: The New Ubicomp? ACM Interactions, vol. 18, no. 1, pp. 42-50, 2011.
[29] C. Gutwin and S. Greenberg, A Descriptive Framework of Workspace Awareness for Real-Time Groupware, CSCW, vol. 11, no. 3-4, pp. 411-446, 2002.
[30] M. Haller, J. Leitner, T. Seifried, J. Wallace, S. Scott, C. Richter, P. Brandl, A. Gokcezade, and S. Hunter, The NiCE Discussion Room: Integrating Paper and Digital Media to Support Co-Located Group Meetings, Proc. CHI, pp. 609-618, 2010.
[31] M. Hancock, T. ten Cate, and S. Carpendale, Sticky Tools: Full 6DOF Force-Based Interaction for Multi-Touch Tables, Proc. ITS, pp. 145-152, 2009.
[32] J. Heer and B. Shneiderman, Interactive Dynamics for Visual Analysis, ACM Queue, vol. 10, no. 2, pp. 30-55, 2012.
[33] M. Heilig, S. Huber, M. Demarmels, and H. Reiterer, ScatterTouch: A Multi Touch Rubber Sheet Scatter Plot Visualization for Co-located Data Exploration, Proc. ITS, pp. 264-264, 2010.
[34] N. Henry, J-D. Fekete, and M. J. McGuffin, NodeTrix: a Hybrid Visualization of Social Networks, IEEE TVCG (Proc. InfoVis), vol. 13, no. 6, pp. 1302-1309, 2007.
[35] K. Hinckley, K. Yatani, M. Pahud, N. Coddington, J. Rodenhouse, A. Wilson, H. Benko, and B. Buxton, Pen + Touch = New Tools, Proc. UIST, pp. 27-36, 2010.
[36] U. Hinrichs and S. Carpendale, Gestures in the Wild: Studying Multi-Touch Gesture Sequences on Interactive Tabletop Exhibits, Proc. CHI, pp. 3023-3032, 2011.
[37] C. Holz and S. Feiner, Relaxed Selection Techniques for Querying Time-Series Graphs, Proc. UIST, pp. 213-222, 2009.
[38] P. Isenberg and S. Carpendale, Interactive Tree Comparison for Co-located Collaborative Information Visualization, IEEE TVCG (Proc. InfoVis), vol. 13, no. 6, pp. 1232-1239, 2007.
[39] P. Isenberg, A. Bezerianos, N. Henry, S. Carpendale, and J.-D. Fekete, CoCoNutTrix: Collaborative Retrofitting for Information Visualization, CG&A, vol. 29, no. 5, pp. 44-57, 2009.
[40] P. Isenberg, D. Fisher, S. A. Paul, M. R. Morris, K. Inkpen, and M. Czerwinski, Collaborative Visual Analytics Around a Tabletop Display, IEEE TVCG, vol. 18, no. 5, pp. 689-702, 2012.
[41] P. Isenberg, A. Tang, and S. Carpendale, An Exploratory Study of Visual Information Analysis, Proc. CHI, pp. 1217-1226, 2008.
[42] R. J. K. Jacob, A. Girouard, L. M. Hirshfield, M. S. Horn, O. Shaer, E. T. Solovey, and J. Zigelbaum, Reality-Based Interaction: A Framework for Post-WIMP Interfaces, Proc. CHI, pp. 201-210, 2008.
[43] H.-C. Jetter, J. Gerken, M. Zöllner, H. Reiterer, and N. Milic-Frayling, Materializing the Query with Facet-Streams - A Hybrid Surface for Collaborative Search on Tabletops, Proc. CHI, pp. 3013-3022, 2011.
[44] C. N. Klokmose and M. Beaudouin-Lafon, VIGO: Instrumental Interaction in Multi-Surface Environments, Proc. CHI, pp. 869-878, 2009.
[45] R. Kosara, H. Hauser, and D. L. Gresh, An Interaction View on Information Visualization, Proc. Eurographics—State of the Art Reports, pp. 123-137, 2003.
[46] A. Kretz, R. Huber, and M. Fjeld, Force Feedback Slider (FFS): Interactive Device for Learning System Dynamics, Proc. ICALT, pp. 457-458, 2005.
[47] B. Kwon, W. Javed, N. Elmqvist, and J.-S. Yi, Direct Manipulation Through Surrogate Objects, Proc. CHI, pp. 627-636, 2011.
[48] Z. Liu, N. J. Nersessian, and J. T. Stasko, Distributed Cognition as a Theoretical Framework for Information Visualization, IEEE TVCG (Proc. InfoVis), vol. 14, no. 6, pp. 1173-1180, 2008.
[49] Many Eyes, http:/
[50] Microsoft Kinect for Windows | Develop for the Kinect | Kinect for Windows.
[51] J. Nielsen, Noncommand User Interfaces, CACM, vol. 36, no. 4, pp. 83-99, 1993.
[52] Nuance - Dragon Dictation: iPhone - Dragon Dictation for iPad™, iPhone™ and iPod touch™ is an easy-to-use voice recognition application, http://www.nuance.com/for-business/by-product/dragon-dictation-iphone/index.htm.
[53] L. Olsen, F. F. Samavati, M. C. Sousa, and J. A. Jorge, Sketch-Based Modeling: A Survey, Computers & Graphics, vol. 33, pp. 85-103, 2009.
[54] Perceptive Pixel, http:/
[55] W. A. Pike, J. T. Stasko, R. Chang, and T. A. O'Connell, The Science of Interaction, Information Visualization, vol. 8, no. 4, pp. 263-274, 2009.
[56] N. H. Riche, K. Inkpen, J. T. Stasko, T. Gross, and M. Czerwinski, Supporting Asynchronous Collaboration in Visual Analytics Systems, Proc. AVI, pp. 809-811, 2012.
[57] K. Ryall, N. Lesh, T. Lanning, D. Leigh, H. Miyashita, and S. Makino, QueryLines: Approximate Query for Visual Browsing, Ext. Abst. CHI, pp. 1765-1768, 2005.
[58] S. Schmidt, M. Nacenta, R. Dachselt, and S. Carpendale, A Set of Multitouch Graph Interaction Techniques, Proc. ITS, pp. 113-116, 2011.
[59] C. Schwesig, I. Poupyrev, and E. Mori, Gummi: A Bendable Computer, Proc. CHI, pp. 263-270, 2004.
[60] B. Shneiderman, Direct Manipulation: A Step Beyond Programming Languages, IEEE Computer, vol. 16, no. 8, pp. 57-69, 1983.
[61] B. Shneiderman, The Eyes Have It: A Task By Data Type Taxonomy for Information Visualizations, Proc. VL, pp. 226-242, 1996.
[62] B. Shneiderman and P. Maes, Direct Manipulation vs. Interface Agents, Interactions, vol. 4, no. 6, pp. 42-61, 1997.
[63] D. C. Smith, C. Irby, R. Kimball, and E. Harslem, The Star User Interface: An Overview, Proc. AFIPS, pp. 515-528, 1982.
[64] R. Spence, Information Visualization: Design for Interaction, Pearson, 2nd ed., 2007.
[65] M. Spindler, S. Stellmach, and R. Dachselt, PaperLens: Advanced Magic Lens Interaction Above the Tabletop, Proc. ITS, pp. 69-76, 2009.
[66] M. Spindler, C. Tominski, H. Schumann, and R. Dachselt, Tangible Views for Information Visualization, Proc. ITS, pp. 157-166, 2010.
[67] Y. Sun, J. Leigh, A. Johnson, and S. Lee, Articulate: A Semi-Automated Model for Translating Natural Language Queries into Meaningful Visualizations, Proc. SG, LNCS, vol. 6133, pp. 184-195, 2010.
[68] I. E. Sutherland, Sketchpad: A Man-Machine Graphical Communication System, Proc. AFIPS Spring Joint Computer Conf., pp. 329-346, 1963.
[69] C. Swindells, K. E. MacLean, K. S. Booth, and M. J. Meitner, Exploring Affective Design for Physical Controls, Proc. CHI, pp. 933-942, 2007.
[70] J. J. Thomas and K. A. Cook, Illuminating the Path, IEEE, 2005.
[71] M. Tobiasz, P. Isenberg, and S. Carpendale, Lark: Coordinating Co-located Collaboration with Information Visualization, IEEE TVCG (Proc. InfoVis), vol. 15, no. 6, pp. 1065-1072, 2009.
[72] F. B. Viegas, M. Wattenberg, M. McKeon, F. van Ham, and J. Kriss, Harry Potter and the Meat-Filled Freezer: A Case Study of Spontaneous Usage of Visualization Tools, Proc. HICSS, 2008.
[73] M. Wattenberg, Baby Names, Visualization, and Social Data Analysis, Proc. InfoVis, pp. 1-7, 2005.
[74] M. Wattenberg, Sketching a Graph to Query a Time-Series Database, Ext. Abs. CHI, pp. 381-382, 2001.
[75] D. Wigdor and D. Wixon, Brave NUI World: Designing Natural User Interfaces for Touch and Gesture, Morgan Kaufmann, 2011.
[76] C. Williamson and B. Shneiderman, The Dynamic HomeFinder: Evaluating Dynamic Queries in a Real-Estate Information Exploration System, Proc. Research and Development in Information Retrieval, pp. 338-346, 1992.
[77] A. D. Wilson, TouchLight: An Imaging Touch Screen and Display for Gesture-Based Interaction, Proc. ICMI, pp. 69-76, 2004.
[78] A. D. Wilson, S. Izadi, O. Hilliges, A. Garcia-Mendoza, and D. Kirk, Bringing Physics to the Surface, Proc. UIST, pp. 67-76, 2008.
[79] J. O. Wobbrock, M. R. Morris, and A. D. Wilson, User-Defined Gestures for Surface Computing, Proc. CHI, pp. 1083-1092, 2009.
[80] Wordle - Beautiful Word Clouds, http:/
[81] J. S. Yi, Y. A. Kang, J. T. Stasko, and J. A. Jacko, Toward a Deeper Understanding of the Role of Interaction in Information Visualization, IEEE TVCG (Proc. InfoVis), vol. 13, no. 6, pp. 1224-1231, 2007.
[82] H. Zhao, C. Plaisant, and B. Shneiderman, “I Hear the Pattern”-Interactive Sonification of Geographical Data Patterns, Ext. Abs. CHI, pp. 1905-1908, 2005.