Visual-Analytics Evaluation
May/June 2009 (Vol. 29, No. 3) pp. 16-17

Catherine Plaisant, University of Maryland

Georges Grinstein, University of Massachusetts Lowell

Jean Scholtz, Pacific Northwest National Laboratory
Visual analytics (VA) is the science of analytical reasoning facilitated by interactive visual interfaces. Assessing VA technology's effectiveness is challenging because VA tools combine several disparate components, both low and high level, integrated in complex interactive systems used by analysts, emergency responders, and others. These components include analytical reasoning, visual representations, computer–human interaction techniques, data representations and transformations, collaboration tools, and, especially, tools for communicating analysis results.
VA tool users' activities can be exploratory and can take place over days, weeks, or months. Users might not follow a predefined or even linear work flow. They might work alone or in groups. To understand these complex behaviors, an evaluation can target the component level, the system level, or the work environment level, and requires realistic data and tasks. Traditional evaluation metrics such as task completion time, number of errors, or recall and precision are insufficient to quantify the utility of VA tools, and new research is needed to improve our VA evaluation methodology.
In This Issue
The articles in this special issue address four facets of evaluation:

    • synthetic data set generation and use,

    • design guidelines,

    • insight characterization and measurement, and

    • users' reasoning capture.

"Generating Synthetic Syndromic-Surveillance Data for Evaluating Visual-Analytics Techniques," by Ross Maciejewski, Ryan Hafen, Stephen Rudolph, George Tebbetts, William S. Cleveland, Shaun J. Grannis, and David S. Ebert, describes a method for generating synthetic data for assessing tools that analyze public-health data. Obtaining public-health data is particularly difficult because of privacy issues involved; synthetic but realistic data is definitely needed. The authors used a set of public-health data to develop parameterized models of user demographics and seasonal trends of illnesses. Irregular disease outbreak patterns can be injected into the system data. Readers interested in generating synthetic data as well as those interested in using public-health data should find this article interesting. Readers wanting to model parametrizable synthetic data from real data will also find this article valuable.
Previous researchers have suggested assessing VA's effectiveness by measuring the number of insights that users generate during analysis. In "To Score or Not to Score? Tripling Insights for Participatory Design," Michael Smuc, Eva Mayr, Tim Lammarsch, Wolfgang Aigner, Silvia Miksch, and Johannes Gärtner propose collecting the insights generated during the earlier participatory-design phase of development. They introduce three levels of insight analysis, from the simple counting of insights to the analysis of the relation between insight and prior knowledge.
In "Integrating Statistics and Visualization for Exploratory Power: From Long-Term Case Studies to Design Guidelines," Adam Perer and Ben Shneiderman summarize the results of two long-term case studies of using a social-network analysis tool. They then illustrate how their evaluation can lead to guidelines for designers who want to combine statistical analysis and visualization.
Finally, in "Recovering Reasoning Processes from User Interactions," Wenwen Dou, Dong Hyun Jeong, Felesia Stukes, William Ribarsky, Heather Richter Lipford, and Remco Chang discuss how they uncovered part of the user's reasoning during the analysis of financial transactions. Using logs of user interactions, coders uncovered over 60 percent of analysts' strategies, methods, and findings. The ability to analyze use, in short and long-term studies, is essential to improving the tools we design.
For more information on VA evaluation, we invite you to check out the Semvast (Scientific Evaluation Methods for Visual Analytics Science and Technology) Web pages at www.cs.umd.edu/hcil/semvast.
The four articles cover a fairly wide range of topics and highlight the importance of this rich and necessary emerging field. We thank CG&A Associate Editor in Chief Holly Rushmeier for helping us through the development of this special issue. We also thank all the reviewers, who provided invaluable feedback and suggestions.
Catherine Plaisant is an associate research scientist at the Human-Computer Interaction Laboratory at the University of Maryland Institute for Advanced Computer Studies. Her research interests include the design and evaluation of new interface technologies. She's coauthor (with Ben Shneiderman) of Designing the User Interface (5th ed., Addison-Wesley, 2009). Plaisant has a Doctorat d'Ingénieur degree from Université Pierre et Marie Curie. Contact her at plaisant@cs.umd.edu.
Georges Grinstein is a professor of computer science at the University of Massachusetts Lowell, where he is a codirector of the Institute for Visualization and Perception Research and of the Center for Biomolecular and Medical Informatics. His research interests include computer graphics, visualization, visual analytics, virtual environments, and user interfaces, with emphasis on the modeling, visualization, and analysis of complex information systems, particularly biomedical. Grinstein has a PhD in mathematics from the University of Rochester. He's a member of the ACM and the IEEE. Contact him at grinstein@cs.uml.edu.
Jean Scholtz is a chief scientist in the Pacific Northwest National Laboratory's National Security Division. Her main research interest is user-centered evaluation of intelligent systems and visual-analytics systems. Scholtz received her PhD in computer science from the University of Nebraska. She's on the editorial board of the International Journal of Human-Computer Studies. She's a member of the ACM, SIGCHI, and the IEEE. Contact her at jean.scholtz@pnl.gov.