Advances in Information Visualization

Guest Editor’s Introduction • Jeffrey Heer • July 2017

Read the Guest Editor’s Introduction in Spanish | Chinese

Translations by Osvaldo Perez and Tiejun Huang

Audio narration: English (Steve Woods), Spanish (Martin Omana), Chinese (Robert Hsu)

Nobel Prize-winning economist Herbert Simon famously described the “poverty of attention” that accompanies an overabundance of information. This concept certainly applies to today’s Big Data era. As our capacity to generate, store, and process vast amounts of data increases, we need new techniques to derive reliable and actionable insights from the data.

The information visualization field aims to enhance people’s ability to analyze and communicate complex data in ways that inform decision-making and enable scientific discovery. Information visualization technologies and techniques include

  • visual encoding and interaction methods,
  • visualization software systems and languages,
  • experimental assessments of visualization effectiveness, and
  • perceptually informed models for automated design and evaluation.

This July 2017 Computing Now issue presents five recent articles that make advances in these areas, leading to new tools and models that help people better understand their data. The related video examines challenges in creating visualization tools that help people explore data, make decisions, and communicate findings.

The Articles

Network analysis can pose challenges in terms of spatial layout, path following, cluster identification, and discovery of relationships between graph structure and multivariate attributes of nodes and edges. Dynamic networks that also change over time can easily overwhelm traditional analysis approaches. In their IEEE VAST 2015 Best Paper, “Reducing Snapshots to Points: A Visual Analytics Approach to Dynamic Network Exploration,” Stef van den Elzen and his colleagues take a visual analytics approach that combines statistical analysis and visualization within an interactive exploration system. The authors apply dimensionality-reduction techniques to create an overview of network dynamics, revealing the “trajectory” of a network over time, including where networks diverge or converge. The resulting system enables flexible specification of summary methods and allows users to drill down to view individual networks in increasing detail.
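The core idea of reducing snapshots to points can be sketched simply: represent each time-step of the network as a feature vector, then project the collection to 2D so the sequence of points traces the network's trajectory. The sketch below is a minimal illustration using flattened adjacency matrices and PCA via SVD; the paper's actual system supports richer snapshot features and reduction methods.

```python
import numpy as np

def snapshots_to_points(snapshots):
    """Project each network snapshot (an adjacency matrix) to a 2-D point.

    Each snapshot is flattened into a feature vector; PCA (via SVD on the
    centered feature matrix) reduces the collection to two dimensions, so
    the ordered points trace the network's trajectory over time.
    """
    X = np.array([A.flatten() for A in snapshots], dtype=float)
    X -= X.mean(axis=0)                      # center the features
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                      # (n_snapshots, 2) coordinates

# Toy dynamic network: a 4-node graph whose edge weights drift over time
rng = np.random.default_rng(0)
snaps = [np.triu(rng.random((4, 4)) + t, 1) for t in np.linspace(0, 1, 6)]
points = snapshots_to_points(snaps)
print(points.shape)  # (6, 2)
```

Plotting these six points in order would show the drift in edge weights as a path through the plane, with abrupt structural changes appearing as large jumps.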

Venn diagrams might seem simple when showing the relationships between two or three sets, but as the number of sets increases, they quickly become cluttered and difficult to interpret. Alexander Lex and his colleagues contribute new techniques for interactive analysis of set composition and intersection in “UpSet: Visualization of Intersecting Sets.” UpSet lets analysts form task-driven aggregates and communicates the size and properties of those aggregates and their intersections. UpSet is available in open-source software packages.
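The quantities an UpSet-style plot displays as its intersection-size bars are exclusive intersections: for each combination of sets, the elements that belong to exactly those sets and no others. A minimal sketch of that computation (illustrative only; the UpSet packages provide this plus aggregation and plotting):

```python
from itertools import combinations

def exclusive_intersections(sets):
    """Size of every exclusive intersection among named sets.

    For each non-empty combination of set names, count the elements that
    belong to exactly those sets and no others -- the quantities an
    UpSet-style plot shows as its intersection-size bars.
    """
    names = list(sets)
    result = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(sets[n] for n in combo))
            outside = set().union(*(sets[n] for n in names if n not in combo))
            result[combo] = len(inside - outside)
    return result

sets = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {3, 4, 5}}
sizes = exclusive_intersections(sets)
print(sizes[("A", "B", "C")])  # 1  (only element 3 is in all three)
print(sizes[("B", "C")])       # 1  (element 4; element 3 is also in A)
```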

An often-stated goal of visualization is to “discover the unexpected.” Yet, in many cases, data graphics can mislead by giving visual prominence to known base rates (such as population densities) or to artifacts of sample size and normalization (such as outliers arising from smaller, and thus more variable, samples). In “Surprise! Bayesian Weighting for De-Biasing Thematic Maps,” Michael Correll and Jeffrey Heer propose re-weighting techniques for spatial and temporal data that visualize statistically surprising features relative to a set of base models. Importing a measure from vision science called Bayesian surprise, Surprise Maps use a space of initially equi-plausible models along with Bayesian update steps to re-estimate their plausibility in the face of observed data. These update steps down-weight expected spatiotemporal events and boost surprising events. The resulting maps serve to guide analysts’ attention to regions of the data where observed features are less likely to be due to base rates or normal statistical fluctuation.
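The underlying computation is compact: maintain plausibilities over a set of base models, update them with Bayes' rule as data arrive, and measure surprise as the KL divergence from prior to posterior. The sketch below is a toy two-model illustration, not the paper's spatiotemporal pipeline; the model names and likelihood values are invented for the example.

```python
import math

def bayesian_surprise(priors, likelihoods):
    """Bayesian surprise of one observation, given a space of models.

    `priors[m]` is the current plausibility of model m; `likelihoods[m]`
    is P(observation | m). The posterior follows from Bayes' rule, and
    surprise is the KL divergence from prior to posterior -- large when
    the observation shifts belief across models.
    """
    evidence = sum(priors[m] * likelihoods[m] for m in priors)
    posterior = {m: priors[m] * likelihoods[m] / evidence for m in priors}
    kl = sum(posterior[m] * math.log(posterior[m] / priors[m])
             for m in priors if posterior[m] > 0)
    return posterior, kl

# Two initially equi-plausible base-rate models; the observation fits
# the "uniform" model poorly, so belief shifts and surprise is high.
priors = {"uniform": 0.5, "population": 0.5}
posterior, surprise = bayesian_surprise(priors,
                                        {"uniform": 0.1, "population": 0.9})
print(round(surprise, 3))  # 0.368
```

An observation equally likely under both models would leave the posterior unchanged and yield zero surprise, which is exactly how Surprise Maps de-emphasize features explained by known base rates.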

In addition to developing new techniques, information visualization research aims to understand what does (and does not) make for effective graphic presentations. Long-standing design guidelines include Jock Mackinlay’s expressiveness and effectiveness principles, which roughly state that (1) visualizations should convey the facts in the selected data and only those facts, and that (2) chosen visual encoding channels should maximize people’s ability to decode the data quickly and accurately. Whereas prior work operationalizes the expressiveness principle in terms of logical criteria, empirical results suggest the need for a more nuanced approach grounded in human perception. In “Evaluating the Impact of Binning 2D Scalar Fields,” Lace Padilla and her colleagues assess the effects of both continuous and discrete color encodings of scalar fields. The authors find that binned representations of a continuous domain can lead to greater accuracy than a more “direct” continuous encoding. This article not only refines design guidelines but also exemplifies a high-quality visualization effectiveness assessment, including careful task selection.
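The binned encodings in such a study amount to discretizing a continuous scalar field into a small number of levels before mapping it to color. A minimal NumPy sketch of that discretization step (illustrative only; it is not the authors' stimulus-generation code):

```python
import numpy as np

def bin_scalar_field(field, n_bins):
    """Discretize a continuous 2-D scalar field into n_bins levels.

    Bin edges are spaced evenly over the field's range; each cell is
    replaced by the midpoint of its bin, mimicking a binned color
    encoding of the same data.
    """
    lo, hi = field.min(), field.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    idx = np.clip(np.digitize(field, edges[1:-1]), 0, n_bins - 1)
    midpoints = (edges[:-1] + edges[1:]) / 2
    return midpoints[idx]

field = np.linspace(0, 1, 16).reshape(4, 4)   # toy continuous field
binned = bin_scalar_field(field, 4)
print(np.unique(binned))  # four discrete levels
```

Mapping `binned` rather than `field` through a colormap yields the discrete encoding; the study's finding is that readers can extract values from such binned maps more accurately than from the continuous original.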

Perception experiments can be used not only to directly assess and compare competing design approaches, but also to build predictive models of perceptual phenomena. Tool builders can use these models to develop automated methods for designing and evaluating visualization techniques. “Colorgorical: Creating Discriminable and Preferable Color Palettes for Information Visualization” addresses the problem of automatically generating effective color palettes for categorical data using an optimization-based approach that combines models of color perception, naming, and aesthetic preference. Authors Connor C. Gramazio, David H. Laidlaw, and Karen B. Schloss focus not only on purely automated procedures but also on how to incorporate them into an interactive tool that lets designers express their own preferences alongside perceptually grounded guidance. This article points the way toward perceptually informed tools that support customized design while remaining sensitive to effective design principles.
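As a loose stand-in for the discriminability term such a tool optimizes, the sketch below greedily builds a palette by repeatedly adding the candidate color farthest from those already chosen. This is a simplified illustration only: Colorgorical works in CIELAB and balances several perceptual and preference scores, whereas this sketch uses plain squared RGB distance.

```python
import random

def greedy_palette(candidates, k, dist):
    """Greedy palette construction: repeatedly add the candidate color
    that maximizes the minimum distance to the colors already chosen --
    a simplified stand-in for a discriminability objective."""
    palette = [candidates[0]]
    while len(palette) < k:
        best = max(candidates,
                   key=lambda c: min(dist(c, p) for p in palette))
        palette.append(best)
    return palette

def sq_dist(a, b):
    """Squared RGB distance (perceptual systems use CIELAB instead)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

random.seed(1)
candidates = [tuple(random.random() for _ in range(3)) for _ in range(200)]
palette = greedy_palette(candidates, 5, sq_dist)
print(len(palette))  # 5
```

An interactive tool in this spirit would let a designer pin preferred colors into `palette` before the loop runs, so automated selection fills in the rest around the designer's choices.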


The Industry Perspective

In this month’s video, Tableau Software Research Manager Maureen Stone shares insights into important challenges facing visualization tool designers: fostering critical skepticism of data, improving data curation and integration, and developing processes for understanding users’ intentions and goals.


Information visualization helps people across industries and academic fields understand their data and communicate important findings. The cutting-edge research and tools presented in this month’s Computing Now theme can help designers make more effective visualizations and help analysts discover new insights from their data.

Guest Editor

Jeffrey Heer is an associate professor in the University of Washington’s Computer Science & Engineering department. He has a PhD in computer science from the University of California, Berkeley, and is an associate editor-in-chief of IEEE Transactions on Visualization and Computer Graphics. Heer co-founded Trifacta, a provider of interactive tools for scalable data transformation, and has helped develop visualization tools (Vega, D3.js, Protovis, and Prefuse) used by researchers, companies, and thousands of data enthusiasts around the world. Contact him at

