This paper describes the integration of perceptual guidelines from human vision with an AI-based mixed-initiative search strategy. The result is a visualization assistant called ViA, a system that collaborates with its users to identify perceptually salient visualizations for large, multidimensional datasets. ViA applies knowledge of low-level human vision to: (1) evaluate the effectiveness of a particular visualization for a given dataset and analysis tasks; and (2) rapidly direct its search towards new visualizations that are most likely to offer improvements over those seen to date. Context, domain expertise, and a high-level understanding of a dataset are critical to identifying effective visualizations. We apply a mixed-initiative strategy that allows ViA and its users to share their different strengths and continually improve ViA's understanding of a user's preferences. We visualize historical weather conditions to compare ViA's search strategy to exhaustive analysis, simulated annealing, and reactive tabu search, and to measure the improvement provided by mixed-initiative interaction. We also visualize intelligent agents competing in a simulated online auction to evaluate ViA's perceptual guidelines. Results from each study are positive, suggesting that ViA can construct high-quality visualizations for a range of real-world datasets.
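The abstract compares ViA's search strategy against simulated annealing, among other methods, for finding effective mappings from data attributes to visual features. As a rough illustration of that kind of search (not ViA's actual algorithm), the sketch below runs simulated annealing over attribute-to-feature assignments; the feature list, salience scores, and `evaluate` function are invented stand-ins for the paper's perceptual guidelines.

```python
import math
import random

# Hypothetical visual features, ordered from most to least salient.
# ViA's real evaluator applies guidelines from low-level human vision;
# here a toy score simply rewards giving salient features to important
# data attributes.
FEATURES = ["hue", "luminance", "size", "orientation"]
SALIENCE = {f: len(FEATURES) - i for i, f in enumerate(FEATURES)}

def evaluate(mapping, importance):
    """Toy perceptual score for an attribute->feature mapping."""
    return sum(importance[a] * SALIENCE[f] for a, f in mapping.items())

def anneal(attributes, importance, steps=2000, t0=2.0, seed=0):
    rng = random.Random(seed)
    # Start from a random one-to-one assignment of features to attributes.
    mapping = dict(zip(attributes, rng.sample(FEATURES, len(attributes))))
    score = evaluate(mapping, importance)
    best, best_score = dict(mapping), score
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        # Propose a neighbor: swap the features of two attributes.
        a, b = rng.sample(attributes, 2)
        mapping[a], mapping[b] = mapping[b], mapping[a]
        new_score = evaluate(mapping, importance)
        # Accept improvements; accept worse moves with Boltzmann probability.
        if new_score >= score or rng.random() < math.exp((new_score - score) / t):
            score = new_score
            if score > best_score:
                best, best_score = dict(mapping), score
        else:
            mapping[a], mapping[b] = mapping[b], mapping[a]  # revert
    return best, best_score

attrs = ["temperature", "pressure", "wind"]
imp = {"temperature": 3.0, "pressure": 2.0, "wind": 1.0}
best_map, best_score = anneal(attrs, imp)
```

Under this toy score the search should settle on giving the most important attribute the most salient of the features it drew, which mirrors the paper's idea of directing search toward perceptually better visualizations; ViA itself adds mixed-initiative interaction on top of the automated search.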
Keywords: Multivariate visualization, display algorithms, interaction techniques, human information processing
Vivek Rao, Robert St. Amant, Christopher Healey, Reshma Mehta, Sarat Kocherlakota, "Visual Perception and Mixed-Initiative Interaction for Assisted Visualization Design", IEEE Transactions on Visualization & Computer Graphics, vol. 14, pp. 396-411, March/April 2008, doi:10.1109/TVCG.2007.70436