38th Hawaii International Conference on System Sciences (HICSS 2005)
Big Island, Hawaii
Jan. 3, 2005 to Jan. 6, 2005
ISSN: 1530-1605
ISBN: 0-7695-2268-8
pp: 105
Measuring Information Understanding in Large Document Collections
Malcolm Slaney , IBM Almaden Research Center
Daniel M. Russell , IBM Almaden Research Center
We present a method for testing subjects' performance on a realistic (end-to-end) information-understanding task, the rapid understanding of large document collections, and we discuss lessons learned from our attempts to measure representative information-understanding tools and behaviors. To further our understanding of this task, we need to move beyond overly constrained and artificial measurements of easily instrumented behavior. From observation, we know that information analysis is often performed under time pressure and requires the use of large document collections. Instrumenting people in their workplace is often untenable, yet overly simple laboratory studies often miss explanatory richness. We argue that studies of information analysts need to be conducted with tests that are closely aligned with their natural tasks. Understanding human performance on such tasks requires analysis that accounts for the many subtle factors that influence final performance, including the role of background knowledge, variations in reading speed, and the cost of tool use.
Malcolm Slaney, Daniel M. Russell, "Measuring Information Understanding in Large Document Collections", Proceedings of the 38th Hawaii International Conference on System Sciences, vol. 04, p. 105, 2005, doi:10.1109/HICSS.2005.404