IEEE Computer Society Newsfeed


[Conference News] End-User Assessments are Valuable – to a Certain Point

Intelligent assistants are on the rise in today’s high-tech society. From smart home security systems that watch over a family’s home to research “coding” assistants that help teams of project researchers, intelligent assistants customize their work around an end user’s needs; they learn, among other things, how to recognize everything from junk email to photos of friends.

Unfortunately, intelligent systems sometimes handle tasks so important or so large that they cannot be trusted implicitly. Systematic assessment of an intelligent assistant by its end users can establish a degree of trust, but such assessments can be costly.

A group of researchers from Oregon State University and City University London recently investigated whether bringing in a small crowd of end users (a “mini-crowd”) to assess an intelligent assistant pays off from a cost/benefit perspective. The results? A mini-crowd of testers delivered benefits well beyond the obvious reduction in each individual’s cost and workload, but as crowd size grew, there was a point of diminishing returns where the cost-benefit ratio became less attractive.

In a paper titled “Mini-Crowdsourcing End-User Assessment of Intelligent Assistants: A Cost-Benefit Study,” to be presented at the IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2011), 18-22 September in Pittsburgh, Pennsylvania, three different-sized mini-crowds assessed the performance of an intelligent assistant that classified textual messages. The findings showed that bigger was not always better. For example, the mini-crowd of six introduced fewer false negatives than the mini-crowd of 11. Even in tests where larger mini-crowds outperformed smaller ones, the marginal benefit of adding testers dropped quickly, while costs scaled linearly.
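The shape of that trade-off can be sketched with a toy model: if each tester independently catches a given mistake with some fixed probability, the chance that at least one tester catches it grows with crowd size but flattens out, while assessment cost grows linearly. The probabilities, costs, and crowd sizes below are invented for illustration and are not taken from the paper.

```python
# Hypothetical illustration of diminishing returns in mini-crowd testing.
# All numbers here are assumptions for the sketch, NOT results from the
# VL/HCC 2011 paper.

def expected_coverage(n, p=0.3):
    """Chance that at least one of n testers flags a given mistake,
    assuming each tester independently catches it with probability p."""
    return 1 - (1 - p) ** n

def cost(n, per_tester=1.0):
    """Assessment cost scales linearly with crowd size."""
    return n * per_tester

for n in (1, 3, 6, 11):
    benefit = expected_coverage(n)
    print(f"crowd={n:2d}  benefit={benefit:.2f}  cost={cost(n):.1f}  "
          f"benefit/cost={benefit / cost(n):.2f}")
```

Under this simple model, going from three testers to six adds noticeably more coverage than going from six to eleven, while the benefit-per-unit-cost ratio falls steadily as the crowd grows, which mirrors the qualitative pattern the study reports.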

The researchers envision a future in which shared testing is paired with shared debugging, enabling small ecosystems of end users to quickly and effectively assess the intelligent assistants that support important aspects of their work and lives.

To learn more about papers to be presented at VL/HCC 2011, visit the conference website.
