Stanford University
Pages: 10-12
Abstract—Crowdsourcing involves outsourcing some job to a distributed group of people online, typically by breaking the job down into microtasks. Online markets offer human users payment for completing small tasks, or users can participate in nonpaid platforms such as games and volunteer sites. These platforms' general availability has enabled researchers to recruit large numbers of participants for user studies, generate third-party content and assessments, or even build novel user experiences. This special issue provides a snapshot of the most recent crowdsourcing research.
Keywords—crowdsourcing, microtask, Amazon Mechanical Turk, human computation
While humans have always been a critical part of developing and guiding computational systems, large groups of people online are now taking an even more active role in computational tasks. These crowds take on goals such as writing encyclopedias, evaluating the quality of user-generated content, identifying the best photograph in a set, and helping the blind to read a restaurant menu when OCR (optical character recognition) technologies fail. Crowdsourcing involves outsourcing some job to a distributed group of people online, typically by breaking the job down into microtasks. The main enablers of this shift are online markets that offer human users payment for completing small tasks and nonpaid platforms such as games [1] and volunteer sites. These platforms' general availability has enabled researchers to recruit large numbers of participants for user studies [2], generate third-party content and assessments, or even build novel user experiences [3].
This special issue focuses on using crowdsourcing in Internet computing research. The articles we've selected demonstrate the breadth of ways that researchers are integrating crowdsourcing into other fields, and how they're extending its reach and efficacy.
Although the idea is by no means new, crowdsourcing has gained significant popularity in the past decade. Indeed, it has made inroads in computing as an accepted form of merging computational algorithms and platforms with human intelligence. Many view this as a recognition that certain computational problems are simply too difficult for algorithms to solve well enough without human involvement.
A canonical industry example is business card data entry. Even very sophisticated algorithms using OCR technology simply can't deal with the great variety of card designs in the real world. Instead, a company called CardMunch ships the business cards to Amazon Mechanical Turk to have them transcribed (www.readwriteweb.com/archives/linkedin_updates_cardmunch_iphone_app.php). In this way, even a user with hundreds of business cards, say from a convention, can have them transcribed very quickly and cheaply (see Figure 1).
Figure 1 Example business card task in Amazon Mechanical Turk. (Figure courtesy of the LinkedIn Corporation; reprinted with permission.)
CAPTCHAs are another example of crowdsourcing. Specifically, the reCAPTCHA project digitizes out-of-print books by asking online users to transcribe individual words while proving that they're real human beings [1]. Here, no money changes hands. Users desiring access to some Web service are simply asked to perform a task to prove that they're human, and, in exchange, the system aggregates responses across many users to learn a probability distribution over the recognized characters.
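The aggregation idea can be sketched as follows. This is an illustrative reconstruction, not reCAPTCHA's actual algorithm, and the function names are our own: each unknown word is shown to several users, their transcriptions form an empirical distribution, and a spelling is accepted once enough users agree.

```python
from collections import Counter

def word_distribution(responses):
    """Turn raw user transcriptions of one unknown word into an
    empirical probability distribution over candidate spellings."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def consensus(responses, threshold=0.5):
    """Accept a spelling once its share of responses reaches the threshold."""
    dist = word_distribution(responses)
    best = max(dist, key=dist.get)
    return best if dist[best] >= threshold else None

# Example: five users transcribe the same scanned word.
responses = ["morning", "morning", "mourning", "morning", "morninq"]
```

Here `consensus(responses)` would accept "morning" (3 of 5 responses), while a three-way disagreement would yield no accepted spelling until more users respond.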
At the other end of complexity, Wikipedia is probably the best-known crowdsourcing platform; it compiles an online encyclopedia by dividing the task up across hundreds of thousands of contributors, letting anyone author or edit articles [4].
Mechanical Turk's development has greatly accelerated research on crowdsourcing. It might be the first successful large-scale labor market of the Internet age. Many subtle platform-design questions have arisen about how to organize a crowdsourcing labor force. Analyses of these platforms are also ongoing, including great work by Panos Ipeirotis on the demographics of Mechanical Turk workers [5].
The articles in this special issue reflect the diversity of methodologies and fields in crowdsourcing. They represent areas ranging from human-computer interaction to management science, ubiquitous computing, and experimental and statistical methods.
"Priming for Better Performance in Microtask Crowdsourcing Environments," by Robert Morris, Mira Dontcheva, and Elizabeth Gerber, describes the authors' use of priming, a classic method from psychology, to improve paid crowdsourcing quality. Interestingly, inducing positive affect with a stimulus such as an image or song leads to improved worker performance on problem-solving tasks.
In "Obtaining High-Quality Relevance Judgments Using Crowdsourcing," Jeroen Vuurens and Arjen de Vries demonstrate spam-detection techniques that bring crowdsourced relevance annotations to the quality of expert annotations. To drive crowdsourcing forward as a field, it's important to demonstrate that crowdsourcing can match expert-quality work, but with less time, money, or overhead.
In their article "MobileWorks: Designing for Quality in a Managed Crowdsourcing Architecture," Anand Kulkarni and his colleagues describe their MobileWorks architecture. MobileWorks breaks the mold of existing crowdsourcing markets: among other innovations, it demonstrates the effectiveness of training a set of crowd workers to be managers within their own marketplace. This idea of a career trajectory for crowd workers will only grow in importance.
Georgios Chatzimilioudis and his colleagues take crowdsourcing off the desktop and to the mobile phone and location-based services arena, in "Crowdsourcing with Smartphones." They present a suite of applications that exploit the crowd as a large distributed sensor. These mobile crowds can be queried for specific sensor-based characteristics or can actively share information with their local neighbors.
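A query over such a distributed crowd sensor might look like the following minimal sketch (our own illustration, not the authors' system): phones report geotagged readings, and a query returns the values observed within some radius of a point of interest.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def query_nearby(readings, center, radius_km):
    """Return sensor values reported within radius_km of center.
    readings: list of ((lat, lon), value) pairs from the crowd's phones."""
    return [value for pos, value in readings
            if haversine_km(pos, center) <= radius_km]
```

A production system would of course index readings spatially rather than scan them linearly, and would have to handle privacy, staleness, and incentives, which are exactly the issues the article takes up.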
In "Analyzing Crowd Labor and Designing Incentives for Humans in the Loop," Oksana Tokarchuk, Roberta Cuel, and Marco Zamarian apply theory-building to characterize the set of crowdsourcing approaches and best practices for each. Their motivating examples range from Mechanical Turk to Threadless to Galaxy Zoo. This overarching lens on the design space can help us understand what we have accomplished and what challenges remain.
Finally, "An Experiment in Comparing Human-Computation Techniques," by Stefan Thaler, Elena Simperl, and Stephan Wölger, draws together two separate crowdsourcing methodologies — paid microtasks and games with a purpose — to compare their relative strengths. The authors remind us that games must be carefully designed to be successful, and that paid microtasks have some benefits in this domain.
This issue should give readers a snapshot of current crowdsourcing research. A first reaction may be that this area is promisingly interdisciplinary: it involves techniques from computer science, human-computer interaction, labor and organizational economics, psychology, and sociology, as well as concerns from policy, law, and ethics. One area not well covered in this issue is work that embeds crowds inside applications that react in real time: one example is Soylent [3], which enables real-time help on word processing tasks.
The science behind crowdsourcing involves understanding the human psychology of intrinsic and extrinsic motivations as well as cognitive and perceptual effects on biases or tendencies. We are just starting to realize these patterns and principles for use in designing crowdsourcing systems.
The surprise in the past five to six years of this research endeavor has been how far we've been able to push crowdsourcing's possibilities. Human computation is taking computing into areas that purely algorithmic techniques have not been able to reach. It appears that the human and computing symbiosis J.C.R. Licklider originally envisioned in his famous paper [6] is manifesting in exciting and interesting ways. We hope you agree that it is an exciting journey!