Abstract—This installment of Computer's series highlighting the work published in IEEE Computer Society journals comes from IEEE Transactions on Affective Computing and IEEE Transactions on Knowledge and Data Engineering.
Keywords—Spotlight on Transactions; crowdsourcing; artificial intelligence; AI; healthcare; intelligent systems; human-computer interaction; HCI; affective computing; digital well-being; health informatics
Björn Schuller, University of Passau and Imperial College London
Humans are reasonably good at sensing when others are depressed. In particular, our hearing is attuned to the voice-related changes often exhibited during depression. But could computers also be taught to “hear” depression in everyday conversations outside a lab, even to the point of being able to make a clinical diagnosis?
In their article “Self-Reported Symptoms of Depression and PTSD Are Associated with Reduced Vowel Space in Screening Interviews,” Stefan Scherer and his colleagues investigate such an automated approach to diagnosis (IEEE Trans. Affective Computing, vol. 7, no. 1, 2016, pp. 59–73). They focus on the acoustic changes in vowel sounds—particularly on vowel reduction—commonly observed among individuals with psychological disorders such as depression or neurological disorders such as Parkinson's disease. To make diagnosis less reliant on clinicians’ or caretakers’ subjective impressions, the authors use SimSensei, a virtual human interviewer that “listens” to patients’ voices during unprompted conversations. They argue that this system could lead to more natural patient behavior during interviews, more objective evaluations, and reduced costs.
Scherer and colleagues suggest that employing virtual agents offers clinicians increased control over the choice of stimuli. Because virtual humans can be programmed a priori, they avoid the predispositions or “impurities” that human partners might introduce. The authors also propose that virtual human interviewers can reduce patients’ stress, fear, and sense of being judged. Ultimately, this encourages patients to speak more freely, thus providing richer data for analysis.
In the study, the SimSensei virtual human system interviewed both US military veterans and nonmilitary individuals to identify—based on their speech characteristics—those at risk for posttraumatic stress disorder (PTSD) or major depression. All participants had been “coded for depression and PTSD” via standard psychiatric questionnaires. The findings support the hypothesis that vowel reduction is indeed related to a depression or PTSD diagnosis.
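Vowel space is commonly quantified as the area spanned by the corner vowels /i/, /a/, and /u/ in the F1–F2 formant plane, with a smaller area indicating a more centralized (reduced) vowel space. The following minimal sketch illustrates that idea only; it is not the authors' implementation, and the formant values are rough textbook-style estimates, not data from the study.

```python
def triangle_area(p1, p2, p3):
    """Area of the triangle spanned by three (F1, F2) points (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# Approximate mean formants in Hz for the corner vowels of a typical speaker.
typical = {"i": (300, 2300), "a": (750, 1200), "u": (350, 800)}

# Hypothetical centralized formants, as might occur with vowel reduction.
reduced = {"i": (400, 1900), "a": (650, 1250), "u": (450, 1000)}

area_typical = triangle_area(*typical.values())
area_reduced = triangle_area(*reduced.values())

# A ratio below 1.0 indicates a smaller, more centralized vowel space.
ratio = area_reduced / area_typical
print(f"typical: {area_typical:.0f}, reduced: {area_reduced:.0f}, ratio: {ratio:.2f}")
```

In practice, formants would be estimated automatically from recorded speech; the point here is only that vowel reduction can be captured by a single geometric quantity suitable for automated screening.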
Automated voice-based diagnosis is attracting growing interest in many healthcare-related areas. Given sufficiently robust analysis, the efficiency of such a system would be unrivaled—a phone call to a recording service could suffice to make relatively inexpensive automated diagnosis accessible to the general public. Overall, the prospects of automatic voice-based diagnosis are exciting, although we must stay alert to its potential ethical implications.
Jian Pei, Simon Fraser University
AlphaGo recently beat a human master at Go, a highly sophisticated board game. But there are still many ways in which computers can't match our intelligence. Human and computational intelligence can, however, complement each other, and by combining them smartly we can achieve more than by applying either alone. For example, through the reCAPTCHA mechanism, a smart algorithm and many people together transcribed more than 440 million words with nearly 100 percent accuracy. The key to such feats is crowdsourcing, a fast-growing research and development area.
The September 2016 issue of IEEE Transactions on Knowledge and Data Engineering will feature two survey articles by leading experts in this exciting research frontier: “A Survey of General-Purpose Crowdsourcing Techniques” (A.I. Chittilappilly et al.) and “Crowdsourced Data Management: A Survey” (G. Li et al.). Though written simultaneously and independently, the two articles complement each other in many respects. Both individually and together, they present comprehensive, concrete, insightful, and timely reviews of crowdsourcing.
Anand Chittilappilly and his colleagues’ article takes a general view of crowdsourcing systems and techniques. Starting with a crowdsourcing framework, it reviews popular crowdsourcing platforms, including those that are standalone, metaplatforms, general purpose, and specialized. The authors then discuss in depth three major aspects of crowdsourcing: incentive design, task decomposition and assignment, and quality control. They use a real-world scenario to work through a series of technical issues, and conclude with important future research directions.
Guoliang Li and his colleagues’ article is data management–oriented and focuses on three major crowdsourcing properties: quality control, cost control, and latency control. After a brief introduction, the authors provide a detailed review of crowdsourced operators including selection, collection, join and entity resolution, ranking and sorting, aggregation, categorization, skyline analysis, planning, schema matching, mining, and spatial crowdsourcing. They also discuss crowdsourced optimization and systems and identify major research challenges.
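A recurring quality-control strategy in this literature is redundant task assignment with majority voting: each task is given to several workers, and the most frequent answer wins. The sketch below is purely illustrative (it is not taken from either survey), with hypothetical task names and worker responses.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer among a task's worker responses."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical worker responses for an entity-resolution task:
# "Do these two records refer to the same product?"
task_answers = {
    "task-1": ["yes", "yes", "no"],
    "task-2": ["no", "no", "no"],
    "task-3": ["yes", "no", "yes"],
}

# Aggregate each task's redundant answers into a single result.
results = {task: majority_vote(ans) for task, ans in task_answers.items()}
print(results)  # {'task-1': 'yes', 'task-2': 'no', 'task-3': 'yes'}
```

More sophisticated schemes discussed in the surveys weight votes by estimated worker reliability, trading extra cost and latency for higher answer quality.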
These articles will appeal both to advanced researchers wishing to catch up on the latest crowdsourcing developments and to general readers interested in learning the essentials of this fast-growing field. For investigators familiar with the results surveyed, the articles also provide a valuable opportunity to reexamine critical principles and research challenges.