IEEE/IBM Watson Student Showcase

Greg Byrd, North Carolina State University

Pages: 102–104

Abstract—Using cognitive computing, student teams created original apps that process unstructured text and natural language. The Web extra features a video of one of the winning projects, Miface, which uses crowdsourcing to build an extensive database of semantically tagged facial expressions.

Keywords—IBM; Watson; education; mobile applications; social media; crowdsourcing; human-computer interaction

IBM Watson uses natural language processing and machine learning to analyze and extract insights from vast volumes of unstructured text and data. Watson—probably best known as the computer that in 2011 defeated two former Jeopardy! champions—now has its own business unit focused on developing healthcare, business, interactive-toy, and other applications. The platform offers 25 APIs and services in four key areas: language, speech, vision, and data insights.

In 2015, IEEE and IBM launched the Watson Student Showcase. Small teams of undergraduate and graduate students were provided access to Watson and other services through the IBM Bluemix cloud computing platform, and were challenged to create a cognitive app. Submissions were judged on originality, feasibility, usefulness, and creativity.

This month's column highlights the five winning projects. Each winning team was awarded a cash prize of $2,000.

DocBot: Patient Snapshot

Andrew Ninh (Arizona State University), Tyler Dao (California State University, Long Beach), and Harrison Nguyen (University of California, Davis) developed DocBot, a mobile app that summarizes electronic health record (EHR) information to streamline medical appointments.

EHRs are complex and detailed, and sifting through such records often slows down physicians during appointments or other patient interactions. To make medical appointments more efficient, DocBot gives physicians a patient “snapshot” with an intuitive interface. The app's organization and mobility features are key because current EHR information is disparate and difficult to access. Doctors often have to execute dozens of clicks to find the data they need.

DocBot determines the type of upcoming patient encounter and then presents a single mobile interface with the patient snapshot. Watson's cognitive processing extracts and organizes the most relevant information for the situation. For example, a cardiologist will be interested in different data than an orthopedic surgeon. The resulting information is presented to the physician in five parts: details of the upcoming appointment, an organized problem list, active and changed medications, relevant plans for the problem list, and any important past notes. Using Watson's natural language processing, the program parses clinical notes in advance to categorize problems as active, chronic, or unresolved. Technical challenges addressed by this project include autosummarization and assurance of data veracity.
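The categorization step can be sketched as a simple grouping pass over parsed problem-list entries. The keyword cues below are hypothetical stand-ins for the output of Watson's natural language processing, not the team's actual implementation:

```python
# Illustrative sketch: grouping parsed problem-list entries into the
# snapshot categories DocBot uses (active, chronic, unresolved). The
# keyword cues are invented stand-ins for Watson's NLP output.

CHRONIC_CUES = ("chronic", "ongoing", "long-standing")
UNRESOLVED_CUES = ("follow up", "pending", "awaiting")

def categorize_problem(note: str) -> str:
    text = note.lower()
    if any(cue in text for cue in CHRONIC_CUES):
        return "chronic"
    if any(cue in text for cue in UNRESOLVED_CUES):
        return "unresolved"
    return "active"

def build_snapshot(problem_notes: list[str]) -> dict[str, list[str]]:
    """Group each note under its category for the one-screen snapshot."""
    snapshot: dict[str, list[str]] = {}
    for note in problem_notes:
        snapshot.setdefault(categorize_problem(note), []).append(note)
    return snapshot

notes = [
    "Hypertension, ongoing, well controlled on lisinopril",
    "Chest pain, awaiting stress test results",
    "Acute sinusitis, started amoxicillin today",
]
snapshot = build_snapshot(notes)
```

In the real app, Watson's cognitive processing replaces the keyword rules, and the grouped output feeds the five-part snapshot described above.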

The team is currently working with an allergist/immunologist to develop a new DocBot feature that will incorporate patient-provided information from preclinical forms.

Wordinator

Have you ever struggled to find the right word to express your thoughts? Rajesh Shashi Kumar (PES University), Nihal V. Nayak (MS Ramaiah Institute of Technology), and Sai Charan (PES University) developed the Wordinator, an app that acts as a sort of reverse dictionary for such situations. Users enter a definition (a phrase), and the app finds a matching word. This isn't as easy as it sounds. Rather than matching an exact definition with a word, Wordinator uses Watson's natural language capabilities to compare an informal phrase to a more formal definition in a database.
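The core lookup can be sketched as a similarity search over a definition database. Here, simple token overlap (Jaccard similarity) stands in for the semantic comparison the app delegates to Watson, and the tiny dictionary is invented for illustration:

```python
# Illustrative sketch of Wordinator's reverse-dictionary lookup: rank
# formal definitions against an informal user phrase and return the
# best-matching word. Token overlap is a crude stand-in for Watson's
# semantic comparison.

DEFINITIONS = {  # tiny invented dictionary for illustration
    "ephemeral": "lasting for a very short time",
    "gregarious": "fond of company and sociable",
    "laconic": "using very few words",
}

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def reverse_lookup(phrase: str) -> str:
    query = tokens(phrase)
    def jaccard(item: tuple[str, str]) -> float:
        definition_tokens = tokens(item[1])
        return len(query & definition_tokens) / len(query | definition_tokens)
    return max(DEFINITIONS.items(), key=jaccard)[0]
```

For example, `reverse_lookup("a thing lasting a short time")` returns `"ephemeral"`.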

The app also supports vocabulary building. In addition to single words, the system can return a collection of related words. Thus, Wordinator can improve language learners’ expressiveness and fluency by introducing them to new words and reinforcing connections between related words. This feature could also help students preparing for tests that emphasize verbal skills, such as the GRE or TOEFL.

Miface

To enrich human–computer interaction, lifelike robots and avatars must be able to both understand and generate the nonverbal cues—such as facial expressions—that are so important to human communication. Although computer animation programs can generate numerous facial expressions, human interpretation is needed to assign meaning to these expressions.

To address this challenge, Crystal Butler, Stephanie Michalowicz, and Hansi Mou (New York University) developed Miface, a Web app that aims to build a large database of semantically tagged facial expressions. They combined crowdsourcing with natural language processing to establish consistent labels for a large set of computer-generated faces.

In Miface, users presented with a computer-generated facial expression are asked to enter a word that best describes the emotion being conveyed. That word is then fed into Watson's tone analyzer module, which responds with a set of synonyms that users can choose from to refine their analysis (see Figure 1). Having users select a label from a set of terms that's more constrained—but still preserves the tone and meaning of the users’ responses—allows Miface to develop more consistent expression classifications.
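The two-step labeling flow can be sketched as follows. The synonym table is a hypothetical stand-in for the synonym set Watson's tone analyzer returns at runtime:

```python
# Illustrative sketch of Miface's labeling flow: a user's free-text word
# yields a constrained candidate set, and only a choice from that set is
# recorded. SYNONYMS is an invented stand-in for Watson's tone analyzer.

SYNONYMS = {
    "happy": ["joyful", "content", "pleased"],
    "mad": ["angry", "irritated", "furious"],
}

def candidate_labels(user_word: str) -> list[str]:
    word = user_word.lower()
    return [word] + SYNONYMS.get(word, [])

def record_label(user_word: str, chosen: str, tally: dict[str, int]) -> bool:
    """Store the choice only if it came from the constrained set."""
    if chosen not in candidate_labels(user_word):
        return False
    tally[chosen] = tally.get(chosen, 0) + 1
    return True

tally: dict[str, int] = {}
record_label("mad", "angry", tally)   # accepted: "angry" is a candidate
record_label("mad", "serene", tally)  # rejected: not in the candidate set
```

Accepting only constrained candidates is what keeps the crowd's labels convergent enough to build consistent expression classifications.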


Figure 1. The Miface app asks users to enter a word that most closely matches a computer-generated facial expression and then uses Watson's tone analyzer module to suggest other words to refine users’ answers.

Watson's real-time analysis replaces the offline processing used in earlier projects, creating a more interactive user experience. The team plans to incorporate gaming techniques to engage volunteer participants. By providing feedback in the form of personalized achievement history, leaderboards, and level-ups, the team hopes to entice science enthusiasts worldwide to help build a broad lexicon of expression-to-muscle-movement mappings.

Telephony

Rather than using Watson's language capabilities to solve a particular problem, Ryan Blanchard (University of Florida), Wilson Ding (Texas A&M University), and Joseph Distler (University of Texas) set out to understand and demonstrate the uncertainty introduced by processing and translation. Their Telephony app is based on the game “Telephone,” in which a secret message is whispered from person to person in a circle; the message that returns to its originator is often quite different from the original.

In Telephony, the circle of people is replaced by a circle of language processing modules. At each step, the message is converted from one format to another, with the module attempting to keep the message's meaning unchanged across transformations. Specific formats include English speech, English text, Spanish speech, Spanish text, French text, Portuguese text, and Arabic text. Stringing transformations together illustrates the problem of maintaining the message's content and spirit through several processing iterations. The students’ experiment showed that the odds of the final message being the same as the original were quite low when using nontrivial phrases or sentences. Even single words were sometimes transformed into something completely different.
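The pipeline's structure can be sketched as a chain of converters applied in sequence, with the round-trip result compared against the original. The toy word tables below simulate translation drift; the real app chains Watson's speech and translation services:

```python
# Illustrative sketch of the Telephony pipeline: run a message through a
# chain of format converters and compare the round trip with the
# original. The toy tables simulate the lossiness of real translation.

EN_TO_ES = {"the": "el", "cat": "gato", "sleeps": "duerme"}
ES_TO_EN = {"el": "the", "gato": "cat", "duerme": "rests"}  # lossy mapping

def translate(text: str, table: dict[str, str]) -> str:
    return " ".join(table.get(word, word) for word in text.split())

def round_trip(message: str, stages) -> str:
    for stage in stages:
        message = stage(message)
    return message

stages = [
    lambda m: translate(m, EN_TO_ES),  # English text -> Spanish text
    lambda m: translate(m, ES_TO_EN),  # Spanish text -> English text
]
original = "the cat sleeps"
final = round_trip(original, stages)  # "the cat rests" -- meaning drifted
```

Even this two-stage toy chain alters the message; the students observed the same effect, amplified, across longer chains of speech and translation modules.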

The team's goal wasn't to undermine Watson's effectiveness. Rather, they sought to understand the boundaries of Watson's approach to cognitive computing. They hope to expand the project to achieve insight into language transformations across various domains and possibly use more of Watson's machine-learning capabilities to further increase the accuracy of its language transformations.

Stack Analyze

The Internet is home to many thriving user communities whose moderators must manually review thousands of posts. To reduce this burden, Sagar Gubbi and Srivatsa Bhargava (Indian Institute of Science) created Stack Analyze.

They focused on Stack Overflow, a collaboratively edited question-and-answer user community for programmers. Stack Overflow only seeks programming questions with answers based on facts, references, or specific expertise. It doesn't allow questions about career advice or those eliciting opinions, such as, “Is C++ better than Java?” Moderators close down discussions with such questions, which are often introduced by new members unaware of the rules. The Stack Analyze app uses Watson to determine whether a user question is acceptable, providing helpful feedback to the user if it isn't.

To use Stack Analyze, users install an applet as a bookmark in their Web browser. Before submitting a question to Stack Overflow, users simply click on this “bookmarklet.” The question is then analyzed and feedback is printed in the browser below the question.

The team trained Watson's Natural Language Classifier service with 800 questions from the Stack Overflow dataset. Questions were classified as either acceptable (open) or unacceptable (not a question, off topic, not constructive, or too localized). Using this training set, the app achieved 72 percent accuracy for acceptable versus unacceptable questions. Identifying multiple types of unacceptable questions provides specific feedback to users on how to change their questions to have a better chance of passing moderation.
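The classification step can be sketched with a minimal bag-of-words Naive Bayes classifier. The four training questions below are an invented stand-in for the 800-question Stack Overflow set the team used to train Watson's Natural Language Classifier:

```python
# Illustrative sketch of Stack Analyze's classification step: a tiny
# bag-of-words Naive Bayes with add-one smoothing, trained on a toy
# sample in place of the real 800-question dataset.
from collections import Counter, defaultdict
import math

TRAIN = [
    ("how do i reverse a list in python", "open"),
    ("why does this segfault when i free the pointer", "open"),
    ("is c++ better than java", "not constructive"),
    ("which language should i learn for a career", "off topic"),
]

def train(data):
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in data:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocab = len({w for counts in word_counts.values() for w in counts})
    def log_prob(label):
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + vocab))
        return score
    return max(label_counts, key=log_prob)

word_counts, label_counts = train(TRAIN)
```

For example, `classify("is rust better than go", word_counts, label_counts)` lands on "not constructive", which is the kind of specific label that lets the app tell users how to rework a question before moderation.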

The Watson Showcase offered a perfect venue for students to combine their knowledge and programming skills with cutting-edge computing. These teams addressed challenging problems and created useful tools that impact their fellow students and society. Congratulations to all the winning teams—I salute your creativity, energy, and ingenuity.

Greg Byrd is associate head of the Department of Electrical and Computer Engineering at North Carolina State University. Contact him at

Submit Your Project

We want to hear about interesting student-led design projects in computer science and engineering. If you’d like to see your project featured in this column, complete the submission form at
