How the Blind Point a Smartphone at Everyday Control Panels—and Hear Prompts on Which of Those Microwave Buttons to Push. They Can Even Order Up Braille Labels.

By Lori Cameron
April 19, 2019
Push-button interfaces are everywhere—microwaves, toasters, coffee makers, thermostats, printers, copiers, checkout terminals, kiosks, and remote controls. While they afford most of us great convenience, they are largely inaccessible to people who are visually impaired. Two new technologies aim to change that—VizLens and Facade—say researchers Anhong Guo and Jeffrey P. Bigham of Carnegie Mellon University in their study "Making Everyday Interfaces Accessible: Tactile Overlays by and for Blind People."

"Making a physical environment accessible to blind people generally requires sighted assistance. VizLens and Facade put blind users at the center of a crowdsourced, computer-vision-based workflow that lets them make the environment accessible on their own terms," write Guo and Bigham.

Challenges faced by the visually impaired in using digital interfaces

People who are blind or visually impaired have a tough enough time as it is navigating their environment. When it comes to digital interfaces, blind users face several unique challenges:
  • Flat digital touchpads have replaced physical buttons, which blind users could previously distinguish with their fingers.
  • Blind people must rely on a sighted assistant to identify button functions and apply Braille labels to home appliances.
  • Because blind people cannot remember all the abbreviations and functions of complex interfaces, they might choose to label only a few functions, limiting their access.
  • If Braille labels wear out because of frequent use, which happens a lot with kitchen appliances, blind users lose access to the functions and need help relabeling buttons again.
In addition to identifying buttons, people who are visually impaired need a way to easily reproduce and attach Braille labels to a digital interface. That's where VizLens and Facade come in.

How VizLens helps people who are blind

To begin, the blind user takes a picture of the digital interface. How?
VizLens users take a picture of an interface they would like to use, such as that of a microwave oven. This image is interpreted quickly by crowd workers in parallel. The system then uses computer vision to give instantaneous interactive feedback and guidance on using the interface through a mobile (left) or wearable device (right).
When the user holds the smartphone up to the interface, the app can tell whether the interface is partially out of frame by detecting whether its corners are inside the camera frame. If they are not, the app says something like "Move phone to the right." If it detects no interface at all, it says, "No object."

Once an image is captured, the user sends it to the "crowd" to view and label all of the parts of the interface. For these apps, the developers use Amazon Mechanical Turk, a crowdsourcing platform where tens of thousands of workers are always online completing all kinds of human intelligence tasks. Within a few minutes, crowd workers rate the image quality (whether it is blurry or partially cropped), mark the layout of the interface, note its buttons and other controls, and describe each button (for example, "baked potato" or "start/pause"). These results are verified by majority vote and stored on the app's server for retrieval.

Later, when blind users want to use the interface, they open the VizLens mobile app, point the phone toward the interface, and hover a finger over the buttons.
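The framing check works roughly like this: given the detected corner points of the interface and the camera frame size, decide which direction to prompt the user to move. The sketch below is a hypothetical reconstruction for illustration, not the VizLens implementation; the function name and coordinate conventions are assumptions.

```python
def framing_prompt(corners, frame_w, frame_h):
    """Suggest a phone movement so all interface corners fit in frame.

    corners: list of (x, y) detected corner points in frame pixels,
             or an empty list if no interface was detected at all.
    Returns a spoken prompt string, or None if the framing is fine.
    """
    if not corners:
        return "No object"
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    # If part of the interface spills past a frame edge, ask the user
    # to move the phone toward that edge so the whole panel fits.
    if min(xs) < 0:
        return "Move phone to the left"
    if max(xs) > frame_w:
        return "Move phone to the right"
    if min(ys) < 0:
        return "Move phone up"
    if max(ys) > frame_h:
        return "Move phone down"
    return None  # all corners visible; frame is usable
```

A prompt like "Move phone to the right" would then be spoken aloud through the phone's screen reader.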

VizLens gives users auditory feedback when they hover their finger over the buttons of different kinds of digital interfaces.

As the video above shows, computer vision matches the crowd-labeled reference image to the live camera image. VizLens then detects which button the user is pointing at and tells the user what it is.
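The final lookup step can be sketched simply: assuming the fingertip position in the live image has already been mapped into the reference image's coordinate system (for example, via a feature-matching homography), finding the button under the finger reduces to a point-in-rectangle test against the crowd-provided labels. The label format here is an illustrative assumption, not the app's real data schema.

```python
# Hypothetical crowd-label format: (label, x, y, width, height)
# in reference-image pixel coordinates.
LABELED_BUTTONS = [
    ("baked potato", 10, 10, 60, 30),
    ("start/pause", 10, 50, 60, 30),
]

def button_under_finger(fx, fy, buttons=LABELED_BUTTONS):
    """Return the button label to speak aloud, or None if no button is hit.

    (fx, fy) is the fingertip position already mapped into the
    reference image's coordinate system.
    """
    for label, x, y, w, h in buttons:
        if x <= fx < x + w and y <= fy < y + h:
            return label
    return None
```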

How Facade works

Facade uses the same image-capturing and crowdsourcing technology that VizLens does—but with one important difference: after gathering all the data about a digital interface, Facade lets users 3D print tactile overlays for appliance interfaces in just minutes.
Facade’s crowdsourced fabrication pipeline enables blind people to independently make physical interfaces accessible by producing a 3D-printed overlay of tactile buttons.
Tactile overlays can be 3D printed for any digital interface, with button labels rendered as customized words or abbreviations. The layout of the tactile overlay is stored in the Facade app so that, if users need to replace it, they can easily print and apply a new one.
Samples of printed tactile overlays and legends generated by Facade. Users can choose to print a legend for the abbreviations. If they do not have a 3D printer at home, overlays can be printed by a commercial mail-order service such as 3D Hubs using PolyFlex or SemiFlex materials.
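The "store the layout, reprint on demand" idea can be sketched as follows: the overlay is kept as plain data (button positions plus short labels), and a printable 3D model is regenerated from it whenever a replacement is needed. The layout schema and the OpenSCAD output below are illustrative assumptions, not Facade's actual file formats.

```python
import json

# Hypothetical stored overlay layout (millimeters).
layout_json = """
{
  "base": {"width": 120, "height": 80, "thickness": 1},
  "buttons": [
    {"label": "STR", "x": 10, "y": 10, "w": 18, "h": 10},
    {"label": "STP", "x": 10, "y": 30, "w": 18, "h": 10}
  ]
}
"""

def layout_to_openscad(layout):
    """Emit an OpenSCAD script: a thin base plate with raised button pads."""
    base = layout["base"]
    lines = [
        f'cube([{base["width"]}, {base["height"]}, {base["thickness"]}]);'
    ]
    for b in layout["buttons"]:
        # Each labeled button becomes a raised pad on top of the base plate,
        # so the user can find it by touch.
        lines.append(
            f'translate([{b["x"]}, {b["y"]}, {base["thickness"]}]) '
            f'cube([{b["w"]}, {b["h"]}, 1.5]);  // {b["label"]}'
        )
    return "\n".join(lines)

scad = layout_to_openscad(json.loads(layout_json))
```

Because the layout lives as data rather than as a one-off print file, a worn-out overlay can be regenerated and reprinted without repeating the crowdsourcing step.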

Static vs. dynamic interfaces

Static interfaces include the types of displays you see on microwave ovens, printers, and remote controls. The buttons don't change, so a single reference image is enough. Dynamic interfaces include the types of displays you see on kiosks or checkout terminals, where pushing a button takes the user to a new screen with a different layout.

VizLens and Facade promise to make it easier for the visually impaired to interact with both types of digital devices. These two apps "solve the long-standing challenge of making everyday interfaces accessible by tightly integrating the complementary strengths of the end user, the crowd, computer vision, and fabrication technology," Guo and Bigham say.

Related research on tech for the visually impaired in the Computer Society Digital Library:
  • A Public Transit Assistant for Blind Bus Passengers
  • Evaluating Responsive Web Design's Impact on Blind Users
  • An Instrumented Ankle-Foot Orthosis with Auditory Biofeedback for Blind and Sighted Individuals
  • Wearable Auditory Biofeedback Device for Blind and Sighted Individuals
  • From Tapping to Touching: Making Touch Screens Accessible to Blind Users
  • Understanding the Physical Safety, Security, and Privacy Concerns of People with Visual Impairments
  • The Impact of Low Vision on Touch-Gesture Articulation on Mobile Devices
  • Assistive Embedded Technologies

About Lori Cameron

Lori Cameron is a Senior Writer for the IEEE Computer Society and writes regular features for its digital platforms. Contact her at l.cameron@computer.org. Follow her on LinkedIn.