Imagine the challenge of navigating your way through a building in complete darkness, without a single visual cue to tell you your location or how to get where you need to be. Now imagine an app on your smartphone that acts as your speaking electronic guide, telling you exactly where you are and leading you directly to the elevator, restroom, exit, or your next meeting. An ongoing collaboration between scientists at Carnegie Mellon University and IBM Research has created just that in NavCog.
NavCog is the first app created using HULOP (Human-scale Localization Platform), an open-source project that aims to help developers create “indoor and outdoor seamless navigation” and “engaging experiences” for the visually impaired. Currently, the app is available for iPhone and iPad and is free on the App Store.
To determine the location of the user and pinpoint landmarks, NavCog relies on BLE (Bluetooth Low Energy) beacons that are installed in several areas throughout the CMU campus, both indoors and out. The NavCog app analyzes the signals from the beacons, then helps the user navigate through speech (typically delivered through earbuds) or by vibrating the smartphone.
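To give a feel for how beacon-based guidance works at the simplest level, here is a minimal Swift sketch using Apple's standard CoreLocation beacon-ranging API: it treats the strongest nearby beacon as a coarse position fix and speaks the landmark associated with it. This is not NavCog's actual code; the UUID, the landmark table, and the nearest-beacon heuristic are illustrative assumptions.

```swift
import CoreLocation
import AVFoundation

/// A minimal sketch of beacon-based guidance: range nearby BLE beacons,
/// treat the strongest one as a coarse position fix, and speak the landmark
/// associated with it. NavCog's real localization is more sophisticated;
/// the UUID and landmark table below are illustrative placeholders.
final class BeaconGuide: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let speech = AVSpeechSynthesizer()

    // Placeholder UUID; a real deployment uses the UUID programmed into its beacons.
    private let beaconUUID = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!

    // Hypothetical mapping from a beacon's (major, minor) IDs to a spoken cue.
    private let landmarks: [String: String] = [
        "1-1": "Elevator ahead on your left",
        "1-2": "Restroom on your right",
        "2-7": "Main exit straight ahead"
    ]

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(satisfying: CLBeaconIdentityConstraint(uuid: beaconUUID))
    }

    // Called roughly once per second with the beacons currently in range.
    func locationManager(_ manager: CLLocationManager,
                         didRange beacons: [CLBeacon],
                         satisfying constraint: CLBeaconIdentityConstraint) {
        // Ignore beacons whose signal strength could not be measured (rssi == 0),
        // then treat the strongest remaining signal as the nearest landmark.
        guard let nearest = beacons.filter({ $0.rssi != 0 }).max(by: { $0.rssi < $1.rssi }),
              let landmark = landmarks["\(nearest.major)-\(nearest.minor)"] else { return }

        // Deliver the cue as speech (typically heard through earbuds) rather than on screen.
        speech.speak(AVSpeechUtterance(string: landmark))
    }
}
```

A real guide like NavCog has to do more than this, of course: debounce repeated announcements, combine readings from several beacons to estimate position between them, and compute turn-by-turn directions rather than one-off cues.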
Chieko Asakawa, an acclaimed computer scientist, IBM Fellow, and visiting faculty at CMU, is one of the researchers working on the project. Blind since the age of 14, she says that while the visually impaired have gained independence in the digital world, “we are still challenged in the real world.”
Her goal is to accelerate the advancement of cognitive assistance research by giving developers tools and opportunities to create accessibility applications, enhancing independence for the visually impaired.
Kris Kitani, PhD, a systems scientist in the Computer Vision Group at CMU's Robotics Institute and a leader of the NavCog project, says there's much more capability to come. He and his fellow researchers are working on localizing a person without the Bluetooth beacons, which currently need to be installed every 15 to 30 feet.
“What we’d like is to be able to use the camera to take video or pictures as the person’s walking and use images to figure out where the person is,” Kitani says.
Object and facial recognition, which are demonstrated in the NavCog video, are currently in development. “The ultimate goal is to go beyond direction-giving.
“We want to be able to tell a blind person more things about the environment they’re walking in,” Kitani says. Another person passing nearby, a restroom, a water fountain: these are all visual cues that sighted people rely on constantly, but they’re not as obvious to a blind person. “If we can get that information in a smart way to a blind user, I think that would be great using computer vision.”
Bluetooth beacons are already used in a variety of settings, from retail stores to stadiums like PNC Park, where they work with the MLB.com Ballpark app. Although NavCog currently works only on the CMU campus, Asakawa says her team is actively seeking collaborators to deploy it elsewhere.
“We would love to have collaborators like museums, orchestra halls, PNC Park, stores,” she says. She would also like visually impaired participants to visit CMU and try NavCog for themselves.
“We need more blind people to try it. Once they do, they’ll see how amazing it really is.”