‘Smart Glasses’ Developed That Can Help Blind People Recognize Objects

Researchers have developed a smart glasses system that can help blind people or individuals with poor vision to recognize and reach objects in their environment.

The system, described in a study published in the journal PLOS ONE, involves a novel “acoustic touch” feature that is inspired by human echolocation.

Echolocation is the ability to detect objects in the environment by creating sounds that bounce off those objects, helping to pinpoint their location in a given space as well as certain characteristics about them.

Certain animals—most notably bats and toothed whales—rely on echolocation to navigate their environment. But humans are also capable of learning to echolocate, with most cases involving individuals who are blind.

Wearable smart glasses are an emerging technology that’s gaining popularity in the assistive technologies industry. Assistive technology is any item, piece of equipment, software program or product system that is designed to increase, maintain, or improve the functional capabilities of people with disabilities.

[Image caption] A blind participant uses the acoustic touch system. Lil Deverell/Motion Platform and Mixed Reality Lab, University of Technology Sydney, CC-BY 4.0 https://creativecommons.org/licenses/by/4.0/

One central area within the field of assistive technologies is developing systems for people who are blind or have poor vision. Blindness is estimated to affect approximately 39 million people worldwide, with an additional 246 million people living with low vision.

Assistive smart glasses designed to help people with poor vision typically work by translating the wearer’s surroundings into computer-synthesized speech. For example, such a system may announce the name of a given object and its direction in the environment. These systems may also offer features such as voice-controlled internet searches or booking services.

Such smart glasses have many important benefits, such as being hands-free, inconspicuous in appearance, and multifunctional.

“Past works have found success in using auditory sensory augmentation to provide users with accurate spatial information of their surrounding environment with very little training,” the authors of the study from the University of Technology Sydney and the University of Sydney in Australia wrote.

In the latest study, the scientists explore a novel technique known as “acoustic touch” that can assist blind people or those with poor vision in finding objects.

Unlike traditional systems, the smart glasses described in the study play distinct sounds known as “auditory icons” when an object enters the device’s field of view, conveying the object’s identity and location to the user.

Acoustic touch “is the use of head movement and sound to explore unknown environments,” study author Howe Zhu with the University of Technology Sydney told Newsweek. “We believe that the inclusion of head movement and the feedback of sound can provide a sense of agency when exploring and improve the user’s ability to localize specific objects.”

In the study, the researchers equipped a pair of smart glasses with a specially developed audio device to study the efficacy and usability of using acoustic touch to search for and reach items. The glasses employ cameras, and the system is connected to a smartphone running a deep-learning model used for object recognition purposes.

The scientists propose that the acoustic touch system has a number of advantages over more traditional assistive smart glasses. As well as the acoustic device being intuitive to use and easy to integrate into conventional smart glasses technology, they say the use of auditory icons may require less cognitive processing and be more accessible than computer-synthesized speech approaches.

“Speech is useful for conveying clear semantic information. However, speech requires a higher degree of cognitive processing and is more complex,” involving a longer time to generate and a longer sound file, the researchers wrote.

“In contrast, with adequate training, auditory icons could offer a more intuitive approach to presenting spatial information.”

Auditory icons are essentially short, non-speech audio clips based on real-world sounds that convey information about an object, event or situation. For example, an auditory icon could be a telephone ring to represent a phone or the sound of a siren to represent an emergency.
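To make the idea concrete, here is a minimal sketch of how an acoustic-touch loop might decide which auditory icon to play. Everything here is illustrative rather than taken from the study: the object labels, sound-file names, and the 30-degree field-of-view width are all assumptions, not details of the researchers’ actual system.

```python
# Hypothetical mapping from recognized object labels to short
# "auditory icon" clips (file names are placeholders).
AUDITORY_ICONS = {
    "phone": "telephone_ring.wav",
    "book": "page_flip.wav",
    "cup": "ceramic_clink.wav",
    "bottle": "liquid_slosh.wav",
}

FIELD_OF_VIEW_DEG = 30.0  # assumed width of the head-centred auditory "window"

def icon_for_object(label, object_bearing_deg, head_bearing_deg):
    """Return the icon to play if the object lies within the
    head-centred field of view, else None."""
    # Smallest signed angle between head direction and object bearing,
    # normalized into the range (-180, 180].
    offset = (object_bearing_deg - head_bearing_deg + 180.0) % 360.0 - 180.0
    if abs(offset) <= FIELD_OF_VIEW_DEG / 2.0:
        return AUDITORY_ICONS.get(label)
    return None

# Example: the wearer faces 90 degrees; a cup at 100 degrees falls
# inside the 30-degree window, while a book at 180 degrees does not.
print(icon_for_object("cup", 100.0, 90.0))   # ceramic_clink.wav
print(icon_for_object("book", 180.0, 90.0))  # None
```

In a real device the labels and bearings would come from the smartphone’s object-recognition model and the glasses’ head tracking, and the selected clip would be rendered spatially rather than simply printed.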

In the study, the researchers evaluated 14 participants—seven who were blind or had poor vision and seven who were blindfolded to serve as a control group—as they used the acoustic touch smart glasses in two primary tasks.

In both tasks, participants were presented with two or three household items—such as a book, bottle, bowl or cup—on a table and asked to use the smart glasses to search and reach for a single target item.

The team found that the wearable device could effectively aid the recognition and reaching of an object, and concluded that it did not significantly increase the user’s cognitive workload.

“These promising results suggest that acoustic touch can provide a wearable and effective method of sensory augmentation,” the authors wrote in the study.

The authors note that their acoustic touch system has some important limitations at present that hinder the usability of the device.

In its current state, the system cannot be relied upon in a real-world situation unless the robustness of the object recognition is vastly improved, according to the researchers.

Another limitation identified in the study was the system’s high computing demands on the smartphone, which, despite being a high-powered model, struggled with the real-time workload and suffered significant heating and power-consumption problems. Thus, improvements need to be made to the system before it can be incorporated into future smart glasses.

“Future iterations of acoustic touch paradigms should improve the object recognition’s reliability, refine the system’s usability, and address the device’s technical limitations concerning power consumption and heating,” the authors wrote.