EchoSpeech, sonar goggles that track facial movements for silent communication

EchoSpeech builds a sonar system for reading mouth movements into an ordinary pair of glasses, a smart and very promising idea.

A Cornell University researcher has developed sonar glasses that can “hear” you even when you’re not speaking. The glasses use small speakers and microphones to read the words your lips form silently, whether that’s pausing a song, skipping to the next track, entering a password without touching your phone, or working with templates and drawing on a computer without a keyboard.


Ruidong Zhang, who developed the device, built on a similar earlier project that used wireless earbuds, as well as on earlier models that relied on cameras. Using glasses removes the need to face a camera or wear something in your ear. “Most silent-speech recognition technology is limited to a set of predefined commands and requires the user to face or wear a camera, which is neither practical nor feasible,” explained Cheng Zhang, an assistant professor at Cornell University. “We’re moving sonar onto the body.”

The researchers explain that the system needs only a few minutes of training, such as having the user read a series of numbers aloud, to learn that user’s speech patterns. After that, the glasses are ready: they send sound waves across the face and receive the reflections, detecting mouth movements, while a deep learning algorithm analyzes the echo profiles in real time “with about 95% accuracy”.
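The idea of turning emitted sound and its reflections into an “echo profile” can be sketched in a few lines. The following is a minimal illustration only, not the actual EchoSpeech implementation: the sample rate, chirp frequencies, normalization, and the simulated echo are all assumptions made for the demo. A real system would feed a stream of such profiles into a deep learning classifier.

```python
import numpy as np

FS = 48_000    # assumed sample rate (Hz)
CHIRP_MS = 12  # assumed chirp duration (ms)

def make_chirp(f0=18_000.0, f1=21_000.0, fs=FS, ms=CHIRP_MS):
    """A near-inaudible linear chirp, standing in for the probe signal a
    tiny speaker on the glasses might emit."""
    T = ms / 1000.0
    t = np.arange(int(fs * T)) / fs
    # instantaneous frequency sweeps linearly from f0 to f1
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T)))

def echo_profile(recorded, chirp):
    """Cross-correlate the microphone recording with the transmitted chirp.
    Peaks in the result correspond to reflections arriving at different
    delays (i.e., from surfaces at different distances); how these peaks
    shift over time traces the movement of the mouth and skin."""
    corr = np.correlate(recorded, chirp, mode="valid")
    # normalize so the direct-path (zero-delay) peak is ~1.0
    return np.abs(corr) / np.dot(chirp, chirp)

# Simulate one captured frame: the direct chirp plus one delayed,
# attenuated copy standing in for an echo off the face.
chirp = make_chirp()
delay = 40  # echo delay in samples (assumed)
frame = np.zeros(len(chirp) + 200)
frame[:len(chirp)] += chirp
frame[delay:delay + len(chirp)] += 0.5 * chirp  # the "reflection"

profile = echo_profile(frame, chirp)
# Skip the direct-path peak at offset 0 and report the strongest echo.
peak = int(np.argmax(profile[30:])) + 30
print("strongest reflection at sample offset:", peak)
```

In a real pipeline, stacking consecutive profiles into a 2-D image over time is what gives a convolutional model something to classify; this sketch only shows the single-frame correlation step.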

The system offloads data processing wirelessly to your smartphone, which allows the glasses themselves to stay very discreet. The current version offers about 10 hours of battery life for acoustic sensing, and no data leaves your phone. “We are excited about this system because it is both powerful and privacy-respecting,” said Cheng Zhang. “It’s small, low-power and privacy-aware, and all of these qualities are very important for deploying a new technology in the real world.”

EchoSpeech: a smart and very promising idea

Privacy is very important for real-world use. Ruidong Zhang suggests, for example, using the glasses to control music hands-free and eyes-free in a library, or to dictate a message at a loud concert where other input methods would fail. But perhaps the most compelling scenario is helping people with speech impairments: they could silently mouth words into a voice synthesizer, which would then speak them aloud so they can be heard.

If all goes well, the glasses could eventually be sold commercially. A team at Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab is exploring bringing the technology to market through a Cornell funding program. They are also interested in smart-eyewear applications that track movements of the face, eyes and even the upper body. “We believe glasses will become an important personal computing platform for understanding human activities in daily life,” said Cheng Zhang.
