- Engadget in April 2023: Researchers at Cornell built EchoSpeech, AI-infused sonar glasses that track lip movements for silent communication
- Also Engadget in November 2023: These AI-infused sonar-equipped glasses from (again) Cornell could pave the way for better VR upper-body tracking (paper on PoseSonic)
- March 2024: Cornell paper on GazeTrak, which uses sonar acoustics with AI to track eye movements from glasses
- Also in March 2024: Cornell paper on EyeEcho, which tracks other facial expressions, expanding on EchoSpeech
Now the same Cornell lab has come up with yet another development, though this time it's not a pair of glasses: EchoWrist, a wristband that uses sonar + AI for hand tracking.
(This work also traces back to a 2022 (or 2016?) paper on finger tracking for smartwatches using sonar (paper).)
Based on what I’ve read in this Hackerlist summary as well as Cornell’s press release, this is a smart, accessible, less power-hungry, and more privacy-friendly addition to the list of sound+AI-based tools coming out of Cornell for interacting with AR. The open question is how accurately the neural network can infer hand gestures from the echoes alone.
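For the curious: these systems are all variants of active acoustic sensing. Roughly, a tiny speaker emits an inaudible chirp, a microphone records the reflections, cross-correlating the recording with the chirp yields an "echo profile" whose peaks correspond to reflecting surfaces at different distances, and a neural network then reads sequences of those profiles. Here's a toy numpy sketch of just the echo-profile step; the sample rate, chirp band, and distances are illustrative guesses, not the papers' actual parameters:

```python
import numpy as np

FS = 50_000              # sample rate in Hz (illustrative, not from the papers)
CHIRP_LEN = 0.01         # 10 ms probe chirp
SPEED_OF_SOUND = 343.0   # m/s in air

def make_chirp(f0=15_000, f1=21_000):
    """Linear frequency-sweep (FMCW-style) chirp that the speaker emits."""
    t = np.arange(int(FS * CHIRP_LEN)) / FS
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * CHIRP_LEN))
    return np.sin(phase)

def echo_profile(recorded, chirp):
    """Cross-correlate the mic recording with the emitted chirp.
    Peaks in the result correspond to echoes at different delays."""
    return np.correlate(recorded, chirp, mode="valid")

def estimate_distance(recorded, chirp):
    """Distance to the strongest reflector, from the peak's delay."""
    profile = np.abs(echo_profile(recorded, chirp))
    delay_samples = int(np.argmax(profile))
    # Divide by 2: the sound travels out to the hand and back.
    return delay_samples / FS * SPEED_OF_SOUND / 2

# Simulate an attenuated echo bouncing off a surface ~5 cm away.
chirp = make_chirp()
delay = int(2 * 0.05 / SPEED_OF_SOUND * FS)   # round-trip delay in samples
recorded = np.zeros(len(chirp) + delay + 100)
recorded[delay:delay + len(chirp)] = 0.3 * chirp
print(round(estimate_distance(recorded, chirp), 3))  # → 0.048
```

In the real devices, it's differences between consecutive echo profiles (i.e., what moved) that get fed to the neural network, which is also why these sensors draw so little power compared with cameras.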
For comparison, Meta’s ongoing neural wristband project, which came with its acquisition of CTRL-labs in 2019, uses electromyography (EMG) and AI to read the electrical signals that travel through the wrist when muscles fire, not only tracking hand, finger and arm positioning, but even interpreting intended characters when typing on a bare surface.
There shouldn’t be much distance between EchoWrist, EchoSpeech, and using acoustics to detect, interpret, and anticipate muscle movements in the wrist (via phonomyography). If sonar+AI can be pushed further still, to read neural signals and interpret intended typed characters on a bare surface, then sign me up.
EDIT 4/8/24: Surprisingly, there is a way to use ultrasound acoustics to record neural activity.