As reported by [Wired]:
Researchers from China's Wuhan University of Technology have paired an ultrasound device with machine-learning techniques to build an AI that can detect people and their postures via echolocation, much like bats. It works by breaking the reflected sound into its component frequencies and analyzing them for patterns, matching particular frequency shifts to specific postures; a similar technique has been demonstrated using ambient Wi-Fi signals. While still in its infancy, the approach could provide yet another form of robotic sensory perception.
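The core idea described above, decomposing an echo into its frequency components and matching the resulting signature to a known posture, can be sketched roughly as follows. This is a minimal illustration, not the researchers' actual method: the sample rate, emitted tone, band width, and nearest-centroid matching are all assumptions made for the example.

```python
import numpy as np

FS = 48_000    # assumed sample rate (Hz)
TONE = 20_000  # assumed emitted ultrasound tone (Hz)

def spectral_features(echo, fs=FS):
    """Break the echo into its component frequencies (FFT) and
    return the normalized magnitude spectrum near the emitted tone,
    where posture-dependent frequency shifts would appear."""
    spectrum = np.abs(np.fft.rfft(echo * np.hanning(len(echo))))
    freqs = np.fft.rfftfreq(len(echo), 1 / fs)
    band = (freqs > TONE - 500) & (freqs < TONE + 500)
    feats = spectrum[band]
    return feats / (np.linalg.norm(feats) + 1e-12)

def classify(echo, centroids):
    """Match the echo's frequency signature to the closest
    known posture signature (nearest-centroid classifier)."""
    f = spectral_features(echo)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))
```

In practice the frequency signatures would come from labeled training recordings; here, synthetic echoes with slightly different frequency shifts stand in for them:

```python
t = np.arange(2048) / FS
standing = np.sin(2 * np.pi * 20_000 * t)  # unshifted echo
walking = np.sin(2 * np.pi * 20_050 * t)   # small Doppler-like shift
centroids = {"standing": spectral_features(standing),
             "walking": spectral_features(walking)}
print(classify(walking, centroids))  # → walking
```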