See-Touch-Predict: Active Exploration and Online Perception of Terrain Physics With Legged Robots

Published: 01 Jan 2025, Last Modified: 24 Jul 2025 · IEEE Robotics and Automation Letters, 2025 · CC BY-SA 4.0
Abstract: Assessing the physical properties of the environment with vision helps humans respond appropriately before entering risky areas. However, equipping robots with such a perceptual ability is challenging due to the lack of labeled data. To overcome this challenge, we present the Active Exploration and Online Perception (AEOP) framework, which enables legged robots to actively estimate and predict physical information about unknown terrains in real time. The framework contains a learning-based physical sensing policy that controls the robot to actively estimate terrain physics, such as the friction coefficient, and an incremental terrain classifier that performs temporally and spatially consistent terrain segmentation based on color images. A mapping module associates the physical properties and visual categories of different terrains, building a point cloud map labeled with physical properties at run-time. Extensive real-world experiments were conducted in which the proposed framework correctly distinguished a wide range of terrains, from slippery ($\mu =0.17$) to rough ($\mu =1.03$), and accurately predicted the friction coefficients of encountered terrains from vision, providing efficient perception of terrain physics for legged robots.
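The abstract's mapping module pairs each visual terrain category with physical estimates so that map points carry a predicted friction coefficient. A minimal sketch of that idea, assuming a running per-class mean of friction samples (the class `PhysicsMap` and all names here are illustrative, not from the paper):

```python
from collections import defaultdict

class PhysicsMap:
    """Toy sketch: associate visual terrain classes with friction estimates
    and label incoming map points with the class's current estimate.
    All names are hypothetical; the paper's actual module is not specified here."""

    def __init__(self):
        self._mu_sum = defaultdict(float)
        self._mu_count = defaultdict(int)
        # Each entry: (x, y, z, terrain_class, predicted_mu)
        self.points = []

    def add_friction_sample(self, terrain_class, mu):
        # Online (running-mean) update of the friction estimate per visual class,
        # fed by the robot's active physical sensing.
        self._mu_sum[terrain_class] += mu
        self._mu_count[terrain_class] += 1

    def predicted_mu(self, terrain_class):
        # Current friction prediction for a class; None if never sampled.
        n = self._mu_count[terrain_class]
        return self._mu_sum[terrain_class] / n if n else None

    def insert_point(self, xyz, terrain_class):
        # Points segmented into a class inherit that class's friction estimate,
        # yielding a point cloud map labeled with physical properties.
        x, y, z = xyz
        self.points.append((x, y, z, terrain_class, self.predicted_mu(terrain_class)))

m = PhysicsMap()
m.add_friction_sample("ice", 0.17)      # slippery terrain, as in the abstract
m.add_friction_sample("gravel", 1.03)   # rough terrain, as in the abstract
m.insert_point((1.0, 2.0, 0.0), "ice")
print(m.points[0])  # (1.0, 2.0, 0.0, 'ice', 0.17)
```

The key design point this illustrates is the decoupling: friction is measured only where the robot actively probes, but vision-based segmentation propagates those estimates to every point sharing the same terrain class.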