Abstract: Traversability is critical to a robot's mission success, especially in unstructured environments, where rough terrain can lead to collisions and tip-over. It is therefore crucial for autonomous robots to analyze traversability accurately and quickly. Most past studies rely on semantic or geometric information alone, and their results are often insufficiently accurate because robots misclassify terrain or neglect the other modality. Some researchers have fused semantic and geometric information, but their methods often require expensive sensors such as LiDAR and industrial cameras, which limits their applicability. Moreover, there are few robust methods for analyzing robot traversability over unstructured terrain. This paper proposes a novel and fast traversability analysis method. In contrast to prior work, the proposed method analyzes traversability based on semantic information, geometric information (e.g., slope and step height), and robot mobility. Moreover, it requires only a low-cost RGB-D camera such as the RealSense D435i rather than expensive sensors such as LiDAR and industrial cameras. We conduct multiple experiments to validate the proposed approach. The results show that the robot can analyze traversability accurately even when terrain is misclassified.