CRL: An Efficient Autonomous Exploration Framework for Large-Scale Environments with Contrastive-Driven Reinforcement Learning
Abstract: Autonomous exploration in large-scale environments is impeded by two critical challenges: suboptimal viewpoint selection caused by inadequate feature extraction, and computational costs that rise continuously as the environment expands. Existing methods struggle to tackle both challenges within a cohesive framework. In response, we present an efficient autonomous exploration framework based on contrastive-driven reinforcement learning. Inspired by human cognitive mechanisms that reinforce recognition of crucial information through contrast, we impose contrastive constraints on nodes of varying utility levels within high-dimensional feature spaces, decoupling their latent representations. This enables decision networks to explicitly capture key regional characteristics, improving the precision of optimal viewpoint selection. Moreover, to mitigate backtracking and redundant exploration, we design specialized training rules that enforce effective action constraints, further improving viewpoint selection. Additionally, we propose a novel graph rarefaction algorithm that reduces computational complexity while maintaining performance. Compared to state-of-the-art approaches, our method achieves 6.7% shorter path lengths and demonstrates robust generalization in robotic experiments across multiple real-world scenarios.
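To illustrate the kind of contrastive constraint the abstract describes, the sketch below shows a generic margin-based contrastive objective over node embeddings: embeddings of same-utility nodes are pulled together while embeddings of different-utility nodes are pushed at least a margin apart. This is a minimal, hypothetical example assuming Euclidean distances and a hinge penalty; the function name, inputs, and margin are illustrative and are not the paper's exact loss.

```python
import numpy as np

def contrastive_node_loss(anchors, positives, negatives, margin=1.0):
    """Hypothetical margin-based contrastive loss over node embeddings.

    anchors, positives: (N, D) arrays; each positive shares the anchor's
    utility level. negatives: (M, D) array of different-utility embeddings.
    Same-utility pairs are pulled together; different-utility pairs closer
    than `margin` are pushed apart.
    """
    # Distance between each anchor and its same-utility positive.
    pos_d = np.linalg.norm(anchors - positives, axis=1)
    # Pairwise distances between every anchor and every negative.
    neg_d = np.linalg.norm(anchors[:, None, :] - negatives[None, :, :], axis=2)
    # Hinge: only negatives that intrude inside the margin are penalized.
    hinge = np.maximum(0.0, margin - neg_d)
    return pos_d.mean() + hinge.mean()
```

When anchors coincide with their positives and all negatives lie well outside the margin, the loss is zero, which is the decoupled regime the abstract aims for.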