Underwater Robot Self-Localization Method Using Tightly Coupled Events, Images, Inertial, and Acoustic Fusion

Published: 01 Jan 2025, Last Modified: 03 Aug 2025 · IEEE Trans. Ind. Electron. 2025 · CC BY-SA 4.0
Abstract: Current underwater robot self-localization methods mainly rely on dead reckoning (DR) based on an inertial measurement unit (IMU) and a Doppler velocity log (DVL), or on vision-based localization with a standard camera. However, DR accumulates error as the traveled distance grows, and standard-camera vision struggles to localize accurately in low-light underwater environments, where standard images carry little information and only sparse features. To address these problems, a tightly coupled fusion method combining events, standard images, inertial, and acoustic measurements is proposed to achieve robust and accurate self-localization of underwater robots. First, high-frequency IMU data are fused to generate motion-compensated event frames, and feature points are extracted and tracked on event frames and standard frames separately. Then, a joint optimization cost function is constructed from the reprojection error terms of the event and standard frames, the IMU inertial error term, and the DVL velocity error term, avoiding explicit switching between event frames and standard frames. Finally, a multisensor self-localization system for underwater robots is developed, and a series of event motion-compensation and self-localization experiments are carried out in a real underwater environment. Experimental results show that the proposed method achieves better localization accuracy than fusion methods using a standard camera, IMU, and DVL.
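
To illustrate the motion-compensation step the abstract describes, below is a minimal Python sketch of accumulating events into a frame after warping each event to a reference time using the IMU's gyroscope reading. The function name, the pure-rotation small-angle model, and the event-count accumulation are assumptions made for illustration; the paper's actual compensation scheme may differ.

```python
import numpy as np

def motion_compensated_event_frame(events, omega, t_ref, K, shape):
    """Accumulate events into a frame after rotational motion compensation.

    events : (N, 3) array of (x, y, t) event tuples (polarity ignored here)
    omega  : (3,) mean angular velocity over the window from the IMU gyro, rad/s
    t_ref  : reference timestamp all events are warped to
    K      : (3, 3) camera intrinsic matrix
    shape  : (H, W) of the output frame
    """
    H, W = shape
    frame = np.zeros((H, W), dtype=np.float32)
    Kinv = np.linalg.inv(K)
    for x, y, t in events:
        dt = t_ref - t
        # Small-angle rotation R ~ I + [omega * dt]_x (first-order exponential map)
        wx, wy, wz = omega * dt
        R = np.array([[1.0, -wz,  wy],
                      [ wz, 1.0, -wx],
                      [-wy,  wx, 1.0]])
        # Warp the event pixel through the pure-rotation homography K R K^-1
        p = K @ (R @ (Kinv @ np.array([x, y, 1.0])))
        u = int(round(p[0] / p[2]))
        v = int(round(p[1] / p[2]))
        if 0 <= u < W and 0 <= v < H:
            frame[v, u] += 1.0  # event count at the compensated location
    return frame
```

A frame built this way is sharper than a naive accumulation when the robot rotates during the window, which is what makes feature extraction and tracking on event frames practical.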
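The joint optimization the abstract describes can be written schematically as follows. The notation (residual symbols, index sets, the robust kernel rho) is assumed for illustration and is not taken from the paper:

```latex
\min_{\mathcal{X}}\;
\sum_{k \in \mathcal{E}} \rho\!\left( \left\| \mathbf{r}_{ev}^{k}(\mathcal{X}) \right\|^{2}_{\Sigma_{ev}} \right)
+ \sum_{k \in \mathcal{F}} \rho\!\left( \left\| \mathbf{r}_{im}^{k}(\mathcal{X}) \right\|^{2}_{\Sigma_{im}} \right)
+ \sum_{j} \left\| \mathbf{r}_{imu}^{j}(\mathcal{X}) \right\|^{2}_{\Sigma_{imu}}
+ \sum_{j} \left\| \mathbf{r}_{dvl}^{j}(\mathcal{X}) \right\|^{2}_{\Sigma_{dvl}}
```

Here \(\mathcal{X}\) is the set of estimated states, \(\mathbf{r}_{ev}\) and \(\mathbf{r}_{im}\) are the reprojection residuals of event-frame and standard-frame features, \(\mathbf{r}_{imu}\) is the IMU inertial residual, and \(\mathbf{r}_{dvl}\) is the DVL velocity residual, each weighted by its covariance \(\Sigma\). Because all four terms enter one cost, the solver balances event and standard-image evidence continuously rather than switching explicitly between the two modalities.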