Abstract: We present a novel interval-based visual-inertial LiDAR SLAM (i-VIL SLAM) method that assumes only that sensor errors are bounded and propagates the error from the input sources to the estimated map and trajectory using interval analysis. The method allows us to restrict the solution set of the robot poses and the positions of the landmarks to the set that is consistent with the measurements. As long as the error bounds are not violated, the estimated set is guaranteed to contain the true solution. The accumulation of uncertainty is stabilized by anchoring poses derived from GNSS/INS data. Furthermore, for the first time, we compare confidence ellipses determined by a classical SLAM graph optimization approach with the interval estimates of the robot poses provided by our method. We show experimentally that the marginal covariances computed by classical SLAM graph optimization are overconfident and underestimate the uncertainty of the poses. While the 99.9 %-ellipsoids derived from the marginal covariances of the poses enclose less than 64 % of the ground truth in the worst case, our method provides interval bounds for the pose parameters that enclose the ground truth for more than 96 % of all frames.
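To illustrate the guarantee the abstract refers to, the following minimal sketch demonstrates interval-based error propagation in a 1-D dead-reckoning toy problem. It is not the authors' i-VIL SLAM implementation; all names, the error bound, and the scenario are hypothetical. The point is only that, when each measurement error stays within its assumed bound, the propagated interval is guaranteed to enclose the true position.

```python
# Illustrative sketch only (hypothetical names, not the paper's method):
# dead-reckoning with interval arithmetic. Each odometry increment is known
# only up to a bounded error; summing the resulting intervals yields a set
# that contains the true position as long as the error bounds hold.

from dataclasses import dataclass


@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # Interval addition: [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def contains(self, x: float) -> bool:
        return self.lo <= x <= self.hi


def propagate(increments, bound):
    """Replace each measured increment d by [d - bound, d + bound]
    and accumulate the intervals into a position enclosure."""
    pos = Interval(0.0, 0.0)
    for d in increments:
        pos = pos + Interval(d - bound, d + bound)
    return pos


if __name__ == "__main__":
    import random

    random.seed(0)
    bound = 0.05                    # assumed per-step error bound [m]
    true_steps = [1.0] * 20         # true motion: 20 steps of 1 m
    # Measurements corrupted by bounded (not Gaussian) noise
    measured = [s + random.uniform(-bound, bound) for s in true_steps]

    box = propagate(measured, bound)
    true_pos = sum(true_steps)
    print(f"position interval: [{box.lo:.3f}, {box.hi:.3f}]")
    print(f"true position {true_pos} enclosed: {box.contains(true_pos)}")
```

Because the true increment always lies inside the measurement interval when the bound holds, the enclosure property follows directly from interval addition; this is the same containment argument that the paper extends to full poses and landmark positions.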