Abstract: The importance of depth estimation has motivated recent efforts to improve it through Multi-Sensor Fusion (MSF). However, prevailing MSF methods lose accuracy and robustness when sensors degrade. While some forms of degradation, such as poor lighting and adverse weather, can be mitigated by collecting relevant training data, this data-driven remedy is ineffective against Out-of-Distribution (OOD) sensor degradations. In this paper, we propose Combinable and Separable Multi-Sensor Fusion (CSMSF), a novel approach designed to make depth estimation robust to multiple sensor degradations. CSMSF rests on four core principles: i) performance improves as the number of valid sensors increases; ii) a single valid sensor suffices to produce its own depth estimate; iii) accuracy is balanced against model complexity; and iv) sensor observation failures are diagnosed autonomously. Leveraging these properties, CSMSF identifies and rejects degraded sensors, autonomously selecting the valid ones for scene depth estimation. Experimental results demonstrate the superior robustness of CSMSF and its efficacy in handling sensor degradations across diverse environmental conditions.
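The abstract's selection-and-fusion idea, rejecting sensors diagnosed as degraded and combining the remainder, can be illustrated with a minimal sketch. This is not the paper's actual CSMSF architecture; the per-sensor validity scores, the rejection threshold, and the weighted-average fusion below are all illustrative assumptions.

```python
import numpy as np

def fuse_depth(depth_maps, validity_scores, threshold=0.5):
    """Combine per-sensor depth maps while rejecting degraded sensors.

    depth_maps: list of (H, W) depth arrays, one per sensor.
    validity_scores: per-sensor confidence in [0, 1] (assumed to come
        from some failure-diagnosis module, not specified here).
    Sensors scoring below `threshold` are rejected outright; the rest
    are averaged with weights proportional to their scores, so a single
    valid sensor still yields a usable depth estimate on its own.
    """
    depths = np.stack(depth_maps)                 # (S, H, W)
    scores = np.asarray(validity_scores, float)   # (S,)
    valid = scores >= threshold
    if not valid.any():
        raise ValueError("no valid sensor available")
    weights = scores * valid                      # zero out rejected sensors
    weights = weights / weights.sum()             # normalize over valid set
    return np.tensordot(weights, depths, axes=1)  # weighted mean, (H, W)
```

With one healthy sensor (score 0.9) and one degraded sensor (score 0.1), the degraded input is rejected and the output equals the healthy sensor's map alone, matching the principle that a single valid sensor suffices.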
External IDs: dblp:journals/pami/HuFOGGL25