Abstract: LiDAR-Radar fusion has been widely regarded as an effective strategy for enhancing sensor-level robustness in 3D perception under adverse weather. However, fusion alone remains fundamentally insufficient to address the feature-level domain shifts induced by diverse weather conditions, a critical yet often overlooked bottleneck in multimodal 3D object detection. In this work, we advocate a new perspective: all-weather 3D detection should be formulated as a lightweight capacity allocation problem, rather than as simply enlarging or duplicating models for each weather domain. To this end, we propose DA3D, a Domain-Aware Dynamic Adaptation framework that leverages LoRA as a domain-adaptive capacity controller for efficient and scalable feature modulation. In addition, we introduce a domain-aware rank adaptation strategy that dynamically reallocates LoRA capacity based on domain difficulty, allowing the model to focus its representational power where it matters most. Extensive experiments on the K-Radar benchmark show that DA3D consistently improves 3D detection across both radar-only and LiDAR-Radar fusion backbones, achieving +4.9% AP3D on RTNH, +3.8% on 3D-LRF, and +8.1% on L4DR at IoU=0.5. Notably, DA3D outperforms existing multi-weather modeling methods under the same parameter budget, offering a practical and scalable solution for robust all-weather 3D perception. The code is available at https://github.com/Dawns14/DA3D.
DOI: 10.1145/3746027.3755708