Abstract: Visual sensors on autonomous vehicles are vulnerable to adverse weather, which severely degrades the performance of semantic segmentation and threatens people's safety. To improve robustness, segmentation models therefore need extensive training on adverse weather data, which is difficult and expensive to acquire. To address this problem, researchers have proposed domain generalization methods that require no target domain data (such as adverse weather images) during training. However, most of them focus on the synthetic-to-real problem, which stems from the texture gap between real and virtual images. To address these challenges, we analyze the formation mechanism of adverse weather, extract two kinds of weather cues, and establish their relationship with adverse weather. On this basis, we propose CSCR, a domain randomization framework for simulating adverse weather. Specifically, CSCR comprises a Common Cue Randomization (CCR) module, which simulates adverse illumination styles, and a Specific Cue Randomization (SCR) module, which randomizes cues that occur only in specific adverse weather. We conduct extensive experiments on driving datasets, generalizing from fair weather to fog, night, rain, and snow. Compared with the source model, CSCR improves mIoU by more than 9 percentage points on average, even exceeding some domain adaptation methods while retaining performance on fair weather. The CSCR framework can be easily applied to existing segmentation models and significantly improves their generalization ability.
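The abstract does not give implementation details of the CCR and SCR modules, but the idea of randomizing a common illumination cue and a weather-specific cue as training-time image augmentations can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the gamma/gain perturbation standing in for CCR and the fog veil (based on the standard atmospheric scattering model I = J·t + A·(1 − t)) standing in for SCR are assumptions, and all function names are hypothetical.

```python
import numpy as np

def common_cue_randomization(img, rng):
    """Hypothetical CCR stand-in: randomize illumination style with a
    random gamma curve and brightness gain (img is float in [0, 1])."""
    gamma = rng.uniform(0.5, 2.0)
    gain = rng.uniform(0.7, 1.3)
    return np.clip(gain * np.power(img, gamma), 0.0, 1.0)

def specific_cue_randomization(img, rng):
    """Hypothetical SCR stand-in: add a weather-specific cue, here a fog
    veil via the atmospheric scattering model I = J*t + A*(1 - t)."""
    t = rng.uniform(0.3, 0.9)   # transmission: lower means denser fog
    A = rng.uniform(0.7, 1.0)   # airlight (ambient haze) intensity
    return np.clip(img * t + A * (1.0 - t), 0.0, 1.0)

def cscr_augment(img, rng=None):
    """Apply both randomizations in sequence to one training image."""
    rng = rng or np.random.default_rng()
    return specific_cue_randomization(common_cue_randomization(img, rng), rng)
```

In this reading, the augmented images would be fed to an off-the-shelf segmentation model during training, which matches the abstract's claim that the framework can be applied to existing models without architectural changes.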