Calibration-Based Multi-Prototype Contrastive Learning for Domain Generalization Semantic Segmentation in Traffic Scenes
Abstract: Prototypical contrastive learning (PCL) is widely used to learn class-wise domain-invariant features for domain generalization semantic segmentation. These methods assume that prototypes are invariant across domains, yet cross-domain discrepancies remain: first, prototypes of the same class may differ across domains; second, prototypes of different classes may be similar. To address these issues, we propose a calibration-based multi-prototype contrastive learning (CMPCL) approach, which comprises uncertainty-guided multi-prototype contrastive learning (UMPCL) and hard-weighted multi-prototype contrastive learning (HMPCL). Specifically, UMPCL derives an uncertainty probability matrix from the element-wise discrepancies between same-class prototypes and uses it to calibrate the prototype weights, alleviating the discrepancy between prototypes of the same class in different domains. HMPCL builds a hard-weighted matrix from the similarity between prototypes of different classes and uses it to calibrate the weights of hard-aligned prototypes, i.e., prototypes of different classes that exhibit high mutual similarity, alleviating inter-class confusion. Furthermore, since the learned class-wise domain-invariant features may overfit a single source-domain prototype, both UMPCL and HMPCL employ multiple prototypes per class to avoid this risk. Extensive experiments demonstrate that our approach outperforms current approaches on multiple domain generalization semantic segmentation benchmarks. The source code has been released at https://github.com/seabearlmx/CMPCL.
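To make the abstract's mechanism concrete, the following is a minimal NumPy sketch of an uncertainty-weighted multi-prototype contrastive loss in the spirit of UMPCL. It is an illustrative assumption, not the authors' implementation: the helper names (`uncertainty_weights`, `multi_proto_nce`), the exponential weighting of cross-domain discrepancy, and the temperature value are all invented here; consult the released code for the actual method.

```python
# Hypothetical sketch (not the paper's implementation): keep K prototypes
# per class (e.g. one per source domain), down-weight prototypes that
# disagree with their class mean, and contrast an anchor feature against
# the weighted positives and all other-class negatives.
import numpy as np

def l2_normalize(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def uncertainty_weights(protos):
    """protos: (C, K, D) -- K prototypes per class.
    Returns (C, K) weights summing to 1 per class; a prototype far from
    its class mean (large cross-domain discrepancy) gets a lower weight."""
    mean = protos.mean(axis=1, keepdims=True)        # (C, 1, D) class means
    disc = np.abs(protos - mean).mean(axis=-1)       # (C, K) mean |discrepancy|
    w = np.exp(-disc)                                # assumed weighting rule
    return w / w.sum(axis=1, keepdims=True)

def multi_proto_nce(feat, protos, cls, weights, tau=0.1):
    """InfoNCE of one anchor feature against all prototypes.
    Positives: the K prototypes of the anchor's class, mixed by `weights`;
    negatives: every prototype of every other class."""
    sims = l2_normalize(protos) @ l2_normalize(feat)  # (C, K) cosine sims
    logits = np.exp(sims / tau)
    pos = (weights[cls] * logits[cls]).sum()          # calibrated positives
    return -np.log(pos / logits.sum())

rng = np.random.default_rng(0)
C, K, D = 3, 2, 32                                    # classes, domains, dims
protos = rng.normal(size=(C, K, D))
feat = protos[1].mean(axis=0)                         # anchor near class 1
w = uncertainty_weights(protos)
loss_match = multi_proto_nce(feat, protos, cls=1, weights=w)
loss_mismatch = multi_proto_nce(feat, protos, cls=0, weights=w)
```

Using multiple weighted prototypes per class, rather than a single source-domain prototype, is what the abstract credits with avoiding overfitting to any one domain's class centroid.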