Abstract: Domain adaptation has driven progress in Facial Expression Recognition (FER). Existing cross-domain FER methods align only a single source domain to the target domain, overlooking multisource domains that contain richer knowledge. However, Cross-Multidomain FER (CMFER) must combat the domain conflicts caused by the uncertainty of intra-domain annotations and the inconsistency of inter-domain distributions. To this end, this paper proposes a Domain-Uncertain Mutual Learning (DUML) method to address the more challenging CMFER problem. Specifically, we adopt a domain-specific global perspective for domain-invariant representations and domain fusion for generic facial detail representations to mitigate cross-domain distribution differences. Further, we develop Intra-Domain Uncertainty (Intra-DU) and Inter-Domain Uncertainty (Inter-DU) modeling to combat the large dataset shifts caused by annotation uncertainty. Finally, extensive experiments on multiple multidomain FER benchmarks demonstrate the effectiveness of DUML against CMFER uncertainty. All code and training logs are publicly available at https://github.com/liuhw01/DUML.
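The abstract does not give DUML's actual losses, so the following is only a rough, hypothetical sketch of the general idea of uncertainty-weighted mutual learning: two branches exchange predictions via a KL term, and samples whose predictions are highly uncertain (high entropy, a common proxy for annotation uncertainty) contribute less. Function names, the entropy weighting, and all formulas here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Predictive (Shannon) entropy per sample.
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def uncertainty_weighted_mutual_loss(logits_a, logits_b):
    """Hypothetical sketch: per-sample KL(a || b) between the two
    branches' predictions, down-weighted by branch a's normalized
    entropy, so uncertain (possibly mislabeled) samples count less."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    kl = (pa * (np.log(pa + 1e-12) - np.log(pb + 1e-12))).sum(axis=-1)
    # Weight in [0, 1]: 1 for a confident prediction, 0 for a uniform one.
    w = 1.0 - entropy(pa) / np.log(pa.shape[-1])
    return float((w * kl).mean())

# Confident, agreeing branches: near-zero mutual loss.
a = np.array([[8.0, 0.0, 0.0]])
low = uncertainty_weighted_mutual_loss(a, a)

# Confident disagreement: a large, fully weighted penalty.
b = np.array([[0.0, 8.0, 0.0]])
high = uncertainty_weighted_mutual_loss(a, b)

# Maximally uncertain branch a: weight collapses to ~0 regardless of b.
u = np.zeros((1, 3))
ignored = uncertainty_weighted_mutual_loss(u, b)
```

In a real cross-multidomain setup one would presumably maintain such a term per source domain (the abstract's Intra-DU/Inter-DU split), but that structure is not specified in the abstract.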