Abstract: Domain generalization focuses on leveraging knowledge from the training data of multiple related domains to enhance inference on unseen in-distribution (IN) and out-of-distribution (OOD) domains. In this study, we introduce a multi-task representation learning technique that leverages information from multiple related domains to improve the detection of classes in unseen domains. Our method cultivates a latent space from data spanning multiple domains, encompassing both source and cross-domains, to strengthen generalization to OOD domains. Additionally, we attempt to disentangle the latent space by minimizing the mutual information between the input and the latent representation, effectively de-correlating spurious correlations among the samples of a specific domain. Collectively, this joint optimization facilitates domain-invariant feature learning. We assess the model's efficacy across multiple cybersecurity datasets, using standard classification metrics on both unseen IN and OOD sets, and compare the results against contemporary domain generalization methods.
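The joint optimization described above combines a classification objective with a mutual-information penalty on the latent space. As a minimal sketch (the paper's exact estimator is not given here; this assumes a stochastic Gaussian encoder and uses the standard variational upper bound KL(q(z|x) || N(0, I)) on I(x; z), with all function names being illustrative):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ): a variational upper
    # bound on I(x; z), used here as the MI-minimization term.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

def cross_entropy(logits, labels):
    # Standard softmax cross-entropy classification loss (per sample).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

def joint_loss(logits, labels, mu, logvar, beta=1e-3):
    # Joint objective: classification over multi-domain batches plus a
    # beta-weighted MI penalty that pushes the latent space toward a
    # disentangled, domain-invariant representation.
    return np.mean(cross_entropy(logits, labels) + beta * gaussian_kl(mu, logvar))
```

Because the KL term is non-negative, setting `beta=0` recovers the plain classification loss; increasing `beta` trades task accuracy for a latent space carrying less input-specific (potentially spurious) information.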
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Weijian_Deng1
Submission Number: 2880