Pruning for Better Domain Generalizability

ICML 2023 Workshop SCIS Submission 92 Authors

Published: 20 Jun 2023, Last Modified: 28 Jul 2023
SCIS 2023 Poster
Keywords: Domain Generalization, Pruning
Abstract: In this paper, we investigate whether pruning can serve as a reliable method for boosting the generalization ability of a model. We find that an existing pruning criterion such as L2 already offers small improvements in target-domain performance. We further propose a novel pruning scoring method, called DSS, designed not to maintain the source accuracy of the compressed model, as typical pruning work does, but to directly enhance the robustness and generalization performance of the model. We conduct empirical experiments to validate our method and demonstrate that it can even be combined with state-of-the-art generalization work such as MIRO (Cha et al., 2022) to further boost performance. On MNIST to MNIST-M, we improve the baseline performance by over 5 points simply by introducing 60% channel sparsity selected by DSS. On the popular DomainBed benchmark, combined with MIRO, we boost the state-of-the-art performance by a further 1 point while introducing only 10% sparsity into the model. Code can be found at https://github.com/AlexSunNik/pruning_for_domain_gen.
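To make the L2 baseline mentioned in the abstract concrete, below is a minimal PyTorch sketch of L2-norm channel pruning: each output channel of a convolution is scored by the L2 norm of its filter weights, and the lowest-scoring fraction is masked out. This illustrates only the standard L2 criterion, not the authors' DSS scoring method; the layer shape and the helper names `l2_channel_mask` and `apply_channel_mask` are illustrative assumptions, not code from the linked repository.

```python
# Minimal sketch of L2-norm channel pruning (the baseline criterion
# referenced in the abstract). Not the authors' DSS implementation.
import torch
import torch.nn as nn

def l2_channel_mask(conv: nn.Conv2d, sparsity: float) -> torch.Tensor:
    """Return a 0/1 mask over output channels, pruning the `sparsity`
    fraction of channels with the smallest L2 weight norm."""
    # conv.weight has shape (out_channels, in_channels, kH, kW)
    scores = conv.weight.detach().flatten(1).norm(p=2, dim=1)
    n_prune = int(sparsity * scores.numel())
    mask = torch.ones_like(scores)
    if n_prune > 0:
        prune_idx = torch.argsort(scores)[:n_prune]
        mask[prune_idx] = 0.0
    return mask

def apply_channel_mask(conv: nn.Conv2d, mask: torch.Tensor) -> None:
    """Zero out the weights (and biases) of pruned channels in place."""
    with torch.no_grad():
        conv.weight.mul_(mask.view(-1, 1, 1, 1))
        if conv.bias is not None:
            conv.bias.mul_(mask)

# Example: 60% channel sparsity, the level the abstract reports for
# the MNIST -> MNIST-M experiment (layer shape chosen for illustration).
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
mask = l2_channel_mask(conv, sparsity=0.6)
apply_channel_mask(conv, mask)
print(f"pruned {int((mask == 0).sum())} of {mask.numel()} channels")
```

Per the abstract, DSS replaces this L2 score with one chosen to improve target-domain robustness rather than preserve source accuracy; the masking step itself would be unchanged.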
Submission Number: 92