Abstract: Manually designing neural network architectures for each new task and dataset is time-consuming and labor-intensive. Neural architecture search (NAS) automates this process, but existing methods mainly target generalization on in-distribution data. NAS methods for out-of-distribution data address performance degradation on unseen domains, yet they require training large super-networks during the search, incurring long search times and high computational costs; an efficient NAS method that reduces search time while maintaining high performance is therefore needed. In this work, we propose a training-free NAS method for domain generalization. Our method extends zero-cost proxy-based NAS with uncertainty modeling: architecture scores are computed using autoencoders with uncertainty-based data augmentation, and feature augmentation is applied during training. We also establish a theoretical connection between our method and domain generalization error bounds. Experiments on standard benchmarks, including PACS, Office-Home, and NICO, show that our method matches or exceeds the accuracy of existing domain generalization methods while reducing search time from over one hour to under ten minutes and using fewer parameters.
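The abstract describes scoring candidate architectures without training, using autoencoder reconstruction on uncertainty-augmented data. The following is a minimal illustrative sketch of that idea under simplified assumptions: candidates are reduced to hidden widths of a randomly initialized linear autoencoder, and "uncertainty-based augmentation" is approximated by Gaussian input perturbations. All names and the specific scoring rule are hypothetical, not the paper's actual proxy.

```python
import numpy as np

rng = np.random.default_rng(0)

def uncertainty_augment(x, sigma=0.1, n_views=4):
    # Hypothetical uncertainty-based augmentation: generate several views
    # of the data by adding Gaussian noise that models input uncertainty.
    return [x + rng.normal(0.0, sigma, size=x.shape) for _ in range(n_views)]

def autoencoder_score(x_views, hidden_dim):
    # Training-free proxy: score a candidate (here, a hidden width) by the
    # reconstruction error of a randomly initialized linear autoencoder,
    # averaged over augmented views. Lower error -> higher score.
    d = x_views[0].shape[1]
    W_enc = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, hidden_dim))
    W_dec = rng.normal(0.0, 1.0 / np.sqrt(hidden_dim), size=(hidden_dim, d))
    errs = [np.mean((v @ W_enc @ W_dec - v) ** 2) for v in x_views]
    return -float(np.mean(errs))

# Toy search space: candidate hidden widths stand in for architectures.
x = rng.normal(size=(64, 16))
views = uncertainty_augment(x)
candidates = [2, 4, 8, 16]
best = max(candidates, key=lambda h: autoencoder_score(views, h))
```

Because no candidate is trained, the search cost is a handful of matrix multiplications per architecture, which is consistent with the claimed reduction from over an hour to under ten minutes.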
External IDs: dblp:journals/access/WakayamaK25