SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization

Published: 20 Oct 2022, Last Modified: 15 Sept 2024, HITY Workshop NeurIPS 2022
TL;DR: A combination of self-distillation and 'unbounded' sharpness-aware minimization that improves generalization and training behavior.
Abstract: Methods for improving deep neural network training time and generalization include a variety of data augmentation, regularization, and optimization approaches, which tend to be sensitive to hyperparameter settings and to make reproducibility more challenging. This work jointly considers two recent training strategies that address model generalizability, sharpness-aware minimization and self-distillation, and proposes the novel training strategy of Sharpness-Aware Distilled Teachers (SADT). The experimental section shows that SADT consistently outperforms previously published training strategies in model convergence time, test-time performance, and generalizability across various neural architectures, datasets, and hyperparameter settings.
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/sadt-combining-sharpness-aware-minimization/code)
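
The abstract does not spell out the exact SADT algorithm, so the following is only a minimal sketch of how sharpness-aware minimization and self-distillation can be combined in one training step. It assumes a PyTorch-style loop in which the teacher is an exponential-moving-average copy of the student and the distillation term is a temperature-scaled KL divergence; `rho`, `alpha`, `temperature`, and `ema_decay` are illustrative hyperparameters, not values from the paper.

```python
# Illustrative sketch only -- not the authors' implementation of SADT.
import torch
import torch.nn.functional as F

def sam_self_distill_step(student, teacher, optimizer, x, y,
                          rho=0.05, alpha=0.5, temperature=4.0, ema_decay=0.999):
    def loss_fn():
        logits = student(x)
        ce = F.cross_entropy(logits, y)
        with torch.no_grad():
            t_logits = teacher(x)
        kd = F.kl_div(F.log_softmax(logits / temperature, dim=-1),
                      F.softmax(t_logits / temperature, dim=-1),
                      reduction="batchmean") * temperature ** 2
        return (1 - alpha) * ce + alpha * kd

    # First pass: gradient of the combined loss at the current weights.
    optimizer.zero_grad()
    loss = loss_fn()
    loss.backward()

    # SAM ascent step: perturb weights toward higher loss within an L2 ball of radius rho.
    grads = [p.grad for p in student.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2) + 1e-12
    eps = []
    with torch.no_grad():
        for p in student.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)

    # Second pass: gradient at the perturbed weights.
    optimizer.zero_grad()
    loss_fn().backward()

    # Undo the perturbation, then update with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(student.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()

    # Update the EMA teacher (one common form of self-distillation).
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(ema_decay).add_(sp, alpha=1 - ema_decay)

    return loss.item()
```

In this sketch the teacher could simply be initialized as `copy.deepcopy(student)` before training; for the actual SADT procedure, including the 'unbounded' sharpness-aware variant mentioned in the TL;DR, see the paper and the linked community implementation.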