Abstract: A recent trend in deep learning has been to train large-scale models with high parameter counts on large datasets. However, the robustness of such large-scale models in real-world settings remains under-explored. In this work, we first benchmark the performance of these models under different perturbations and datasets that represent real-world distribution shifts, and highlight their degraded performance under these shifts. We then discuss how existing robustification schemes based on full model fine-tuning may not scale to very large networks and can also cause them to forget some of their desired characteristics. Finally, we propose a simple and cost-effective method to solve this problem, inspired by the knowledge-transfer literature: we robustify smaller models at a lower computational cost and then use them as teachers to tune a fraction of the parameters of these large-scale networks, reducing the overall computational overhead. We evaluate the proposed method under various vision perturbations, including the ImageNet-C, ImageNet-R, ImageNet-S, and ImageNet-A datasets, as well as in transfer-learning and zero-shot evaluation setups on different datasets. Benchmark results show that our method efficiently induces robustness in these large-scale models while requiring significantly less time, and that it preserves the transfer-learning and zero-shot properties of the original model, which none of the existing methods achieve.
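The core idea sketched in the abstract (a small, already-robustified teacher guiding updates to only a fraction of a larger student's parameters) can be illustrated with a minimal NumPy toy example. Everything here is an assumption for illustration: the linear "teacher" and two-layer "student", the frozen backbone, the data shapes, and the learning rate are placeholders, not the paper's actual models or training recipe. Only the student's head `W2` is updated, standing in for tuning a fraction of a large network under the teacher's soft targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy inputs standing in for (perturbed) images.
X = rng.normal(size=(256, 8))

# Hypothetical "robust" small teacher: a single linear classifier.
Wt = rng.normal(size=(8, 3))
teacher_probs = softmax(X @ Wt)

# Larger student: frozen "backbone" W1, trainable "fraction" W2 (the head).
W1 = rng.normal(size=(8, 16))          # frozen, never updated
W2 = rng.normal(size=(16, 3)) * 0.01   # the only tuned parameters

H = X @ W1  # features from the frozen backbone

def distill_ce(W2):
    # Cross-entropy between teacher soft targets and student predictions.
    P = softmax(H @ W2)
    return -(teacher_probs * np.log(P + 1e-12)).sum(axis=1).mean()

init_ce = distill_ce(W2)
lr = 0.05
for _ in range(300):
    P = softmax(H @ W2)
    # Gradient of the distillation cross-entropy w.r.t. W2 only.
    grad = H.T @ (P - teacher_probs) / len(X)
    W2 -= lr * grad

final_ce = distill_ce(W2)
```

After these updates the distillation loss drops from its initial value, while the backbone `W1` is untouched, mirroring the claim that only a fraction of the large model needs tuning.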