On the Abilities of Mathematical Extrapolation with Implicit Models

Published: 21 Oct 2022, Last Modified: 05 May 2023
NeurIPS 2022 Workshop DistShift Poster
Keywords: implicit deep learning models, out-of-distribution extrapolation
TL;DR: We show implicit models' unique advantages in robustness to out-of-distribution shifts compared with classical deep learning models.
Abstract: Deep neural networks excel on a variety of different tasks, often surpassing human abilities. However, when presented with out-of-distribution data, these models tend to break down even on the simplest tasks. In this paper, we compare the robustness of implicitly defined and classical deep learning models on a series of mathematical extrapolation tasks, where the models are tested with out-of-distribution samples at inference time. Throughout our experiments, implicit models greatly outperform classical deep learning networks, which overfit the training distribution. We showcase implicit models' unique advantages for mathematical extrapolation thanks to their flexible and selective framework. Implicit models, with potentially unlimited depth, not only adapt well to out-of-distribution inputs but also better capture the underlying structure of the inputs.
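For readers unfamiliar with the "potentially unlimited depth" phrasing in the abstract, the sketch below illustrates the standard implicit-layer formulation, where the hidden state is defined as the solution of a fixed-point equation rather than by a fixed stack of layers. This is only a minimal illustration under common assumptions (a ReLU fixed-point map solved by plain iteration, with weights scaled to be contractive); it is not the paper's specific architecture or training setup.

```python
import numpy as np

# Sketch of an implicit layer: the hidden state z solves
#   z = relu(W @ z + U @ x + b)
# which behaves like an "infinitely deep" weight-tied network.

def implicit_layer(x, W, U, b, tol=1e-6, max_iter=500):
    """Solve z = relu(W z + U x + b) by fixed-point iteration."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.maximum(W @ z + U @ x + b, 0.0)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z  # last iterate if convergence was not reached

# Toy usage with illustrative random weights (not from the paper).
rng = np.random.default_rng(0)
n_hidden, n_in = 8, 4
W = rng.standard_normal((n_hidden, n_hidden))
W = 0.5 * W / np.linalg.norm(W, 2)  # keep ||W|| < 1 so the map is a contraction
U = rng.standard_normal((n_hidden, n_in))
b = rng.standard_normal(n_hidden)
x = rng.standard_normal(n_in)
print(implicit_layer(x, W, U, b))
```

Because the layer is defined by an equilibrium condition rather than a fixed number of stacked transformations, its effective depth adapts to the input, which is the property the abstract credits for better out-of-distribution behavior.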