Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time

Published: 12 Jul 2022, Last Modified: 25 Nov 2024 | Shift Happens 2022 Poster
TL;DR: A new benchmark for in-the-wild distribution shift over time
Abstract: Distribution shifts occur when the test distribution differs from the training distribution, and can considerably degrade the performance of machine learning models deployed in the real world. While recent works have studied robustness to distribution shifts, distribution shifts arising from the passage of time have the additional structure of timestamp metadata. Real-world examples of such shifts are underexplored, and it is unclear whether existing models can leverage trends in past distribution shifts to reliably extrapolate into the future. To address this gap, we curate Wild-Time, a benchmark of 7 datasets that reflect temporal distribution shifts arising in a variety of real-world applications. On these datasets, we systematically benchmark 9 approaches with various inductive biases. Our experiments demonstrate that existing methods are limited in tackling temporal distribution shifts: across all settings, we observe an average performance drop of 21% from in-distribution to out-of-distribution data.
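A minimal sketch of the evaluation idea described in the abstract: train on data up to a split timestamp, then compare accuracy on held-out in-distribution (pre-split) data versus out-of-distribution (post-split) data. The synthetic drifting data and names such as `split_year` are illustrative assumptions for this sketch, not the Wild-Time datasets or API.

```python
# Illustrative only: a timestamp-based ID/OOD split on synthetic drifting data,
# not the Wild-Time benchmark code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data whose feature distribution drifts with the year
# (a stand-in for timestamp metadata on a real dataset).
years = rng.integers(2010, 2020, size=5000)
drift = (years - 2010) * 0.3
X = rng.normal(loc=drift[:, None], scale=1.0, size=(5000, 5))
y = (X[:, 0] - drift + rng.normal(scale=0.5, size=5000) > 0).astype(int)

split_year = 2016  # train on the past, evaluate on the future
past = years < split_year
X_tr, X_id, y_tr, y_id = train_test_split(
    X[past], y[past], test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_tr, y_tr)
id_acc = model.score(X_id, y_id)           # in-distribution: same period as training
ood_acc = model.score(X[~past], y[~past])  # out-of-distribution: future years

print(f"ID accuracy:  {id_acc:.3f}")
print(f"OOD accuracy: {ood_acc:.3f}  (drop: {id_acc - ood_acc:.3f})")
```

The gap between `id_acc` and `ood_acc` is the kind of in-distribution to out-of-distribution degradation the benchmark measures; in Wild-Time this gap averages 21% across settings.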
Submission Type: Full submission (technical report + code/data)
Co Submission: Yes, I am also submitting to the NeurIPS Datasets and Benchmarks track.
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/wild-time-a-benchmark-of-in-the-wild/code)