Identifying the Context Shift between Test Benchmarks and Production Data

Published: 06 Dec 2022, Last Modified: 05 May 2023
Venue: ICBINB poster
Keywords: robust machine learning, data-centered machine learning, context shift, human-AI collaboration
TL;DR: Context shift, an upstream driver of distribution shift, explains the lack of robustness in applied machine learning and can be addressed via human intuition and expertise.
Abstract: Benchmark datasets have traditionally served dual purposes: first, benchmarks offer a standard on which machine learning researchers can compare different methods, and second, benchmarks provide a model, albeit imperfect, of the real world. The incompleteness of test benchmarks (and of the data upon which models are trained) hinders robustness in machine learning, enables shortcut learning, and leaves models systematically prone to err on out-of-distribution and adversarially perturbed data. In an effort to clarify how to address the mismatch between test benchmarks and production data, we introduce context shift to describe semantically meaningful changes in the underlying data generation process. Moreover, we identify three methods for addressing context shift that would otherwise lead to model prediction errors: first, we describe how human intuition and expert knowledge can identify semantically meaningful features upon which models systematically fail; second, we detail how dynamic benchmarking, with its focus on capturing the data generation process, can promote generalizability through corroboration; and third, we highlight that clarifying a model's limitations can reduce unexpected errors.
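As a rough illustration of the distinction the abstract draws (a hedged sketch, not code from the paper), the example below simulates a semantically meaningful change in the data generation process between a benchmark and a production setting: a spurious feature that tracks the label at benchmark time stops doing so in production, and a model that learned the shortcut degrades. The function names, correlation values, and feature construction are illustrative assumptions.

```python
# Illustrative sketch only (not from the paper): a toy "context shift" in which
# the data generation process changes semantically between benchmark and
# production. A spurious feature correlated with the label at benchmark time
# loses that correlation in production, so a model relying on the shortcut errs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def generate(n, shortcut_correlation):
    """Toy data generation process: one causal feature, one spurious feature."""
    y = rng.integers(0, 2, size=n)
    causal = y + rng.normal(0, 1.0, size=n)          # weakly informative signal
    # spurious feature agrees with the label with probability `shortcut_correlation`
    agrees = rng.random(n) < shortcut_correlation
    spurious = np.where(agrees, y, 1 - y) + rng.normal(0, 0.1, size=n)
    X = np.column_stack([causal, spurious])
    return X, y

# Benchmark context: the shortcut feature is almost perfectly predictive.
X_train, y_train = generate(5000, shortcut_correlation=0.95)
# Production context: the generation process changes; the shortcut breaks.
X_prod, y_prod = generate(5000, shortcut_correlation=0.50)

model = LogisticRegression().fit(X_train, y_train)
print("benchmark accuracy: ", model.score(X_train, y_train))
print("production accuracy:", model.score(X_prod, y_prod))
```

Under these assumptions the benchmark accuracy is high while production accuracy drops toward chance on the causal signal alone, which is the kind of systematic failure the abstract argues human intuition and expert knowledge can anticipate.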