Causal-structure Driven Augmentations for Text OOD Generalization

ICML 2023 Workshop SCIS Submission 59 Authors

Published: 20 Jun 2023, Last Modified: 28 Jul 2023 · SCIS 2023 Poster
Keywords: Counterfactually Augmented Data, Invariant Learning, Out-of-distribution Generalization, Clinical NLP
TL;DR: We propose counterfactual data augmentation methods, guided by knowledge of the causal structure of the data, to simulate interventions on spurious features.
Abstract: In this work, we propose counterfactual data augmentation methods, guided by knowledge of the causal structure of the data, to simulate interventions on spurious features. Our main motivation is classifying medical notes, and we use these methods to learn more robust text classifiers. In prediction problems where the label is spuriously correlated with an attribute, and under certain assumptions, we show that this strategy is appropriate and can enjoy improved sample complexity compared to importance re-weighting. Pragmatically, we match examples using auxiliary data, following a diff-in-diff methodology, and use a large language model (LLM) to represent a conditional probability of text. Experiments on learning caregiver-invariant predictors of clinical diagnoses from medical narratives and on semi-synthetic data demonstrate that our method improves out-of-distribution (OOD) accuracy.
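As a rough illustration of the matching-plus-generation recipe described in the abstract (not the authors' implementation), the sketch below pairs clinical notes that share a diagnosis label but differ in a spurious attribute (here, the caregiver who wrote the note), and uses a placeholder `rewrite_with_llm` function to stand in for the LLM-based conditional that generates the counterfactual text. All names (`Note`, `rewrite_with_llm`, the attribute field) are hypothetical.

```python
from dataclasses import dataclass
from itertools import product
import random


@dataclass
class Note:
    text: str       # clinical narrative
    label: int      # diagnosis label (the causal target)
    caregiver: str  # spurious attribute correlated with the label


def rewrite_with_llm(text: str, target_caregiver: str) -> str:
    """Placeholder for an LLM representing p(text | label, caregiver).

    In practice this would prompt a language model to rewrite the note in the
    style of `target_caregiver` while preserving the clinical content that
    determines the label. Here it is a trivial stub so the sketch runs.
    """
    return f"[style: {target_caregiver}] {text}"


def counterfactual_augment(notes: list[Note], n_pairs: int = 100, seed: int = 0) -> list[Note]:
    """Simulate interventions on the spurious attribute.

    Matches notes with the same label but different caregivers (a
    diff-in-diff style pairing on auxiliary data), then generates a
    counterfactual copy of each matched note under the other caregiver.
    """
    rng = random.Random(seed)
    by_label: dict[int, list[Note]] = {}
    for n in notes:
        by_label.setdefault(n.label, []).append(n)

    augmented: list[Note] = []
    for label, group in by_label.items():
        pairs = [(a, b) for a, b in product(group, group) if a.caregiver != b.caregiver]
        rng.shuffle(pairs)
        for a, b in pairs[:n_pairs]:
            # Keep a's label-relevant content, but intervene on the caregiver.
            augmented.append(Note(rewrite_with_llm(a.text, b.caregiver), label, b.caregiver))
    return augmented
```

Training on the union of the original and counterfactually augmented notes would then target a caregiver-invariant predictor, in the spirit of the experiments summarized above.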
Submission Number: 59