IM-Context: In-Context Learning for Imbalanced Regression Tasks

TMLR Paper2825 Authors

07 Jun 2024 (modified: 04 Oct 2024) · Decision pending for TMLR · CC BY 4.0
Abstract: Regression models often fail to generalize effectively in regions characterized by highly imbalanced label distributions. Previous methods for deep imbalanced regression rely on gradient-based weight updates, which tend to overfit in underrepresented regions. This paper proposes a paradigm shift towards in-context learning as an effective alternative to conventional in-weight learning methods, particularly for addressing imbalanced regression. In-context learning refers to the ability of a model to generate predictions by conditioning on a prompt sequence composed of in-context samples (input-label pairs) together with a new query input, without requiring any parameter updates. In this paper, we study the impact of the prompt sequence on model performance from both theoretical and empirical perspectives. We emphasize the importance of localized context in reducing bias within regions of high imbalance. Empirical evaluations across a variety of real-world datasets demonstrate that in-context learning substantially outperforms existing in-weight learning methods in scenarios with high levels of imbalance.
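To make the prompt-construction idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of assembling a localized context for a query point: the k training samples nearest to the query in input space are serialized as (input, label) pairs ahead of the query. The function name `build_localized_prompt`, the Euclidean distance metric, and the choice of k are all assumptions for illustration; a real in-context model would condition on such a prompt without any gradient updates.

```python
import numpy as np

def build_localized_prompt(X_train, y_train, x_query, k=16):
    """Select the k training samples closest to the query (Euclidean
    distance) and return them as an (input, label) prompt sequence.

    Localized context like this is one way to reduce bias in sparsely
    labeled regions; the metric and k are illustrative choices only.
    """
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]
    # Prompt = in-context (x, y) pairs, followed by the query input.
    context = [(X_train[i], y_train[i]) for i in nearest]
    return context, x_query

# Toy usage with random data; in practice the prompt would be fed to a
# trained sequence model that predicts the label of the query input.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))
y_train = rng.normal(size=100)
context, query = build_localized_prompt(X_train, y_train, rng.normal(size=4))
print(len(context))  # 16 in-context pairs
```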
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Dear Reviewers and Area Chair, Thank you for the opportunity to revise our manuscript and for the valuable feedback provided. We have carefully addressed all comments and implemented the requested changes, with special attention to reviewer cCTG's suggestions. We have conducted a thorough proofread, adjusted formatting, and standardized table fonts by removing the resize boxes to ensure consistency across all tables. For your convenience, all changes are still highlighted in blue. Thank you again for your time and valuable input. Sincerely, The Authors
Assigned Action Editor: ~Elliot_Meyerson1
Submission Number: 2825