AlphaCon: In-Context Adaptation for Dynamic Alpha Generation

20 Sept 2025 (modified: 19 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: In-Context Adaptation, Reinforcement Learning, Alpha Generation, Quantitative Finance, Large Language Model
TL;DR: We propose a framework enabling in-context adaptation to generate tailored alphas at inference time without retraining, using a two-stage proposal-refinement process trained via two-level RL
Abstract: Finding predictive signals for stock returns, known as alphas, is a central challenge in quantitative finance, and it is complicated by the non-stationary nature of financial markets. Conventional automated methods learn a single static model from historical data and may perform poorly when market regimes shift. In this work, we reformulate the task as a problem of in-context adaptation: our goal is to train a single universal model that adapts its generation process to different market conditions at inference time. We introduce AlphaCon, a novel framework that uses recent data as context to guide alpha generation without requiring retraining. The model learns this adaptive capability through a specialized two-level training procedure, in which an outer loop optimizes the context encoder across diverse historical market tasks and an inner loop refines the generation agents within each task. The generation process itself is structured as a two-stage proposal-and-refinement loop enhanced by a learnable advice mechanism, and the entire framework is trained with reinforcement learning. Experiments show that AlphaCon, trained once, significantly outperforms strong baselines that require periodic retraining, demonstrating robust performance across diverse market regimes.
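The abstract describes a two-level training procedure (an outer loop over market tasks optimizing the context encoder, an inner loop refining proposal and refinement agents) combined with a learnable advice signal. The sketch below illustrates how such a loop could be organized; it is a minimal, hypothetical reconstruction from the abstract alone, and every name in it (ContextEncoder, ProposerAgent, RefinerAgent, sample_market_task, evaluate_alpha, rl_update) is a placeholder rather than the authors' actual implementation.

```python
# Hypothetical sketch of a two-level training loop of the kind the abstract
# describes; all classes, functions, and reward logic are illustrative placeholders.
import random

class ContextEncoder:
    """Encodes recent market data into a context vector (placeholder)."""
    def encode(self, recent_data):
        return [sum(recent_data) / max(len(recent_data), 1)]  # toy summary statistic

class ProposerAgent:
    """Proposes a candidate alpha expression conditioned on the context (placeholder)."""
    def propose(self, context):
        return f"alpha_candidate(ctx={context[0]:.3f})"

class RefinerAgent:
    """Refines a proposed alpha using a learnable advice signal (placeholder)."""
    def refine(self, candidate, advice):
        return f"{candidate} + advice({advice:.3f})"

def sample_market_task():
    """Stands in for drawing a historical market regime as a training task."""
    return [random.gauss(0.0, 1.0) for _ in range(32)]

def evaluate_alpha(alpha, task_data):
    """Proxy reward, e.g. an IC-style score of the generated alpha (placeholder)."""
    return random.random()

def rl_update(module, reward):
    """Placeholder for a policy-gradient step on the module's parameters."""
    pass

encoder, proposer, refiner = ContextEncoder(), ProposerAgent(), RefinerAgent()

for outer_step in range(100):                # outer loop: iterate over market tasks
    task_data = sample_market_task()
    context = encoder.encode(task_data)      # recent data serves as in-context signal
    advice = 0.0                             # advice signal, reset per task
    for inner_step in range(5):              # inner loop: proposal-refinement within the task
        candidate = proposer.propose(context)
        alpha = refiner.refine(candidate, advice)
        reward = evaluate_alpha(alpha, task_data)
        rl_update(proposer, reward)          # inner-level RL refines the generation agents
        rl_update(refiner, reward)
        advice += reward                     # toy stand-in for updating the learnable advice
    rl_update(encoder, reward)               # outer-level RL optimizes the context encoder
```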
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 24748