Keywords: in‑context reinforcement learning, decision transformer, sequence modeling, value ensembles, randomised prior functions, Bayesian context fusion, posterior UCB, offline reinforcement learning, contextual bandits, exploration–exploitation, epistemic uncertainty, importance weighting, gradient‑free adaptation, meta learning
TL;DR: SPICE uses an ensemble of value heads and Bayesian context fusion to perform in-context reinforcement learning on suboptimal data.
Abstract: In-context reinforcement learning (ICRL) promises fast adaptation to unseen environments without parameter updates, but current methods either cannot improve beyond the training distribution or require near-optimal data, limiting practical adoption. We introduce SPICE, a Bayesian ICRL method that learns a prior over Q-values via a deep ensemble and updates this prior at test time with in-context information through Bayesian updates. To recover from poor priors resulting from training on suboptimal data, our online inference follows an Upper Confidence Bound rule that favours exploration and adaptation. In bandit settings, we prove this principled exploration reaches regret-optimal behaviour even when pretrained only on suboptimal trajectories. We validate these findings empirically across bandit and control benchmarks. SPICE achieves near-optimal decisions on unseen tasks and substantially reduces regret compared to prior ICRL and meta-RL approaches, while adapting rapidly and remaining robust under distribution shift.
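To make the mechanism in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of posterior-UCB action selection on a multi-armed bandit: an ensemble of value heads supplies a Gaussian prior over Q-values, in-context rewards are fused via conjugate Bayesian updates, and actions are chosen optimistically. All names, sizes, and the Gaussian-noise assumption are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained ensemble of value heads: each head
# gives a scalar Q estimate per action; the ensemble mean is the prior belief
# and the spread across heads is the epistemic (prior) variance.
n_actions, n_heads = 5, 10
ensemble_q = rng.normal(loc=0.3, scale=0.2, size=(n_heads, n_actions))
prior_mean = ensemble_q.mean(axis=0)
prior_var = ensemble_q.var(axis=0) + 1e-6

# Unknown true arm means; the prior is deliberately misleading about them,
# mimicking pretraining on suboptimal trajectories.
true_means = np.array([0.1, 0.2, 0.9, 0.3, 0.15])
obs_noise_var = 0.05   # assumed reward-noise variance (illustrative)
beta = 2.0             # UCB exploration coefficient (illustrative)

post_mean, post_var = prior_mean.copy(), prior_var.copy()

def posterior_ucb_action():
    # Act greedily w.r.t. an optimistic upper bound on the posterior Q-values.
    return int(np.argmax(post_mean + beta * np.sqrt(post_var)))

def bayesian_update(action, reward):
    # Conjugate Gaussian update: fuse the in-context reward into the
    # per-action posterior without any gradient step.
    precision = 1.0 / post_var[action] + 1.0 / obs_noise_var
    new_var = 1.0 / precision
    new_mean = new_var * (post_mean[action] / post_var[action]
                          + reward / obs_noise_var)
    post_var[action], post_mean[action] = new_var, new_mean

regret = 0.0
for t in range(200):
    a = posterior_ucb_action()
    r = rng.normal(true_means[a], np.sqrt(obs_noise_var))
    bayesian_update(a, r)
    regret += true_means.max() - true_means[a]

print(f"cumulative regret after 200 steps: {regret:.2f}")
```

Because the UCB bonus shrinks only for arms whose posterior variance has been reduced by observed rewards, the sketch keeps exploring even when the pretrained prior ranks the arms incorrectly, which is the gradient-free recovery behaviour the abstract describes.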
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 21396