Learning Policy Committees for Effective Personalization in MDPs with Diverse Tasks

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We build small committees of policies to handle diverse tasks. Each task is matched with a policy that performs nearly optimally. This ensures fairness by helping most users do well.
Abstract: Many dynamic decision problems, such as robotic control, involve a series of tasks, many of which are unknown at training time. Typical approaches to these problems, such as multi-task and meta-reinforcement learning, do not generalize well when the tasks are diverse. On the other hand, approaches that aim to tackle task diversity, such as using task embeddings as policy context or task clustering, typically lack performance guarantees and require a large number of training tasks. To address these challenges, we propose a novel approach for learning a policy committee that includes at least one near-optimal policy with high probability for tasks encountered during execution. While we show that this problem is in general inapproximable, we present two practical algorithmic solutions. The first yields provable approximation and task sample complexity guarantees when tasks are low-dimensional (the best we can do due to inapproximability), whereas the second is a general and practical gradient-based approach. In addition, we provide a provable sample complexity bound for few-shot learning. Our experiments on MuJoCo and Meta-World show that the proposed approach outperforms state-of-the-art multi-task, meta-, and task clustering baselines in training, generalization, and few-shot learning, often by a large margin. Our code is available at https://github.com/CERL-WUSTL/PACMAN.
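As a rough illustration of the few-shot selection step (a minimal sketch, not the released PACMAN code; the toy environment, committee, and function names below are hypothetical stand-ins for trained policies), one could score each committee member on a handful of rollouts of the new task and keep the best:

```python
import random
from typing import Callable, List, Tuple


def estimate_return(policy: Callable[[float], float],
                    step_fn: Callable[[float, float], Tuple[float, float]],
                    episodes: int = 3,
                    horizon: int = 20) -> float:
    """Average undiscounted return of `policy` over a few short rollouts."""
    total = 0.0
    for _ in range(episodes):
        state = 0.0
        for _ in range(horizon):
            action = policy(state)
            state, reward = step_fn(state, action)
            total += reward
    return total / episodes


def select_from_committee(committee: List[Callable[[float], float]],
                          step_fn: Callable[[float, float], Tuple[float, float]],
                          episodes: int = 3) -> int:
    """Index of the committee member with the highest estimated few-shot return."""
    scores = [estimate_return(pi, step_fn, episodes) for pi in committee]
    return max(range(len(scores)), key=scores.__getitem__)


if __name__ == "__main__":
    random.seed(0)

    # Hypothetical new task: reward is highest when the action hits a
    # task-specific target that was unknown at training time.
    target = 0.7

    def step_fn(state: float, action: float) -> Tuple[float, float]:
        reward = -(action - target) ** 2 + random.gauss(0.0, 0.01)
        return state, reward  # trivial dynamics, noisy reward

    # Toy committee: constant policies specialized to different targets.
    committee = [lambda s, a=a: a for a in (0.0, 0.5, 1.0)]

    print("Selected member:", select_from_committee(committee, step_fn))
```

Because the committee is small, selection of this kind needs only a few episodes per member, which is in the spirit of the few-shot learning setting analyzed in the paper.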
Lay Summary: When designing intelligent systems that learn to make decisions, such as robots or personalized digital assistants, we would like them to perform well for as many users or situations as possible. However, tasks in the real world are often very different from each other, making it challenging to train a single system that works well across the board. Our research introduces a new way to address this challenge by building a committee of decision-making policies. Instead of searching for one perfect solution, we construct a small group of policies so that, for almost every task, at least one of them performs nearly as well as possible. This idea is formalized through what we call (ε, 1−δ) coverage: for at least a 1−δ fraction of tasks (or users), some policy in the committee achieves near-optimal performance, within ε of the best attainable. We developed practical algorithms for constructing such committees, with strong theoretical guarantees and efficient learning from just a few examples. Our approach not only improves fairness, ensuring that many users or downstream tasks receive high-quality outcomes, but also outperforms leading AI methods in experiments involving diverse robotic control tasks.
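One minimal way to state this coverage condition formally (the paper's exact notation may differ; here $\mathcal{D}$ is the task distribution, $\Pi$ the committee, and $V_M(\pi)$, $V^{*}_M$ denote a policy's value and the optimal value on task $M$):

```latex
% (epsilon, 1-delta) coverage: with probability at least 1-delta over tasks,
% some committee member is within epsilon of optimal.
\Pr_{M \sim \mathcal{D}}\!\left[\, \max_{\pi \in \Pi} V_M(\pi) \;\ge\; V^{*}_M - \epsilon \,\right] \;\ge\; 1 - \delta
```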
Link To Code: https://github.com/CERL-WUSTL/PACMAN
Primary Area: Reinforcement Learning
Keywords: Policy Committee; Reinforcement Learning
Submission Number: 7210