MODULI: Unlocking Preference Generalization via Diffusion Models for Offline Multi-Objective Reinforcement Learning
TL;DR: We propose a multi-objective diffusion planner with sliding guidance that generalizes better to OOD preferences.
Abstract: Multi-objective Reinforcement Learning (MORL) seeks to develop policies that simultaneously optimize multiple conflicting objectives, but it requires extensive online interactions. Offline MORL offers a promising alternative: training on pre-collected datasets so that the policy can generalize to any preference at deployment. However, real-world offline datasets are often conservatively and narrowly distributed and fail to cover the preference space comprehensively, giving rise to out-of-distribution (OOD) preference regions. Existing offline MORL algorithms generalize poorly to OOD preferences, yielding policies that do not align with the specified preferences. Leveraging the strong expressive power and generalization capability of diffusion models, we propose MODULI (Multi-objective Diffusion Planner with Sliding Guidance), which employs a preference-conditioned diffusion model as a planner to generate trajectories aligned with various preferences and derives actions for decision-making. To achieve accurate generation, MODULI introduces two return normalization methods that refine guidance under diverse preferences. To further enhance generalization to OOD preferences, MODULI proposes a novel sliding guidance mechanism, which trains an additional slider adapter to capture the direction of preference changes. Incorporating the slider, MODULI transitions from in-distribution (ID) preferences toward OOD preferences, patching and extending the incomplete Pareto front. Extensive experiments on the D4MORL benchmark demonstrate that our algorithm outperforms state-of-the-art offline MORL baselines and exhibits excellent generalization to OOD preferences.
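The sliding-guidance idea described above can be pictured as a preference-conditioned denoiser plus a small adapter that nudges the denoising direction along the change from an ID preference to an OOD one. The sketch below is a minimal illustration under assumed module names, shapes, and a residual-correction design (`PreferenceConditionedDenoiser`, `SliderAdapter`, `slider_scale` are hypothetical); it is not the paper's actual implementation.

```python
# Illustrative sketch of sliding guidance; names and shapes are assumptions.
import torch
import torch.nn as nn

class PreferenceConditionedDenoiser(nn.Module):
    """Denoises a trajectory segment conditioned on a preference vector."""
    def __init__(self, traj_dim: int, pref_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim + pref_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, traj_dim),
        )

    def forward(self, noisy_traj, pref, t):
        # t: diffusion timestep, appended as an extra scalar feature
        return self.net(torch.cat([noisy_traj, pref, t], dim=-1))

class SliderAdapter(nn.Module):
    """Adapter that predicts a residual correction to the denoiser output
    along the direction of preference change (delta_pref)."""
    def __init__(self, traj_dim: int, pref_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim + pref_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, traj_dim),
        )

    def forward(self, noisy_traj, delta_pref):
        return self.net(torch.cat([noisy_traj, delta_pref], dim=-1))

def guided_denoise(denoiser, slider, noisy_traj, pref_id, pref_ood, t,
                   slider_scale: float = 1.0):
    """Start from an in-distribution preference and slide the guidance
    toward an OOD preference by adding a scaled slider correction."""
    eps = denoiser(noisy_traj, pref_id, t)
    correction = slider(noisy_traj, pref_ood - pref_id)
    return eps + slider_scale * correction

# Tiny usage example with made-up dimensions.
traj_dim, pref_dim, batch = 32, 2, 4
denoiser = PreferenceConditionedDenoiser(traj_dim, pref_dim)
slider = SliderAdapter(traj_dim, pref_dim)
noisy = torch.randn(batch, traj_dim)
pref_id = torch.tensor([[0.5, 0.5]]).repeat(batch, 1)   # preference covered by the dataset
pref_ood = torch.tensor([[0.9, 0.1]]).repeat(batch, 1)  # unseen (OOD) preference
t = torch.full((batch, 1), 0.3)
eps_hat = guided_denoise(denoiser, slider, noisy, pref_id, pref_ood, t)
```

In this sketch, setting `slider_scale` to 0 recovers the plain preference-conditioned planner, while larger values push generation further toward the OOD preference, mirroring the ID-to-OOD transition described in the abstract.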
Lay Summary: Offline multi-objective reinforcement learning enables intelligent agents to make decisions that balance multiple objectives by training on previously collected data instead of requiring extensive new experiments, which makes it particularly suitable for real-world applications. However, the data collected in practice often reflects only a limited range of preferences and choices, so agents may struggle to make good decisions when faced with new or unseen trade-offs between objectives. To address this challenge, we propose a novel approach called MODULI. MODULI leverages advanced diffusion models to better understand and adapt to diverse objective preferences. Additionally, we introduce a new technique that helps the model gradually adjust from familiar to unfamiliar preferences, effectively filling in the gaps left by the original dataset. Extensive experiments demonstrate that MODULI not only achieves strong performance on known preferences but also generalizes well to novel, previously unseen ones. This advancement paves the way for more flexible and versatile decision-making in real-world scenarios such as robotics and autonomous driving.
Link To Code: https://github.com/pickxiguapi/MODULI
Primary Area: Reinforcement Learning->Batch/Offline
Keywords: Generalization, Diffusion, Multi-Objective Reinforcement Learning, Slider Guidance
Submission Number: 6605