Stabilizing Reinforcement Learning in Differentiable Simulation of Deformables

Published: 30 Sept 2024, Last Modified: 30 Oct 2024
D3S3 2024 Poster
License: CC BY-SA 4.0
Keywords: reinforcement learning, differentiable simulation
Abstract: Recent advances in GPU-based parallel simulation have enabled practitioners to collect large amounts of data and train complex control policies using deep reinforcement learning (RL) on commodity GPUs. However, such successes for RL in robotics have been limited to tasks sufficiently simulated by fast rigid-body dynamics. Simulation techniques for soft bodies are several orders of magnitude slower, thereby limiting the use of RL due to its sample-complexity requirements. To address this challenge, this paper presents both a novel RL algorithm and a simulation platform to enable scaling RL to tasks involving rigid bodies and deformables. We introduce Soft Analytic Policy Optimization (SAPO), a maximum-entropy first-order model-based actor-critic RL algorithm, which uses first-order analytic gradients from differentiable simulation to train a stochastic actor to maximize expected return and entropy. Alongside our approach, we develop Rewarped, a parallel differentiable multiphysics simulation platform that supports simulating various materials beyond rigid bodies. We show that SAPO outperforms baselines on challenging soft-body locomotion and dexterous deformable manipulation tasks that we re-implement in Rewarped.
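The core idea the abstract describes, training a stochastic maximum-entropy actor with first-order analytic gradients backpropagated through a differentiable simulator, can be illustrated with a minimal sketch. This is not the paper's implementation: the simulator is replaced by a toy one-step differentiable dynamics `x1 = x0 + a`, the reward is `-x1^2`, and all names (`train_max_entropy_actor`, `alpha`, etc.) are hypothetical.

```python
import math
import random

def train_max_entropy_actor(x0=2.0, alpha=0.01, lr=0.05,
                            steps=300, n_samples=256, seed=0):
    """Gradient-ascend E[return] + alpha * entropy for a Gaussian policy.

    Toy stand-in for the differentiable simulator: one dynamics step
    x1 = x0 + a with reward r = -x1^2, so analytic first-order gradients
    of the return with respect to the policy parameters are available.
    """
    rng = random.Random(seed)
    mu, log_sigma = 0.0, 0.0          # Gaussian policy parameters
    for _ in range(steps):
        sigma = math.exp(log_sigma)
        g_mu = g_ls = 0.0
        for _ in range(n_samples):
            eps = rng.gauss(0.0, 1.0)
            a = mu + sigma * eps      # reparameterized action sample
            x1 = x0 + a               # differentiable "simulation" step
            # Analytic gradients of r = -x1^2 through the dynamics:
            g_mu += -2.0 * x1                # dr/dmu
            g_ls += -2.0 * x1 * eps * sigma  # dr/dlog_sigma (chain rule)
        g_mu /= n_samples
        # Entropy of a Gaussian grows linearly in log_sigma, so its
        # gradient contributes a constant +alpha bonus.
        g_ls = g_ls / n_samples + alpha
        mu += lr * g_mu
        log_sigma += lr * g_ls
    return mu, math.exp(log_sigma)

mu, sigma = train_max_entropy_actor()
# mu approaches -x0 (the return-maximizing action mean), while the
# entropy bonus keeps sigma positive rather than collapsing the policy
# to a deterministic one.
```

The reparameterization `a = mu + sigma * eps` is what lets simulator gradients flow into the policy parameters directly, in contrast to zeroth-order policy-gradient estimators that only observe scalar returns.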
Submission Number: 25