Reinforcement Learning with Neural Radiance Fields

Published: 23 Jun 2022, Last Modified: 03 Jul 2024
L-DOD 2022 Poster
Abstract: Finding effective representations for training reinforcement learning (RL) agents is a long-standing problem. This paper demonstrates that learning state representations from offline data with supervision from Neural Radiance Fields (NeRFs) can improve RL performance compared to other learned representations, or even to low-dimensional, hand-engineered state information. Specifically, we propose to pretrain an encoder that maps multiple image observations to a latent space describing the objects in the scene. A decoder built from a latent-conditioned NeRF provides the supervision signal for learning this latent space. An RL algorithm then operates on the learned latent space as its state representation. We call this approach NeRF-RL. Our experiments indicate that NeRF supervision leads to a latent space better suited for downstream RL tasks involving robotic object manipulation, such as hanging mugs on hooks, pushing objects, or opening doors.
Video:
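The pipeline the abstract describes (multi-view encoder, latent-conditioned NeRF decoder as a pretraining signal, RL on the latent) can be sketched roughly as below. This is a minimal illustrative toy in NumPy, not the paper's implementation: the function names, latent size, and the linear encoder/decoder are all assumptions, and a real NeRF decoder would use volume rendering along camera rays with trained MLPs.

```python
import numpy as np

# Toy sketch of the NeRF-RL pretraining idea (all names and sizes are
# illustrative assumptions, not the paper's code): an encoder maps several
# camera views to one latent z; a latent-conditioned field maps (3D point,
# latent) to density and colour; a photometric reconstruction loss on the
# rendered output would supervise the latent space used by the RL agent.

rng = np.random.default_rng(0)
D_LATENT = 8  # assumed latent dimensionality, for illustration only

def encode(views):
    """Toy encoder: project each flattened view, then average across views."""
    W = rng.standard_normal((views.shape[-1], D_LATENT)) * 0.01
    per_view = views @ W          # (n_views, D_LATENT)
    return per_view.mean(axis=0)  # permutation-invariant aggregate over views

def decode(points, z):
    """Toy latent-conditioned field: (3D points, latent) -> (density, rgb)."""
    z_tiled = np.broadcast_to(z, (len(points), D_LATENT))
    h = np.concatenate([points, z_tiled], axis=1)
    W = rng.standard_normal((h.shape[1], 4)) * 0.01
    out = h @ W
    density = np.log1p(np.exp(out[:, 0]))     # softplus keeps density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[:, 1:]))   # sigmoid keeps colours in [0, 1]
    return density, rgb

views = rng.standard_normal((3, 64))   # 3 flattened camera views of the scene
z = encode(views)                      # latent state handed to the RL policy
pts = rng.standard_normal((16, 3))     # sample points (stand-in for ray samples)
density, rgb = decode(pts, z)

# In pretraining, a photometric loss against target pixels would
# backpropagate through decode() into encode(); here it is just computed.
target = rng.random((16, 3))
loss = np.mean((rgb - target) ** 2)
```

After pretraining, only `encode` is kept: the RL algorithm treats `z` as its state, which is the "NeRF as supervision" idea — the decoder exists solely to shape the latent space.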
Community Implementations: 1 code implementation (CatalyzeX)