Equivariant $Q$ Learning in Spatial Action Spaces

19 Jun 2021, 10:04 (edited 29 Oct 2021) · CoRL 2021 Poster
  • Keywords: Reinforcement Learning, Equivariance, Manipulation
  • Abstract: Recently, a variety of new equivariant neural network model architectures have been proposed that generalize better over rotational and reflectional symmetries than standard models. These models are relevant to robotics because many robotics problems can be expressed in a rotationally symmetric way. This paper focuses on equivariance over a visual state space and a spatial action space -- the setting where the robot action space includes a subset of $\rm{SE}(2)$. In this situation, we know a priori that rotations and translations in the state image should result in the same rotations and translations in the spatial action dimensions of the optimal policy. Therefore, we can use equivariant model architectures to make $Q$ learning more sample efficient. This paper identifies when the optimal $Q$ function is equivariant and proposes $Q$ network architectures for this setting. We show experimentally that this approach outperforms standard methods in a set of challenging manipulation problems.
  • Supplementary Material: zip
  • Poster: png
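The abstract's central constraint — a rotation of the state image should produce the same rotation of the $Q$ values over the spatial action dimensions — can be checked numerically. The sketch below is not the paper's architecture; it is a minimal illustration, assuming a toy setting where the "Q function" is a single convolution over the state image with a filter symmetrized over the four 90-degree rotations (the cyclic group $C_4$). All names (`state`, `q_map`, the 7x7 grid) are illustrative.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)

# Hypothetical 7x7 "state image" (e.g. a top-down view of the workspace).
state = rng.random((7, 7))

# Build a C4-invariant 3x3 filter by averaging a random filter over all
# four 90-degree rotations; convolving with such a filter commutes with
# 90-degree rotations of the input.
raw = rng.random((3, 3))
kernel = sum(np.rot90(raw, k) for k in range(4)) / 4.0

def q_map(s):
    """Toy Q function over a spatial action space: one Q value per pixel."""
    return correlate2d(s, kernel, mode="same", boundary="fill", fillvalue=0.0)

# Equivariance check: rotating the state rotates the entire Q map.
q = q_map(state)
q_of_rotated = q_map(np.rot90(state))
assert np.allclose(q_of_rotated, np.rot90(q))

# The greedy (argmax) spatial action moves with the rotation as well:
# np.rot90 sends grid index (i, j) to (N-1-j, i).
n = state.shape[0]
i, j = np.unravel_index(np.argmax(q), q.shape)
ri, rj = np.unravel_index(np.argmax(q_of_rotated), q_of_rotated.shape)
assert (ri, rj) == (n - 1 - j, i)
```

This is the sense in which an equivariant $Q$ network gains sample efficiency: each observed transition constrains the $Q$ values of all rotated copies of that state at once, so the network need not learn them independently.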
14 Replies
