Feature-Attending Recurrent Modules for Generalization in Reinforcement Learning

Published: 06 Nov 2023, Last Modified: 06 Nov 2023. Accepted by TMLR.
Abstract: Many important tasks are defined in terms of objects. To generalize across these tasks, a reinforcement learning (RL) agent needs to exploit the structure that the objects induce. Prior work has either hard-coded object-centric features, used complex object-centric generative models, or updated state using local spatial features. However, these approaches have had limited success in enabling general RL agents. Motivated by this, we introduce “Feature-Attending Recurrent Modules” (FARM), an architecture for learning state representations that relies on simple, broadly applicable inductive biases for capturing spatial and temporal regularities. FARM learns a state representation that is distributed across multiple modules that each attend to spatiotemporal features with an expressive feature attention mechanism. We show that this improves an RL agent’s ability to generalize across object-centric tasks. We study task suites in both 2D and 3D environments and find that FARM generalizes better than competing architectures that leverage attention or multiple modules.
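To make the abstract's core idea concrete, here is a minimal NumPy sketch of a distributed state representation built from several recurrent modules, each gating a shared spatial feature map through its own feature attention. The specific parameterization (a sigmoid channel gate driven by the module's state, mean spatial pooling, and a tanh recurrent update) is a hypothetical simplification for illustration, not the exact architecture from the paper.

```python
import numpy as np

def feature_attention(query, features, W):
    """Modulate each feature channel by a query-conditioned sigmoid gate.

    query:    (d,)  recurrent state of one module
    features: (H, W, C) shared spatial feature map (e.g. a CNN output)
    W:        (C, d) projection from query to per-channel logits
    """
    logits = W @ query                       # (C,) channel logits
    gate = 1.0 / (1.0 + np.exp(-logits))     # per-channel sigmoid gate
    return features * gate                   # broadcast over spatial dims

class RecurrentModule:
    """One of several modules; each keeps its own recurrent state."""
    def __init__(self, d, c, rng):
        self.h = np.zeros(d)
        self.Wq = rng.standard_normal((c, d)) * 0.1       # query -> channel logits
        self.Wh = rng.standard_normal((d, d + c)) * 0.1   # recurrent update

    def step(self, features):
        attended = feature_attention(self.h, features, self.Wq)
        pooled = attended.mean(axis=(0, 1))               # (C,) spatial pooling
        self.h = np.tanh(self.Wh @ np.concatenate([self.h, pooled]))
        return self.h

# The agent's state is the concatenation of all module states.
rng = np.random.default_rng(0)
modules = [RecurrentModule(d=8, c=16, rng=rng) for _ in range(4)]
obs_features = rng.standard_normal((5, 5, 16))            # placeholder CNN features
state = np.concatenate([m.step(obs_features) for m in modules])  # shape (32,)
```

Because each module attends to the feature map with its own gate, different modules can specialize in different spatiotemporal regularities while sharing one perceptual backbone.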
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have changed the text in the main pdf to improve clarity. We have added an appendix with a description of all baselines in a unified notation. We also detail how all baselines are implemented.
Code: https://github.com/wcarvalho/farm
Supplementary Material: zip
Assigned Action Editor: ~Dinesh_Jayaraman2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 814