Fourier Features in Reinforcement Learning with Neural Networks

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Deep Reinforcement Learning, Fourier features, interference, sparsity, expressiveness, preprocessing
Abstract: In classic Reinforcement Learning (RL), encoding the inputs with a Fourier feature mapping is a standard way to facilitate generalization and add prior domain knowledge. In Deep RL, such input encodings are less common: since the network could, in principle, learn them on its own, they may seem less beneficial. In this paper, we present experiments on Multilayer Perceptrons (MLPs) indicating that even in Deep RL, Fourier features can lead to significant gains in both reward and sample efficiency. Furthermore, we observe that they increase robustness with respect to hyperparameters, lead to smoother policies, and benefit the training process by reducing learning interference, encouraging sparsity, and increasing the expressiveness of the learned features. According to our experiments, other input preprocessing schemes, such as random Fourier features or polynomial features, do not give similar advantages. A major bottleneck of conventional Fourier features, however, is that the number of features grows exponentially with the state dimension. We remedy this by proposing a simple, light version that uses only a linear number of features yet still retains the benefits. Our experiments cover shallow and deep, discrete and continuous, and on- and off-policy RL settings. To the best of our knowledge, this is the first reported application of Fourier features in Deep RL.
One-sentence Summary: In this first reported application of Fourier features to Deep Reinforcement Learning, we observe better performance, empirically study the effects on the learned network, and propose a light version that avoids the exponential explosion in the number of features.
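
For concreteness, here is a minimal NumPy sketch of the two encodings the abstract contrasts. The full Fourier basis follows the standard construction from classic RL (one cosine per integer coefficient vector, so the feature count is exponential in the state dimension). The `light_fourier_features` variant is only an illustrative guess at a linear-sized alternative, using per-dimension frequencies with no cross terms; the abstract does not spell out the paper's actual construction.

```python
import itertools
import numpy as np

def fourier_features(state, order=3):
    """Full Fourier basis: one cosine per integer coefficient vector
    c in {0, ..., order}^d, giving (order + 1)^d features -- exponential
    in the state dimension d. Assumes `state` is normalized to [0, 1]^d."""
    d = len(state)
    coeffs = np.array(list(itertools.product(range(order + 1), repeat=d)))
    return np.cos(np.pi * coeffs @ state)

def light_fourier_features(state, order=3):
    """Hypothetical 'light' variant (an assumption, not the paper's
    construction): per-dimension frequencies only, no cross terms,
    giving d * order features -- linear in d."""
    freqs = np.arange(1, order + 1)
    return np.cos(np.pi * np.outer(state, freqs)).ravel()

s = np.random.rand(4)                    # a 4-dimensional normalized state
print(fourier_features(s).shape)         # (256,) == (3 + 1)^4
print(light_fourier_features(s).shape)   # (12,)  == 4 * 3
```

Under these assumptions, the encoded vector would simply replace (or be concatenated with) the raw state before it is fed to the MLP.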