Which model to trust: assessing the influence of models on the performance of reinforcement learning algorithms for continuous control tasks

Sep 29, 2021 (edited Oct 05, 2021) · ICLR 2022 Conference · Desk Rejected Submission
  • Keywords: reinforcement learning, model-based reinforcement learning, deep learning, bayesian deep learning, gaussian processes, continuous control, model uncertainty
  • Abstract: The need for algorithms that can solve Reinforcement Learning (RL) problems in few trials has motivated the advent of model-based RL methods. The reported performance of model-based algorithms has increased dramatically in recent years. However, it is not clear how much of this progress is due to improved algorithms and how much is due to improved models. While several modeling options are available when applying a model-based approach, the distinguishing traits and particular strengths of the different models remain unclear. The main contribution of this work lies precisely in assessing the influence of the model on the performance of RL algorithms. A set of commonly adopted models is established for the purpose of comparison: Neural Networks (NNs), ensembles of NNs, two approximations of Bayesian NNs (BNNs), namely the Concrete Dropout NN and Anchored Ensembling, and Gaussian Processes (GPs). The comparison is evaluated on a suite of continuous control benchmark tasks. Our results reveal that significant differences in model performance do exist, with the Concrete Dropout NN consistently achieving superior performance. We summarize these differences for the benefit of the modeler and suggest tailoring the model choice to the standards required by each specific application.
  • Supplementary Material: zip
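The abstract's comparison hinges on models that provide uncertainty estimates alongside predictions, e.g. ensembles of NNs. As a minimal sketch of that idea (not the paper's implementation): the snippet below fits an ensemble of linear dynamics models on bootstrap resamples of toy transition data and uses the spread of the members' predictions as an epistemic-uncertainty estimate. The linear model class, the toy dynamics, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy transition data for a 2-D system: next_state = A_true @ state + noise.
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.95]])
states = rng.normal(size=(200, 2))
next_states = states @ A_true.T + 0.01 * rng.normal(size=(200, 2))

def fit_member(X, Y, rng):
    """Fit one linear dynamics model on a bootstrap resample of the data."""
    idx = rng.integers(0, len(X), size=len(X))
    A_hat, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
    return A_hat  # shape (2, 2); prediction is state @ A_hat

# Bootstrap ensemble: disagreement between members reflects model uncertainty.
ensemble = [fit_member(states, next_states, rng) for _ in range(5)]

def predict(state):
    """Return ensemble mean prediction and per-dimension epistemic std."""
    preds = np.stack([state @ A_hat for A_hat in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

mean, std = predict(np.array([1.0, -0.5]))
```

A model-based RL algorithm would typically consume both outputs: the mean for planning or policy rollouts, and the std to penalize or avoid state regions where the model is unreliable.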