Out-of-distribution generalization of internal models is correlated with reward

Mar 09, 2021 (edited Apr 26, 2021) · ICLR 2021 Workshop SSL-RL Blind Submission · Readers: Everyone
  • Keywords: self-supervised learning, reinforcement learning, robustness
  • TL;DR: Performance of self-supervised and reinforcement learning models is correlated during evaluation on perturbed environments.
  • Abstract: We investigate the behavior of reinforcement learning (RL) agents under morphological distribution shifts. Similar to recent robustness benchmarks in computer vision, we train algorithms on selected RL environments and test transfer performance on perturbed environments. We specifically perturb the morphologies of popular RL agents by changing the length and mass of limbs, which in biological settings is a major challenge (e.g., after injury or during growth). In this setup, called PyBullet-M, we compare the performance of policies obtained by reward-driven learning with self-supervised models of the observed state-action transitions. We find that the out-of-distribution performance of self-supervised models is correlated with the degradation in reward.
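The central measurement described in the abstract — correlating self-supervised model performance with reward degradation across perturbed environments — can be sketched as follows. This is an illustrative sketch only: the variable names, the example numbers, and the use of Pearson correlation are assumptions, not the authors' exact protocol.

```python
import numpy as np

def pearson_correlation(x, y):
    """Pearson correlation coefficient between two 1-D sequences."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical per-perturbation measurements: each entry corresponds to one
# perturbed environment (e.g., one limb length/mass change).
reward_drop = [0.05, 0.20, 0.35, 0.50, 0.70]  # degradation in policy reward
model_error = [0.02, 0.10, 0.18, 0.30, 0.41]  # self-supervised prediction error

r = pearson_correlation(reward_drop, model_error)
print(f"correlation between model error and reward drop: r = {r:.3f}")
```

A strongly positive `r` across perturbations would reflect the paper's finding that the self-supervised model's out-of-distribution error tracks the policy's reward degradation.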