Out-of-distribution generalization of internal models is correlated with reward

Anonymous

09 Mar 2021, 17:17 (modified: 15 Jun 2022, 19:17) · SSL-RL 2021 Poster
Keywords: self-supervised learning, reinforcement learning, robustness
TL;DR: Performance of self-supervised and reinforcement learning models is correlated during evaluation on perturbed environments.
Abstract: We investigate the behavior of reinforcement learning (RL) agents under morphological distribution shifts. Similar to recent robustness benchmarks in computer vision, we train algorithms on selected RL environments and test transfer performance on perturbed environments. We specifically perturb the morphologies of popular RL agents by changing the length and mass of limbs, which in biological settings is a major challenge (e.g., after injury or during growth). In this setup, called PyBullet-M, we compare the performance of policies obtained by reward-driven learning with that of self-supervised models of the observed state-action transitions. We find that the out-of-distribution performance of self-supervised models is correlated with the degradation in reward.
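The evaluation protocol described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual benchmark code: the perturbation scales, reward values, and model errors below are synthetic placeholders standing in for measurements one would collect from the perturbed PyBullet-M environments.

```python
import numpy as np

# Hypothetical sketch of the evaluation protocol: for each morphological
# perturbation (e.g., a scaled limb length or mass), record the policy's
# average episode reward and the self-supervised transition model's
# prediction error on the same rollouts, then correlate the two.

rng = np.random.default_rng(0)

# Illustrative perturbation scales applied to limb length/mass
# (1.0 corresponds to the unperturbed training morphology).
scales = np.linspace(0.5, 1.5, 11)

# Placeholder measurements: reward tends to drop and model error tends to
# rise as the morphology moves away from the training distribution.
distance = np.abs(scales - 1.0)
reward = 1000.0 - 800.0 * distance + rng.normal(0, 20, size=scales.size)
model_error = 0.1 + 0.5 * distance + rng.normal(0, 0.01, size=scales.size)

# Pearson correlation between reward and self-supervised prediction error;
# the paper's finding corresponds to a strong negative value here.
r = np.corrcoef(reward, model_error)[0, 1]
print(f"correlation(reward, model error) = {r:.3f}")
```

A negative correlation in this setup means the self-supervised model's error could serve as a reward-free proxy for how far a deployed agent has drifted out of distribution.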