Keywords: Motion Planning, Reinforcement Learning
TL;DR: Geometric symmetries in robotic motion planning tasks can be exploited to improve the performance of value-based learning.
Abstract: Motion planning tasks like catching, interception, and manipulation demand high-frequency perception and control to achieve the agility the task requires. Reinforcement learning (RL) can produce such solutions, but is often difficult to train and to generalize.
However, by exploiting the intrinsic geometric properties of agile task workspaces, we can enhance the performance of an RL policy and generalize it to new tasks. In this work we leverage geometric symmetry to enhance the performance of a value-based actor-critic policy (Advantage Actor-Critic, A2C). Our method applies a geometric transformation to the observation during execution, providing the policy an alternate perspective of the current state. We show the effect of symmetry exploitation on a trained A2C model on a WidowX reach task.
The results show that with symmetry exploitation, a trained model improves its performance and generalizes to new tasks.
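One way to realize the execution-time symmetry exploitation described above is to query the critic on both the original observation and its mirrored counterpart, then act from whichever view the critic values more highly, mapping the chosen action back through the same symmetry. The sketch below illustrates this idea with a simple reflection; the index layout, the `policy`/`value_fn` callables, and the specific reflection are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Assumed layout: which observation/action entries are y-coordinates
# that flip sign under a reflection about the workspace's plane of
# symmetry. These indices are hypothetical, chosen for illustration.
Y_OBS_IDX = [1, 4]   # assumed y-coordinate slots in the observation
Y_ACT_IDX = [1]      # assumed y-coordinate slot in the action

def reflect_obs(obs):
    """Reflect an observation across the assumed symmetry plane."""
    obs = obs.copy()
    obs[Y_OBS_IDX] *= -1.0
    return obs

def reflect_action(action):
    """Map an action computed in the mirrored frame back to the original."""
    action = action.copy()
    action[Y_ACT_IDX] *= -1.0
    return action

def symmetric_act(policy, value_fn, obs):
    """At execution time, evaluate the critic on the original and the
    mirrored observation, act from the higher-valued view, and reflect
    the action back into the original frame when the mirror is used."""
    mirrored = reflect_obs(obs)
    if value_fn(mirrored) > value_fn(obs):
        return reflect_action(policy(mirrored))
    return policy(obs)
```

Because the reflection is an involution, applying it twice recovers the original state, so no extra bookkeeping is needed beyond mirroring the action once.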