Fearful Goal Generation for Reliable Policy Learning

Anonymous

07 Nov 2022 (modified: 05 May 2023), CoRL Agility Workshop 2022
Keywords: Fearful Goal Generation for Reliable Policy Learning
TL;DR: We use a crash prediction network to inform goal selection during training, producing policies that crash less often during evaluation.
Abstract: Reinforcement learning (RL) methods adapt to their environments by learning from experience, making them a promising direction for generalizable robots. However, training goal-conditioned RL policies for robots often requires careful reward-function tuning, in large part because of early-termination problems: giving the RL agent negative feedback (for example, about crashes) can make it overly cautious. Yet we still want agents that avoid such crashes, since crashes can damage robot hardware. We propose DEIMOS, a novel safety-aware automatic goal selector that requires no safety-constraint Jacobian or conditional-value-at-risk computation, no change to the observation space or reward shaping, and no extra neural network parameters at deployment, making it well suited to agents acting on complex robotic morphologies. We demonstrate the efficacy of our method on a challenging quadruped locomotion and manipulation task. We empirically show that with our method, policies are tuned to optimize for safety, producing populations of final agents that crash less often than populations trained with baseline curricula, with similarly improved reward performance.
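
The TL;DR states the core mechanism: a learned crash predictor informs goal selection during training. Below is a minimal, hypothetical Python/PyTorch sketch of that general idea, written under our own assumptions; the names (`CrashPredictor`, `select_goal`, `update_predictor`) and the safety-weighted sampling rule are illustrative and should not be read as the paper's actual DEIMOS implementation.

```python
# Hypothetical sketch: a crash-prediction network scores candidate goals,
# and training goals are sampled with probability biased toward goals
# predicted to be safe. All names and hyperparameters are assumptions,
# not the DEIMOS algorithm from the paper.
import torch
import torch.nn as nn


class CrashPredictor(nn.Module):
    """Predicts the probability that pursuing a goal ends in a crash."""

    def __init__(self, goal_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, goals: torch.Tensor) -> torch.Tensor:
        # Crash probability in [0, 1] for each candidate goal.
        return torch.sigmoid(self.net(goals)).squeeze(-1)


def select_goal(candidates: torch.Tensor,
                predictor: CrashPredictor,
                temperature: float = 1.0) -> torch.Tensor:
    """Sample one training goal, down-weighting goals likely to crash."""
    with torch.no_grad():
        p_crash = predictor(candidates)          # shape: (num_candidates,)
    # Higher predicted safety -> higher sampling probability.
    logits = (1.0 - p_crash) / temperature
    idx = torch.distributions.Categorical(logits=logits).sample()
    return candidates[idx]


def update_predictor(predictor: CrashPredictor,
                     optimizer: torch.optim.Optimizer,
                     goals: torch.Tensor,
                     crashed: torch.Tensor) -> float:
    """Fit the predictor on (goal, crashed) outcomes from rollouts."""
    loss = nn.functional.binary_cross_entropy(predictor(goals), crashed)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    predictor = CrashPredictor(goal_dim=3)
    optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    candidates = torch.randn(32, 3)              # candidate goals
    goal = select_goal(candidates, predictor)    # goal for this training episode
    # After the rollout, record whether the agent crashed and refit.
    update_predictor(predictor, optimizer, goal.unsqueeze(0),
                     torch.tensor([1.0]))        # 1.0 = episode ended in a crash
```

In this sketch the predictor only shapes which goals are practiced during training; the deployed policy does not consult it, which is consistent with the abstract's claim of no extra neural parameters at deployment.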