Back to Base: Towards Hands-Off Learning via Safe Resets with Reach-Avoid Safety Filters
Abstract: Designing controllers that accomplish tasks while guaranteeing safety constraints remains a significant challenge. We often want an agent to perform well in a nominal task, such as environment
exploration, while ensuring it can avoid unsafe states and return to a desired target by a specific
time. In particular, we are motivated by the setting of safe, efficient, hands-off training for reinforcement learning in the real world. By enabling a robot to safely and autonomously reset to a
desired region (e.g., a charging station) without human intervention, we can enhance efficiency and
facilitate training. Safety filters, such as those based on control barrier functions, decouple safety
from nominal control objectives and rigorously guarantee safety. Despite their success, constructing these functions for general nonlinear systems with control constraints and system uncertainties
remains an open problem. This paper introduces a safety filter obtained from the value function
associated with the reach-avoid problem. The proposed safety filter minimally modifies the nominal controller while avoiding unsafe regions and guiding the system back to the desired target
set. By preserving policy performance while allowing safe resetting, we enable efficient hands-off reinforcement learning and advance the feasibility of safe training for real-world robots. We
demonstrate our approach using a modified version of soft actor-critic to safely train a swing-up
policy on a modified cartpole stabilization problem.
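As a rough illustration of the idea (a sketch, not the paper's exact construction), a least-restrictive filter built from a reach-avoid value function $V$ could pass the nominal control through whenever the value indicates the target set is still reachable without entering the unsafe set, and otherwise fall back to the optimal reach-avoid policy:

% Illustrative notation only: $V$, $u_{\mathrm{nom}}$, $u_{\mathrm{RA}}^{*}$, and the margin $\varepsilon$ are placeholder symbols, not the paper's.
\[
  u_{\mathrm{filter}}(x) =
  \begin{cases}
    u_{\mathrm{nom}}(x), & V(x) \geq \varepsilon, \\
    u_{\mathrm{RA}}^{*}(x), & V(x) < \varepsilon,
  \end{cases}
\]

where $u_{\mathrm{RA}}^{*}$ denotes the policy attaining the reach-avoid value and $\varepsilon > 0$ is a small safety margin; under this kind of switching rule the nominal controller is overridden only when intervention is needed to keep the return to the target set feasible.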