SoNIC: Safe Social Navigation with Adaptive Conformal Inference and Constrained Reinforcement Learning
Abstract: Reinforcement Learning (RL) has enabled social robots to generate trajectories without human-designed rules or interventions, making RL-based approaches better suited than hard-coded systems to generalizing to complex real-world scenarios. However, social navigation is a safety-critical task that requires robots to avoid collisions with pedestrians, and previous RL-based solutions fall short of this safety requirement in complex environments. To enhance the safety of RL policies, we propose SoNIC, to the best of our knowledge the first algorithm that integrates adaptive conformal inference (ACI) with constrained reinforcement learning (CRL) to learn safe policies for social navigation. More specifically, our method augments RL observations with ACI-generated nonconformity scores and, by incorporating safety constraints with spatial relaxation, provides explicit guidance for agents to use these uncertainty metrics to avoid safety-critical areas. Our method outperforms state-of-the-art baselines in both safety and adherence to social norms by a large margin and demonstrates much stronger robustness to out-of-distribution scenarios. Our code and video demos are available on our project website: https://sonic-social-nav.github.io/.
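As a rough illustration of the adaptive conformal inference component mentioned in the abstract, the sketch below follows the standard online ACI update (alpha_{t+1} = alpha_t + gamma * (alpha - err_t)) applied to pedestrian position prediction; the class, function, and parameter names are illustrative assumptions, not the paper's implementation, and serve only to show how an ACI-calibrated uncertainty value could be appended to an RL observation.

```python
# Minimal sketch of adaptive conformal inference (ACI) for pedestrian
# position prediction. Names and defaults are hypothetical; only the
# alpha-update rule follows the standard ACI formulation.
import numpy as np

class AdaptiveConformal:
    def __init__(self, alpha=0.1, gamma=0.01, window=50):
        self.alpha_target = alpha   # desired miscoverage level
        self.alpha_t = alpha        # adaptive level, updated online
        self.gamma = gamma          # step size of the ACI update
        self.window = window        # number of recent scores kept
        self.scores = []            # recent nonconformity scores

    def radius(self):
        """Uncertainty radius: (1 - alpha_t)-quantile of recent scores."""
        if not self.scores:
            return 0.0
        q = float(np.clip(1.0 - self.alpha_t, 0.0, 1.0))
        return float(np.quantile(self.scores[-self.window:], q))

    def update(self, predicted_pos, true_pos):
        """Record a new nonconformity score and adapt alpha_t."""
        score = float(np.linalg.norm(np.asarray(true_pos) - np.asarray(predicted_pos)))
        err = 1.0 if score > self.radius() else 0.0  # 1 if outside current set
        self.scores.append(score)
        self.alpha_t += self.gamma * (self.alpha_target - err)
        return score

# Toy usage: append the calibrated radius to a placeholder observation so
# the policy can condition on prediction uncertainty.
aci = AdaptiveConformal()
aci.update(predicted_pos=(1.0, 2.0), true_pos=(1.2, 1.9))
obs_augmented = np.concatenate([np.zeros(4), [aci.radius()]])
```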