Deception Game: Closing the Safety-Learning Loop in Interactive Robot Autonomy

Published: 30 Aug 2023, Last Modified: 17 Oct 2023
CoRL 2023 Poster
Keywords: Learning-Aware Safety Analysis, Active Information Gathering, Adversarial Reinforcement Learning
TL;DR: A novel safety analysis framework that closes the loop between the robot's prediction-planning-control pipeline and its runtime learning process.
Abstract: An outstanding challenge for the widespread deployment of robotic systems like autonomous vehicles is ensuring safe interaction with humans without sacrificing performance. Existing safety methods often neglect the robot’s ability to learn and adapt at runtime, leading to overly conservative behavior. This paper proposes a new closed-loop paradigm for synthesizing safe control policies that explicitly account for the robot’s evolving uncertainty and its ability to quickly respond to future scenarios as they arise, by jointly considering the physical dynamics and the robot’s learning algorithm. We leverage adversarial reinforcement learning for tractable safety analysis under high-dimensional learning dynamics and demonstrate our framework’s ability to work with both Bayesian belief propagation and implicit learning through large pre-trained neural trajectory predictors.
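The abstract's central idea is to fold the robot's runtime learning into the state over which safety is analyzed, so that the safety game is played on an augmented state of physical variables plus the robot's belief. The sketch below is not the authors' implementation; it is a minimal, hypothetical illustration assuming a 1-D robot-human interaction, a discrete human-intent belief updated by Bayes' rule, and a greedy adversary standing in for the learned RL adversary. All function names, models, and parameters are illustrative.

```python
# Hedged sketch (not the paper's code): the robot's "learning dynamics" here is a
# Bayesian belief over a discrete human intent, and the belief update is folded
# into the joint step so policies (or an adversary) can reason about how future
# observations will sharpen the belief. Everything below is hypothetical.
import numpy as np

INTENTS = np.array([-1.0, 0.0, 1.0])  # hypothetical discrete human intents

def human_action_likelihood(a_h, intents, sigma=0.3):
    """Noisily-rational human: observed action is Gaussian around each intent."""
    return np.exp(-0.5 * ((a_h - intents) / sigma) ** 2)

def belief_update(belief, a_h):
    """Bayes rule: a stand-in for the robot's runtime learning algorithm."""
    posterior = belief * human_action_likelihood(a_h, INTENTS)
    return posterior / posterior.sum()

def augmented_step(x, belief, a_r, a_h, dt=0.1):
    """One step of the joint (physical state, belief) dynamics.

    x = [robot position, human position] in 1-D for illustration. The safety
    game is played over this augmented state, so a policy can exploit the fact
    that observing a_h reduces its uncertainty at the next step.
    """
    x_next = x + dt * np.array([a_r, a_h])
    return x_next, belief_update(belief, a_h)

def adversarial_rollout(x0, belief0, robot_policy, horizon=30):
    """Worst-case rollout: a greedy adversary picks human actions that close the
    robot-human gap fastest, standing in for a learned RL adversary."""
    x, b = x0.copy(), belief0.copy()
    min_gap = np.inf
    for _ in range(horizon):
        a_r = robot_policy(x, b)
        a_h = INTENTS[np.argmin(np.abs((x[1] + 0.1 * INTENTS) - x[0]))]
        x, b = augmented_step(x, b, a_r, a_h)
        min_gap = min(min_gap, abs(x[0] - x[1]))
    return min_gap  # larger values indicate the rollout kept more separation

# Example policy: hedges more while the belief over the human's intent is diffuse.
def cautious_policy(x, belief):
    entropy = -(belief * np.log(belief + 1e-9)).sum()
    return np.clip(np.sign(x[0] - x[1]) * (0.5 + entropy), -1.0, 1.0)

print(adversarial_rollout(np.array([0.0, 2.0]), np.ones(3) / 3, cautious_policy))
```

A design point this toy example mirrors: because the belief is part of the state being rolled out, a policy evaluated against the adversary is credited for information it will gain later, rather than being forced to act as if today's uncertainty persists forever, which is the source of the conservatism the abstract describes.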
Student First Author: yes
Supplementary Material: zip
Instructions: I have read the instructions for authors (https://corl2023.org/instructions-for-authors/)
Website: https://saferoboticslab.github.io/Belief-Game/
Publication Agreement: pdf
Poster Spotlight Video: mp4