Auditing Robot Learning for Safety and Compliance during Deployment

13 Jul 2021 (edited 26 Oct 2021) · CoRL 2021, Blue Sky
  • Keywords: Robot Learning, Auditing, Safety, Compliance, Alignment
  • TL;DR: Auditing robot learning algorithms for safety and compliance is important to ensure human compatibility
  • Abstract: Robots of the future will exhibit increasingly human-like and super-human intelligence across a myriad of tasks. They are also likely to fail, and to be non-compliant with human preferences, in increasingly subtle ways. Toward the goal of achieving autonomous robots, the robot learning community has made rapid strides in applying machine learning techniques to train robots through data and interaction. This makes the study of how best to audit these algorithms for compatibility with humans both pertinent and urgent. In this paper, we draw inspiration from the AI Safety and Alignment communities and make the case that we urgently need to consider ways to audit our robot learning algorithms: to check for failure modes, and to ensure that, when operating autonomously, they indeed behave as their human designers intend. We believe this is a challenging problem that will require effort from the entire robot learning community, and we do not attempt to provide a concrete auditing framework. Instead, we outline high-level guidance and a possible approach to formulating such a framework, which we hope will serve as a useful starting point for thinking about auditing in the context of robot learning.