Learning Differentiable and Safe Multi-Robot Control for Generalization to Novel Environments using Control Barrier Functions
Abstract: Ensuring safety in the navigation of multi-robot systems using control barrier functions has traditionally relied on a class-$\mathcal{K}$ function pre-tuned for a specific environment. Such pre-tuned class-$\mathcal{K}$ functions generalize poorly to new environments. In this work, we address this challenge for control-affine systems with actuation constraints. Our key insight is that implicitly incorporating environment-specific information into the class-$\mathcal{K}$ function enhances generalization to unseen environments. We introduce a parameterization of the class-$\mathcal{K}$ functions of a multi-robot system using a Graph Neural Network (GNN), and formulate the safety conditions and the safe control problem with control barrier functions built on this GNN-based class-$\mathcal{K}$ function. The GNN is conditioned on environmental information and on the information each robot perceives in its local neighborhood, which enables decentralized execution. To learn the class-$\mathcal{K}$ functions and the decentralized control policy end to end, we embed the optimization problem that computes safe controls as a differentiable optimization layer, so that gradients flow through the safe-control computation back to the class-$\mathcal{K}$ parameterization. Simulation results demonstrate that our method generates scalable and generalizable safe control policies that adapt to novel environments.
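For concreteness, a standard CBF-QP safety filter of the kind described above, with the fixed class-$\mathcal{K}$ function replaced by a learned $\alpha_\theta$, can be written as follows (a sketch; the exact formulation used in this work may differ). For a control-affine robot $\dot{x}_i = f(x_i) + g(x_i)\,u_i$ with barrier function $h_i$,

$$
u_i^{*} \;=\; \arg\min_{u_i \in \mathcal{U}_i} \; \lVert u_i - u_i^{\mathrm{ref}} \rVert^2
\quad \text{s.t.} \quad
L_f h_i(x) + L_g h_i(x)\,u_i + \alpha_\theta\!\big(h_i(x)\big) \;\ge\; 0,
$$

where $\mathcal{U}_i$ encodes the actuation constraints and $\alpha_\theta$ is conditioned on environment features and the robot's locally perceived neighborhood.

A minimal sketch of this pipeline is given below, assuming a linear class-$\mathcal{K}$ function $\alpha_\theta(h) = a_i\,h$ whose slope $a_i$ is predicted per robot by a small message-passing network, and using cvxpylayers to embed the QP as a differentiable layer. The architecture, feature choices, slack relaxation, and constraint form are illustrative assumptions, not the authors' implementation.

```python
import cvxpy as cp
import torch
import torch.nn as nn
from cvxpylayers.torch import CvxpyLayer

M = 2        # control dimension per robot (assumption: e.g., planar velocity control)
U_MAX = 1.0  # actuation limit (assumption)


def build_cbf_qp_layer(m: int, u_max: float) -> CvxpyLayer:
    """Differentiable CBF-QP: min ||u - u_ref||^2  s.t.  Lgh @ u + c >= 0, |u| <= u_max.

    The scalar c stands in for L_f h(x) + alpha_theta(h(x)); gradients flow
    through u_ref, Lgh, and c back to the networks that produced them.
    """
    u = cp.Variable(m)
    s = cp.Variable(1, nonneg=True)  # slack (an assumption) keeps the QP feasible under the box constraint
    u_ref = cp.Parameter(m)
    Lgh = cp.Parameter(m)
    c = cp.Parameter(1)
    objective = cp.Minimize(cp.sum_squares(u - u_ref) + 100.0 * cp.sum_squares(s))
    constraints = [Lgh @ u + c + s >= 0, cp.norm(u, "inf") <= u_max]
    problem = cp.Problem(objective, constraints)
    return CvxpyLayer(problem, parameters=[u_ref, Lgh, c], variables=[u, s])


class ClassKSlopeGNN(nn.Module):
    """One round of message passing that outputs a positive slope a_i per robot,
    so that alpha_theta(h) = a_i * h is a (linear) class-K function."""

    def __init__(self, feat_dim: int, env_dim: int, hidden: int = 64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.upd = nn.Sequential(nn.Linear(feat_dim + hidden + env_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, env, adj):
        # x: (N, feat_dim) robot features, env: (N, env_dim) environment features,
        # adj: (N, N) 0/1 adjacency over each robot's local neighborhood.
        N = x.shape[0]
        pairs = torch.cat([x.unsqueeze(1).expand(N, N, -1),
                           x.unsqueeze(0).expand(N, N, -1)], dim=-1)
        messages = self.msg(pairs) * adj.unsqueeze(-1)   # mask out non-neighbors
        aggregated = messages.sum(dim=1)                 # aggregate over local neighborhood
        slope = self.upd(torch.cat([x, aggregated, env], dim=-1))
        return nn.functional.softplus(slope) + 1e-3      # keep the slope strictly positive


if __name__ == "__main__":
    N, FEAT, ENV = 4, 6, 3
    gnn = ClassKSlopeGNN(FEAT, ENV)
    qp_layer = build_cbf_qp_layer(M, U_MAX)

    x = torch.randn(N, FEAT)          # placeholder robot features
    env = torch.randn(N, ENV)         # placeholder environment features
    adj = (torch.rand(N, N) > 0.5).float()
    h = torch.rand(N, 1)              # barrier values (placeholders)
    Lfh = torch.randn(N, 1)           # Lie derivatives (placeholders)
    Lgh = torch.randn(N, M)
    u_ref = torch.randn(N, M)         # nominal / goal-reaching controls

    a = gnn(x, env, adj)              # learned per-robot class-K slopes
    c = Lfh + a * h                   # L_f h + alpha_theta(h)
    u_safe, _ = qp_layer(u_ref, Lgh, c)  # batched over the N robots

    # End-to-end training signal, e.g. penalizing deviation from the nominal controls.
    loss = ((u_safe - u_ref) ** 2).mean()
    loss.backward()                   # gradients reach the GNN through the QP layer
```

In this sketch the differentiable QP layer plays the role of the optimization layer described in the abstract: the safe control is computed inside the forward pass, and the training loss backpropagates through it to the GNN that produces the class-$\mathcal{K}$ slopes.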