Monocular Event-Based Vision for Dodging Static Obstacles with a Quadrotor

Published: 05 Sept 2024, Last Modified: 16 Oct 2024, CoRL 2024, CC BY 4.0
Keywords: event-based vision, learning for control, simulation-to-real transfer, aerial robotics
TL;DR: We demonstrate the dodging of trees with an event camera onboard a fast-flying quadrotor.
Abstract: We present the first events-only static-obstacle avoidance method for a quadrotor, using just an onboard, monocular event camera. Quadrotors are capable of fast and agile flight in cluttered environments when piloted manually, but vision-based autonomous flight in unknown environments is difficult in part due to sensor limitations of onboard cameras. Event cameras promise nearly zero motion blur and high dynamic range, but produce a very large volume of events under significant ego-motion and further lack a continuous-time sensor model in simulation, making direct sim-to-real transfer infeasible. By leveraging depth prediction as an intermediate step in our learning framework, we can pre-train a reactive events-to-control obstacle avoidance policy in simulation, and then fine-tune the perception component with limited real-world events-depth data to achieve dodging in indoor and outdoor settings. We demonstrate this across two quadrotor-event camera platforms in multiple settings, and find, perhaps counter-intuitively, that low speeds (1 m/s) make dodging harder and more prone to collisions, while high speeds (5 m/s) result in better depth estimation and dodging. We also find that success rates in outdoor scenes can be significantly higher than in certain indoor scenes.
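The paper's implementation is not reproduced on this page, but a minimal PyTorch sketch of the two-stage framework the abstract describes (events in, an intermediate predicted depth map, a reactive velocity command out, with only the perception module fine-tuned on real events-depth pairs) might look as follows. All module names, network sizes, and the two-channel polarity event representation are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class EventDepthEncoder(nn.Module):
    """Perception: predicts a coarse depth map from an event-count image."""
    def __init__(self, in_channels: int = 2):  # assumed pos/neg polarity channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),  # single-channel depth prediction
        )

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        return self.net(events)

class ReactivePolicy(nn.Module):
    """Control: maps the intermediate depth map to a velocity command (vx, vy, vz)."""
    def __init__(self):
        super().__init__()
        self.encoder = EventDepthEncoder()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3),
        )

    def forward(self, events: torch.Tensor):
        depth = self.encoder(events)
        return self.head(depth), depth

# Stage 1 (simulation): train encoder + control head end-to-end on simulated data.
# Stage 2 (real world, sketched below): freeze the control head and fine-tune only
# the perception encoder on limited real events-depth pairs.
policy = ReactivePolicy()
for p in policy.head.parameters():
    p.requires_grad = False
optim = torch.optim.Adam(policy.encoder.parameters(), lr=1e-4)

events = torch.randn(4, 2, 128, 128)   # placeholder batch of event frames
depth_gt = torch.randn(4, 1, 32, 32)   # placeholder real depth supervision
optim.zero_grad()
_, depth_pred = policy(events)
loss = nn.functional.l1_loss(depth_pred, depth_gt)
loss.backward()
optim.step()
```

Keeping depth as an explicit intermediate is what makes this split possible: the simulation-trained control head never sees raw events, so only the events-to-depth encoder needs real-world adaptation.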
Supplementary Material: zip
Website: https://www.anishbhattacharya.com/research/evfly
Publication Agreement: pdf
Student Paper: yes
Submission Number: 539