Causal Navigation by Continuous-time Neural Networks

Published: 09 Nov 2021 · Last Modified: 14 Jul 2024 · NeurIPS 2021 Poster · Readers: Everyone
Keywords: Continuous-time neural networks, causality, neural ODEs, continuous-depth models, visual navigation
Abstract: Imitation learning enables high-fidelity, vision-based learning of policies within rich, photorealistic environments. However, such techniques often rely on traditional discrete-time neural models and struggle to generalize under domain shift because they fail to account for the causal relationships between the agent and the environment. In this paper, we propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks, and demonstrate their advantages over discrete-time counterparts. We evaluate our method on visual-control learning for drones across a series of complex tasks, ranging from short- and long-horizon navigation to chasing static and dynamic objects through photorealistic environments. Our results demonstrate that causal continuous-time deep models perform robust navigation tasks where advanced recurrent models fail. These models learn complex causal control representations directly from raw visual inputs and scale, via imitation learning, to a variety of tasks.
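The models studied here belong to the family of continuous-time neural networks, whose hidden state evolves according to an ODE rather than a discrete update. Below is a minimal PyTorch sketch of this model family, not the authors' exact architecture: the class name `CTRNNCell`, the explicit Euler solver, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CTRNNCell(nn.Module):
    """Continuous-time RNN cell: dx/dt = -x / tau + tanh(W [I, x] + b).

    Illustrative sketch only; names and solver choice are assumptions.
    """
    def __init__(self, input_size: int, hidden_size: int, n_steps: int = 6):
        super().__init__()
        self.backbone = nn.Linear(input_size + hidden_size, hidden_size)
        # Learnable per-neuron time constants, kept positive via softplus.
        self.log_tau = nn.Parameter(torch.zeros(hidden_size))
        self.n_steps = n_steps

    def forward(self, inputs: torch.Tensor, state: torch.Tensor,
                elapsed: float = 1.0) -> torch.Tensor:
        # Fixed-step explicit Euler integration over `elapsed` time units.
        dt = elapsed / self.n_steps
        tau = nn.functional.softplus(self.log_tau)
        for _ in range(self.n_steps):
            f = torch.tanh(self.backbone(torch.cat([inputs, state], dim=-1)))
            state = state + dt * (-state / tau + f)
        return state
```

Because the state is defined at every point in time, such a cell can be queried at irregular intervals by varying `elapsed`, which is one practical difference from discrete-time RNNs.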
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
TL;DR: We show that a particular subclass of continuous-time neural networks forms dynamic causal models that can perform interpretable, causal, and robust visual navigation.
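As a hypothetical illustration of how such a model could be trained for visual navigation by imitation (behavior cloning), the sketch below encodes each frame with a small CNN, evolves the hidden state with the `CTRNNCell` defined above, and regresses expert control commands. `VisualPolicy`, the layer sizes, and the dummy tensors are assumptions for illustration, not the linked repository's actual API.

```python
class VisualPolicy(nn.Module):
    """Frames -> CNN features -> continuous-time cell -> control commands."""
    def __init__(self, hidden_size: int = 32, n_actions: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cell = CTRNNCell(input_size=32, hidden_size=hidden_size)
        self.head = nn.Linear(hidden_size, n_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) -> actions: (batch, time, n_actions)
        b, t = frames.shape[:2]
        state = frames.new_zeros(b, self.cell.backbone.out_features)
        outputs = []
        for i in range(t):
            feats = self.encoder(frames[:, i])
            state = self.cell(feats, state)
            outputs.append(self.head(state))
        return torch.stack(outputs, dim=1)

# Behavior cloning step: regress expert drone commands from a video batch
# (random tensors stand in for real data here).
policy = VisualPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
frames = torch.randn(2, 8, 3, 64, 64)   # dummy video batch
expert = torch.randn(2, 8, 4)           # dummy expert actions
opt.zero_grad()
loss = nn.functional.mse_loss(policy(frames), expert)
loss.backward()
opt.step()
```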
Supplementary Material: pdf
Code: https://github.com/mit-drl/deepdrone
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/causal-navigation-by-continuous-time-neural/code)