Evaluating Domain-Shift Generalization of Liquid Neural Networks in Autonomous Driving

Published: 28 Feb 2026, Last Modified: 04 Apr 2026 · CAO Poster · CC BY 4.0
Keywords: recurrent neural networks, liquid neural networks, monitoring, domain-shift, interpretability, closed-loop evaluation, robustness, autonomous driving, adaptability
TL;DR: Liquid neural networks are more adaptable and generalize better than standard gated RNNs, demonstrating superior zero-shot transfer from indoor to outdoor driving due to more task-aligned and domain-stable internal representations.
Abstract: Specialized small models are attracting growing interest for autonomous driving subtasks such as steering control, where efficient learning and strong generalization are essential. Liquid neural networks have demonstrated promising performance in continuous control, yet their task-learning behavior and cross-domain generalization remain underexplored. In this work, we compare bio-inspired liquid recurrent architectures with gated recurrent networks by training them on an indoor small-scale driving dataset and evaluating their transfer to an outdoor, full-scale driving environment. Liquid models exhibit substantially stronger zero-shot transfer, whereas gated recurrent networks often fail to complete driving episodes without large deviations or crashes. To better understand these differences, we analyze internal representations using saliency-based and manifold learning techniques. Our results show that liquid models learn more task-aligned representations that remain stable across domains, indicating stronger task abstraction capabilities.
Submission Number: 69