DeepDFA: Learning and Integration of Regular Languages with Deep Learning

Published: 29 Aug 2025, Last Modified: 29 Aug 2025, NeSy 2025 - Phase 2 Poster, CC BY 4.0
Keywords: temporal logic, reinforcement learning, automata learning, neurosymbolic integration
Abstract: Most Neuro-Symbolic (NeSy) systems in the current literature are not designed to handle sequential tasks—scenarios where logical rules unfold over time and are best represented through formalisms such as Regular Expressions, Deterministic Finite Automata (DFAs), or Linear Temporal Logic over finite traces (LTLf). To address this gap, we propose DeepDFA, a general framework for integrating temporal logical knowledge into neural systems. DeepDFA is a continuous and differentiable logic layer capable of representing temporal rules expressed as DFAs or Moore Machines. Conceptually, it acts as a hybrid between a Recurrent Neural Network (RNN) and a symbolic automaton. Built upon the theory of Probabilistic Finite Automata (PFA), DeepDFA allows temporal logic to be encoded as neural components that are both trainable and compatible with gradient-based optimization. This enables two main capabilities: (i) Temporal knowledge injection, where symbolic knowledge is embedded as fixed parameters, and (ii) Temporal rule learning, where the automaton is trained from data. We show that we can use DeepDFA to advance the state of the art across multiple domains, including non-Markovian reinforcement learning, autoregressive sequence generation, and automata induction.
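For illustration, below is a minimal sketch of what a PFA-based differentiable automaton layer of this kind might look like in PyTorch. It is not the authors' implementation: the class name, parameter layout, and initial-state convention are assumptions made for the example. The key idea it demonstrates is the one described in the abstract: each input symbol selects a stochastic transition matrix, the layer propagates a probability distribution over automaton states like an RNN hidden state, and the same parameters can either be trained from data (rule learning) or frozen to near-one-hot values to encode a known DFA (knowledge injection).

```python
# Hypothetical sketch of a differentiable DFA layer in the spirit of DeepDFA.
import torch
import torch.nn as nn

class ProbabilisticDFALayer(nn.Module):
    """Recurrent layer whose parameters encode a probabilistic finite automaton.

    Each input symbol indexes a row-stochastic transition matrix; the layer
    propagates a probability distribution over automaton states, so it can be
    trained by gradient descent or frozen to inject a known DFA.
    """

    def __init__(self, num_states: int, num_symbols: int):
        super().__init__()
        # Unnormalized transition scores: one (S x S) matrix per input symbol.
        self.transition_logits = nn.Parameter(
            torch.randn(num_symbols, num_states, num_states)
        )
        # Per-state acceptance scores (Moore-machine-style output).
        self.output_logits = nn.Parameter(torch.randn(num_states))
        self.num_states = num_states

    def forward(self, symbols: torch.Tensor) -> torch.Tensor:
        # symbols: (batch, seq_len) integer-encoded input sequences.
        batch, seq_len = symbols.shape
        # Softmax over target states makes each transition matrix stochastic.
        transitions = torch.softmax(self.transition_logits, dim=-1)
        # Assumption: the automaton starts in state 0 with probability 1.
        state = torch.zeros(batch, self.num_states, device=symbols.device)
        state[:, 0] = 1.0
        for t in range(seq_len):
            # Select each sequence's transition matrix for the current symbol
            # and propagate the state distribution: s' = s @ T[symbol].
            step = transitions[symbols[:, t]]              # (batch, S, S)
            state = torch.bmm(state.unsqueeze(1), step).squeeze(1)
        # Expected acceptance probability under the final state distribution.
        return state @ torch.sigmoid(self.output_logits)

# Example usage: acceptance probabilities for a batch of binary sequences.
layer = ProbabilisticDFALayer(num_states=4, num_symbols=2)
acceptance = layer(torch.randint(0, 2, (8, 10)))  # shape: (8,)
```

In this reading, knowledge injection amounts to setting `transition_logits` so that each softmax row is (near-)deterministic and disabling gradients for those parameters, while rule learning leaves them trainable and extracts a symbolic DFA after training by taking the argmax of each row.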
Track: Main Track
Paper Type: Extended Abstract
Resubmission: No
Publication Agreement: pdf
Submission Number: 79