Neural networks are powerful approximators for learning to reason from raw data (e.g., pixels, text) in spatio-temporal domains (e.g., traffic-scene understanding). However, several recent studies have shown that neural networks are prone to erroneous, and sometimes absurd, reasoning that lacks domain grounding (e.g., adherence to intuitive physics and causality). Incorporating a comprehensive symbolic representation of domain understanding into a consolidated architecture offers a promising solution. In this paper, we take a dynamical-systems perspective on a neural network and its training process, and formulate domain-knowledge-dependent constraints over its internal structures (parameters and inductive biases) during training. This approach is inspired by \textit{control barrier functions}, a constraint-specification method from control theory. In particular, we specify the domain knowledge using knowledge graphs. To demonstrate the effectiveness of our approach, we apply it to two benchmark datasets for spatio-temporal reasoning, CLEVRER and CLEVRER-Humans, both centered on the task of question answering. Furthermore, we propose novel ways to evaluate whether domain grounding is achieved with our method. Our results show that the proposed methodology improves domain grounding and question-answering accuracy while endowing the model with enhanced interpretability: an interpretability score that quantifies the extent to which the domain constraints are followed or violated.
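As background for the control-theoretic inspiration (this is the standard textbook condition, not the paper's exact training-time formulation): for a control-affine system $\dot{x} = f(x) + g(x)u$ with safe set $\mathcal{C} = \{x : h(x) \geq 0\}$, a continuously differentiable function $h$ is a control barrier function if there exists an extended class-$\mathcal{K}_\infty$ function $\alpha$ such that
\[
  \sup_{u \in U} \left[ L_f h(x) + L_g h(x)\,u \right] \;\geq\; -\alpha\big(h(x)\big), \quad \forall x \in \mathcal{C},
\]
where $L_f h$ and $L_g h$ denote the Lie derivatives of $h$ along $f$ and $g$. The abstract suggests the analogous move of treating the network's parameters as the state of a dynamical system (the training dynamics) and imposing knowledge-graph-derived constraints playing the role of $h(x) \geq 0$.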