Neurosymbolic Active Goal Recognition in Partially Observable Environments

Published: 19 Dec 2025, Last Modified: 05 Jan 2026 · AAMAS 2026 Extended Abstract · CC BY 4.0
Keywords: Goal Recognition, Reinforcement Learning, Neurosymbolic Agent, POMDP
Abstract: Active goal recognition, despite its importance for human–AI interaction and autonomous systems, has received relatively limited attention. Unlike passive goal recognition, which infers an actor’s intent from observations alone, active goal recognition allows an observer to select informative actions to reduce uncertainty about the actor’s goal. Building on prior work in symbolic active goal recognition under POMDP settings, this paper introduces a neurosymbolic framework that addresses two key limitations. First, we extend the modeling capacity to account for heterogeneous actor behaviors, moving beyond the hand-crafted actor behavior assumption. Second, we integrate neural models into the active goal recognition framework in two complementary ways: (i) by replacing actor models with neural network–based models trained from data, and (ii) by employing reinforcement learning to train the observer over belief maps, thereby enabling adaptive decision-making beyond symbolic observer policies. Experiments on grid-world domains show that our neurosymbolic approach achieves performance comparable to state-of-the-art symbolic methods. These results highlight the promise of neurosymbolic methods for robust active goal recognition in complex, uncertain environments.
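The abstract does not specify how the observer's belief over goals is maintained. As a point of reference, belief-map-based goal recognition typically rests on a Bayesian update over candidate goals; the sketch below is a generic illustration of that update, not the paper's method, and all names in it (`update_goal_belief`, `obs_likelihoods`) are assumptions for illustration.

```python
import numpy as np

def update_goal_belief(belief, obs_likelihoods):
    """One generic Bayesian belief update over candidate goals.

    belief: prior P(goal), a 1-D array summing to 1.
    obs_likelihoods: P(latest observation | goal), same shape.
    Returns the normalized posterior P(goal | observation).
    """
    posterior = belief * obs_likelihoods
    total = posterior.sum()
    if total == 0.0:
        # Observation inconsistent with every goal model; fall back to prior.
        return belief
    return posterior / total

# Example: three candidate goals, uniform prior; the observation is
# most likely under goal 0, so the posterior shifts toward it.
belief = np.ones(3) / 3
belief = update_goal_belief(belief, np.array([0.7, 0.2, 0.1]))
```

An RL-trained observer, as described in the abstract, would consume such a belief vector (or a spatial belief map) as part of its state when choosing the next information-gathering action.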
Area: Search, Optimization, Planning, and Scheduling (SOPS)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 1096