Abstract: Goal recognition (GR) involves inferring an agent's goals based on observed actions. Beyond goals, however, it is often useful to infer additional agent attributes, such as preferences, beliefs, and ability level, to gain deeper insight into the agent's decision-making process. Recent advances in GR have incorporated Reinforcement Learning (RL), which provides greater practicality and adaptability, especially in stochastic environments. This adaptability creates the opportunity to extend RL-based frameworks beyond goal recognition. In this work, we build upon a recent RL-based GR framework to propose a generalised approach capable of inferring a wider range of agent attributes. By integrating these attributes into the problem formulation, we demonstrate how off-the-shelf RL techniques can be applied to infer them effectively. Our results show that this extended framework accurately distinguishes fine-grained differences in agent attributes across diverse scenarios. Moreover, we show that recognising these additional attributes can in turn improve goal recognition accuracy.