AAMAS Extended Abstract ID: 634
Keywords: applied ontology, explainable robots, foundation models, collaborative robotics, contrastive explanations
TL;DR: This paper introduces a novel approach that integrates ontology-based knowledge with large language models (LLMs) to generate robot explanations that are semantically coherent and naturally expressed.
Abstract: Effective human-robot interaction requires robots to derive conclusions from their experiences that are both logically sound and communicated in ways aligned with human expectations. This paper presents a hybrid framework that combines ontology-based reasoning with large language models (LLMs) to produce semantically grounded and natural robot explanations. Ontologies ensure logical consistency and domain grounding, while LLMs provide fluent, context-aware, and adaptive language generation. The proposed method grounds data from human-robot experiences in an ontology, enabling robots to reason about whether events are typical or atypical based on their properties. We integrate a state-of-the-art algorithm for retrieving and constructing contrastive ontology-based narratives with an LLM agent that refines them into concise, clear explanations. The approach is validated through a laboratory study replicating an industrial collaborative task. Empirical results show significant improvements in the clarity and brevity of ontology-based narratives while preserving their semantic accuracy. Overall, this work highlights the potential of ontology-LLM integration to advance explainable agency, enhance transparency, and promote more intuitive human-robot collaboration.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 3