Keywords: Neurosymbolic AI, Theory Revision, Predicate Invention
TL;DR: NeTheR is a neurosymbolic AI method that improves imperfect logical theories by making high-impact, structure-preserving revisions using learned neural concepts, enabling better performance in dynamic, real-world settings.
Abstract: Neurosymbolic AI approaches typically assume perfect and complete symbolic knowledge. This assumption limits their applicability, as it is unrealistic in dynamic, real-world environments, particularly in domains that require both structured reasoning and perception. To address this issue, we propose a novel methodology that iteratively revises an initially imperfect logical background theory. Our approach, termed NeTheR, performs a limited number of high-impact modifications to improve the model's performance while preserving the integrity of the original symbolic structure. Historically, theory revision has been carried out by adding or removing symbolic features to improve logical models. In contrast, NeTheR leverages predicate invention to introduce new neural concepts, allowing it to learn and use concepts beyond those available in the symbolic data. These high-impact modifications, such as inserting a new neural concept into a specific part of the model, are identified using a variant of the Sharpe ratio that measures the potential performance gain of each candidate revision. Empirical evaluation shows that NeTheR outperforms its baseline competitors.
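The abstract's revision-selection criterion can be illustrated with a minimal sketch. The code below is a hypothetical reading of a Sharpe-ratio-style score, not NeTheR's actual implementation: each candidate modification's per-fold performance gains are reduced to mean gain over its standard deviation, so revisions with consistent improvements are preferred over volatile ones. All names (`sharpe_score`, the candidate labels, the gain figures) are illustrative assumptions.

```python
# Hypothetical sketch: scoring candidate theory revisions with a
# Sharpe-ratio-style criterion. Names and numbers are illustrative,
# not taken from the NeTheR paper.
import statistics

def sharpe_score(gains, risk_free=0.0):
    """Mean performance gain over its std dev across validation folds."""
    mean_gain = statistics.mean(gains)
    if len(gains) < 2:
        return float("inf") if mean_gain > risk_free else 0.0
    spread = statistics.stdev(gains)
    if spread == 0:
        return float("inf") if mean_gain > risk_free else 0.0
    return (mean_gain - risk_free) / spread

# Candidate revisions: per-fold accuracy gains from inserting a neural
# concept at different points of the logical theory (made-up values).
candidates = {
    "insert_concept_at_clause_1": [0.04, 0.05, 0.03],   # small but steady
    "insert_concept_at_clause_2": [0.10, -0.08, 0.12],  # larger but volatile
}
best = max(candidates, key=lambda name: sharpe_score(candidates[name]))
print(best)  # the steadier revision wins despite smaller mean gain
```

Under this reading, the criterion plays the role of a risk-adjusted return: it prioritizes modifications whose expected improvement is reliable, which matches the abstract's emphasis on a limited number of high-impact changes.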
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 9114