Keywords: Neuro-symbolic Learning, Argumentation, Slot Attention
Track: Neurosymbolic Methods for Trustworthy and Interpretable AI
Abstract: Over the last decade, as we have come to rely more on deep learning technologies to make critical decisions, concerns regarding their safety, reliability, and interpretability have emerged.
We introduce a novel Neural Argumentative Learning (NAL) architecture that integrates Assumption-Based Argumentation (ABA) with deep learning for image analysis.
Our architecture consists of a neural component and a symbolic component. The former uses object-centric learning to segment images and encode them as facts, while the latter applies ABA learning to construct ABA frameworks that enable prediction on images.
Experiments on synthetic data show that the NAL architecture can be competitive with a state-of-the-art alternative.
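To make the two-stage flow described above concrete, here is a minimal, purely illustrative Python sketch: an object-centric encoder that turns an image into per-object facts, followed by a symbolic predictor standing in for the learned ABA framework. All names (SlotEncoder, aba_predict), the toy facts, and the single hand-written rule are assumptions introduced for illustration, not the paper's actual implementation.

from typing import List, Tuple

Fact = Tuple[str, str, str]  # (predicate, object, value), e.g. ("shape", "o1", "square")


class SlotEncoder:
    """Stand-in for the neural component: an object-centric (slot-attention-style)
    encoder that segments an image and encodes each object as symbolic facts."""

    def encode(self, image_id: str) -> List[Fact]:
        # The real architecture would run segmentation and attribute classification;
        # here we return canned facts purely for illustration.
        toy_scenes = {
            "img_0": [("shape", "o1", "square"), ("colour", "o1", "red"),
                      ("shape", "o2", "circle"), ("colour", "o2", "red")],
            "img_1": [("shape", "o1", "circle"), ("colour", "o1", "blue")],
        }
        return toy_scenes.get(image_id, [])


def aba_predict(facts: List[Fact]) -> bool:
    """Stand-in for the symbolic component: the learned ABA framework is replaced
    by one hand-written rule, 'positive :- shape(X, square), colour(X, red)'."""
    squares = {obj for pred, obj, val in facts if pred == "shape" and val == "square"}
    reds = {obj for pred, obj, val in facts if pred == "colour" and val == "red"}
    return bool(squares & reds)


if __name__ == "__main__":
    encoder = SlotEncoder()
    for image_id in ("img_0", "img_1"):
        facts = encoder.encode(image_id)
        label = "positive" if aba_predict(facts) else "negative"
        print(image_id, "->", label)

In the architecture the abstract describes, the facts would instead be produced by a trained object-centric encoder, and the classification rules (together with assumptions and their contraries) would be induced by ABA learning from training examples rather than written by hand.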
Paper Type: Long Paper
Submission Number: 19