Keywords: Neurosymbolic AI, Epistemic AI, Fuzzy Logic, T-norm Functions, Belief Functions, Swin Transformer, Image Classification, Vision Transformer, Multilabel Classification, Random Sets, Logical Consistency
TL;DR: We propose a neurosymbolic epistemic framework that integrates fuzzy logic t-norms and focal sets into Swin Transformers. The approach maintains competitive accuracy while improving calibration, logical consistency, and interpretability.
Abstract: Deep neural networks achieve strong recognition performance, but they often produce overconfident predictions and fail to respect structural constraints in data. We propose a neurosymbolic framework that augments Swin Transformers with focal set reasoning and differentiable fuzzy logics. Rather than treating labels as isolated categories, the model induces focal sets by modelling overlaps in the learned embedding space, which helps capture epistemic alternatives beyond single labels. These focal sets form the basis of a belief-theoretic layer that uses fuzzy membership functions and $t$-norm conjunctions to encourage consistency between fine- and coarse-grained predictions. A learnable loss further balances calibration, mass regularisation, and logical consistency, allowing the model to adaptively trade off symbolic structure with data-driven evidence. In experiments on hierarchical image classification, our framework maintains accuracy on par with transformer baselines while providing more calibrated and interpretable predictions, reducing overconfidence, and enforcing high logical consistency across hierarchical outputs. Overall, our results suggest that combining focal set reasoning with fuzzy logics is a practical step toward deep learning models that are both accurate and epistemically aware.
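To make the consistency mechanism concrete, here is a minimal sketch of a differentiable fine-to-coarse consistency penalty built from a $t$-norm, in the spirit of the abstract's belief-theoretic layer. All function and variable names (`product_tnorm`, `consistency_loss`, `parent`) are illustrative assumptions, not the paper's actual implementation.

```python
def product_tnorm(a, b):
    # Product t-norm: fuzzy conjunction T(a, b) = a * b.
    return a * b

def implication_penalty(p_fine, p_coarse):
    # Penalise violations of "fine class implies its coarse parent":
    # under the product t-norm's residuum (Goguen implication), the
    # implication is fully satisfied whenever p_coarse >= p_fine, so
    # only the excess p_fine - p_coarse is penalised.
    return max(0.0, p_fine - p_coarse)

def consistency_loss(fine_probs, parent, coarse_probs):
    # Average implication penalty over all fine labels, where
    # parent[i] is the coarse class index of fine class i.
    pens = [implication_penalty(pf, coarse_probs[parent[i]])
            for i, pf in enumerate(fine_probs)]
    return sum(pens) / len(pens)
```

For example, with fine probabilities `[0.7, 0.2, 0.1]`, parents `[0, 0, 1]`, and coarse probabilities `[0.9, 0.1]`, every fine prediction is dominated by its parent and the penalty is zero; lowering the first coarse probability below 0.7 makes the loss positive, pushing the hierarchy back into agreement.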
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 24850