Rule-Augmented Neural Networks for Trustworthy Decision-Making: Bridging Symbolic Logic and Deep Learning in High-Stakes Domains

01 Sept 2025 (modified: 05 Nov 2025) · Submitted to NLDL 2026 · CC BY 4.0
Keywords: Neuro-symbolic AI, Explainable AI, Rule-based reasoning, Trustworthy decision-making, Deep learning
TL;DR: Rule-Augmented Neural Networks for Trustworthy AI
Abstract: Deep neural networks (DNNs) excel at predictive tasks, but their lack of interpretability hinders adoption in high-stakes domains such as healthcare and finance, where trustworthy decision-making is critical. We propose RuleNet, a novel framework that augments DNNs with Datalog rules to enhance explainability and trustworthiness while maintaining predictive accuracy. By embedding symbolic logic into neural architectures via predicate grounding and a semantic loss, RuleNet ensures that predictions align with domain-specific constraints, without letting the rules dominate the DNN, and provides human-readable explanations. The DNN handles noisy, high-dimensional data, while the rules inject prior knowledge for robustness. Experiments on healthcare (MIMIC-III, with synthetic augmentation) and finance (Fraud-D) datasets show that RuleNet achieves a 0.1% accuracy improvement, 100% rule coverage, and an inference time of 4.5 ms per prediction compared to baselines such as MLP and CNN. RuleNet offers a scalable, interpretable solution for trustworthy AI, with applications in semantic reasoning and decision-making. We provide comprehensive method descriptions, detailed data handling, ablations, and expanded related work. The full code is in the appendix.
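The abstract's central mechanism is a semantic loss that ties network predictions to grounded Datalog rules. The following is a minimal, illustrative sketch of how such a term could be combined with a standard classification loss; the example rule, the grounding function `rule_satisfaction`, and the weight `lambda_rule` are assumptions made for illustration, not the paper's actual implementation (the authors note the full code is in their appendix).

```python
# Illustrative sketch only: a soft semantic-loss term penalizing predictions
# that violate a Datalog-style rule such as `high_risk(x) -> alert(x)`.
# All names (rule_satisfaction, lambda_rule, high_risk_mask) are hypothetical.
import torch
import torch.nn.functional as F


def rule_satisfaction(probs: torch.Tensor, high_risk_mask: torch.Tensor) -> torch.Tensor:
    """Soft truth value of the grounded rule for each example.

    probs[:, 1] is read as P(alert); the fuzzy implication a -> b is modeled
    as max(1 - a, b), with a crisp antecedent given by high_risk_mask.
    """
    antecedent = high_risk_mask.float()
    consequent = probs[:, 1]
    return torch.maximum(1.0 - antecedent, consequent)


def rule_augmented_loss(logits, targets, high_risk_mask, lambda_rule=0.5):
    """Cross-entropy plus a penalty for violating the grounded rule."""
    probs = F.softmax(logits, dim=-1)
    ce = F.cross_entropy(logits, targets)
    violation = 1.0 - rule_satisfaction(probs, high_risk_mask)  # in [0, 1]
    return ce + lambda_rule * violation.mean()
```

Keeping the rule term as a weighted additive penalty (rather than a hard constraint) matches the abstract's claim that the rules guide, but do not dominate, the DNN.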
Serve As Reviewer: ~Vinay_K._Chaudhri1
Submission Number: 16