A Modular Abstraction for Integrating Domain Rules into Deep Learning Models

TMLR Paper5656 Authors

17 Aug 2025 (modified: 27 Aug 2025) · Under review for TMLR · CC BY 4.0
Abstract: Domain-specific knowledge can often be expressed as suggestive rules defined over subgroups of data. When encoded as hard constraints, such rules are often not directly compatible with deep learning frameworks that train neural networks over batches of data. Moreover, domain experts often rely on heuristics that should not be encoded as rigid logical rules. In this work, we propose a framework that captures domain experts' knowledge as domain-specific rules over subgroups of data and leverages these rules when training deep learning models, using the modular components of regularization, data augmentation, and parameter optimization. Translating domain knowledge into custom primitives that can be added to existing state-of-the-art deep learning models improves the ability of domain experts to interpret and express model behavior, to intervene through changes in the modeling specifications, and to improve overall model performance compared to existing frameworks that incorporate deterministic declarative predicates. On one synthetic and three real-world tasks, we show that our method supports iterative refinement and is demonstrably more accurate.
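To illustrate one of the modular components the abstract mentions, below is a minimal, hypothetical sketch of how a suggestive subgroup rule might be encoded as a soft regularization term added to a standard training loss. All names (`rule_penalty`, `total_loss`, the threshold rule, and the weight `lam`) are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def rule_penalty(preds, mask, threshold=0.5):
    """Soft encoding of a hypothetical domain rule: predictions on the
    subgroup selected by `mask` should lie above `threshold`.
    Violations are penalized quadratically rather than enforced as
    hard constraints, so the term is batch-friendly and differentiable."""
    violations = np.maximum(0.0, threshold - preds[mask])
    return float(np.sum(violations ** 2))

def total_loss(preds, targets, mask, lam=1.0):
    """Base MSE loss augmented with the subgroup rule penalty.
    `lam` is an assumed hyperparameter trading off data fit against
    rule satisfaction; experts can intervene by adjusting it."""
    mse = float(np.mean((preds - targets) ** 2))
    return mse + lam * rule_penalty(preds, mask)
```

Because the rule enters only as an additive penalty, it can be attached to an existing model's loss without changing the architecture, which is the kind of modular intervention the abstract describes.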
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Emanuele_Sansone1
Submission Number: 5656