DANCE-ST: Why Trustworthy AI Needs Constraint Guidance, Not Constraint Penalties

ICLR 2026 Conference Submission 17974 Authors

Published: 19 Sept 2025 (modified: 08 Oct 2025)
License: CC BY 4.0
Keywords: Constraint-guided learning, Neurosymbolic systems, Multi-agent learning, Physics-informed neural networks, Safe machine learning, Spatiotemporal prediction, Trustworthy AI, Fault-tolerant systems
TL;DR: DANCE-ST transforms physical constraints from obstacles into guidance signals, using a fault-tolerant multi-agent architecture to combine neural networks and physics models for safe, accurate spatiotemporal prediction in safety-critical systems.
Abstract: Neural networks achieve high accuracy in spatiotemporal prediction but often violate physical constraints, creating a fundamental accuracy-safety dilemma. We introduce DANCE-ST, a constraint-guided learning framework that resolves this trade-off by treating physical laws not as adversarial penalties but as collaborative information sources that actively guide learning. Our core contribution is a novel three-phase architecture that (1) identifies critical system components by diffusing state-dependent "constraint potentials" through a knowledge graph, (2) intelligently fuses neural and physics-based predictions with provable error bounds for asynchronous sensors, and (3) projects predictions onto the constraint-satisfying space with guaranteed linear convergence. This architecture is orchestrated by a fault-tolerant multi-agent system for robust deployment. Experiments on industrial datasets demonstrate 97.2% constraint satisfaction while achieving state-of-the-art accuracy and the fastest inference time (38.4 s) among constraint-aware methods. Critically, DANCE-ST delivers superior, verifiable interpretability (4.6/5 vs. 3.8/5). By design, it provides explainable insights into which system components drive constraint violations, directly addressing the transparency requirements of emerging safety regulations (e.g., the EU AI Act and FDA AI guidelines) in a way black-box enforcement cannot. Our work establishes constraint-guided learning as a foundational paradigm for trustworthy AI, demonstrating that the accuracy-safety trade-off is a false dilemma when constraints become collaborative guides.
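
The abstract describes the three-phase architecture only at a high level, so the following is a minimal sketch of how such a pipeline could be wired together. Every name, shape, and update rule here (diffuse_constraint_potentials, fuse_predictions, project_onto_constraints, the diffusion weight alpha, the linear constraint form A y <= b) is an assumption made for illustration, not DANCE-ST's actual implementation.

```python
import numpy as np

# Illustrative sketch only: the function names, shapes, and update rules
# below are assumptions, not the authors' DANCE-ST implementation.

def diffuse_constraint_potentials(adjacency, potentials, steps=10, alpha=0.5):
    """Phase 1 (assumed form): spread state-dependent 'constraint potentials'
    over a component knowledge graph by repeated neighborhood averaging."""
    row_sums = adjacency.sum(axis=1, keepdims=True)
    transition = adjacency / np.maximum(row_sums, 1e-12)  # row-stochastic walk matrix
    p = potentials.copy()
    for _ in range(steps):
        # Convex mix of each node's own potential and its neighbors' values.
        p = (1.0 - alpha) * potentials + alpha * transition @ p
    return p  # high values flag components likely to drive violations

def fuse_predictions(y_nn, y_phys, var_nn, var_phys):
    """Phase 2 (assumed form): precision-weighted average of neural and
    physics-based predictions, a generic stand-in for uncertainty-aware fusion."""
    w_nn, w_phys = 1.0 / var_nn, 1.0 / var_phys
    return (w_nn * y_nn + w_phys * y_phys) / (w_nn + w_phys)

def project_onto_constraints(y, A, b, iters=200, step=0.05):
    """Phase 3 (assumed form): push the fused prediction toward the feasible
    set {y : A @ y <= b} by gradient steps on the squared constraint violation."""
    y = y.copy()
    for _ in range(iters):
        residual = A @ y - b
        violated = residual > 0
        if not violated.any():
            break  # prediction already satisfies all constraints
        y -= step * A[violated].T @ residual[violated]
    return y
```

Precision-weighted fusion and violation-gradient projection are generic placeholders; the paper's actual fusion rule with provable error bounds for asynchronous sensors and its projection step with guaranteed linear convergence presumably differ in detail.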
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 17974