Reveal-to-Revise: Causal Multimodal Attention for Explainable and Bias-Resilient Generative Modeling

15 Mar 2026 (modified: 01 Apr 2026) · ICLR 2026 Workshop LLM Reasoning · Withdrawn Submission · CC BY 4.0
Track: long paper (up to 10 pages)
Keywords: Explainable AI (XAI), Generative Adversarial Networks (GANs), Multimodal Attention, Reveal-to-Revise, Bias-Aware Generative Modeling, Grad-CAM++, WGAN-GP, Cross-Modal Fusion, Fairness in AI, Adversarial Robustness, Epistemic Uncertainty, Cognitive Alignment Score (CAS), Saliency-First Privacy
TL;DR: Reveal-to-Revise embeds explainability and bias-aware feedback directly into the training loop. It achieves 93.2% accuracy and 78.1% IoU-XAI, outperforming baselines in fairness, robustness, and structural coherence.
Abstract: We present an explainable, bias-aware generative framework that unifies cross-modal attention fusion, Grad-CAM++ attribution, and a Reveal-to-Revise feedback loop within a single training paradigm. The architecture couples a conditional attention WGAN-GP with bias regularization and iterative local explanation feedback, and is evaluated on Multimodal MNIST and Fashion-MNIST for image generation and subgroup auditing, as well as a toxic/non-toxic text classification benchmark. All experiments use stratified 80/20 splits, validation-based early stopping, and AdamW with cosine annealing, and results are averaged over three random seeds. The proposed model achieves 93.2\% accuracy, a 91.6\% F1-score, and a 78.1\% IoU-XAI on the multimodal benchmark, outperforming all baselines across every metric, while adversarial training restores 73–77\% robustness on Fashion-MNIST. Ablation studies confirm that fusion, Grad-CAM++, and bias feedback each contribute independently to final performance, with explanations improving structural coherence (SSIM=88.8\%, NMI=84.9\%) and fairness across protected subgroups. These results establish attribution-guided generative learning as a practical and trustworthy approach for high-stakes AI applications.
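The core idea of the Reveal-to-Revise loop — compute an attribution over the inputs, then penalize the model when attribution mass concentrates on a protected or biased feature — can be illustrated with a deliberately minimal sketch. The paper's pipeline uses Grad-CAM++ over a conditional attention WGAN-GP; here, as an assumption-laden stand-in, a logistic-regression classifier is used, its per-feature saliency is just the learned weight magnitude, and the "revise" step is an L1 penalty on the saliency of a synthetic protected feature. All names (`train`, `lam`, the data layout) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 is a "protected" attribute that is spuriously
# correlated with the label; features 1-2 carry the true signal.
n = 2000
X = rng.normal(size=(n, 3))
y = (X[:, 1] + X[:, 2] > 0).astype(float)
X[:, 0] = y + 0.3 * rng.normal(size=n)  # inject spurious correlation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lam=0.0, lr=0.1, steps=500):
    """Logistic regression; lam penalizes saliency (|w|) on feature 0.

    For a linear model the input-gradient saliency of feature j is w[j],
    so an L1 penalty on w[0] plays the role of the attribution-guided
    'revise' step in the Reveal-to-Revise loop.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)      # cross-entropy gradient
        grad[0] += lam * np.sign(w[0])     # revise: suppress protected saliency
        w -= lr * grad
    return w

w_plain = train(X, y)              # reveal: protected feature attracts weight
w_revised = train(X, y, lam=0.5)   # revise: attribution-guided penalty

print(abs(w_plain[0]), abs(w_revised[0]))
```

The revised model keeps its accuracy (the true signal in features 1–2 is untouched) while its reliance on the protected feature shrinks, which is the qualitative behavior the paper's bias-aware feedback is designed to produce at scale.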
Presenter: ~Noor_Islam_S._Mohammad1
Format: Maybe: the presenting author will attend in person, contingent on other factors that still need to be determined (e.g., visa, funding).
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 161