CAESAR++: Uncertainty-Driven Contextual Reasoning for Trustworthy and Explainable Road Object Detection

Published: 26 Apr 2026 · Last Modified: 06 May 2026 · RJCIA 2026 Long · CC BY 4.0
Keywords: Explainable artificial intelligence, road object detection, conformal prediction, contextual reasoning
TL;DR: Look around when in doubt for safer, smarter, and more explainable road-object detection
Abstract: This paper introduces CAESAR++, a framework for road object detection that combines conformal prediction, adaptive contextual reasoning, and dual-colour saliency maps. CAESAR++ first calibrates classification and localisation uncertainty using a two-step conformal procedure, then enlarges the context window around each detection in proportion to its uncertainty, and finally produces object-wise explanations that disentangle bottom-up sensory evidence from top-down contextual cues. Experiments indicate consistent improvements in detection accuracy, uncertainty calibration, and explanation stability without retraining the base models.
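The two core steps described in the abstract, conformal calibration of an uncertainty threshold and uncertainty-proportional enlargement of the context window, can be sketched as follows. This is a minimal illustrative sketch: the function names, the split-conformal formulation, and the linear box-scaling rule are assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of the two-step idea from the abstract:
# (1) split-conformal calibration of a score threshold, and
# (2) growing a detection's context window in proportion to its
#     uncertainty. All names and the linear scaling rule are
#     illustrative assumptions, not the CAESAR++ implementation.
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Split-conformal quantile: nonconformity scores from a held-out
    calibration set yield a threshold with ~(1 - alpha) coverage."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def context_window(box, uncertainty, max_scale=2.0):
    """Grow an (x, y, w, h) box around its centre: uncertainty 0 keeps
    the box unchanged, uncertainty 1 scales it by max_scale."""
    x, y, w, h = box
    s = 1.0 + (max_scale - 1.0) * float(np.clip(uncertainty, 0.0, 1.0))
    nw, nh = w * s, h * s
    return (x - (nw - w) / 2, y - (nh - h) / 2, nw, nh)
```

A confident detection thus keeps a tight context window, while an uncertain one "looks around" at a larger neighbourhood before the contextual reasoning stage.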
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 7