TL;DR: A cognitively inspired causal hypothesis testing layer that augments neural anomaly detection with human-interpretable causal chains.
Abstract: Remote sensing anomaly detection models often identify unusual patterns but struggle to provide causal, human-interpretable explanations for why an environmental change is anomalous. Humans commonly interpret unexpected observations through causal hypothesis testing: proposing plausible causes, checking consistency with observed evidence, and selecting the simplest explanation that fits. We present CogChain, a cognitively inspired neurosymbolic reasoning layer that augments a neural feature extractor with structured causal hypothesis testing over short causal chains. CogChain scores competing causal explanations using probabilistic inference regularized by human-inspired priors such as temporal causality, spatial contiguity, and simplicity. We illustrate CogChain on remote sensing anomaly detection using a small library of causal templates and show that adding causal hypothesis testing can improve detection performance in our experimental setting while producing transparent, chain-structured explanations.
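The abstract's scoring step, combining an evidence likelihood with human-inspired priors (temporal causality, spatial contiguity, simplicity), can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all function names, data structures, and weights below are assumptions.

```python
import math

def score_hypothesis(chain, evidence_loglik, lambda_len=0.5,
                     lambda_time=2.0, lambda_space=1.0):
    """Log-score for a short causal chain: higher = better explanation.

    Hypothetical sketch: combines evidence fit with three priors
    mirroring those named in the abstract. Weights are illustrative.
    """
    # Simplicity prior: penalize longer causal chains (Occam's razor).
    simplicity = -lambda_len * len(chain)
    # Temporal-causality prior: a cause should precede its effect.
    time_violations = sum(1 for a, b in zip(chain, chain[1:])
                          if a["t"] > b["t"])
    temporal = -lambda_time * time_violations
    # Spatial-contiguity prior: penalize large jumps between linked events.
    spatial = -lambda_space * sum(
        math.dist(a["xy"], b["xy"]) for a, b in zip(chain, chain[1:]))
    return evidence_loglik + simplicity + temporal + spatial

def best_explanation(hypotheses):
    """Select the highest-scoring (chain, evidence_loglik) hypothesis."""
    return max(hypotheses, key=lambda h: score_hypothesis(*h))

# Two competing causal explanations for the same anomaly (toy data):
flood = ([{"t": 0, "xy": (0.0, 0.0)},
          {"t": 1, "xy": (0.2, 0.1)}], -1.0)
sensor_glitch = ([{"t": 1, "xy": (0.0, 0.0)},
                  {"t": 0, "xy": (5.0, 5.0)}], -0.8)
chain, _ = best_explanation([flood, sensor_glitch])
```

Here the flood chain wins despite a slightly worse evidence fit, because the glitch hypothesis violates temporal order and spans a large spatial jump, illustrating how the priors regularize hypothesis selection.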
Submission Number: 11