Abstract: In the task of fake news detection, ensuring authenticity and accuracy is of paramount importance. This task, however, is susceptible to the influence of confounders, necessitating effective confounder debiasing strategies. Conventional methods are typically designed to address a specific confounder, resulting in frameworks with limited generalization that overlook potential correlations among confounders. The presence of multiple confounders further escalates the complexity and challenge of debiasing learning. To tackle this issue, we introduce the Adversarial Multi-Deconfounded (AMD) Learning Paradigm, a generic training framework designed to eliminate biases from multiple confounders. Our approach leverages adversarial networks to extract confounder-invariant feature representations, guiding the model to ignore potential biases introduced by confounders and to learn stable representations independent of them, thereby enhancing generalization. Comprehensive experiments demonstrate that our approach outperforms state-of-the-art methods on the Weibo and GossipCop datasets and significantly exceeds other methods in generalization evaluation on CHEF. Additionally, we validate that our AMD framework exhibits improved robustness against confounders.
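The abstract does not specify the paper's architecture, so the following is only a minimal NumPy sketch of the general idea it describes: adversarial deconfounding with multiple confounders, where each confounder has its own adversarial head and the gradient reaching the shared encoder is reversed so the encoder learns confounder-invariant features. All data, dimensions, and names here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy data: 200 samples, 8 features, one task label, two binary confounders.
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(float)    # task label (e.g. fake vs. real)
c1 = (X[:, 1] > 0).astype(float)   # hypothetical confounder 1
c2 = (X[:, 2] > 0).astype(float)   # hypothetical confounder 2

d = 4                                     # latent dimension
We = rng.normal(scale=0.1, size=(8, d))   # shared encoder (linear, for brevity)
wt = rng.normal(scale=0.1, size=d)        # task head
wc = [rng.normal(scale=0.1, size=d) for _ in range(2)]  # one adversarial head per confounder
lam, lr = 0.5, 0.1                        # gradient-reversal strength, learning rate

def step(X, y, confs):
    """One joint update: task head learns the task, confounder heads learn
    to predict their confounder, and the encoder receives the task gradient
    plus the REVERSED confounder gradients (scaled by -lam)."""
    global We, wt
    H = X @ We                            # latent features
    # Task head: standard logistic loss; its gradient flows to the encoder normally.
    pt = sigmoid(H @ wt)
    g_H = np.outer(pt - y, wt)            # dL_task / dH
    wt -= lr * H.T @ (pt - y) / len(y)
    for k, ck in enumerate(confs):
        pk = sigmoid(H @ wc[k])
        # The head itself minimizes its confounder-prediction loss...
        wc[k] -= lr * H.T @ (pk - ck) / len(ck)
        # ...but the encoder receives the reversed gradient, pushing it
        # toward representations the confounder heads cannot predict from.
        g_H += -lam * np.outer(pk - ck, wc[k])
    We -= lr * X.T @ g_H / len(y)
    return -np.mean(y * np.log(pt + 1e-9) + (1 - y) * np.log(1 - pt + 1e-9))

losses = [step(X, y, [c1, c2]) for _ in range(50)]
```

Extending from one confounder to several only adds heads to the loop; the reversal term for each confounder is summed into the encoder gradient, which is what lets a single encoder be made invariant to all of them jointly.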
External IDs: dblp:conf/kdd/SunXLL25