Position: Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models

22 Jan 2025 (modified: 18 Jun 2025) · Submitted to ICML 2025 Position Paper Track · CC BY 4.0
TL;DR: We emphasize that causal inference techniques can help balance multiple goals in Trustworthy ML, e.g., fairness, privacy, robustness, and utility.
Abstract: Ensuring trustworthiness in machine learning (ML) systems is crucial as they become increasingly embedded in high-stakes domains. This paper advocates for the integration of causal methods into machine learning to navigate the trade-offs between key principles of trustworthy ML, including fairness, privacy, robustness, accuracy, and explainability. While these objectives should ideally be satisfied simultaneously, they are often addressed in isolation, leading to conflicts and suboptimal solutions. Drawing on existing applications of causality in ML that successfully align goals such as fairness and accuracy or privacy and robustness, this position paper argues that a causal approach is essential for balancing multiple competing objectives in both trustworthy ML and foundation models. Beyond highlighting these trade-offs, we examine how causality can be practically integrated into ML and foundation models, offering solutions to enhance their reliability and interpretability. Finally, we discuss the challenges, limitations, and opportunities in adopting causal frameworks, paving the way for more accountable and ethically sound AI systems.
Primary Area: Model Understanding, Explainability, Interpretability, and Trust
Keywords: Trustworthy AI, Causality, Foundation Models, Fairness, Explainability, Robustness, Privacy
Submission Number: 188