Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems

TMLR Paper4918 Authors

22 May 2025 (modified: 03 Aug 2025) · Under review for TMLR · CC BY 4.0
Abstract: Artificial Intelligence (AI) has paved the way for revolutionary decision-making processes which, if harnessed appropriately, can contribute to advancements in sectors ranging from healthcare to economics. However, its black-box nature presents significant ethical challenges related to bias and transparency. Biases can render AI applications inconsistent and unreliable, incurring significant costs and consequences while highlighting and perpetuating inequalities and unequal access to resources. Hence, developing safe, reliable, ethical, and Trustworthy AI systems is essential. Our interdisciplinary team of researchers focuses on Trustworthy and Responsible AI, including fairness, bias mitigation, reproducibility, generalization, interpretability, explainability, and authenticity. In this paper, we review and discuss the intricacies of AI biases: their definitions, methods for detecting and mitigating them, and metrics for evaluating bias. We also discuss open challenges regarding the trustworthiness and widespread application of AI across diverse domains of human-centric decision making, as well as guidelines for fostering Responsible and Trustworthy AI models.
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We sincerely appreciate the constructive feedback provided by reviewers. In response to their comments, we have carefully revised the manuscript to address their suggestions. The modifications include clarifications, refinements, and additional details to improve the overall quality and clarity of the paper.
Assigned Action Editor: ~Niki_Kilbertus1
Submission Number: 4918