BiasChain: A Multi-Agent LLM Framework for Justified Peer Review Bias Detection

ACL ARR 2025 July Submission464 Authors

28 Jul 2025 (modified: 18 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Peer review forms the cornerstone of academic quality control, yet it remains vulnerable to latent biases, topic preferences, and methodological disagreements, which can unfairly influence acceptance decisions. Manual bias audits are resource-intensive and often inconsistent. To address this, we propose a modular AI framework that leverages specialized large language model (LLM) agents to analyze sentiment and justification coherence, assess internal consistency, and evaluate inter-review alignment. These insights are then integrated by a schema-based module that identifies bias types, estimates confidence levels, and generates actionable recommendations. By automating these steps, our approach gives editors and area chairs an added layer of confidence, providing a transparent and scalable tool for continuous bias monitoring in the peer review process. We have made the source code and supplementary materials publicly available at https://github.com/submitaccount/BiasChain.git to support reproducibility and encourage future research.
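The abstract describes a chained pipeline: specialized LLM agents analyze the reviews of a submission, and a schema-based module aggregates their reports into bias types, a confidence estimate, and recommendations. The following is a minimal sketch of that flow, not the authors' implementation (see the linked repository); all names (call_llm, run_agent, detect_bias, the agent instructions) are hypothetical illustrations.

```python
# Hypothetical sketch of the BiasChain-style pipeline described in the abstract.
from dataclasses import dataclass

@dataclass
class AgentReport:
    agent: str      # which specialized agent produced this report
    findings: str   # free-text analysis returned by the agent

@dataclass
class BiasAssessment:
    bias_types: list[str]       # e.g. "topic preference", "methodological disagreement"
    confidence: float           # estimated confidence level in [0, 1]
    recommendations: list[str]  # actionable suggestions for editors / area chairs

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM client call; returns a canned string here."""
    return "stub LLM response"

def run_agent(name: str, instruction: str, reviews: list[str]) -> AgentReport:
    """Run one specialized agent over all reviews of a single submission."""
    prompt = f"{instruction}\n\nReviews:\n" + "\n---\n".join(reviews)
    return AgentReport(agent=name, findings=call_llm(prompt))

def detect_bias(reviews: list[str]) -> BiasAssessment:
    """Chain the specialized agents, then aggregate via the schema-based module."""
    reports = [
        run_agent("sentiment_coherence", "Assess sentiment and justification coherence.", reviews),
        run_agent("internal_consistency", "Check each review for internal consistency.", reviews),
        run_agent("inter_review_alignment", "Evaluate alignment across the reviews.", reviews),
    ]
    # Schema-based aggregation: map the agent reports onto bias types, a
    # confidence estimate, and recommendations (output parsing omitted here).
    call_llm("Summarize these reports into the bias schema:\n"
             + "\n".join(r.findings for r in reports))
    return BiasAssessment(bias_types=[], confidence=0.0, recommendations=[])
```

In this sketch, structured parsing of the aggregator's output into the schema is omitted; the actual system presumably enforces the schema when producing the final bias report.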
Paper Type: Short
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: Peer Review Bias, Large Language Models (LLMs) and Bias Detection, Scholarly Communication
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models, Data analysis, Position papers
Languages Studied: English
Submission Number: 464