Towards Practical Defect-Focused Automated Code Review

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
TL;DR: This work presents an end-to-end approach to automated code review that goes beyond snippet-level generation and text-similarity metrics, achieving a 2× improvement over standard LLMs and a 10× gain over prior baselines on real-world, industry-scale codebases.
Abstract: The complexity of code reviews has driven efforts to automate review comments, but prior approaches oversimplify this task by treating it as snippet-level code-to-text generation and relying on text similarity metrics like BLEU for evaluation. These methods overlook repository context, real-world merge request evaluation, and defect detection, limiting their practicality. To address these issues, we explore the full automation pipeline within the online recommendation service of a company with nearly 400 million daily active users, analyzing industry-grade C++ codebases comprising hundreds of thousands of lines of code. We identify four key challenges: 1) capturing relevant context, 2) improving key bug inclusion (KBI), 3) reducing the false alarm rate (FAR), and 4) integrating with human workflows. To tackle these, we propose 1) code slicing algorithms for context extraction, 2) a multi-role LLM framework for KBI, 3) a filtering mechanism for FAR reduction, and 4) a novel prompt design for better human interaction. Our approach, validated on real-world merge requests from historical fault reports, achieves a 2× improvement over standard LLMs and a 10× gain over previous baselines. While the presented results focus on C++, the underlying framework design leverages language-agnostic principles (e.g., AST-based analysis), suggesting potential for broader applicability.
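To make the first contribution concrete, the sketch below illustrates the kind of AST-based slicing the abstract alludes to: given a changed line range from a merge request, extract every enclosing C++ function definition as review context. This is a minimal sketch assuming libclang's Python bindings (the `clang` package); the function name, arguments, and selection heuristic are illustrative, not the paper's actual algorithm.

```python
import clang.cindex

FUNC_KINDS = (
    clang.cindex.CursorKind.FUNCTION_DECL,
    clang.cindex.CursorKind.CXX_METHOD,
)

def slice_context(cpp_path, changed_lines, clang_args=("-std=c++17",)):
    """Return the source of every function definition in `cpp_path`
    whose extent overlaps any line in `changed_lines`."""
    index = clang.cindex.Index.create()
    tu = index.parse(cpp_path, args=list(clang_args))
    with open(cpp_path) as f:
        src_lines = f.readlines()

    slices = []
    for node in tu.cursor.walk_preorder():
        if node.kind not in FUNC_KINDS or not node.is_definition():
            continue
        # Skip declarations pulled in from headers.
        if node.location.file is None or node.location.file.name != cpp_path:
            continue
        start, end = node.extent.start.line, node.extent.end.line
        if any(start <= ln <= end for ln in changed_lines):
            slices.append("".join(src_lines[start - 1:end]))
    return slices

# e.g., context for a merge request that touched lines 120-140:
# print(slice_context("service/ranker.cpp", set(range(120, 141))))
```

Because the slicing operates on the AST rather than on raw diff hunks, the same pattern transfers to other languages with parser support, which is the language-agnostic principle the abstract notes.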
Lay Summary: Spotting errors in software code, a vital process called code review, is challenging and time-consuming for developers. Many current automated tools aren't very helpful because they look at tiny, isolated pieces of code, focus on generating text that sounds like a human reviewer rather than actually finding important bugs, and don't fit well with how developers build software. This limits their practical use in real-world scenarios. Our research introduces an advanced AI system designed to review code more like an experienced human expert. When developers submit new code changes, our AI intelligently analyzes all the relevant code across the entire project. It uses a collaborative, team-like AI strategy to specifically hunt for critical errors and filters out distracting or incorrect suggestions. We also built it to smoothly integrate into developers' existing daily workflows. When tested on large-scale, complex software used by nearly 400 million daily active users, our system proved significantly more effective at identifying serious, real-world bugs compared to standard AI techniques and older methods—achieving up to a tenfold improvement. This work helps transform automated code review into a truly practical and powerful tool, ultimately leading to higher quality software and boosting developer productivity.
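The "collaborative, team-like AI strategy" paired with a filter for distracting suggestions can be pictured as a generate-then-critique loop. The following is a hypothetical skeleton in that spirit: the role prompts, the `complete` callable, and the `parse_findings` helper are all invented for illustration and are not the paper's prompts or API.

```python
from typing import Callable

ROLES = {
    "author": "Explain the intent of this change and list risky spots.",
    "reviewer": "Report concrete defects: crashes, leaks, logic errors.",
    "tester": "Propose inputs that could break the changed code.",
}

def parse_findings(reply: str) -> list[str]:
    """Naively treat each non-empty reply line as one candidate comment."""
    return [ln.strip() for ln in reply.splitlines() if ln.strip()]

def review(diff: str, context: str,
           complete: Callable[[str, str], str]) -> list[str]:
    """Generate candidate review comments from several LLM roles, then
    filter them with a critic pass to cut the false alarm rate."""
    candidates: list[str] = []
    for instruction in ROLES.values():
        reply = complete(instruction, f"Context:\n{context}\n\nDiff:\n{diff}")
        candidates.extend(parse_findings(reply))

    kept = []
    for comment in candidates:
        verdict = complete(
            "You are a strict critic. Answer KEEP or DROP only.",
            f"Context:\n{context}\n\nCandidate comment: {comment}",
        )
        if verdict.strip().upper().startswith("KEEP"):
            kept.append(comment)
    return kept
```

Here `complete` stands in for any chat-LLM call, so the skeleton can be exercised with a stub; the separate critic pass is what targets the false-alarm problem, independent of how candidates are generated.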
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://zenodo.org/records/15118678
Primary Area: Applications
Keywords: Automated Code Review, Merge Request Analysis, Large Language Models (LLMs), Defect Detection, Evaluation Metrics for Code Review, Code Context Extraction, Multi-Agent LLM Collaboration
Submission Number: 16368