Otter: Generating Tests from Issues to Validate SWE Patches

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: This paper introduces Otter, a system that generates tests from software issues, supporting test-driven development and validation of automated code patch generation, and outperforms state-of-the-art systems in experiments.
Abstract: While there has been plenty of work on generating tests from existing code, there has been limited work on generating tests from issues. A correct test must validate the code patch that resolves the issue. This paper focuses on the scenario where that code patch does not yet exist. Doing so supports two major use-cases. First, it supports TDD (test-driven development), the discipline of "test first, write code later" that has well-documented benefits for human software engineers. Second, it also validates SWE (software engineering) agents, which generate code patches for resolving issues. This paper introduces TDD-Bench-Verified, a benchmark for generating tests from issues, and Otter, an LLM-based solution for this task. Otter augments LLMs with rule-based analysis to check and repair their outputs, and introduces a novel self-reflective action planner. Experiments show Otter outperforming state-of-the-art systems for generating tests from issues, in addition to enhancing systems that generate patches from issues. We hope that Otter helps make developers more productive at resolving issues and leads to more robust, well-tested code.
Lay Summary: While there has been extensive work on generating tests from existing code, there is a gap in creating tests directly from issues, especially before a code patch is developed. This is crucial for practices like Test-Driven Development (TDD) and validating software engineering agents that generate code patches. We developed Otter, an LLM-based solution that generates tests from issues. Otter enhances large language models with rule-based analysis to check and repair their outputs and introduces a novel self-reflective action planning stage. This ensures that the tests are accurate and effective. Otter outperforms state-of-the-art systems in generating tests from issues and assists systems that generate patches from issues. By improving test generation, Otter helps developers be more productive in resolving issues and leads to more robust, well-tested code, thereby improving software quality and reliability.
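The check-and-repair loop described above can be sketched as follows. This is a minimal illustrative sketch, not Otter's actual implementation: all names (`draft_test`, `rule_checks`, `repair`, `generate_test`) and the specific rules are hypothetical, standing in for the LLM drafting step and the rule-based analysis that checks and repairs its output.

```python
# Hypothetical sketch of a generate/check/repair loop: an LLM drafts a
# reproduction test from the issue text, rule-based analysis flags
# structural problems, and repairs are applied until the checks pass.
# All names and rules here are illustrative, not Otter's actual API.

def draft_test(issue_text: str) -> str:
    """Stand-in for an LLM call that drafts a reproduction test."""
    return (
        "def test_issue():\n"
        f"    # reproduces: {issue_text}\n"
        "    assert False  # should fail until the issue is fixed\n"
    )

def rule_checks(test_code: str) -> list[str]:
    """Rule-based analysis: flag structural problems in the draft."""
    problems = []
    if "def test_" not in test_code:
        problems.append("missing test function")
    if "assert" not in test_code:
        problems.append("missing assertion")
    return problems

def repair(test_code: str, problems: list[str]) -> str:
    """Apply a simple rule-based repair for each detected problem."""
    if "missing assertion" in problems:
        test_code += "    assert False  # placeholder failing assertion\n"
    if "missing test function" in problems:
        body = "".join("    " + line + "\n"
                       for line in test_code.splitlines())
        test_code = "def test_issue():\n" + body
    return test_code

def generate_test(issue_text: str, max_rounds: int = 3) -> str:
    """Iterate draft -> check -> repair until the rules pass."""
    test = draft_test(issue_text)
    for _ in range(max_rounds):
        problems = rule_checks(test)
        if not problems:
            break
        test = repair(test, problems)
    return test
```

A reproduction test generated this way should fail on the unpatched code and pass once a correct patch is applied, which is what lets it validate SWE-agent patches.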
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Primary Area: Applications->Language, Speech and Dialog
Keywords: LLMs, SWE Patches, Reproduction Tests
Submission Number: 11300