ClariGen: Bridging Instruction Gaps via Interactive Clarification in Code Generation

Published: 13 Jan 2025, Last Modified: 26 Feb 2025, AAAI 2025 PDLM Poster, License: CC BY 4.0
Keywords: code generation, llm
TL;DR: We introduce ClariGen, a framework that improves code generation by enabling LLMs to ask clarifying questions, enriching underspecified prompts to produce more accurate, context-aware code.
Abstract: Large Language Models (LLMs) excel at generating code but often struggle when faced with incomplete or underspecified instructions. Drawing on the practice of experienced developers who seek clarification before coding, we introduce a framework that integrates a clarifying Q&A phase into the code generation process. Instead of working blindly from vague prompts, our approach encourages users to refine their requirements, enabling the LLM to produce more contextually informed and accurate code. We apply this technique to a range of challenging tasks, demonstrating that high-quality clarifications substantially improve both code correctness and reliability. Our results highlight a promising avenue for enhancing human-LLM collaboration, making generated code solutions more aligned with user intent and reducing the need for subsequent revisions.
Submission Number: 35
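The abstract describes a clarify-then-generate workflow; the sketch below illustrates one plausible way such a loop could look. It is not the authors' implementation: the `clarify_then_generate` function, the generic `llm` and `answer_fn` callables, the `max_questions` cap, and the prompt wording are all assumptions made for illustration.

```python
# Minimal sketch of a clarify-then-generate loop, assuming a generic text-in/text-out
# LLM interface. Not the paper's code; names and prompts are illustrative only.

from typing import Callable


def clarify_then_generate(task: str,
                          llm: Callable[[str], str],
                          answer_fn: Callable[[str], str],
                          max_questions: int = 3) -> str:
    """Ask clarifying questions about an underspecified task, then generate code."""
    qa_log: list[str] = []

    # Phase 1: let the model surface ambiguities, one question at a time.
    for _ in range(max_questions):
        question = llm(
            "You are about to write code for the task below. "
            "If anything is underspecified, ask ONE clarifying question; "
            "otherwise reply DONE.\n\n"
            f"Task: {task}\n\nClarifications so far:\n" + "\n".join(qa_log)
        ).strip()
        if question.upper().startswith("DONE"):
            break
        answer = answer_fn(question)  # e.g. prompt the user for an answer
        qa_log.append(f"Q: {question}\nA: {answer}")

    # Phase 2: generate code from the enriched, clarified specification.
    return llm(
        f"Task: {task}\n\nClarifications:\n" + "\n".join(qa_log) +
        "\n\nWrite code that satisfies the clarified requirements."
    )


if __name__ == "__main__":
    # Dry run with canned model replies and a canned user; swap in a real
    # model client and interactive input for actual use.
    canned = iter([
        "Should the sort be ascending or descending?",
        "DONE",
        "def sort_items(xs):\n    return sorted(xs)",
    ])
    print(clarify_then_generate("Sort the items.",
                                llm=lambda prompt: next(canned),
                                answer_fn=lambda q: "Ascending"))
```

The key design point this sketch tries to capture is that the clarification transcript is carried forward into the generation prompt, so the final code is conditioned on the refined requirements rather than on the original vague instruction.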