ChatCoder: Human-in-loop Refine Requirement Improves LLMs' Code Generation

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
Abstract: Large language models have shown good performance in generating code that meets human requirements. However, human requirements expressed in natural language can be vague, incomplete, and ambiguous, leading large language models to misunderstand them and make mistakes. Worse, it is difficult for human users to refine such requirements on their own. To help human users refine their requirements and improve large language models' code generation performance, we propose ChatCoder, a method that refines requirements through chat with large language models. We design a chat scheme in which the large language model guides the human user to express the requirement more precisely, unambiguously, and completely than before. Experiments show that ChatCoder improves LLMs' code generation performance by a large margin. In addition, ChatCoder outperforms refinement-based methods and LLMs fine-tuned on human responses.
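To make the described chat scheme concrete, below is a minimal sketch of a human-in-the-loop requirement refinement loop. It is not the paper's actual prompts or protocol: `ask_llm` is a hypothetical callable standing in for whatever chat-LLM API is used, and the prompt wording and number of refinement rounds are illustrative assumptions.

```python
from typing import Callable

def refine_requirement(
    requirement: str,
    ask_llm: Callable[[str], str],      # hypothetical wrapper around any chat LLM API
    ask_human: Callable[[str], str] = input,
    rounds: int = 2,                     # illustrative; the paper's scheme may differ
) -> str:
    """Iteratively refine a natural-language requirement before code generation.

    Each round: the LLM points out vague, incomplete, or ambiguous parts of the
    requirement as questions; the human answers; the LLM rewrites the requirement
    to incorporate the answers.
    """
    for _ in range(rounds):
        # 1. LLM asks clarifying questions about the current requirement.
        questions = ask_llm(
            "Here is a programming requirement:\n"
            f"{requirement}\n"
            "List the points that are vague, incomplete, or ambiguous, "
            "phrased as questions for the user."
        )
        # 2. Human answers the clarifying questions.
        answers = ask_human(f"{questions}\nYour answers: ")
        # 3. LLM rewrites the requirement using the new information.
        requirement = ask_llm(
            "Rewrite the requirement below so it is precise, unambiguous, and "
            "complete, using the user's answers.\n"
            f"Requirement: {requirement}\nAnswers: {answers}"
        )
    return requirement

def generate_code(requirement: str, ask_llm: Callable[[str], str]) -> str:
    """Generate code from the refined requirement."""
    return ask_llm(f"Write a Python function that satisfies:\n{requirement}")
```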
Paper Type: long
Research Area: NLP Applications
Languages Studied: Programming Languages