Guiding ChatGPT for Better Code Generation: An Empirical Study

Published: 01 Jan 2024, Last Modified: 27 Sept 2024 · SANER 2024 · License: CC BY-SA 4.0
Abstract: Automated code generation is a powerful technique for software development that can significantly reduce the effort and time developers spend writing code. Recently, OpenAI's large language model ChatGPT has emerged as a powerful tool for generating human-like responses to a wide range of textual inputs (i.e., prompts), including those related to code generation. However, the effectiveness of ChatGPT in code generation is still not well understood, and its performance can be heavily influenced by the choice of prompts, which warrants further exploration. In this paper, we report an empirical study of ChatGPT's capabilities on two types of code generation tasks, namely text-to-code and code-to-code generation. We investigate different types of prompts by leveraging the chain-of-thought strategy with multi-step optimizations. Our empirical results show that carefully designing prompts to guide ChatGPT can substantially improve code generation performance. We also analyze the factors that influence prompt design and provide insights that could guide future research.
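For readers unfamiliar with chain-of-thought prompting in this setting, the sketch below illustrates the general idea under stated assumptions: it uses the OpenAI Python client with a hypothetical single-turn text-to-code task, and it does not reproduce the specific prompt templates or multi-step optimizations evaluated in the paper.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical text-to-code task: a natural-language requirement to implement.
requirement = "Write a Python function that returns the n-th Fibonacci number."

# Chain-of-thought style prompt: ask the model to reason through the solution
# step by step before emitting the final code, instead of asking for code directly.
prompt = (
    f"Task: {requirement}\n"
    "First, analyze the requirement and outline the solution step by step.\n"
    "Then, write the final Python function inside a single code block."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # reduce randomness so outputs are easier to evaluate
)

print(response.choices[0].message.content)
```

In the multi-step variants studied in the paper, the prompt is refined over several rounds (e.g., first requesting an analysis, then requesting the code conditioned on that analysis) rather than sent as a single message as in this simplified sketch.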