Beyond Functionality: Studying Non-Functional-Requirement-Aware Code Generation Using Large Language Models
Recently, developers have increasingly used Large Language Models (LLMs) to assist with coding. In addition to functional correctness, Non-Functional Requirements (NFRs), such as code performance, play a crucial role in ensuring software quality. However, the capability of LLMs to address NFRs has yet to be systematically investigated. In this paper, we propose NFRGen, an automated framework for investigating how LLMs can better perform NFR-aware coding. Our evaluation reveals that incorporating NFRs in the prompts considerably improves the effectiveness of addressing them. At the same time, incorporating NFRs decreases Pass@1 by up to 26%, although this impact can be mitigated when the NFRs are specified upfront in the same prompt. Our study highlights implications for balancing functional correctness and NFR satisfaction across various coding workflows.
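
To make the prompting setup concrete, the sketch below illustrates one way an NFR could be stated upfront in the same prompt as the functional task; the build_prompt helper, task wording, and performance NFR are hypothetical illustrations and not NFRGen's actual templates.

    def build_prompt(task_description, nfr=None):
        """Assemble a coding prompt, optionally augmented with a non-functional requirement."""
        prompt = f"Write a Python function for the following task:\n{task_description}\n"
        if nfr is not None:
            # Hypothetical phrasing; NFRGen's actual prompt templates may differ.
            prompt += f"Additionally, the code must satisfy this non-functional requirement: {nfr}\n"
        return prompt

    # Baseline prompt (functional requirement only).
    baseline = build_prompt("Return the k largest elements of a list.")

    # NFR-aware prompt: the performance requirement is stated upfront in the same prompt.
    nfr_aware = build_prompt(
        "Return the k largest elements of a list.",
        nfr="Run in O(n log k) time using a bounded heap.",
    )

Comparing model outputs under the baseline and NFR-aware prompts mirrors the kind of contrast the evaluation draws between functional correctness (Pass@1) and NFR satisfaction.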