Beyond Functionality: Studying Non-Functional-Requirement-Aware Code Generation Using Large Language Models

ACL ARR 2024 December Submission562 Authors

14 Dec 2024 (modified: 05 Feb 2025) · CC BY 4.0
Abstract:

Developers increasingly use Large Language Models (LLMs) to assist with their coding. Beyond functional correctness, Non-Functional Requirements (NFRs), such as code performance, play a crucial role in ensuring software quality. However, the capability of LLMs to address NFRs has yet to be systematically investigated. In this paper, we propose NFRGen, an automated framework for investigating how well LLMs perform NFR-aware code generation. Our evaluation reveals that incorporating NFRs into prompts considerably improves the models' effectiveness in addressing them. At the same time, incorporating NFRs decreases Pass@1 by up to 26%, though this impact can be mitigated when NFRs are specified in the same prompt from the start. Our study highlights implications for balancing functional correctness with addressing NFRs across various coding workflows.
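For readers unfamiliar with the Pass@1 metric referenced above: code-generation work commonly uses the unbiased pass@k estimator of Chen et al. (2021), which estimates the probability that at least one of k sampled completions passes the tests, given n samples of which c are correct. The paper does not state its exact evaluation script, so the following is a minimal illustrative sketch, not the authors' implementation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total completions sampled per problem
    c: number of completions that pass the unit tests
    k: budget of completions considered
    """
    if n - c < k:
        # Too few failures to fill k slots: at least one success is guaranteed.
        return 1.0
    # 1 - P(all k chosen completions are among the n - c failures)
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per problem, 3 pass the tests.
print(pass_at_k(10, 3, 1))  # 0.3 -- for k=1 this is simply c/n
```

For k = 1 the estimator reduces to the fraction of correct samples, which is why a "26% drop in Pass@1" directly reflects fewer functionally correct generations.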

Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Large Language Models, Non-Functional Requirements, Code Generation, Robustness
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: Python, Large Language Models
Submission Number: 562