ConCodeEval: Evaluating Large Language Models for Code Constraints in Domain-Specific Languages

ACL ARR 2024 August Submission 471 Authors

16 Aug 2024 (modified: 22 Sept 2024) · ACL ARR 2024 August Submission · CC BY 4.0
Abstract: Recent work shows that Large Language Models (LLMs) struggle to understand natural language constraints for various text generation tasks in zero- and few-shot settings. In the code domain, however, constraints are widely expressed in code formats to maintain the integrity of code written in Domain-Specific Languages (DSLs) such as JSON and YAML, which are widely used for system-level programming tasks in enterprises. Given that LLMs are increasingly used for system-level code tasks, evaluating whether they can comprehend these code constraints is crucial. However, no prior work evaluates their controllability over code constraints. Hence, we introduce ConCodeEval, a first-of-its-kind benchmark with two novel tasks for code constraints across five representations. Our findings suggest that language models struggle with code constraints: code languages that perform excellently on normal code tasks do not perform well when the same languages represent fine-grained constraints.
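For illustration only (this is an assumed, representative case and not necessarily an example drawn from the ConCodeEval benchmark itself), a code-format constraint of the kind the abstract describes might be a JSON Schema that restricts the fields and value ranges of a hypothetical deployment configuration; field names such as "replicas" and "region" are invented for this sketch:

{
  "type": "object",
  "properties": {
    "replicas": { "type": "integer", "minimum": 1, "maximum": 10 },
    "region":   { "type": "string", "enum": ["us-east", "eu-west"] }
  },
  "required": ["replicas"]
}

Under such a constraint, a model that truly comprehends the schema should only generate or accept configurations whose "replicas" value lies between 1 and 10 and whose "region", if present, is one of the enumerated strings.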
Paper Type: Short
Research Area: NLP Applications
Research Area Keywords: code generation and understanding
Contribution Types: Data resources
Languages Studied: JSON, YAML, XML, Python
Submission Number: 471