Context Matters: Understanding Practitioner Priorities for Alignment in Large Language Models for Code

13 Feb 2026 (modified: 02 Apr 2026) · Submitted to AIware 2026 · Everyone · Revisions · CC BY 4.0
Keywords: LLM Alignment, Functional Requirements, Non-functional Requirements, LLM Code Generation
TL;DR: A survey of 30 practitioners reveals that code LLM alignment is context-dependent: 75.9% treat non-functional properties as equally important to functional correctness, and 100% want systematic alignment guidance.
Abstract: Large Language Models (LLMs) are increasingly used for code generation. LLM-generated code may be functionally correct yet remain insecure, non-compliant, or unmaintainable. Aligning code LLMs is a promising way to ensure that generated code satisfies both functional and non-functional requirements. However, little research has examined what software practitioners (ML/software engineers) need from LLM alignment for code generation. We report a survey of 30 practitioners from diverse industry and research backgrounds, investigating how they prioritize functional versus non-functional code properties and what factors drive their alignment decisions. Around 75.9% of practitioners view non-functional properties as equally or contextually important alongside functional correctness. Task requirements (80.0%) and compute resources (73.3%) also influence alignment decision-making. All respondents considered systematic guidance (e.g., domain-specific context) necessary during alignment, as it is a more niche area than fine-tuning.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public.
Paper Type: Full-length papers (i.e., case studies, theoretical, applied research papers). 8 pages
Reroute: false
Submission Number: 35