Developer Behaviors in Validating and Repairing LLM-Generated Code Using IDE and Eye Tracking

Published: 01 Jan 2024 · Last Modified: 16 May 2025 · VL/HCC 2024 · CC BY-SA 4.0
Abstract: The increasing use of large language model (LLM)-powered code generation tools, such as GitHub Copilot, is transforming software engineering practices. This paper investigates how developers validate and repair code generated by Copilot and examines the impact of code provenance awareness on these processes. We conducted a lab study with 28 participants tasked with validating and repairing Copilot-generated code in three software projects. Participants were randomly divided into two groups: one informed about the provenance of the LLM-generated code and the other not. We collected IDE interaction data, eye-tracking data, and cognitive workload assessments, and conducted semi-structured interviews. Our results indicate that, without explicit information, developers often fail to identify the LLM origin of the code. Developers exhibited LLM-specific behaviors, such as frequently switching between code and comments, allocating attention differently than with human-written code, and tending to delete and rewrite code. Awareness of the code's provenance led to improved performance, increased search effort, more frequent Copilot usage, and higher cognitive workload. These findings deepen our understanding of how developers interact with LLM-generated code and inform the design of tools for effective human-LLM collaboration in software development.