Observing How Students Program with an LLM-powered Assistant: Quantifying Visual Expertise Through Eye-Tracking

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
Abstract: The proliferation of language models is revolutionizing Human-AI Interaction, offering users a conversational interface for accomplishing various tasks and accessing information. How these models affect the way students learn the skill of computer programming remains an understudied area of research. This paper presents an experiment designed to investigate the interaction dynamics of university students with varying computer programming abilities when using ChatGPT as an AI-assistive tool to accomplish coding tasks. Eye-tracking technology was employed to capture gaze patterns and visual attention during their interactions with the language model. For this study, data were collected from 26 university students with a range of programming experience (from Sophomore to Ph.D. level). More experienced programmers spent 3x more time focusing on the programming IDE than on the ChatGPT UI, compared to their less experienced peers (as measured by fixations, $p < 0.05$). Novice programmers fixated equally on both interfaces but were 5.5x faster at completing the tasks, with reduced levels of complex visual attention (as measured by saccades, $p < 0.05$), indicating an over-reliance on LLM outputs. This work provides an avenue for the development of systems that can assess programmers' focus and attention as they problem-solve.
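The IDE-vs-ChatGPT attention comparison in the abstract can be illustrated with a minimal sketch of a fixation-ratio metric. This is a hypothetical example, not the authors' pipeline: the `Fixation` record, the AOI labels (`"ide"`, `"chatgpt"`), and the toy data are all assumptions made for illustration.

```python
# Hypothetical sketch of the fixation-time ratio described in the abstract.
# Field names, AOI labels, and sample values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Fixation:
    aoi: str          # area of interest: "ide" or "chatgpt"
    duration_ms: float

def ide_to_chat_ratio(fixations):
    """Ratio of total fixation time on the IDE vs. the ChatGPT UI."""
    ide = sum(f.duration_ms for f in fixations if f.aoi == "ide")
    chat = sum(f.duration_ms for f in fixations if f.aoi == "chatgpt")
    return ide / chat if chat else float("inf")

# Toy data: a participant dwelling mostly in the IDE.
sample = [Fixation("ide", 900), Fixation("ide", 600), Fixation("chatgpt", 500)]
print(ide_to_chat_ratio(sample))  # 3.0
```

A ratio near 1.0 would correspond to the abstract's "fixated equally on both interfaces" pattern for novices, while a ratio near 3.0 matches the experienced-programmer pattern reported.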
Paper Type: short
Research Area: Dialogue and Interactive Systems
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: Python
0 Replies

