From Code Generation to Conceptual Learning: Student Use of LLMs in a Web Programming Course

Published: 13 Apr 2026 · Last Modified: 24 Apr 2026 · CHI 2026 · CC BY 4.0
Abstract: As AI-assisted coding becomes standard in software development, computer science educators need a clearer understanding of how Large Language Models (LLMs) can support the learning process. Recent work has examined how students benefit from using LLMs in their courses, but most studies rely on self-reported usage or on controlled experiments with short, isolated programming tasks. To complement these approaches, this paper investigates how students organically leverage LLMs in an advanced computer science course whose assignments reflect real-world complexity. We analyze 448 LLM chat logs from 147 students across two offerings of a senior-level web programming course at a large U.S. research university. Through open coding, we identify 14 distinct prompt–response pair types that cluster into three categories: code generation, code debugging, and explanation of programming concepts. Our analysis reveals that how students interact with LLMs correlates with academic performance. High-effort behaviors, such as writing detailed specifications for code generation, correlated positively with final grades (r = 0.25, p < 0.01), whereas low-effort behaviors, such as pasting raw error messages, correlated negatively (r = −0.34, p < 0.01). We also observed a temporal shift toward explanation-oriented interactions, suggesting that students increasingly use LLMs as conceptual tutors rather than merely as code generators.