Instruction Tuning with LLMs for Programming Exercise Generation

Published: 01 Jan 2024 · Last Modified: 20 May 2025 · WISA 2024 · CC BY-SA 4.0
Abstract: Large language models (LLMs) have been applied to support programming education in areas such as question answering and program repair. While such applications help students learn more efficiently, how LLMs can be used to improve teaching efficiency remains largely unexplored. In this paper, we focus on harnessing LLMs to automatically generate programming exercises, with the goal of alleviating teachers’ workload and enhancing teaching efficiency. We first evaluate the performance of seven open-source LLMs using prompting, and then fine-tune the two best-performing LLMs on instructions constructed with the Evol-Instruct and ACES algorithms, respectively. Experimental results demonstrate that both LLMs improve after instruction tuning. Additionally, we contribute a set of evaluation metrics and an exploration of various prompting methods.
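To illustrate the kind of instruction construction the abstract refers to, the sketch below shows a minimal Evol-Instruct-style "in-depth evolution" loop for a seed programming exercise. This is an illustrative assumption, not the paper's actual pipeline: the template wording, the `call_llm` function, and the stubbed response are all hypothetical placeholders for a real chat-completion API.

```python
# Hypothetical sketch of Evol-Instruct-style evolution of a programming
# exercise. A real pipeline would send the prompt to an LLM endpoint;
# here `call_llm` is a deterministic stub so the sketch is runnable.

EVOLVE_TEMPLATE = (
    "Rewrite the following programming exercise so it is more challenging, "
    "for example by adding an input constraint or an extra requirement, "
    "while keeping it solvable:\n\n{seed}"
)

def call_llm(prompt: str) -> str:
    # Stub standing in for an LLM call: echoes the exercise text with a marker.
    return prompt.rsplit("\n\n", 1)[-1] + " (evolved)"

def evolve(seed: str, rounds: int = 2) -> list[str]:
    """Iteratively evolve a seed exercise, keeping every generation."""
    generations = [seed]
    for _ in range(rounds):
        prompt = EVOLVE_TEMPLATE.format(seed=generations[-1])
        generations.append(call_llm(prompt))
    return generations

exercises = evolve("Write a function that reverses a string.")
```

Each generation can then be paired with a reference solution to form an (instruction, response) tuple for fine-tuning; selection-based schemes such as ACES would instead score and filter candidate exercises rather than rewrite a single seed.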