Abstract: This study refines Large Language Model (LLM) prompts to improve the generation of code tracing questions; the new expert-guided prompts incorporate question features identified in prior research. Expert evaluations comparing the newly generated questions against previously preferred ones revealed improved quality in aspects such as complexity and concept coverage. Beyond offering insights into effective question generation and affirming LLMs' potential for educational content creation, the study contributes an expert-evaluated question dataset to the computing education community. However, generating high-quality reverse tracing questions remains a nuanced challenge, indicating a need for further refinement of LLM prompting.