Consistent Coding Problem Synthesis with Reflective Analysis

ACL ARR 2025 July Submission 171 Authors

24 Jul 2025 (modified: 15 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) have shown great promise in educational applications, e.g., generating coding exercises for programming instruction. However, two major challenges remain in automatic coding exercise synthesis: (1) the generated solution code often fails to pass all test cases, and (2) there is no automatic metric to assess the conceptual relevance or pedagogical quality of the synthesized problems. In this paper, we present a three-stage framework for educational coding problem synthesis. First, we perform Chain-of-Thought-based Reflective Analysis, incorporating Error Analysis and Concept Analysis, to improve the pedagogical quality of the generated problems. Second, we introduce an iterative code-refinement procedure to ensure the generated solution passes a Code Check. Third, we propose a Concept Check procedure to automatically evaluate the conceptual alignment between the input and the synthesized problem. Experiments show that our methods significantly improve both correctness and concept-level consistency, providing a reliable pipeline for automatic coding exercise synthesis.
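The iterative code refinement described in the abstract can be sketched as a simple generate-check-regenerate loop. The sketch below is an illustration only, not the paper's implementation: all names (`code_check`, `refine`, `MAX_ITERS`, the toy generator standing in for an LLM) are hypothetical, and the "Code Check" is reduced to running a candidate solution against input/output test cases and feeding failure reports back to the generator.

```python
from typing import Callable, List, Tuple

# Illustrative sketch of an iterative refinement loop gated by a Code Check.
# All names here are assumptions for illustration, not from the paper.

MAX_ITERS = 3  # refinement budget (hypothetical)

def code_check(solution: Callable[[int], int],
               tests: List[Tuple[int, int]]) -> List[str]:
    """Run the candidate solution on all test cases; return error reports."""
    errors = []
    for x, expected in tests:
        try:
            got = solution(x)
            if got != expected:
                errors.append(f"input {x}: expected {expected}, got {got}")
        except Exception as e:  # a runtime failure also counts as an error
            errors.append(f"input {x}: raised {e!r}")
    return errors

def refine(generate: Callable[[List[str]], Callable[[int], int]],
           tests: List[Tuple[int, int]]) -> Callable[[int], int]:
    """Regenerate until the Code Check passes or the budget runs out."""
    feedback: List[str] = []
    solution = generate(feedback)
    for _ in range(MAX_ITERS):
        feedback = code_check(solution, tests)
        if not feedback:                   # all tests passed
            return solution
        solution = generate(feedback)      # error feedback guides next attempt
    return solution

# Toy stand-in for an LLM: first attempt is buggy, corrected after feedback.
def toy_generator(feedback: List[str]) -> Callable[[int], int]:
    if not feedback:
        return lambda x: x + x   # buggy attempt at "square"
    return lambda x: x * x       # corrected once errors are reported

tests = [(2, 4), (3, 9)]
final = refine(toy_generator, tests)
print(code_check(final, tests))  # → [] (no remaining errors)
```

In the paper's actual pipeline, `generate` would be an LLM call whose prompt includes the accumulated error analysis; the loop structure (check, report, regenerate) is the part the sketch is meant to show.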
Paper Type: Short
Research Area: NLP Applications
Research Area Keywords: NLP Applications, Human-Centered NLP
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 171