Abstract: Teachers are increasingly using prompted LLMs to generate exam questions, and students can use generated questions for self-assessment. When generating questions from a given educational text—rather than relying solely on the LLM’s internal knowledge—handling long textual content, such as a textbook spanning hundreds of pages, presents a challenge. In this paper, we experiment with three knowledge representation approaches tailored for educational question generation using LLMs. As a novel contribution among these alternatives, we adapt the atomic fact decomposition method from fact-checking research to the educational domain. We manually evaluate the generated questions based on various criteria. Our empirical results indicate that a list of atomic facts provides a better foundation for question generation than long plain text and that LLM-based question generation from Knowledge Graph triplets outperforms rule-based question generation from Knowledge Graphs.
External IDs: dblp:conf/tsd/NagyBSKF25