Abstract: Teachers have used prompted LLMs to generate exam questions. When questions are generated from a given educational text, rather than solely from the LLM's internal knowledge, handling long input such as a textbook spanning hundreds of pages becomes a challenge. In this paper, we experiment with three knowledge representation approaches tailored to educational question generation with LLMs. As a novel contribution among these alternatives, we adapt the atomic fact decomposition method from fact-checking research to the educational domain. We manually evaluate the generated questions against several criteria. Our empirical results indicate that a list of atomic facts provides a better foundation for question generation than long plain text, and that LLM-based question generation from Knowledge Graph triplets outperforms rule-based question generation from Knowledge Graphs.
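To make the atomic-fact-based pipeline mentioned in the abstract concrete, the following minimal Python sketch shows one possible shape of it: a long passage is first decomposed into short, self-contained facts by an LLM, and one question is then generated per fact. This is not the paper's implementation; the function names, prompts, and the `call_llm` placeholder (standing in for any chat-completion endpoint) are assumptions for illustration only.

```python
from typing import Callable, List

# Hypothetical sketch of atomic-fact-based question generation.
# `call_llm` is a placeholder for any LLM completion function; the prompts
# below are illustrative, not the ones used in the paper.

def decompose_into_atomic_facts(text: str, call_llm: Callable[[str], str]) -> List[str]:
    """Ask the LLM to rewrite a passage as a list of short, self-contained facts."""
    prompt = (
        "Break the following passage into atomic facts, one per line. "
        "Each fact must be a single, self-contained statement.\n\n" + text
    )
    response = call_llm(prompt)
    return [line.strip("- ").strip() for line in response.splitlines() if line.strip()]

def generate_question(fact: str, call_llm: Callable[[str], str]) -> str:
    """Generate one exam question whose answer is grounded in the given fact."""
    prompt = "Write a single exam question that can be answered using only this fact:\n" + fact
    return call_llm(prompt).strip()

def questions_from_chapter(chapter: str, call_llm: Callable[[str], str]) -> List[str]:
    """Decompose a long chapter into atomic facts, then ask one question per fact."""
    facts = decompose_into_atomic_facts(chapter, call_llm)
    return [generate_question(fact, call_llm) for fact in facts]
```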
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: educational question generation, atomic facts, knowledge graphs, faithfulness
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 3240