Generating Contextualized Mathematics Multiple-Choice Questions Utilizing Large Language Models

Published: 01 Jan 2024 · Last Modified: 11 Feb 2025 · AIED Companion (1) 2024 · License: CC BY-SA 4.0
Abstract: Applying mathematics to solve authentic questions plays an important role in mathematics education, yet generating high-quality multiple-choice questions with authentic contexts remains a significant challenge. By combining multiple iterations of large language model dialogues with auxiliary external tools and the LangChain framework, this work presents a novel method for automatically generating contextualized multiple-choice mathematics questions. To assess the quality of the generated questions, 30 questions were randomly selected and rated by 13 human experts. The survey results indicate that questions produced by the proposed method are of significantly higher quality than those generated directly by GPT-4, and are comparable across multiple dimensions to questions meticulously crafted by humans. The code is available on the project home page: https://github.com/youzizzz1028/MCQ-generation-Chain.
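
The abstract describes an iterative pipeline of LLM dialogues orchestrated with LangChain. The following is a minimal sketch of such a pipeline, not the authors' released code (see the project home page for that); the two-step prompt wording, the `generate_mcq` helper, the model choice, and the refinement loop are all illustrative assumptions.

```python
# Minimal sketch of an iterative, LangChain-based MCQ generation pipeline.
# NOT the authors' implementation; prompts and helper names are hypothetical.
# Assumes the langchain-openai package and an OPENAI_API_KEY in the environment.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4", temperature=0.7)

# Step 1: draft a realistic, everyday context in which the target concept arises.
context_prompt = ChatPromptTemplate.from_template(
    "Write a short, realistic scenario in which the math concept "
    "'{concept}' naturally arises. Return only the scenario."
)

# Step 2: turn the scenario into a multiple-choice question with distractors.
mcq_prompt = ChatPromptTemplate.from_template(
    "Using this scenario:\n{context}\n\n"
    "Write one multiple-choice question testing '{concept}', with four "
    "options (A-D), exactly one correct answer, and plausible distractors. "
    "Mark the correct option."
)

draft_context = context_prompt | llm | StrOutputParser()
draft_mcq = mcq_prompt | llm | StrOutputParser()

def generate_mcq(concept: str, rounds: int = 2) -> str:
    """Draft a contextualized MCQ, then refine it over several dialogue turns."""
    context = draft_context.invoke({"concept": concept})
    question = draft_mcq.invoke({"concept": concept, "context": context})
    for _ in range(rounds - 1):
        # Feed the draft back for a revision pass, mimicking the multi-turn
        # dialogue loop (simplified: no external tool checks are wired in here).
        question = llm.invoke(
            "Improve this question's clarity and verify its arithmetic:\n"
            + question
        ).content
    return question

if __name__ == "__main__":
    print(generate_mcq("linear equations in one variable"))
```

In the paper's full method, the refinement loop would additionally call external tools (e.g., a calculator or solver) to verify answers and distractors before accepting a question; the sketch above replaces that with a plain self-revision prompt.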