Abstract: In recent years, Question Generation (QG) has gained significant attention as a research topic, particularly for its potential to support the preparation of automatic reading comprehension assessments. However, current QG models are mostly trained on factoid-type datasets and tend to produce questions that are too simple to assess advanced abilities. One promising alternative is to train QG models on exam-type datasets, whose questions require reasoning over content. Unfortunately, such training data is scarce compared to factoid-type questions. To address this issue and improve the quality of QG for generating advanced questions, we propose the Handover QG framework. This framework jointly trains exam-type QG and factoid-type QG, and controls the generation process by interleaving the exam-type QG decoder and the factoid-type QG decoder. Furthermore, we employ reinforcement learning to enhance QG performance. Our experimental evaluation shows that our model significantly outperforms the compared baselines, raising the BLEU-4 score from 5.31 to 6.48. Human evaluation further confirms that the questions generated by our model are answerable and appropriately difficult. Overall, the Handover QG framework offers a promising solution for improving QG performance in generating advanced questions for reading comprehension assessment.
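The abstract only sketches the handover mechanism, so the following is a minimal illustrative example of what interleaved decoding between two decoders could look like. It is not the authors' implementation: the class name `HandoverDecoder`, the GRU cells, the learned binary `switch` gate, and all hyperparameters are assumptions made purely to show the control flow of handing generation back and forth between an exam-type and a factoid-type decoder.

```python
# Hypothetical sketch of "handover" decoding: two decoders share one
# vocabulary, and a learned gate decides which decoder emits each token.
import torch
import torch.nn as nn

class HandoverDecoder(nn.Module):
    def __init__(self, vocab_size=1000, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.exam_cell = nn.GRUCell(hidden, hidden)     # stands in for the exam-type QG decoder
        self.factoid_cell = nn.GRUCell(hidden, hidden)  # stands in for the factoid-type QG decoder
        self.switch = nn.Linear(hidden, 1)              # gate choosing which decoder runs this step
        self.out = nn.Linear(hidden, vocab_size)        # shared output projection

    def forward(self, bos_token, steps=10):
        h = torch.zeros(1, self.embed.embedding_dim)
        tok, tokens = bos_token, []
        for _ in range(steps):
            x = self.embed(tok)
            # Handover point: the gate picks a decoder based on the current state.
            use_exam = torch.sigmoid(self.switch(h)) > 0.5
            h = self.exam_cell(x, h) if use_exam else self.factoid_cell(x, h)
            tok = self.out(h).argmax(dim=-1)  # greedy decoding for simplicity
            tokens.append(tok.item())
        return tokens

decoder = HandoverDecoder()
print(decoder(torch.tensor([1])))  # assuming BOS token id = 1
```

In practice the gate and both decoders would be trained jointly (per the abstract, with an additional reinforcement learning signal on QG quality), but the interleaving logic itself reduces to a per-step choice of decoder as above.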
External IDs: dblp:journals/taslp/ChungCF24