Abstract: In recent decades, there has been a significant
push to leverage technology to aid both teachers
and students in the classroom. Advances in
natural language processing have been harnessed
to provide better tutoring services, automated
feedback to teachers, improved peer-to-peer
feedback mechanisms, and measures of students'
reading comprehension. Automated question
generation systems have the potential to
significantly reduce teachers' workload in this
last task. In this paper, we compare three
neural architectures for question generation
across two types of reading material: narratives
and textbooks. For each architecture, we
explore the benefits of including question attributes
in the input representation. Our experiments
show that the T5 architecture achieves the best
overall performance, with a ROUGE-L score of
0.536 on a narrative corpus and 0.316 on a
textbook corpus. We break down the results by
attribute and find that including attributes can
improve the quality of some types of generated
questions, such as Action and Character questions,
but this does not hold for all models.