Comparison of Cross-Encoder and Bi-Encoder Approaches for the Arabic Question Answering Task

ACL ARR 2024 June Submission 1122 Authors

14 Jun 2024 (modified: 22 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: With recent advances in Transformer networks and large language models, various encoder-based approaches have been proposed for question answering. When textual data for questions and answers are available, cross-encoder approaches encode a question-answer pair jointly, while bi-encoder approaches encode the question and the answer separately. In this research, the performance of these approaches is compared on question-answer pairs from an Arabic medical dataset. Five variants of the Transformer model were utilized; these models differ in design but share the objective of leveraging large amounts of text data to build a general language-understanding model. Each model was then fine-tuned on an answer selection task and evaluated using accuracy and execution-time metrics. The results indicate that the AraBERT model with a cross-encoder architecture achieved the highest accuracy of 0.96.
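The joint-versus-separate encoding contrast described in the abstract can be sketched as follows. This is a minimal illustration only: a toy character-frequency `embed` function stands in for a Transformer encoder such as AraBERT, and the function names and the scalar scoring head are hypothetical, not taken from the paper.

```python
import math

def embed(text):
    # Hypothetical stand-in for a Transformer encoder: maps text to a
    # fixed-size vector (character-frequency features, for illustration only).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(u, v):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def bi_encoder_score(question, answer):
    # Bi-encoder: question and answer are encoded independently and compared
    # with a similarity function; answer vectors can be precomputed and cached.
    return cosine(embed(question), embed(answer))

def cross_encoder_score(question, answer):
    # Cross-encoder: the concatenated pair is encoded jointly, so a real model
    # can attend across question and answer tokens; a scalar head (here a toy
    # average, purely illustrative) then produces the relevance score.
    joint = embed(question + " [SEP] " + answer)
    return sum(joint) / len(joint)
```

The practical trade-off follows from the data flow: the bi-encoder allows answer representations to be indexed ahead of time, while the cross-encoder must re-encode every question-answer pair at query time, which typically costs more execution time per candidate.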
Paper Type: Short
Research Area: Question Answering
Research Area Keywords: Cross-encoder, Bi-encoder, Arabic language, question answering
Contribution Types: Model analysis & interpretability
Languages Studied: Arabic language
Submission Number: 1122