A Comparative Analysis of English-to-Bangla Machine Translation Systems and Quality Estimation for Low-Resource Data Creation, Applied to Conversational Question Answering
Abstract: Creating datasets for low-resource languages like Bangla often involves machine translation followed by quality estimation (QE) filtering, but the process currently lacks standardization. Different studies use a variety of translation systems and outdated metrics, making it difficult to compare findings. Likewise, the QE filtering step is often applied with methods and thresholds that have not been systematically validated. To address this, we first present a unified evaluation of English-to-Bangla MT systems using both legacy and modern metrics. We then conduct a small-scale human evaluation study comparing automated QE scores with human judgments, which lets us identify the best existing QE system and a more systematically grounded threshold for filtering. Using this improved strategy, we introduce BCoQA, a novel Bangla Conversational Question Answering dataset. We make the BCoQA dataset and our evaluation scripts publicly available. For complete reproducibility, we also release all model outputs and their corresponding metric scores via this link.
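To make the QE-filtering step concrete, here is a minimal sketch of reference-free threshold filtering, assuming the open-source unbabel-comet package and a QE model such as Unbabel/wmt22-cometkiwi-da; the 0.80 threshold is a hypothetical placeholder, not the threshold derived in the paper, and this is an illustration of the general technique rather than the paper's exact pipeline.

# Minimal QE-filtering sketch (assumptions as noted above):
# score each (source, translation) pair with a reference-free QE model
# and keep only pairs whose score clears a threshold.
from comet import download_model, load_from_checkpoint  # pip install unbabel-comet

QE_THRESHOLD = 0.80  # hypothetical placeholder, not the paper's derived value

# Note: Unbabel/wmt22-cometkiwi-da is gated on Hugging Face and requires
# accepting its license before the checkpoint can be downloaded.
model_path = download_model("Unbabel/wmt22-cometkiwi-da")
model = load_from_checkpoint(model_path)

pairs = [
    {"src": "Where was the treaty signed?", "mt": "চুক্তিটি কোথায় স্বাক্ষরিত হয়েছিল?"},
    {"src": "He left in 1998.", "mt": "তিনি ১৯৯৮ সালে চলে যান।"},
]

# predict() returns a Prediction object with per-segment scores in .scores
output = model.predict(pairs, batch_size=8, gpus=0)  # gpus=0 runs on CPU
kept = [p for p, s in zip(pairs, output.scores) if s >= QE_THRESHOLD]
print(f"Kept {len(kept)}/{len(pairs)} pairs above threshold {QE_THRESHOLD}")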
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: human evaluation, conversational QA, corpus creation, NLP datasets, few-shot/zero-shot MT, evaluation, datasets for low resource languages, metrics, reproducibility
Contribution Types: Model analysis & interpretability, Reproduction study, Approaches to low-resource settings, Data resources, Surveys
Languages Studied: Bangla, English
Previous URL: https://openreview.net/forum?id=uZ4NJWOuwA
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: No, I want the same area chair from our previous submission (subject to their availability).
Reassignment Request Reviewers: No, I want the same set of reviewers from our previous submission (subject to their availability).
Data: zip
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: N/A
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 5.2
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: Appendix E
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Appendix E
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: No
B5 Elaboration: Our primary focus in this work was on establishing a standardized methodology for the translation and filtering pipeline, rather than on a linguistic analysis of the dataset's content. The domains and linguistic phenomena are inherited directly from the source English datasets (CoQA and QuAC).
B6 Statistics For Data: Yes
B6 Elaboration: Section 5.2
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Appendix G
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Appendix G
C3 Descriptive Statistics: Yes
C3 Elaboration: Appendix C.3
C4 Parameters For Packages: Yes
C4 Elaboration: Appendix A
D Human Subjects Including Annotators: Yes
D1 Instructions Given To Participants: Yes
D1 Elaboration: Appendix D
D2 Recruitment And Payment: Yes
D2 Elaboration: Appendix F
D3 Data Consent: No
D3 Elaboration: We did not collect any personal information from the human participants; each was assigned a random identifier, and the data consisted solely of non-personal evaluation of contextual question answering.
D4 Ethics Review Board Approval: No
D4 Elaboration: Formal ethics review was not sought as this study involved voluntary, anonymous participation in a low-risk task: the linguistic evaluation of non-personal text. No sensitive or personally identifiable data was collected.
D5 Characteristics Of Annotators: Yes
D5 Elaboration: Appendix F
E Ai Assistants In Research Or Writing: No
E1 Information About Use Of Ai Assistants: N/A
Author Submission Checklist: Yes
Submission Number: 916