A Comparative Study of Using Pre-trained Language Models for Mental Healthcare Q&A Classification in Arabic

Published: 02 Aug 2024, Last Modified: 12 Nov 2024
Venue: WiNLP 2024
License: CC BY 4.0
Keywords: Mental health, Natural language processing, Question/answer classification, Text classification, Large Language Models
Abstract: This study explores Pre-trained Language Models (PLMs) for Arabic mental health question answering using the novel MentalQA dataset. We establish a baseline for future research and compare PLMs against classical machine learning models. Fine-tuned PLMs outperform the classical models, with MARBERT achieving the best results (F1-score of 0.89). Few-shot learning with GPT models also shows promise. This work highlights the potential of PLMs for Arabic mental health applications while identifying areas for further development.
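The abstract does not specify which classical models served as the baseline. As a purely illustrative sketch, one common classical setup for this kind of text classification is TF-IDF features fed to a logistic regression classifier; the texts, labels, and feature choices below are hypothetical placeholders, not the MentalQA data or the paper's actual configuration:

```python
# Hypothetical classical baseline for Q&A text classification:
# TF-IDF features + logistic regression. All data below is a placeholder
# (English stand-ins for Arabic questions), NOT the MentalQA dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I feel anxious before every exam",
    "How can I sleep better at night?",
    "I have lost interest in my daily activities",
    "What habits help with low mood?",
]
labels = ["anxiety", "sleep", "depression", "lifestyle"]  # illustrative classes

# Character n-grams are one reasonable feature choice for a morphologically
# rich language such as Arabic; word-level features would also be plausible.
baseline = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(texts, labels)

pred = baseline.predict(["I worry constantly about the future"])[0]
print(pred)  # one of the illustrative class labels above
```

A fine-tuned PLM such as MARBERT replaces the TF-IDF feature step with contextual embeddings learned during pre-training, which is the gap the reported results measure.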
Submission Number: 10