Leveraging Multi-Task Learning for Detecting Aggression, Emotion, Violence, and Sentiment in Bengali Texts
Keywords: Natural language processing, Text classification, Multi-task learning, Low-resource languages, Transformer models
Abstract: Despite remarkable advances in text classification (TC) for high-resource languages, progress in resource-constrained languages such as Bengali remains limited by the scarcity of standardized corpora, domain adaptation protocols, and robust pre-trained models. We introduce $\textbf{MTL-MuRIL}$, a transformer-based Multi-Task Learning (MTL) framework that jointly learns four interrelated classification tasks—aggression detection, emotion classification, violence detection, and sentiment analysis—within Bengali texts. Our approach leverages shared linguistic representations across tasks to improve generalization and mitigate overfitting in low-resource settings. Comprehensive experiments show that MTL-MuRIL consistently outperforms single-task baselines, achieving F1-scores of 0.893 (±0.005) for aggression detection, 0.743 (±0.030) for sentiment analysis, 0.717 (±0.015) for violence detection, and 0.570 (±0.020) for emotion classification. These results underscore the effectiveness of multi-task learning for enhancing Bengali text understanding and point toward a scalable paradigm for multilingual low-resource NLP.
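The abstract describes a shared transformer encoder with task-specific heads for the four classification tasks. The following is a minimal sketch of that kind of architecture, not the authors' released implementation: it assumes the Hugging Face `transformers` library, the public `google/muril-base-cased` checkpoint as the MuRIL backbone, and hypothetical label counts per task.

```python
# Minimal sketch of a multi-task MuRIL classifier (assumed, not the paper's code):
# one shared encoder, one linear head per task over the [CLS] representation.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Hypothetical label counts; the paper does not specify these here.
TASKS = {"aggression": 2, "sentiment": 3, "violence": 2, "emotion": 6}

class MTLMuRIL(nn.Module):
    def __init__(self, model_name="google/muril-base-cased", dropout=0.1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared linguistic representation
        hidden = self.encoder.config.hidden_size
        self.dropout = nn.Dropout(dropout)
        # One classification head per task, all reading the same encoder output.
        self.heads = nn.ModuleDict({t: nn.Linear(hidden, n) for t, n in TASKS.items()})

    def forward(self, input_ids, attention_mask, task):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = self.dropout(out.last_hidden_state[:, 0])  # [CLS] token embedding
        return self.heads[task](cls)                     # logits for the requested task

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("google/muril-base-cased")
    model = MTLMuRIL()
    batch = tok(["উদাহরণ বাংলা বাক্য"], return_tensors="pt", padding=True, truncation=True)
    logits = model(batch["input_ids"], batch["attention_mask"], task="sentiment")
    print(logits.shape)  # torch.Size([1, 3]) under the assumed 3-way sentiment labels
```

In an MTL setup like this, training would typically alternate batches across tasks and sum the per-task cross-entropy losses so the shared encoder learns from all four supervision signals; the exact sampling and loss-weighting scheme used in MTL-MuRIL is described in the paper, not shown here.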
Track: Track 2: ML by Muslim Authors
Submission Number: 47