SINA-BERT: A Pre-Trained Language Model for Analysis of Medical Texts in Persian

Anonymous

May 16, 2021 (edited Jun 23, 2021) · ACL ARR 2021 May Blind Submission
  • Abstract: We release SINA-BERT, a BERT-based language model, to address the lack of a high-quality Persian language model in the medical domain. SINA-BERT is pre-trained on a large-scale corpus of medical content, including formal and informal texts collected from various online resources, to improve performance on health-care-related tasks. We employ SINA-BERT on the following representative tasks: categorization of medical questions, medical sentiment analysis, medical named entity recognition, and medical question retrieval. For each task, we have developed annotated Persian data sets for training and evaluation and learned a representation of the data for each task, in particular for long and complex medical questions. With the same architecture used across all tasks, SINA-BERT outperforms BERT-based models previously available for Persian.
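The pre-training the abstract refers to is BERT's masked-language-model objective applied to a medical corpus. The following is a minimal, illustrative sketch of the input-corruption step only (the function name, the fixed 15% masking rate, and the example sentence are our assumptions; real BERT pre-training additionally replaces some masked positions with random or unchanged tokens, which is omitted here for brevity):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Sketch of BERT-style masked-language-model input corruption.

    Returns (corrupted, labels): labels holds the original token at
    masked positions and None elsewhere, so the loss is computed only
    where the input was corrupted.
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            corrupted.append(mask_token)  # model must recover the original token
            labels.append(tok)
        else:
            corrupted.append(tok)
            labels.append(None)
    return corrupted, labels

# Hypothetical Persian medical sentence: "the patient complains of chronic headache"
tokens = "بیمار از سردرد مزمن شکایت دارد".split()
corrupted, labels = mask_tokens(tokens)
print(corrupted)
```

During pre-training, the model is trained to predict the original token at each `[MASK]` position from the surrounding context, which is how domain-specific corpora (here, Persian medical text) shape the learned representations.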