Abstract: Natural language processing (NLP) techniques have become increasingly important in the medical domain. However, the amount of available medical text data remains limited. In this work, we propose a BERT-based multilingual simultaneous learning (MSL) model to mitigate the problem of scarce data. We evaluate the benefit of MSL on the NTCIR-13 MedWeb multi-label symptom classification task. The results indicate that the MSL model performs slightly better than single-task learning (STL) models. They also show that the similarity between languages affects the performance of the MSL model.
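To make the MSL idea concrete, the following is a minimal sketch of one plausible realization: a shared multilingual BERT encoder with a single sigmoid multi-label head, trained on batches that mix the parallel tweets from all MedWeb languages in the same step. The encoder name, the 8-label setup, and the training-loop details are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of multilingual simultaneous learning (MSL) for multi-label
# symptom classification. Assumptions: multilingual BERT as the shared
# encoder, 8 symptom labels (as in NTCIR-13 MedWeb), and mixed-language
# batches so every language updates the shared encoder simultaneously.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

NUM_SYMPTOMS = 8  # MedWeb annotates 8 binary symptom labels per tweet


class MSLClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden, NUM_SYMPTOMS)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Pooled [CLS] representation feeds the sentence-level label head.
        return self.head(out.pooler_output)


tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = MSLClassifier()

# Simultaneous learning: one batch contains parallel tweets from all
# languages, so the shared encoder sees each language in every step.
batch_texts = [
    "I have a fever and a cough.",   # English
    "熱があって咳が出ます。",          # Japanese
    "我发烧了，还咳嗽。",              # Chinese
]
# Hypothetical gold labels: fever and cough active, other symptoms off.
labels = torch.tensor([[1, 1, 0, 0, 0, 0, 0, 0]] * 3, dtype=torch.float)

enc = tokenizer(batch_texts, padding=True, truncation=True, return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
loss = nn.BCEWithLogitsLoss()(logits, labels)  # standard multi-label objective
loss.backward()  # an optimizer step would follow in a full training loop
```

Under this reading, the contrast with STL is that an STL baseline would train one such model per language, whereas MSL shares the encoder across languages so each language's examples act as extra training data for the others.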