Domain- and language-adaptive pre-training of BERT models for Korean-English bilingual clinical text analysis
Abstract: To develop bilingual Korean-English medical language models through domain- and language-adaptive pre-training and evaluate their performance in clinical text analysis tasks, specifically semantic similarity and multi-label classification.
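Domain- and language-adaptive pre-training of the kind described here typically continues BERT's masked-language-model (MLM) objective on in-domain bilingual text. As a minimal illustrative sketch (the tokenizer, vocabulary, and function names below are assumptions, not the authors' implementation), the standard BERT masking rule selects ~15% of tokens and replaces each with `[MASK]` 80% of the time, a random token 10%, or leaves it unchanged 10%:

```python
import random

MASK = "[MASK]"
# Hypothetical mini-vocabulary for the random-replacement branch.
VOCAB = ["patient", "fever", "dose", "scan", "seoul"]

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT-style MLM masking.

    Each token is selected with probability `mask_prob`; a selected token
    becomes [MASK] (80%), a random vocab token (10%), or stays as-is (10%).
    Returns (masked_tokens, labels), where labels hold the original token
    at selected positions and None elsewhere (positions the loss ignores).
    """
    rng = random.Random(seed)
    out, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)
            r = rng.random()
            if r < 0.8:
                out.append(MASK)      # 80%: replace with [MASK]
            elif r < 0.9:
                out.append(rng.choice(VOCAB))  # 10%: random token
            else:
                out.append(tok)       # 10%: keep original
        else:
            out.append(tok)
            labels.append(None)
    return out, labels

masked, labels = mask_tokens(["the", "patient", "has", "a", "fever"], seed=42)
```

The pre-training loss is then computed only at the positions where `labels` is not `None`, which is what lets the model adapt to clinical Korean-English text without labeled data.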
External IDs: dblp:journals/midm/JoCLSJ25