Development and bilingual evaluation of Japanese medical large language model within reasonably low computational resources
Keywords: medical LLM, Japanese, domain adaptation, bilingual, multilingual, benchmark
TL;DR: We develop and comprehensively evaluate the bilingual ability of medical LLMs in Japanese and English. Our 7B model outperforms many 70B models.
Abstract: The recent success of large language models (LLMs) and scaling laws has led to widespread adoption of larger models. In the healthcare industry in particular, there is increasing demand for locally operated LLMs due to security concerns. However, the majority of high-quality open-source LLMs are 70B parameters in size, imposing significant financial burdens on users for GPU preparation and operation. To overcome these issues, we present a medical adaptation based on recent 7B models, which enables operation with low computational resources.
We compare performance on medical question-answering benchmarks in two languages (Japanese and English), demonstrating that our model's scores reach parity with or surpass those of existing medical LLMs that are ten times larger. We find that fine-tuning an English-centric base model on a Japanese medical dataset improves scores in both languages, supporting the effect of cross-lingual knowledge transfer. We hope that this study will alleviate these financial challenges and serve as a stepping stone for clinical institutions to practically utilize LLMs locally.
Our trained model and evaluation code will both be available at [hidden for anonymity].
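As a rough illustration of the domain-adaptation step summarized in the abstract, a minimal supervised fine-tuning sketch is given below. This is not the authors' training code: the base model identifier, dataset path, prompt format, and hyperparameters are all placeholders, assuming a Hugging Face Transformers workflow with an English-centric 7B base model and a Japanese medical QA dataset of question/answer records.

```python
# Minimal sketch (hypothetical names and settings): fine-tune a 7B causal LM
# on a Japanese medical QA dataset. Illustrative only; not the paper's code.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "mistralai/Mistral-7B-v0.1"   # hypothetical English-centric 7B base
DATA_PATH = "data/ja_medical_qa.jsonl"     # hypothetical {"question", "answer"} records

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)

def format_and_tokenize(example):
    # Concatenate question and answer into one training sequence (JP prompt style).
    text = f"質問: {example['question']}\n回答: {example['answer']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=1024)

dataset = load_dataset("json", data_files=DATA_PATH, split="train")
dataset = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out/med-ja-7b",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,  # keeps per-step memory low on a single GPU
        num_train_epochs=2,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The 7B scale and the single-GPU-friendly settings (bf16 weights, small batch with gradient accumulation) reflect the low-resource operation the abstract emphasizes; the actual released model and evaluation code are referenced at the link above.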
Submission Number: 7