Keywords: Hindi, medical LLM, cross-lingual transfer, low-resource languages
Abstract: Medical large language models hold promise for reducing healthcare disparities, yet Hindi remains severely underrepresented. While medical LLMs excel in high-resource languages, their performance degrades sharply in Hindi, particularly on Indian systems of medicine. We argue that robust cross-lingual medical transfer requires reasoning in Hindi itself. To this end, we introduce HiMed, a comprehensive Hindi medical reasoning corpus and benchmark suite covering both Western and Indian medicine. We further propose HiMed-8B, a Hindi medical reasoning LLM built on LLaMA-3.1-8B-Instruct and trained with a decaying scaffolding reward. Extensive experiments demonstrate consistent improvements in Hindi medical reasoning performance and a substantial reduction in the English--Hindi accuracy gap. Ablation studies further validate the contribution of each training stage and reward component. All data and code are available at an anonymous GitHub repository: https://anonymous.4open.science/r/anon-repo-54EC/README.md.
Paper Type: Long
Research Area: Multilinguality and Language Diversity
Research Area Keywords: cross-lingual transfer, multilingual benchmarks, multilingual evaluation, resources for less-resourced languages, chain-of-thought, fine-tuning
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English, Hindi
Submission Number: 3518