Keywords: Clinical Natural Language Inference, Clinical Trial NLI, NLI4CT, Prompt Engineering, Chain-of-Thought, Self-Critique, ReAct, Quasi-Symbolic Reasoning, QuaSAR, LoRA, Parameter-Efficient Fine-Tuning, Reasoning Type Annotation, Reasoning-Aware Evaluation, MedNLI, TREC Clinical Trials
Abstract: Recent work on large language models (LLMs) has demonstrated the impact of prompting strategies and fine-tuning techniques on their reasoning capabilities. Yet, their effectiveness on clinical trial natural language inference (CTNLI) remains underexplored. This study presents the first controlled evaluation of how prompt structure and efficient fine-tuning jointly shape model performance in CTNLI.
We examine four classes of prompting strategies that elicit reasoning in LLMs at different levels of abstraction, and evaluate their impact on a range of clinically motivated reasoning types. For each prompting strategy, we construct high-quality demonstrations using a frontier model to distil multi-step reasoning capabilities into smaller models (≤ 4B parameters) via Low-Rank Adaptation (LoRA). Across different LLMs fine-tuned on the NLI4CT benchmark, we find that prompt type alone accounts for up to 44% of the variance in macro-F1. Moreover, LoRA fine-tuning yields consistent gains of +8 to +12 F1, raises output alignment above 97%, and narrows the performance gap to GPT-4o-mini to within 7.1%. Additional experiments on reasoning generalisation reveal that LoRA improves performance for 75% of the models on MedNLI and TREC Clinical Trials.
Overall, these findings demonstrate that (i) prompt structure is a primary driver of clinical NLI reasoning performance, (ii) compact models equipped with strong prompts and LoRA can rival frontier-scale systems, and (iii) reasoning-type-aware evaluation is essential to uncover prompt-induced trade-offs. Our results highlight the promise of combining prompt design and lightweight adaptation for more efficient and trustworthy clinical NLP systems, providing insights on the strengths and limitations of widely adopted prompting and parameter-efficient techniques in specialised domains. All code, annotations, prompts, demonstrations, and checkpoints will be released upon publication.
Paper Type: Long
Research Area: Clinical and Biomedical Applications
Research Area Keywords: healthcare applications, clinical NLP, natural language inference, textual entailment, prompting, chain-of-thought, fine-tuning, parameter-efficient-training, distillation, benchmarking, evaluation methodologies
Contribution Types: Model analysis & interpretability, Approaches for low compute settings-efficiency, Publicly available software and/or pre-trained models, Data resources, Data analysis
Languages Studied: English
Submission Number: 3404