Asking the Right Questions: Adapting LLMs to Analyze Clinical Notes from Multiple Care-Domains

ACL ARR 2026 January Submission 7234 Authors

06 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: Clinical Notes, AI4Health, Prompt Optimization, Large Language Models
Abstract: Clinical notes contain fine-grained, domain-specific information that can significantly improve patient risk estimation. However, these notes differ widely across care-domains such as nursing, physician, and radiology, as each subscribes to a particular viewpoint when documenting a patient's status. Although Large Language Models (LLMs) can reason over long clinical narratives, their performance depends heavily on prompting, and fixed or manually crafted prompts often fail to reflect the linguistic and semantic variation across note types. Our empirical analysis shows that notes from distinct care-domains exhibit large differences in topic distributions, underscoring the need for viewpoint-aware modeling. To tackle this problem, we propose an end-to-end framework that learns optimal guiding questions for each viewpoint, enabling LLMs to extract clinically meaningful and interpretable risk factors tailored to each one. The guiding questions are optimized using only supervision from downstream prediction tasks, without any instruction tuning of the base LLM. Across two real-world EHR datasets and three prediction tasks, our framework outperforms domain-agnostic prompting and demonstrates that viewpoint-specific guiding questions are crucial for accurate and explainable patient risk estimation.
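The viewpoint-aware prompting described in the abstract can be sketched as prompt templates keyed by care-domain. This is a minimal illustrative assumption, not the paper's method: the question text, the `GUIDING_QUESTIONS` table, and `build_prompt` are all hypothetical placeholders standing in for the questions the framework would learn from downstream supervision.

```python
# Hypothetical sketch of viewpoint-aware prompting. The questions below
# are illustrative placeholders, not the learned guiding questions from
# the paper: in the described framework, these would be optimized using
# supervision from downstream prediction tasks.

GUIDING_QUESTIONS = {
    "nursing": [
        "What changes in mobility, intake, or mental status are noted?",
    ],
    "physician": [
        "What diagnoses, differentials, or treatment changes are documented?",
    ],
    "radiology": [
        "What acute findings or interval changes does the report describe?",
    ],
}

def build_prompt(care_domain: str, note_text: str) -> str:
    """Assemble a domain-specific prompt from that viewpoint's questions."""
    questions = GUIDING_QUESTIONS[care_domain]
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions))
    return (
        f"Answer the following questions about this {care_domain} note, "
        f"listing clinically meaningful risk factors:\n{numbered}\n\n"
        f"Note:\n{note_text}"
    )
```

The resulting prompt would be sent to a base LLM without any instruction tuning; only the question table would change as the framework optimizes each viewpoint's questions against downstream task performance.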
Paper Type: Long
Research Area: Clinical and Biomedical Applications
Research Area Keywords: Clinical and biomedical language models
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 7234