Trustworthy LLM-Based Medical Decision-Making Framework: An Iterative Validation Methodology with Safety Guarantees
Keywords: Large Language Models; Trustworthy AI; Medical Decision Support; Iterative Validation; Explainable AI; Safety Constraints; Clinical Diagnosis; Solution Verification; Healthcare AI; Reliable Machine Learning
TL;DR: An iterative, trustworthy LLM-based medical decision-support system that generates multiple candidate solutions, evaluates their correctness, and safely reformulates the problem when validation fails, ensuring reliable, verifiable conclusions and preventing erroneous outputs
Abstract: This article proposes a trustworthy AI framework with a task-oriented, iterative approach based on Large Language Models (LLMs) for decision-support systems in the medical domain. In the proposed model, the initial medical problem is first analyzed by an LLM, which then generates multiple candidate solutions t₁, t₂, …, tₖ with transparent reasoning traces. These solutions are rigorously validated for correctness, that is, their ability to provide a logically complete, well-justified, and verifiable solution to the problem. If a solution passes validation, the system generates the final output with full provenance tracking and terminates. Otherwise, a safety-critical constraint condition N determines whether to continue, taking into account iteration limits, computational resources, and safety boundaries that prevent harmful outputs. If N is satisfied, the problem is reformulated and the iterative analysis continues; otherwise, the system fails safely without producing an output. This trustworthy AI approach reduces the risk of incorrect or premature conclusions while enhancing the reliability, explainability, verifiability, and safety of LLM-based medical decision-support systems.
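For concreteness, the control flow described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: every helper (analyze_problem, generate_candidates, validate, constraint_satisfied, reformulate) is a hypothetical placeholder standing in for an LLM call or a domain-specific check, and the Candidate type is likewise assumed for illustration.

```python
# Minimal sketch of the generate-validate-reformulate loop from the abstract.
# All helpers below are hypothetical placeholders, not a published API.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Candidate:
    answer: str
    reasoning_trace: str                      # transparent reasoning for explainability
    provenance: List[dict] = field(default_factory=list)


def analyze_problem(problem: str) -> str:
    """Placeholder for the LLM analysis of the medical problem."""
    return f"analysis of: {problem}"


def generate_candidates(analysis: str, k: int) -> List[Candidate]:
    """Placeholder generating k candidate solutions t_1, ..., t_k."""
    return [Candidate(answer=f"t_{i}", reasoning_trace=analysis)
            for i in range(1, k + 1)]


def validate(problem: str, c: Candidate) -> bool:
    """Placeholder correctness check: logically complete, well-justified, verifiable."""
    return False  # stub: always fails here, so the demo exercises reformulation


def constraint_satisfied(iteration: int, limit: int) -> bool:
    """Placeholder for the safety-critical constraint condition N
    (iteration limit, computational budget, safety boundaries)."""
    return iteration + 1 < limit


def reformulate(problem: str, analysis: str) -> str:
    """Placeholder reformulation of the problem for the next iteration."""
    return f"reformulated({problem})"


def solve(problem: str, k: int = 3, max_iterations: int = 5) -> Optional[Candidate]:
    for iteration in range(max_iterations):
        analysis = analyze_problem(problem)
        for candidate in generate_candidates(analysis, k):
            if validate(problem, candidate):
                candidate.provenance.append({"iteration": iteration,
                                             "analysis": analysis})
                return candidate              # validated: emit with provenance
        if not constraint_satisfied(iteration, max_iterations):
            return None                       # N violated: fail safely, no output
        problem = reformulate(problem, analysis)
    return None                               # budget exhausted: fail safely
```

Note the fail-safe default: when no candidate passes validation and N is no longer satisfied, the sketch returns no output at all rather than an unverified answer, mirroring the abstract's safe-termination requirement.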
Submission Number: 8