From RAG to Agentic: Validating Islamic-Medicine Responses with LLM Agents

Published: 19 Jun 2025, Last Modified: 12 Jul 2025
Venue: 4th Muslims in ML Workshop, co-located with ICML 2025 (Poster)
License: CC BY 4.0
Submission Track: Track 1: Machine Learning Research by Muslim Authors
Keywords: LLM Agents, RAG, Question-Answering, Islamic Healthcare
TL;DR: We develop and validate a reproducible framework to benchmark how well different LLMs answer domain-specific questions about Islamic medicine, and how RAG and agentic scientific prompting affect their performance.
Abstract: Centuries-old Islamic medical texts such as Avicenna’s Canon of Medicine and the Prophetic Tibb-e-Nabawi encode a wealth of preventive care, nutrition, and holistic therapies, yet they remain inaccessible to many and underutilized in modern AI systems. Existing language-model benchmarks focus narrowly on factual recall or user preference, leaving a gap in validating culturally grounded medical guidance at scale. We propose a unified evaluation pipeline, Tibbe-AG, that aligns 30 carefully curated Prophetic-medicine questions with human-verified remedies and compares three LLMs (LLaMA-3, Mistral-7B, Qwen2-7B) under three configurations: direct generation, retrieval-augmented generation, and a scientific self-critique filter. Each answer is then assessed by a secondary LLM serving as an agentic judge, yielding a single 3C3H quality score. Retrieval improves factual accuracy by 13%, and the agentic prompt adds a further 10% through deeper mechanistic insight and explicit safety considerations. Our results demonstrate that blending classical Islamic texts with retrieval and self-evaluation enables reliable, culturally sensitive medical question-answering.
Submission Number: 7
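
The abstract describes an evaluation loop: each model answers every curated question under three configurations, and a secondary LLM judge scores each answer against the human-verified remedy. The sketch below is a minimal, hypothetical illustration of that loop; it is not the authors' released code. All names (`generate`, `retrieve`, `judge_3c3h`, `QAPair`) are placeholders to be replaced with real model and retriever calls, and the assumption that the self-critique stage revises a RAG draft (rather than a direct answer) is ours, inferred from the stacked 13% + 10% gains reported.

```python
# Hypothetical sketch of the Tibbe-AG evaluation loop.
# All functions below are illustrative stubs, not a published API.

from dataclasses import dataclass


@dataclass
class QAPair:
    question: str
    reference_remedy: str  # human-verified remedy for this question


def generate(model: str, prompt: str) -> str:
    """Stub for a call to LLaMA-3, Mistral-7B, or Qwen2-7B."""
    raise NotImplementedError


def retrieve(question: str, k: int = 3) -> list[str]:
    """Stub retriever over the classical Islamic-medicine corpus."""
    raise NotImplementedError


def judge_3c3h(question: str, reference: str, answer: str) -> float:
    """Stub for the secondary LLM acting as agentic judge,
    returning a single 3C3H quality score."""
    raise NotImplementedError


def answer(model: str, qa: QAPair, mode: str) -> str:
    """Produce an answer under one of the three configurations."""
    if mode == "direct":
        prompt = qa.question
    elif mode == "rag":
        context = "\n".join(retrieve(qa.question))
        prompt = f"Context:\n{context}\n\nQuestion: {qa.question}"
    elif mode == "agentic":
        # Assumption: the scientific self-critique filter revises a RAG draft.
        draft = answer(model, qa, "rag")
        prompt = (
            f"Question: {qa.question}\nDraft answer: {draft}\n"
            "Critique the draft scientifically (mechanism, safety) "
            "and produce a revised answer."
        )
    else:
        raise ValueError(f"unknown mode: {mode}")
    return generate(model, prompt)


def evaluate(models: list[str], dataset: list[QAPair]) -> dict:
    """Mean judge score per (model, configuration) pair."""
    scores = {}
    for model in models:
        for mode in ("direct", "rag", "agentic"):
            per_q = [
                judge_3c3h(qa.question, qa.reference_remedy, answer(model, qa, mode))
                for qa in dataset
            ]
            scores[(model, mode)] = sum(per_q) / len(per_q)
    return scores
```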