Keywords: Explainable AI, XAI, LLM, feature tree, Uber Tree
Abstract: The "black-box" nature of Large Language Models (LLMs) poses a significant barrier to their adoption in high-stakes, regulated domains like finance and healthcare, where verifiable explanations are mandatory. We propose a novel hybrid framework that enhances LLM explainability by generating hierarchical feature trees from individual Question-Answer (Q&A) pairs and merging them into a unified, global "Uber Tree." This structure provides both local explanations for specific answers and a global overview of the model's knowledge landscape. Our method combines the semantic understanding of LLMs for tree generation and merging with traditional recursive algorithms for robustness, ensuring scalability. Crucially, we introduce a formal consistency verification step to validate the alignment between individual explanations and the global knowledge structure. Applied to the domain of mortgage compliance using a comprehensive dataset of 1000 Q&A pairs, our framework demonstrates high-quality tree generation, effective merging that outperforms purely algorithmic baselines, and strong consistency (95%). A human evaluation with domain experts confirms a significant improvement in explainability and auditability over standard Chain-of-Thought explanations. This work offers a practical pathway toward auditable and verifiable AI systems at enterprise scale.
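The abstract's merge-and-verify idea can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: trees are modeled as nested dicts, `merge_trees` plays the role of the recursive merging algorithm, and `consistency` stands in for the consistency-verification step by measuring what fraction of a local tree's paths survive in the Uber Tree. All names and structures here are illustrative assumptions.

```python
# Hypothetical sketch: fold per-question feature trees into a global
# "Uber Tree" and check local/global consistency. Nested-dict trees and
# all function names are illustrative assumptions, not the paper's code.

def merge_trees(uber, local):
    """Recursively fold a local feature tree into the global tree."""
    for label, children in local.items():
        node = uber.setdefault(label, {})
        merge_trees(node, children)
    return uber

def paths(tree, prefix=()):
    """Enumerate every root-to-node label path in a tree."""
    for label, children in tree.items():
        path = prefix + (label,)
        yield path
        yield from paths(children, path)

def consistency(local, uber):
    """Fraction of local-tree paths also present in the Uber Tree."""
    local_paths = set(paths(local))
    if not local_paths:
        return 1.0
    return len(local_paths & set(paths(uber))) / len(local_paths)

# Two local trees extracted from different Q&A pairs (toy examples)
t1 = {"eligibility": {"income": {}, "credit score": {}}}
t2 = {"eligibility": {"income": {}, "loan-to-value": {}}}

uber = merge_trees(merge_trees({}, t1), t2)
print(consistency(t1, uber))  # 1.0: every local path survives the merge
```

Under this toy model, a consistency score below 1.0 would flag a local explanation whose structure is not reflected in the global knowledge tree, which is the alignment property the verification step is meant to certify.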
Primary Area: interpretability and explainable AI
Submission Number: 11579