Keywords: Uncertainty Quantification, LLM Explanations, Graph Mining
TL;DR: A framework quantifies uncertainty in LLM explanations through a formal reasoning topology perspective.
Abstract: Understanding the uncertainty in large language model (LLM) explanations is important for evaluating their faithfulness and reasoning consistency, and thus provides insight into the reliability of an LLM's output. In this work, we propose a novel framework that quantifies uncertainty in LLM explanations from a formal reasoning topology perspective. By designing a structural elicitation strategy, we decompose an explanation into knowledge and reasoning dimensions, which allows us not only to quantify reasoning uncertainty but also to assess knowledge redundancy and provide interpretable insights into the model's reasoning structure. Our method offers a systematic way to interpret the LLM reasoning process, analyze its limitations, and provide guidance for enhancing robustness and faithfulness. This work pioneers the use of graph-structured uncertainty measurement in LLM explanations, offering a new perspective on evaluating and improving reasoning capabilities.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 238