ReFIne: A Framework for Trustworthy Large Reasoning Models with Reliability, Faithfulness, and Interpretability
Keywords: Large Reasoning Models, Trustworthy Machine Learning, Reinforcement Learning
Abstract: Recent advances in long chain-of-thought (CoT) reasoning have largely prioritized answer accuracy and token efficiency, while overlooking aspects critical to trustworthiness. We argue that usable reasoning systems must be trustworthy, as characterized by three properties: interpretability, faithfulness, and reliability. To this end, we propose $\textbf{\texttt{ReFIne}}$, a new training framework that integrates supervised fine-tuning with Group Relative Policy Optimization (GRPO) to encourage models to: (i) improve interpretability by producing structured, tag-based traces with high-level planning that are easier for humans to follow; (ii) enhance faithfulness by explicitly disclosing the decisive information guiding each solution, with consistent cross-section references; and (iii) promote reliability by providing self-assessments of both the derivation’s soundness and the confidence of the final answer. We apply $\textbf{\texttt{ReFIne}}$ to Qwen3 models at multiple scales (1.7B/4B/8B) and evaluate them on mathematical benchmarks of varying difficulty. Our experimental results show that $\textbf{\texttt{ReFIne}}$ models generate clearer and better-structured reasoning traces (interpretability +44.0\%), expose their underlying decision process more faithfully (faithfulness +18.8\%), and offer more informative confidence estimates (reliability +42.4\%). These findings highlight an overlooked but important direction: reasoning models should be optimized not only for accuracy, but also for broader dimensions of trustworthiness.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 13321