Keywords: Mathematical Reasoning, Reasoning Consistency
Abstract: While large language models (LLMs) have proven capable of addressing some complex reasoning tasks, we also know that they are highly sensitive to input variation, which can lead to different solution paths and final answers. Answer consistency across input variations can thus be taken as a sign of stronger confidence. Leveraging this aspect, we introduce a framework in which models are systematically pushed to diversify solution paths towards a final answer, thereby testing them for answer consistency across multiple input variations. Focusing on math problems, we induce variations in (i) the order of shots in the prompt, (ii) problem phrasing, and (iii) the languages used. Experiments on a wide range of open-source LLMs of various sizes show that reasoning consistency differs by variation dimension, and that by aggregating consistency across dimensions, our framework enhances mathematical reasoning performance on the monolingual datasets GSM8K and MATH500, and the multilingual dataset MGSM. We also discuss the differing efficacy of the types of variations induced and how they could be further leveraged in future work.
Paper Type: Long
Research Area: Mathematical, Symbolic, Neurosymbolic, and Logical Reasoning
Research Area Keywords: Mathematical, Symbolic, Neurosymbolic, and Logical Reasoning
Contribution Types: NLP engineering experiment
Languages Studied: Bengali (BN), Chinese (ZH), French (FR), English (EN), German (DE), Japanese (JA), Russian (RU), Spanish (ES), Swahili (SW), Telugu (TE) and Thai (TH)
Submission Number: 9393