EnrichMath: Enriching Idea and Solution Elicit Mathematical Reasoning in Large Language Models

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Large language models (LLMs) have achieved remarkable advances across a spectrum of language tasks. Despite this progress, mathematical problem solving remains a particularly formidable challenge. Previous studies have tried to address this problem by augmenting questions, but found that performance saturates as more training data is added. To further enhance the complex mathematical reasoning capabilities of LLMs, we propose EnrichMath, which is fine-tuned on our EnrichMathQA dataset. EnrichMathQA is constructed by enhancing the answers in MATH and GSM8K to begin with a leading summary and to reduce thought jumping, using our proposed Enrich Reasoning Idea (ERI) and Enrich Reasoning Solution (ERS) strategies. EnrichMath achieves state-of-the-art performance among current open-source mathematical models: EnrichMath-70B reaches 32.5% accuracy on the MATH benchmark, outperforming MetaMath by 2.7%, and attains 84.1% on GSM8K, comparable to methods that use external calculation tools.
Paper Type: long
Research Area: Machine Learning for NLP
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English