Automatic Generation of In-Context Math Examples Using Multi-Modal Consistency

ACL ARR 2024 June Submission3872 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) have advanced Natural Language Processing (NLP) tasks but remain limited in mathematical reasoning. To address this, few-shot examples are included in prompts for in-context learning. However, existing methods require annotated datasets, resulting in higher computational costs and lower-quality examples. To mitigate these limitations, we propose APMath, a framework that automatically generates high-quality in-context examples to enhance LLMs' mathematical reasoning. APMath ensures consistency across different modalities (e.g., Chain-of-Thought (CoT), code snippets, and equations) by generating and selecting mutations that improve response consistency. Evaluated on four math problem datasets, APMath outperforms six baselines, with LLM accuracy ranging from 87.0% to 99.3%. It surpasses the state-of-the-art in-context example retrieval method on three of the four datasets by 1.9% to 4.4%, without relying on an external annotated dataset.
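To make the multi-modal consistency idea in the abstract concrete, below is a minimal, hypothetical sketch of how agreement across CoT, code, and equation responses could be scored for a candidate example. The names `query_llm`, `extract_answer`, and the prompt templates are placeholder assumptions, not APMath's actual interface or prompts.

```python
# Hypothetical sketch: score a candidate math problem by how consistently
# an LLM answers it across three modalities (CoT, code, equation).
from collections import Counter
from typing import Callable, List

# Placeholder prompt templates for the three modalities (assumed, not from the paper).
MODALITY_PROMPTS = {
    "cot": "Solve step by step, then state the final number.\n{problem}",
    "code": "Write a Python snippet whose printed output is the answer.\n{problem}",
    "equation": "Express the solution as a single equation and evaluate it.\n{problem}",
}

def consistency_score(problem: str,
                      query_llm: Callable[[str], str],
                      extract_answer: Callable[[str], str],
                      samples_per_modality: int = 3) -> float:
    """Return the fraction of sampled answers that agree with the majority
    answer across all modalities (1.0 = fully consistent)."""
    answers: List[str] = []
    for template in MODALITY_PROMPTS.values():
        prompt = template.format(problem=problem)
        for _ in range(samples_per_modality):
            answers.append(extract_answer(query_llm(prompt)))
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / len(answers)
```

Under this reading, mutated candidate examples could be ranked by such a score and only those whose responses agree across modalities kept as in-context examples; the paper's actual mutation and selection procedure may differ.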
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Large Language Model, Mathematical Reasoning, In-context Learning
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 3872