Give me a hint: Can LLMs take a hint to solve math problems?

Published: 10 Oct 2024, Last Modified: 31 Oct 2024, MATH-AI 24, CC BY 4.0
Keywords: Math, LLMs, Reasoning, Prompting
TL;DR: A study on enhancing LLM performance in solving math problems through hints, while examining the impact of adversarial prompts.
Abstract: While state-of-the-art LLMs continue to show weak logical and basic mathematical reasoning, recent work has tried to improve their problem-solving abilities through prompting techniques. Taking inspiration from how humans are taught mathematics, we propose giving "hints" to improve language model performance on advanced mathematical problems. We also test robustness to adversarial hints and demonstrate the models' sensitivity to them. We show the effectiveness of our approach by evaluating a diverse set of LLMs on a broad range of problems, spanning different difficulty levels and topics from the MATH dataset, and comparing against techniques such as one-shot, few-shot, and chain-of-thought prompting. Our code is available at https://github.com/vlgiitr/LLM-Math
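To make the hinting idea concrete, the following is a minimal sketch of how a hint-augmented prompt might be composed alongside plain and chain-of-thought baselines. The prompt wording, the `build_prompt` helper, and the example hint are illustrative assumptions, not the paper's actual templates; see the linked repository for the authors' implementation.

```python
from typing import Optional


def build_prompt(problem: str,
                 hint: Optional[str] = None,
                 chain_of_thought: bool = False) -> str:
    """Compose a prompt for a math problem, optionally adding a hint and/or a CoT cue."""
    parts = [f"Problem: {problem}"]
    if hint is not None:
        # Pedagogical nudge pointing toward the key idea, as a human tutor might give.
        parts.append(f"Hint: {hint}")
    if chain_of_thought:
        # Standard chain-of-thought cue used as one of the baselines.
        parts.append("Let's think step by step.")
    parts.append("Answer:")
    return "\n".join(parts)


if __name__ == "__main__":
    problem = "If 3x + 5 = 20, what is x?"
    # Baseline, chain-of-thought, and hint variants for side-by-side comparison.
    print(build_prompt(problem))
    print(build_prompt(problem, chain_of_thought=True))
    print(build_prompt(problem, hint="Isolate x by first subtracting 5 from both sides."))
```

An adversarial variant would pass a deliberately misleading hint (e.g., one suggesting an incorrect first step) through the same interface, which is how sensitivity to bad hints can be probed.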
Concurrent Submissions: N/A
Submission Number: 27