Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus

Published: 10 Oct 2024, Last Modified: 28 Oct 2024 · Sys2-Reasoning Poster · CC BY 4.0
Keywords: large language model, artificial intelligence, reasoning, logical reasoning, math, coding, synthetic corpus
TL;DR: We enhance LLMs' reasoning capabilities with a principled synthetic logic corpus.
Abstract: Large language models (LLMs) are capable of solving a wide range of tasks, yet they still struggle with reasoning. To address this, we propose $\textbf{Additional Logic Training (ALT)}$, which aims to enhance LLMs' reasoning capabilities with program-generated logical reasoning samples. We first establish principles for designing high-quality samples by integrating symbolic logic theory and previous empirical insights. Then, based on these principles, we construct a synthetic corpus named $\textbf{Formal} \ \textbf{Logic} \ \textbf{\textit{D}eduction} \ \textbf{\textit{D}iverse}$ (FLD$^{\times2}$), comprising numerous samples of multi-step deduction over unknown facts, with diverse reasoning rules, diverse linguistic expressions, and challenging distractors. Finally, we empirically show that ALT on FLD$^{\times2}$ substantially enhances the reasoning capabilities of state-of-the-art LLMs, including LLaMA-3.1-70B. Improvements include gains of up to 30 points on logical reasoning benchmarks, up to 10 points on math and coding benchmarks, and 5 points on the benchmark suite BBH. Case analyses demonstrate that LLMs successfully integrate the knowledge acquired during pre-training with the reasoning capabilities acquired through ALT.
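To make the corpus design concrete, here is a minimal illustrative sketch (not the authors' released generator) of how a program might emit one FLD-style sample: a multi-step modus-ponens chain over abstract "unknown" facts, verbalized with randomly chosen templates and padded with distractor rules that never fire. All atom contents, template strings, and field names below are hypothetical.

```python
# Sketch of a program-generated multi-step deduction sample, in the spirit of
# FLD: unknown facts, diverse verbalizations, and distractors. Illustrative only.
import random

# Hypothetical verbalization templates ("diverse linguistic expressions").
TEMPLATES = [
    "If {p}, then {q}.",
    "{q} whenever {p}.",
    "Assuming {p}, it follows that {q}.",
]

def make_sample(depth=3, n_distractors=2, seed=0):
    rng = random.Random(seed)
    # Nonsense atomic propositions: "unknown facts" decouple the deduction
    # from world knowledge the model may have memorized during pre-training.
    atoms = [f"the {x} is {y}" for x, y in
             [("dog", "blue"), ("river", "hungry"), ("stone", "soft"),
              ("cloud", "loud"), ("key", "round"), ("lamp", "bitter"),
              ("road", "sleepy"), ("moon", "sour")]]
    rng.shuffle(atoms)
    chain = atoms[: depth + 1]               # p0 -> p1 -> ... -> p_depth
    facts = [chain[0].capitalize() + "."]    # the single asserted premise
    # Each adjacent pair becomes one modus-ponens step, randomly verbalized.
    rules = [rng.choice(TEMPLATES).format(p=a, q=b)
             for a, b in zip(chain, chain[1:])]
    # Distractors: rules with the same surface form whose premises are never
    # asserted, so they can never fire in a correct proof.
    distractors = [rng.choice(TEMPLATES).format(p=a, q=b)
                   for a, b in zip(atoms[depth + 1:],
                                   atoms[depth + 2:])][:n_distractors]
    context = facts + rules + distractors
    rng.shuffle(context)                     # hide the chain inside the context
    proof = " ".join(
        f"From '{a}' and the rule linking it to '{b}', conclude '{b}'."
        for a, b in zip(chain, chain[1:]))
    return {"context": context,
            "hypothesis": chain[-1].capitalize() + ".",
            "proof": proof,
            "label": "PROVED"}

if __name__ == "__main__":
    sample = make_sample(depth=3, n_distractors=2, seed=42)
    for key, value in sample.items():
        print(key, ":", value)
```

The design choice to use nonsense atoms is what makes the training signal a pure reasoning signal: the model can only reach the hypothesis by chaining the stated rules, not by recalling facts, while the shuffled distractors force it to identify which rules are actually applicable.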
Submission Number: 12