TL;DR: We argue that general intelligence in LLMs requires reward-based pretraining to decouple reasoning from knowledge, enabling better generalization to novel tasks.
Abstract: Large Language Models (LLMs) have demonstrated impressive real-world utility, exemplifying artificial useful intelligence (AUI). However, their ability to reason adaptively and robustly -- the hallmark of artificial general intelligence (AGI) -- remains fragile. While LLMs seemingly succeed at commonsense reasoning, programming, and mathematics, they struggle to generalize algorithmic understanding across novel contexts. Our experiments with algorithmic tasks in esoteric programming languages reveal that LLMs' reasoning overfits to the training data and is limited in its transferability. We hypothesize that the core issue underlying such limited transferability is the coupling of reasoning and knowledge in LLMs.
To transition from AUI to AGI, we propose disentangling knowledge and reasoning through three key directions: (1) pretraining to reason using RL from scratch as an alternative to the widely used next-token prediction pretraining, (2) using a curriculum of synthetic tasks to ease the learning of a \textit{reasoning prior} for RL that can then be transferred to natural language tasks, and (3) learning more generalizable reasoning functions using a small context window to reduce exploitation of spurious correlations between tokens. Such a reasoning system, coupled with a trained retrieval system and a large external memory bank as a knowledge store, can overcome several limitations of existing architectures in learning to reason in novel scenarios.
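To make the proposed paradigm concrete, the sketch below illustrates reward-based pretraining on a curriculum of synthetic tasks with a deliberately small context window. It is a minimal, hypothetical example, not the paper's implementation: the toy copy/reverse tasks, the tabular softmax policy, and the plain REINFORCE update are assumptions chosen to keep the snippet self-contained; an actual system would use a transformer policy, a richer task curriculum, a modern policy-gradient method, and an external memory with a trained retriever.

```python
# Illustrative sketch only (assumptions: toy copy/reverse tasks, tabular softmax
# policy, REINFORCE without a baseline). Shows the three directions in miniature:
# reward-based pretraining, a synthetic-task curriculum, and a small context window.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8          # token vocabulary for the synthetic tasks
CONTEXT = 4        # deliberately small context window (direction 3)

def sample_task(stage):
    """Curriculum of synthetic tasks (direction 2): copy first, then reverse."""
    length = 2 + stage                          # longer sequences at later stages
    seq = rng.integers(0, VOCAB, size=length)
    target = seq if stage == 0 else seq[::-1]
    return seq, target

class TinyPolicy:
    """Softmax policy over next tokens, conditioned on the last CONTEXT tokens."""
    def __init__(self):
        self.W = np.zeros((CONTEXT * VOCAB, VOCAB))

    def _features(self, context):
        # One-hot encode the most recent CONTEXT tokens (most recent first).
        x = np.zeros(CONTEXT * VOCAB)
        for i, tok in enumerate(reversed(context[-CONTEXT:])):
            x[i * VOCAB + tok] = 1.0
        return x

    def act(self, context):
        x = self._features(context)
        logits = x @ self.W
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tok = rng.choice(VOCAB, p=probs)
        return tok, x, probs

def reinforce_step(policy, seq, target, lr=0.5):
    """One episode: generate tokens, score with a terminal reward, apply REINFORCE."""
    context, trajectory = list(seq), []
    for _ in range(len(target)):
        tok, x, probs = policy.act(context)
        trajectory.append((tok, x, probs))
        context.append(tok)
    produced = np.array([t for t, _, _ in trajectory])
    reward = float(np.mean(produced == target))          # fraction of correct tokens
    for tok, x, probs in trajectory:
        grad = -probs                                     # d log pi(tok) / d logits
        grad[tok] += 1.0                                  # = onehot(tok) - probs
        policy.W += lr * reward * np.outer(x, grad)       # REINFORCE (direction 1)
    return reward

policy = TinyPolicy()
for stage in range(2):                                    # advance the curriculum
    for episode in range(2000):
        seq, target = sample_task(stage)
        reinforce_step(policy, seq, target)
```

The key design point the sketch tries to convey is that the policy is rewarded only for task success and never trained to imitate text, and that its input is restricted to a short window, so any reusable "reasoning prior" must come from the curriculum rather than from memorized token co-occurrences.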
Lay Summary: Today’s AI models are good at answering questions and solving familiar problems, but they often fail when faced with new, unfamiliar challenges. We believe this is because current models mix up two separate skills: knowledge and reasoning. Our research shows that although these models seem smart, their reasoning often just copies patterns from training data instead of truly understanding how to think.
To fix this, we propose a fundamental change to the common pretraining paradigm. Instead of just predicting the next word like most current models do, we propose to train models to *reason* from the ground up using trial-and-error (reinforcement) learning. We start with a set of simple, synthetic tasks that teach the model how to think step by step, and then move on to real-world language tasks. We also propose keeping the model's context window small so that it does not rely on spurious patterns, and separating the model's memory from its reasoning through the architecture.
In the long run, by decoupling reasoning from knowledge, we hope to achieve models that can genuinely generalize to new problems they haven't seen before.
Link To Code: https://improbableai.notion.site/General-Intelligence-Requires-Reward-Based-Pretraining-2023b66e4cf580d3ab44c7860b75d25f?pvs=74
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: reasoning, LLM, AGI, reinforcement learning, reward-based pretraining, generalization
Submission Number: 62