Keywords: Chain of Paradigms, Human cognitive processes, Memory-Augmented Framework, Pattern Reuse, Cognitive Modeling
Abstract: Large language models (LLMs) have achieved strong performance in text generation, yet their inductive reasoning processes often exhibit instability and limited generalization across tasks. In this work, we propose Chain of Paradigms (COP), a memory-augmented inductive reasoning framework that enables reusable high-level reasoning patterns to be stored, retrieved, and instantiated during inference. COP consists of a problem expander for extracting task-critical information, a lightweight paradigm buffer that maintains structured reasoning patterns, and a dynamic retrieval mechanism that selects relevant paradigms via semantic matching. These components form a closed-loop reasoning process that supports pattern reuse across tasks while mitigating erratic inference behaviors. We evaluate COP on the Big-Bench Hard (BBH) benchmark using exact match accuracy, inference cost, and cross-task pattern reuse metrics, with controlled comparisons against existing prompting and agent-based reasoning methods. Experimental results demonstrate consistent improvements in accuracy and robustness over strong baselines, while maintaining efficient inference. This work enhances the reliability of generative models in complex reasoning tasks, provides insights into aligning AI inference with human cognitive patterns, and contributes to interdisciplinary research at the intersection of cognitive psychology and AI alignment.
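The abstract's retrieval component — a paradigm buffer queried via semantic matching — can be illustrated with a minimal sketch. The class name `ParadigmBuffer`, its `store`/`retrieve` interface, and the bag-of-words similarity are illustrative assumptions, not the paper's implementation; a real system would use a learned sentence encoder rather than word-overlap cosine similarity.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; stands in for a learned sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ParadigmBuffer:
    """Hypothetical buffer of reusable reasoning patterns (names assumed)."""

    def __init__(self):
        self.entries = []  # list of (embedding, paradigm) pairs

    def store(self, description, paradigm):
        # Index a high-level reasoning pattern under its task description.
        self.entries.append((embed(description), paradigm))

    def retrieve(self, query, k=1):
        # Return the k paradigms whose descriptions best match the query.
        scored = sorted(self.entries,
                        key=lambda e: cosine(embed(query), e[0]),
                        reverse=True)
        return [paradigm for _, paradigm in scored[:k]]

buf = ParadigmBuffer()
buf.store("sort a list of dates chronologically",
          "Parse each item into comparable keys, then order the keys.")
buf.store("evaluate a boolean logic expression",
          "Resolve innermost operators first, then combine outward.")
print(buf.retrieve("order these dates from earliest to latest")[0])
```

In this toy example the date-sorting paradigm is retrieved because its description shares vocabulary with the query, mirroring how a stored pattern could be reused on a new task instance.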
Paper Type: Long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Research Area Keywords: Linguistic Theories, Cognitive Modeling and Psycholinguistics, Generation, Language Modeling
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources, Data analysis, Theory
Languages Studied: English, Chinese
Submission Number: 1348