IMAP: A Mind Mapping Construct To Enhance Inductive Reasoning In Generative Models

ICLR 2026 Conference Submission 17718 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Inductive thinking paradigm, large-scale language modeling, BBH benchmark assessment, jina-score, reliability assessment
Abstract: Inductive reasoning is central to human thinking, allowing us to distill general laws from a limited number of samples. However, inductive reasoning has received comparatively little attention in artificial intelligence, particularly in the context of large language models, which limits models' ability to abstract broad rules and trends from limited data. We introduce inductive thinking into generative models, design rigorous rules for comparing generated results with ground-truth results, and verify its effectiveness in improving generation. To this end, we developed IMap (Intellectual Mapping based on Reinforcement Learning), which integrates the inductive thinking paradigm to strengthen the model's inference capabilities. We designed a thinking data structure based on the inductive paradigm, consisting of four core elements: COTs, Cases, Patterns, and Reasonability. We also propose an algorithm, the RL-Paradigm model (RLP), for acquiring unknown thinking paradigms. Using figurative inductive thinking as input cues, we guided multiple large models to generate an average of 270 results each. Comparative experiments show that input cues combined with inductive thinking perform well on most models and significantly improve the generated results. We comprehensively evaluated RLP against other models using BLEU, BERTScore, and Jina-score metrics, and the results show that RLP significantly outperforms the other models in several respects. In summary, we unlock the generative potential of inductive thinking paradigms, develop reusable thinking data maps, and design RLP, a generative model specialized for unknown paradigms. This work is expected to advance the generative capabilities of LLMs and to offer insights for interdisciplinary research in the brain sciences.
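The abstract names the four core elements of the thinking data structure (COTs, Cases, Patterns, Reasonability) but does not specify their representation, so the following is a minimal illustrative sketch only; the class name, field types, default values, and serialization method are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class InductiveMapNode:
    """Hypothetical sketch of one entry in an inductive 'thinking data map'.

    Field names follow the four core elements listed in the abstract
    (COTs, Cases, Patterns, Reasonability); their types and semantics
    are assumptions made for illustration.
    """
    cots: List[str] = field(default_factory=list)      # chain-of-thought traces
    cases: List[str] = field(default_factory=list)      # concrete observed examples
    patterns: List[str] = field(default_factory=list)   # rules induced from the cases
    reasonability: float = 0.0                           # assumed plausibility score in [0, 1]

    def to_prompt(self) -> str:
        """Serialize the node into an input cue for a generative model."""
        return (
            "Cases:\n" + "\n".join(self.cases)
            + "\nInduced patterns:\n" + "\n".join(self.patterns)
            + f"\nReasonability: {self.reasonability:.2f}"
        )
```

Under these assumptions, a node could be built from a handful of cases and their induced patterns, then passed via to_prompt() as the inductive-thinking input cue described in the abstract.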
Supplementary Material: zip
Primary Area: generative models
Submission Number: 17718