MindGYM: What Matters in Question Synthesis for Thinking-Centric Fine-Tuning?

Published: 18 Sept 2025, Last Modified: 05 Feb 2026
Venue: NeurIPS 2025 Datasets and Benchmarks Track (poster)
License: CC BY 4.0
Keywords: Data Synthesis; Thinking-Centric Data; Data Quality Analysis; Multi-Hop Thinking
TL;DR: We propose MindGYM, a thinking-centric data synthesis framework that injects cognitive traits into QA generation, enabling language and vision-language models to self-synthesize high-quality, low-variance data for efficient fine-tuning.
Abstract: Large foundation models face challenges in acquiring transferable, structured thinking abilities, especially when supervised with rigid templates or crowd-annotated instruction datasets. Unlike prior approaches, we focus on a thinking-centric data synthesis paradigm that enables models to evolve through self-generated, cognitively guided data. We propose MindGYM, a structured and scalable framework for question synthesis composed of three stages: (1) Cognitive Thinking Process Injection, which infuses high-level reasoning objectives to shape the model’s synthesis behavior; (2) Seed Single-Hop Question Synthesis, which generates atomic questions spanning diverse semantic types to encourage broader thinking; and (3) Challenging Multi-Hop QA Synthesis, which composes the QA seeds into more complex multi-hop questions for deeper reasoning. Detailed analysis shows that synthetic data generated by our method achieves 16.7% higher average quality and 67.91% lower quality variance than baseline sources, highlighting that both high-quality and self-contained data are essential for effective, thinking-oriented fine-tuning. MindGYM improves performance on six reasoning benchmarks, achieving gains of up to 16% on MathVision with only 400 data samples, and yields generalizable improvements across model sizes and architectures. MindGYM underscores the viability of self-challenging mechanisms in refining large-model capabilities while minimizing human intervention and resource demands. Code and data are released to promote data-centric research into self-evolving foundation models driven by their internal reasoning capabilities.
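As a rough illustration of the pipeline the abstract describes, here is a minimal Python sketch of the three stages. Everything in it is an assumption made for illustration: the `generate` callable stands in for any call to the model being fine-tuned, and the preamble text, prompt wording, and semantic types (`causal`, `temporal`) are invented placeholders, not the released implementation (see the repository links below for the actual code).

```python
from typing import Callable, Dict, List

# Stage 1: Cognitive Thinking Process Injection -- a hypothetical
# reasoning-oriented preamble prepended to every synthesis prompt.
COGNITIVE_PREAMBLE = (
    "Think step by step. Decompose the problem, state intermediate "
    "conclusions explicitly, and verify each step before answering."
)

def synthesize_seed_questions(generate: Callable[[str], str],
                              semantic_types: List[str],
                              n_per_type: int = 5) -> List[Dict]:
    """Stage 2: generate atomic single-hop QA seeds across semantic types."""
    seeds = []
    for sem_type in semantic_types:
        for _ in range(n_per_type):
            prompt = (
                f"{COGNITIVE_PREAMBLE}\n"
                f"Write one self-contained {sem_type} question answerable "
                f"in a single reasoning step, followed by its answer."
            )
            seeds.append({"type": sem_type, "qa": generate(prompt)})
    return seeds

def compose_multi_hop(generate: Callable[[str], str],
                      seeds: List[Dict],
                      hops: int = 2) -> List[Dict]:
    """Stage 3: compose groups of seed QAs into harder multi-hop questions."""
    multi_hop = []
    for i in range(0, len(seeds) - hops + 1, hops):
        group = seeds[i:i + hops]
        joined = "\n".join(s["qa"] for s in group)
        prompt = (
            f"{COGNITIVE_PREAMBLE}\n"
            f"Combine the following QA pairs into a single question whose "
            f"answer requires chaining all of them:\n{joined}"
        )
        multi_hop.append({"sources": group, "qa": generate(prompt)})
    return multi_hop

if __name__ == "__main__":
    def stub_model(prompt: str) -> str:  # stand-in for a real LLM call
        return "Q: ... A: ..."

    seeds = synthesize_seed_questions(stub_model, ["causal", "temporal"])
    print(len(compose_multi_hop(stub_model, seeds)))
```

The stub keeps the sketch self-contained and runnable; in practice the same model that is later fine-tuned produces both the seed questions and the multi-hop compositions, which is the self-synthesis the abstract and TL;DR refer to.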
Croissant File: json
Dataset URL: https://github.com/modelscope/data-juicer/tree/MindGYM
Code URL: https://anonymous.4open.science/r/MindGYM-DD48
Primary Area: Machine learning approaches to data and benchmarks enrichment, augmentation and processing (supervised, unsupervised, online, active, fine-tuning, RLHF, SFT, alignment, etc.)
Submission Number: 2545