Abstract: Recent advancements in large language models (LLMs)
have demonstrated their potential for automating scientific research ideation.
Existing approaches rely primarily on prompting techniques and often produce ideas misaligned with expert standards: novelty, feasibility, and effectiveness, which are widely recognized by the research community as the three key dimensions of high-quality ideas.
Moreover, balancing these dimensions remains challenging due to their inherent trade-offs.
To address these limitations, we propose the first framework for this task that combines Supervised Fine-Tuning (SFT) and controllable Reinforcement Learning (RL) in a two-stage approach.
In the SFT stage, the model learns foundational patterns from pairs of research papers and their corresponding follow-up ideas.
In the RL stage, multi-dimensional reward models guided by fine-grained feedback evaluate generated ideas and optimize the model across the key dimensions.
During inference, dimensional controllers coordinated by a sentence-level decoder enable dynamic, context-aware steering of the idea generation process.
Our framework provides a balanced approach to research idea generation, achieving high-quality outcomes in our experiments by dynamically navigating the trade-offs among novelty, feasibility, and effectiveness.
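The abstract does not specify how the per-dimension rewards are aggregated. A minimal sketch of one plausible reading, a weighted combination of per-dimension scores whose weights act as the dimensional controls, is given below. The scorer functions (score_novelty, score_feasibility, score_effectiveness) and the DimensionalReward class are hypothetical stand-ins, not the paper's learned reward models.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A reward function maps a candidate idea (text) to a score in [0, 1].
RewardFn = Callable[[str], float]

@dataclass
class DimensionalReward:
    """Combine per-dimension reward scores under controllable weights."""
    scorers: Dict[str, RewardFn]   # e.g. novelty / feasibility / effectiveness
    weights: Dict[str, float]      # dimension weights set by the controllers

    def __call__(self, idea: str) -> float:
        total = sum(self.weights.values())
        # Weighted average, so raising one dimension's weight trades off
        # against the others rather than inflating the overall reward.
        return sum(
            self.weights[d] * f(idea) for d, f in self.scorers.items()
        ) / total

# Stub scorers (assumptions): keyword heuristics standing in for the
# learned reward models trained on fine-grained feedback.
def score_novelty(idea: str) -> float:
    return 1.0 if "new" in idea.lower() else 0.3

def score_feasibility(idea: str) -> float:
    return 1.0 if "existing" in idea.lower() else 0.5

def score_effectiveness(idea: str) -> float:
    return 0.8 if "improve" in idea.lower() else 0.4

reward = DimensionalReward(
    scorers={
        "novelty": score_novelty,
        "feasibility": score_feasibility,
        "effectiveness": score_effectiveness,
    },
    # Steering toward feasibility here; a sentence-level controller could
    # update these weights per generated sentence instead of fixing them.
    weights={"novelty": 0.2, "feasibility": 0.6, "effectiveness": 0.2},
)

candidates: List[str] = [
    "A new decoding strategy that may be hard to train.",
    "Improve retrieval by reusing existing indexes.",
]
print(max(candidates, key=reward))
```

Under this reading, the same scalar could serve both as the RL training signal and as a reranking criterion at inference, with the controllers adjusting the weights to navigate the novelty-feasibility-effectiveness trade-off.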
Paper Type: Long
Research Area: Generation
Research Area Keywords: LLMs, Scientific Discovery, Fine-tuning, Reinforcement Learning
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 1583