Abstract: Large reasoning models (LRMs) with long chain-of-thought (CoT) capabilities have shown strong performance on objective tasks such as mathematical reasoning and coding.
However, their effectiveness on subjective questions, which may admit different responses from different perspectives, is still limited by a tendency toward homogeneous reasoning, induced by the reliance on a single ground truth in supervised fine-tuning and on verifiable rewards in reinforcement learning.
To bridge this gap, we conduct a pilot analysis of how performance scales with reasoning length and the number of role perspectives, and find that increasing the number of role perspectives consistently yields performance gains.
We then propose Multirole-R1, a diversity-enhanced framework with multiple role perspectives that improves both accuracy and diversity in subjective reasoning tasks. Multirole-R1 features an unsupervised data construction pipeline that generates reasoning chains incorporating diverse role perspectives. We further employ reinforcement learning via Group Relative Policy Optimization (GRPO) with reward shaping, taking diversity as an additional reward signal. With specially designed reward functions, we promote both perspective diversity and lexical diversity, and uncover a positive relationship between reasoning diversity and accuracy.
Our experiments on six benchmarks demonstrate Multirole-R1's effectiveness and generalizability in enhancing both subjective and objective reasoning, showcasing the potential of diversity-enhanced training in LRMs.
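To make the reward-shaping idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes the diversity signal is a simple distinct-n lexical score and that the shaped reward is a weighted sum of accuracy and diversity terms, normalized into group-relative advantages as in GRPO. The function names, the weight `lam`, and the distinct-n choice are illustrative assumptions.

```python
import numpy as np

def distinct_n(tokens, n=2):
    """Lexical diversity as the distinct-n ratio: unique n-grams / total n-grams."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def shaped_group_advantages(acc_rewards, div_rewards, lam=0.5):
    """Combine an accuracy reward with a diversity reward (weight lam), then
    normalize within the sampled group, GRPO-style:
    A_i = (r_i - mean(r)) / (std(r) + eps)."""
    r = np.asarray(acc_rewards, dtype=float) + lam * np.asarray(div_rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Example: a group of 4 sampled responses to one subjective prompt.
acc = [1.0, 0.0, 1.0, 0.0]  # e.g., format/rubric correctness per response
div = [distinct_n("the cat sat on the mat".split()),
       distinct_n("a a a a a a".split()),
       distinct_n("dogs bark loudly at night".split()),
       distinct_n("b b b b".split())]
print(shaped_group_advantages(acc, div, lam=0.5))
```

In this sketch, responses that are both correct and lexically diverse receive the largest group-relative advantage, which is the intuition behind using diversity as an auxiliary reward signal alongside the primary task reward.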
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: reasoning, chain-of-thought
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 1870