RLAR: An Agentic Reward System for Multi-task Reinforcement Learning on Large Language Models

ACL ARR 2026 January Submission10089 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: reinforcement learning, large language model, reward modeling, llm agent
Abstract: Large language model alignment via reinforcement learning depends critically on reward function quality. However, static, domain-specific reward models are often costly to train and generalize poorly to the out-of-distribution scenarios encountered during RL iterations. We present RLAR (**R**einforcement **L**earning from **A**gent **R**ewards), an agent‑driven framework that dynamically assigns tailored reward functions to individual queries. Specifically, RLAR transforms reward acquisition into a dynamic tool synthesis and invocation task: it leverages LLM agents to autonomously retrieve suitable reward models from the Internet and to synthesize programmatic verifiers through code generation. This allows the reward system to self-evolve alongside the shifting data distributions encountered during training. Experimental results demonstrate that RLAR yields consistent performance gains ranging from $10\%$ to $60\%$ across mathematics, coding, translation, and dialogue tasks. On RewardBench-V2, RLAR significantly outperforms static baselines and approaches the performance upper bound, demonstrating superior generalization through dynamic reward orchestration.
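The per-query orchestration described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: all function names are hypothetical, the "synthesized verifier" is a toy rule standing in for LLM code generation, and the "retrieved reward model" is a placeholder scorer standing in for a model fetched from the Internet.

```python
from typing import Callable, Optional


def synthesize_verifier(query: str) -> Optional[Callable[[str], float]]:
    """Stand-in for agentic code generation: return a programmatic
    verifier when the query is mechanically checkable, else None.
    (Toy rule: only arithmetic-sum queries are 'verifiable' here.)"""
    if "sum" in query:
        expected = str(sum(int(tok) for tok in query.split() if tok.isdigit()))
        return lambda response: 1.0 if expected in response else 0.0
    return None


def retrieved_reward_model(query: str, response: str) -> float:
    """Stand-in for a reward model retrieved from a model hub;
    returns a placeholder scalar score in [0, 1]."""
    return min(len(response) / 100.0, 1.0)


def reward(query: str, response: str) -> float:
    """Per-query orchestration: prefer a synthesized verifier for
    checkable tasks, fall back to a retrieved reward model otherwise."""
    verifier = synthesize_verifier(query)
    if verifier is not None:
        return verifier(response)
    return retrieved_reward_model(query, response)
```

The design point this sketch captures is that the reward source is chosen per query at training time rather than fixed in advance, which is what lets the reward system track the shifting data distribution during RL.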
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: LLM agents, other LLM agent topics, reinforcement learning, transfer learning / domain adaptation, generalization
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 10089