MoRAgent: Parameter Efficient Agent Tuning with Mixture-of-Roles

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Despite recent advances in fine-tuning large language models (LLMs) for agent tasks, parameter-efficient fine-tuning (PEFT) methodologies for agents remain largely unexplored. In this paper, we introduce three key strategies for PEFT in agent tasks: 1) Inspired by the increasingly dominant Reason+Action paradigm, we first decompose the capabilities necessary for agent tasks into three distinct roles: reasoner, executor, and summarizer. The reasoner comprehends the user's query and determines the next role based on the execution trajectory. The executor identifies the appropriate functions and parameters to invoke. The summarizer conveys the distilled information from the conversation back to the user. 2) We then propose the Mixture-of-Roles (MoR) framework, which comprises three specialized Low-Rank Adaptation (LoRA) groups, each designated to fulfill a distinct role. By focusing on their respective specialized capabilities and engaging in collaborative interactions, these LoRAs collectively accomplish the agent task. 3) To effectively fine-tune the framework, we develop a multi-role data generation pipeline based on publicly available datasets, incorporating role-specific content completion and reliability verification. We conduct extensive experiments and thorough ablation studies on various LLMs and agent benchmarks, demonstrating the effectiveness of the proposed method. This project is publicly available at https://mor-agent.github.io.
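To make the MoR architecture concrete, here is a minimal PyTorch sketch of the core idea: one frozen base projection shared by three role-specific LoRA adapters, with a role label selecting which low-rank pair is applied. This is an illustrative assumption of how such a layer could look, not the authors' released implementation; the rank, scaling, and dimensions are placeholder values.

```python
# Minimal sketch of the Mixture-of-Roles idea (illustrative, not the paper's code):
# a frozen base linear layer shared by three role-specific LoRA adapters.
import torch
import torch.nn as nn

ROLES = ("reasoner", "executor", "summarizer")

class RoleLoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained projection; only the LoRA pairs below are trained.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.scaling = alpha / rank
        # One (A, B) low-rank pair per role.
        self.lora_A = nn.ModuleDict(
            {r: nn.Linear(in_features, rank, bias=False) for r in ROLES})
        self.lora_B = nn.ModuleDict(
            {r: nn.Linear(rank, out_features, bias=False) for r in ROLES})
        for r in ROLES:
            # Zero-init B so each adapter starts as a no-op over the base layer.
            nn.init.zeros_(self.lora_B[r].weight)

    def forward(self, x: torch.Tensor, role: str) -> torch.Tensor:
        # Route the input through the LoRA pair of the currently active role.
        delta = self.lora_B[role](self.lora_A[role](x)) * self.scaling
        return self.base(x) + delta

layer = RoleLoRALinear(768, 768)
h = torch.randn(2, 16, 768)
print(layer(h, role="executor").shape)  # torch.Size([2, 16, 768])
```

In this sketch, the role label would be supplied by the reasoner's decision at each step of the trajectory, so the three adapters specialize independently while sharing the same frozen backbone.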
Lay Summary: In this paper, we propose a novel parameter-efficient fine-tuning method for LLM-based agent tasks. 1) We first decompose the capabilities necessary for the agent tasks into three distinct roles: reasoner, executor, and summarizer. 2) We then propose the Mixture-of-Roles (MoR) framework, which comprises three specialized LoRA groups, each designated to fulfill a distinct role. By focusing on their respective specialized capabilities and engaging in collaborative interactions, these LoRAs collectively accomplish the overall agent task. 3) We also develop a multi-role data generation pipeline based on publicly available datasets to effectively fine-tune the framework. We conduct extensive experiments on various LLMs and agent benchmarks, demonstrating the effectiveness of our method.
Link To Code: https://mor-agent.github.io/
Primary Area: Deep Learning->Large Language Models
Keywords: LLMs, Agent, Mixture-of-Roles
Submission Number: 8708