LimAgents: A Multi-Agent RAG Framework and Large-Scale Corpus for Research Limitation Generation

Published: 28 Apr 2026, Last Modified: 28 Apr 2026 · MSLD 2026 Poster · CC BY 4.0
Keywords: Limitations, LLM Agents, RAG, Peer Review
TL;DR: Generating limitations of scientific articles using a multi-agent framework with RAG.
Abstract: Identifying and articulating limitations is essential for transparent and rigorous scientific research. However, zero-shot large language models (LLMs) often produce superficial or generic limitation statements, typically repeating limitations already reported by authors without examining deeper methodological issues and contextual gaps. This problem is compounded by the fact that many authors disclose only partial or trivial limitations. We introduce LimAgents, a model-agnostic multi-agent framework for generating both general and novelty-focused limitations from scholarly text in a substantive, evidence-based manner. LimAgents uses teams of specialized worker agents that analyze a paper from complementary perspectives, a leader agent that iteratively challenges weak or vague points through follow-up feedback, and a master agent that merges outputs into a concise, non-redundant final list. To strengthen context, we augment agents with a shared-memory retrieval module built from a large RAG database of 120K+ research papers. We evaluate on a large AI/ML corpus (128K papers) using 8K sampled papers and show consistent gains across multiple backbone models. Our framework improves limitation-generation F1 by 20.15–44.77% with GPT-4o mini, 17.81–41.83% with LLaMA-3-70B, and 11.83–32.74% with Mistral-7B, demonstrating that structured agent collaboration and retrieval-based context substantially reduce generic outputs and increase coverage of meaningful limitations.
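The abstract's worker/leader/master pipeline with shared-memory retrieval could be sketched roughly as below. All function names are hypothetical and the agent "LLM calls" are stubbed with string operations; this is a minimal illustration of the control flow, not the paper's actual implementation.

```python
def retrieve_context(paper: str, db: list[str], k: int = 2) -> list[str]:
    """Shared-memory retrieval: return the k database papers sharing the
    most words with the input paper (a stand-in for dense retrieval)."""
    words = set(paper.lower().split())
    scored = sorted(db, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def worker_agent(perspective: str, paper: str, context: list[str]) -> str:
    # Each specialized worker analyzes the paper from one perspective.
    return f"[{perspective}] candidate limitation drawn from {len(context)} retrieved papers"

def leader_agent(drafts: list[str], rounds: int = 2) -> list[str]:
    # The leader iteratively challenges vague points via follow-up feedback;
    # here each round simply tags the draft as refined.
    for _ in range(rounds):
        drafts = [d + " (refined)" for d in drafts]
    return drafts

def master_agent(drafts: list[str]) -> list[str]:
    # Merge outputs into a concise, non-redundant final list.
    seen, final = set(), []
    for d in drafts:
        if d not in seen:
            seen.add(d)
            final.append(d)
    return final

def limagents(paper: str, db: list[str]) -> list[str]:
    context = retrieve_context(paper, db)
    drafts = [worker_agent(p, paper, context)
              for p in ("methodology", "data", "novelty")]
    return master_agent(leader_agent(drafts))
```

In the real framework each agent would be backed by an LLM (e.g. GPT-4o mini, LLaMA-3-70B, or Mistral-7B, since the framework is model-agnostic) and the retrieval step would query the 120K+ paper RAG database.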
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 179