ARGUS: Feedback-Reinforced Gradual LLM-Based Framework for Interpretable and Robust Archive Review

ACL ARR 2025 May Submission 3687 Authors

19 May 2025 (modified: 03 Jul 2025), ACL ARR 2025 May Submission, CC BY 4.0
Abstract: Automated archive review faces challenges in interpreting domain-specific semantics and ensuring traceable decisions: existing methods that rely on rigid rules or generic language models lack complex context understanding and review transparency. To address these issues, we propose ARGUS, a feedback-reinforced gradual framework for archive review. ARGUS uses hierarchical rule-embedded prompts for stepwise inference, feedback-driven sample enhancement based on LLM inference logs for robustness, and parameter-efficient fine-tuning via low-rank adaptation. Evaluations on real-world archives and benchmarks show that ARGUS achieves 10.5–15.5% higher accuracy than baselines, reduces ASR by 25%, and completes review tasks effectively under limited resources.
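The abstract names low-rank adaptation (LoRA) as the parameter-efficient fine-tuning method but gives no implementation details; the sketch below is only a minimal, generic illustration of the LoRA idea (a frozen linear projection plus a small trainable low-rank update), not the authors' code. The class name, rank, and scaling values are hypothetical.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update: W x + (B A) x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weight stays frozen
        # Only these two small matrices are trained.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


# Hypothetical usage: adapting one 768-dim projection trains only rank * 768 * 2 parameters.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))
```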
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Archive Review, LLM-based, Hierarchical Rule-Embedded Prompting, Feedback-Reinforced Learning
Languages Studied: Chinese, English
Submission Number: 3687