FIKSurvey: An Automated Peer Review Loop to Raise the Ceiling on AI Academic Writing

ICLR 2026 Conference Submission 16785 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: large language model, AI applications
TL;DR: We design FIKSurvey, a framework that leverages LLMs for automatic survey generation.
Abstract: The escalating demand for comprehensive literature surveys in rapidly evolving research areas makes manual writing increasingly impractical, underscoring the necessity of automation. Large Language Models (LLMs) provide a promising foundation for this task, yet guiding them to generate accurate, reliable surveys remains a fundamental challenge, as issues such as hallucinations and vague organization often persist. To address this, we propose FIKSurvey, a feedback-driven framework grounded in the idea that "Feedback is the key for automatic survey generation." Specifically, FIKSurvey systematically incorporates feedback along three dimensions: outline feedback for structural clarity, citation feedback for evidence validation, and content feedback for readability and analytical depth. The framework also supports optional human-in-the-loop intervention for user-specific needs. Experiments confirm that FIKSurvey substantially improves both citation recall and content quality, demonstrating that feedback is the critical mechanism for automatic survey generation.
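To make the three feedback dimensions concrete, the following is a minimal Python sketch of one possible feedback loop consistent with the abstract. The function names, the stub `llm` call, and the fixed two-round revision schedule are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Minimal sketch of a FIKSurvey-style feedback loop (illustrative only).
# All function names and the stub LLM call below are hypothetical; they are
# not taken from the paper or its supplementary material.

from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call."""
    return f"[LLM response to: {prompt[:40]}...]"


@dataclass
class SurveyDraft:
    topic: str
    outline: str = ""
    body: str = ""
    feedback_log: list = field(default_factory=list)


def generate_draft(topic: str) -> SurveyDraft:
    """Produce an initial outline and draft body for the survey topic."""
    draft = SurveyDraft(topic=topic)
    draft.outline = llm(f"Propose a survey outline on: {topic}")
    draft.body = llm(f"Write survey sections following this outline: {draft.outline}")
    return draft


def collect_feedback(draft: SurveyDraft) -> dict:
    """Gather the three feedback dimensions named in the abstract."""
    return {
        "outline": llm(f"Critique the structural clarity of this outline: {draft.outline}"),
        "citation": llm(f"Validate citations against retrieved evidence in: {draft.body}"),
        "content": llm(f"Assess readability and analytical depth of: {draft.body}"),
    }


def revise(draft: SurveyDraft, feedback: dict, human_note: str | None = None) -> SurveyDraft:
    """Revise the draft using automatic feedback plus optional human input."""
    merged = "; ".join(feedback.values())
    if human_note:  # optional human-in-the-loop intervention
        merged += f"; human: {human_note}"
    draft.body = llm(f"Revise the survey given this feedback: {merged}")
    draft.feedback_log.append(feedback)
    return draft


if __name__ == "__main__":
    draft = generate_draft("automatic survey generation with LLMs")
    for _ in range(2):  # a fixed number of feedback rounds, chosen here for illustration
        draft = revise(draft, collect_feedback(draft))
    print(draft.body)
```

In this sketch the three feedback signals are simply concatenated before revision; the paper may weight, order, or apply them separately per stage.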
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 16785