LLM-Ref: Enhancing Reference Handling in Technical Writing with Large Language Models

ACL ARR 2025 February Submission 5792 Authors

16 Feb 2025 (modified: 09 May 2025), ACL ARR 2025 February Submission, CC BY 4.0
Abstract: Large Language Models (LLMs) are effective at synthesizing knowledge but often lack accuracy in domain-specific tasks. Retrieval-Augmented Generation (RAG) systems, which ground generation in user-provided data, can mitigate this issue and assist in article writing. However, such systems lack the capability to generate proper references. In this paper, we present LLM-Ref, a writing assistant that helps researchers compose articles from multiple source documents with enhanced reference synthesis and handling capabilities. Unlike traditional RAG systems, which rely on chunking and indexing, LLM-Ref retrieves and generates content at the paragraph level, allowing for seamless reference extraction for the generated text. Furthermore, the tool incorporates iterative response generation to accommodate extended contexts within language model constraints while actively mitigating hallucinations. Compared to baseline RAG-based systems, our approach achieves a $3.25\times$ to $6.26\times$ increase in Ragas score, a metric that provides a holistic view of a RAG system’s ability to produce accurate, relevant, and contextually appropriate responses.
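To make the abstract's two central ideas concrete, the sketch below illustrates paragraph-level retrieval with per-paragraph reference tracking and batched, iterative generation over a long context. It is a minimal illustration under assumptions, not LLM-Ref's actual implementation: the bag-of-words similarity stands in for whatever retriever the paper uses, `call_llm` is a placeholder for a real language model call, and all function and variable names here are hypothetical.

```python
# Hypothetical sketch: paragraph-level retrieval + iterative generation with references.
from collections import Counter
import math


def _similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts (a stand-in for an embedding model)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0


def retrieve_paragraphs(query: str, documents: dict, top_k: int = 3):
    """Rank whole paragraphs (rather than fixed-size chunks) so every retrieved
    unit keeps a pointer back to its source document and paragraph index."""
    scored = []
    for doc_id, text in documents.items():
        for idx, para in enumerate(p.strip() for p in text.split("\n\n") if p.strip()):
            scored.append((_similarity(query, para), doc_id, idx, para))
    scored.sort(reverse=True)
    return scored[:top_k]


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; echoes the prompt head for this sketch."""
    return f"[draft continued from: {prompt[:60]}...]"


def iterative_generate(query: str, retrieved, batch_size: int = 2):
    """Feed retrieved paragraphs in batches so extended contexts fit the model's
    window; carry the running draft forward and collect references as we go."""
    draft, references = "", []
    for i in range(0, len(retrieved), batch_size):
        batch = retrieved[i:i + batch_size]
        context = "\n\n".join(p for _, _, _, p in batch)
        draft = call_llm(f"Query: {query}\nDraft so far: {draft}\nContext:\n{context}")
        references += [(doc_id, idx) for _, doc_id, idx, _ in batch]
    return draft, references


if __name__ == "__main__":
    docs = {"paper_A.pdf": "RAG systems index chunks.\n\nParagraph-level retrieval keeps citations intact."}
    hits = retrieve_paragraphs("paragraph level retrieval references", docs)
    text, refs = iterative_generate("How does paragraph-level retrieval help?", hits)
    print(text)
    print(refs)  # e.g. [("paper_A.pdf", 1), ("paper_A.pdf", 0)]
```

Because each retrieved unit carries its (document, paragraph) pair through generation, the references attached to the final draft fall out directly, which is the property the abstract contrasts against chunk-and-index pipelines.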
Paper Type: Long
Research Area: Generation
Research Area Keywords: Retrieval-augmented generation, interactive and collaborative generation
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 5792