REVIEW OF REFERENCE GENERATION METHODS IN LARGE LANGUAGE MODELS

Published: 01 Mar 2025 · Last Modified: 04 May 2025 · IJAIML · CC BY 4.0
Abstract: Large Language Models (LLMs) are now central to a wide range of applications, from academic writing and legal analysis to scientific research. Yet one problem that has consistently challenged their broader adoption is accurate and verifiable citation generation. Hallucinated or inaccurate citations erode trust, making reliable citation generation essential. This survey covers notable approaches for improving citation generation in LLMs, including Retrieval-Augmented Generation (RAG), prompt engineering, instruction tuning, and the integration of external knowledge sources. We also cover emerging directions such as multimodal citation generation, which uses structured data and visual information to improve accuracy. A review of evaluation metrics, benchmark datasets, and ethical concerns (including bias, misinformation risks, and transparency) identifies current limitations and possible areas of improvement. Future research directions include real-time citation verification, standardized evaluation schemas, and improved explainability of citation selection. By addressing these problems, this survey aims to contribute to the development of more reliable, ethically sound, and academically rigorous LLM-based citation generation systems.