Research Area: Evaluation, Safety
Keywords: Retrieval Augmented Generation; Irrelevant Information; Misleading Information
TL;DR: In this paper, we introduce a framework for categorizing irrelevant information into three graded levels, aiming to explore LLMs' robustness when encountering graded, semantically related yet irrelevant information under various conditions.
Abstract: By leveraging the retrieval of information from external knowledge databases, Large Language Models (LLMs) exhibit enhanced capabilities for accomplishing many knowledge-intensive tasks.
However, due to the inherent flaws of current retrieval systems, the top-ranked retrieved passages may contain irrelevant information.
In this work, we present a comprehensive investigation into the robustness of LLMs to different types of irrelevant information under various conditions.
We first introduce a framework to construct high-quality irrelevant information that ranges from semantically unrelated to partially related and related to the question.
Furthermore, our analysis demonstrates that the constructed irrelevant information not only scores highly on similarity metrics, making it likely to be retrieved by existing systems, but also bears semantic connections to the context.
Our investigation reveals that current LLMs still struggle to discriminate highly semantically related information and can be easily distracted by this irrelevant yet misleading content.
We also find that current solutions for handling irrelevant information have limitations in improving the robustness of LLMs to such distractions.
All the resources are available on [GitHub](https://github.com/Di-viner/LLM-Robustness-to-Irrelevant-Information).
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 148