Robust in-context RALM: simulating noisy contexts resolves noisy retrieval

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission
TL;DR: We show that including simulated noisy contexts as in-context examples makes RALMs robust under noisy retrieval settings.
Abstract: Retrieval Augmented Language Models (RALMs) have emerged as a leading approach in Open-Domain Question Answering (ODQA), leveraging external knowledge to enhance answer generation. However, RALMs struggle when confronted with irrelevant or distracting contexts, particularly in real-world applications with less curated data sources. Addressing these failures is crucial for improving model accuracy and trustworthiness. In this study, we introduce an in-context learning method, Simulate-The-Noise (STN), designed to increase language model resilience in scenarios where the answer is absent or the context is highly distracting. By integrating perturbation techniques with in-context learning, we construct examples that simulate noisy retrieval conditions. Our method notably enhances model robustness without additional training or annotation, enabling the model to accurately identify `unanswerable' situations in distracting contexts. This cost-effective approach, which simply adds pre-constructed examples to prompts during inference, significantly improves inference robustness in complex real-world scenarios, advancing the reliability of RALMs in ODQA tasks.
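The abstract describes the mechanism at a high level: pre-constructed demonstrations whose retrieved contexts are perturbed (e.g., replaced with distractor passages) and whose gold completion is "unanswerable" are prepended to the prompt at inference time. The sketch below illustrates that prompting idea only; the function names, prompt template, and example data are hypothetical and not taken from the paper.

```python
def make_noisy_example(question: str, distractor_passages: list[str]) -> str:
    """Build one in-context demonstration that simulates noisy retrieval:
    the context consists solely of distractor passages, so the correct
    completion is 'unanswerable'. (Illustrative template, not the paper's.)"""
    context = " ".join(distractor_passages)
    return f"Context: {context}\nQuestion: {question}\nAnswer: unanswerable"


def make_clean_example(question: str, gold_passage: str, answer: str) -> str:
    """Build one ordinary demonstration where the context supports the answer."""
    return f"Context: {gold_passage}\nQuestion: {question}\nAnswer: {answer}"


def build_prompt(demonstrations: list[str], test_context: str, test_question: str) -> str:
    """Prepend the pre-constructed demonstrations to the test instance.
    No training or annotation is needed; the examples are fixed at inference."""
    demos = "\n\n".join(demonstrations)
    return f"{demos}\n\nContext: {test_context}\nQuestion: {test_question}\nAnswer:"


# Hypothetical usage: mix noisy and clean demonstrations in the prompt.
demos = [
    make_noisy_example(
        "Who wrote Hamlet?",
        ["The Eiffel Tower is located in Paris.", "Pandas eat bamboo."],
    ),
    make_clean_example(
        "What is the capital of France?",
        "Paris is the capital and largest city of France.",
        "Paris",
    ),
]
prompt = build_prompt(demos, "Mount Everest is Earth's highest mountain.", "Who wrote Hamlet?")
```

Mixing both example types signals to the model that "unanswerable" is a valid output when the retrieved context does not contain the answer, while clean examples keep it answering normally when the context is supportive.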
Paper Type: short
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English