In-Context Interference In Chat-Based Large Language Models

Published: 01 Jan 2024 · Last Modified: 15 May 2025 · ERF (1) 2024 · CC BY-SA 4.0
Abstract: Large Language Models (LLMs) have transformed society, yet modifying their internal knowledge remains challenging. Here, we focus on interference in in-context learning, examining how newly introduced knowledge affects performance in self-aware robots. We propose an evaluation benchmark based on the bAbI dataset that assesses a robot's ability to manage interference, maintain stability, ensure flexible information routing, and sustain task performance. Addressing these challenges is crucial for making LLMs more effective in the development of self-aware robots.
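A minimal sketch of how such an interference probe could be built from bAbI-style stories (this is not the authors' released benchmark; `query_model` is a hypothetical placeholder for the chat LLM under test, and the story, question, and interfering fact are illustrative):

```python
# Sketch of an in-context interference check on a bAbI task-1 style item.

def query_model(prompt: str) -> str:
    # Placeholder: replace with a call to the chat model being evaluated.
    # Here it returns a canned answer so the script runs end to end.
    return "kitchen"

# A bAbI-style story with a known answer.
base_story = [
    "Mary moved to the bathroom.",
    "John went to the hallway.",
]
question = "Where is Mary?"
original_answer = "bathroom"

# A later fact that supersedes (interferes with) the earlier statement.
interfering_fact = "Mary travelled to the kitchen."
updated_answer = "kitchen"

def build_prompt(facts: list[str]) -> str:
    context = "\n".join(facts)
    return f"{context}\nQ: {question}\nA:"

def answer_matches(facts: list[str], expected: str) -> bool:
    return expected in query_model(build_prompt(facts)).strip().lower()

# Stability: without interference, the model should keep the original answer.
stable = answer_matches(base_story, original_answer)
# Flexibility: with the new fact in context, it should route to the update.
flexible = answer_matches(base_story + [interfering_fact], updated_answer)

print(f"stable={stable} flexible={flexible}")
```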