Track: Extended abstract
Keywords: in-context-learning, knowledge conflict, language model priming
Abstract: Few-shot prompting has been shown to help large language models produce desired outputs and reduce hallucination. However, consistently supplying a model with demonstrations that deliberately contradict the facts can cause its in-context learning ability to adapt to these inputs and generate answers that depart from the truth. This study examines whether such language model priming also arises when models validate linguistic knowledge, and designs two scenarios to this end. The first scenario consistently provides false examples to provoke a conflict between the model's parametric knowledge and the context; the second mixes false and true examples to create a conflict within the context itself. We evaluate five models on eight syntactic phenomena: Subject-Verb Agreement, Determiner-Noun Agreement, Anaphor Agreement, Irregular Verb/Noun Forms, Filler-Gap Dependencies, Island Constraints, Argument Structure, and Elliptical Constructions. Experiments with various instruction options and demonstration designs assess the robustness of language models to erroneous linguistic information and their ability to manage conflicts within the linguistic context.
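To make the two conditions concrete, below is a minimal sketch of how such prompts might be constructed for one phenomenon (Subject-Verb Agreement). The demonstration sentences, labels, and helper names are hypothetical illustrations, not taken from the submission's materials.

```python
# Hypothetical sketch of the two prompt conditions described in the abstract:
# Scenario 1 uses only false demonstrations (parametric vs. contextual conflict);
# Scenario 2 mixes true and false demonstrations (conflict within the context).
import random

# Subject-verb agreement examples with acceptability judgments.
TRUE_DEMOS = [
    ("The keys to the cabinet are on the table.", "acceptable"),
    ("The author of the books writes well.", "acceptable"),
]
FALSE_DEMOS = [
    # Intentionally mislabeled: ungrammatical sentences marked acceptable.
    ("The keys to the cabinet is on the table.", "acceptable"),
    ("The author of the books write well.", "acceptable"),
]

def build_prompt(demos, query):
    """Format few-shot demonstrations followed by the test query."""
    lines = [f"Sentence: {s}\nJudgment: {label}" for s, label in demos]
    lines.append(f"Sentence: {query}\nJudgment:")
    return "\n\n".join(lines)

query = "The dog near the trees bark loudly."

# Scenario 1: consistently false demonstrations.
prompt_conflict = build_prompt(FALSE_DEMOS, query)

# Scenario 2: true and false demonstrations shuffled together.
mixed = TRUE_DEMOS + FALSE_DEMOS
random.shuffle(mixed)
prompt_mixed = build_prompt(mixed, query)

print(prompt_conflict)
```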
Submission Number: 89