Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission
Abstract: Large pre-trained language models (PLMs) enable in-context learning (ICL) by conditioning on a few labeled training examples as a text-based prompt, achieving competitive performance without any parameter updates. In this paper, we demonstrate that factual knowledge is imperative for ICL performance in three core facets: the inherent knowledge learned in PLMs, the factual knowledge derived from the selected in-context examples, and the knowledge biases in PLMs during output generation. To unleash the power of large PLMs in few-shot scenarios, we introduce a novel Knowledgeable In-Context Tuning (KICT) framework that further improves ICL performance by 1) injecting knowledge into PLMs during continual self-supervised pre-training, 2) judiciously selecting in-context examples with high knowledge relevance, and 3) calibrating the prediction results based on prior knowledge. We evaluate the proposed approaches on auto-regressive models (e.g., GPT-style PLMs) over multiple text classification and question answering tasks. Experimental results demonstrate that KICT substantially outperforms strong baselines, improving accuracy by more than 13% on text classification and 7% on question answering.
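The third component, calibrating predictions against the model's prior knowledge, can be illustrated with a minimal sketch. This is not the paper's exact method (the abstract does not specify it); the code below shows the general idea behind prior-based calibration, in which label probabilities are divided by the probabilities the model assigns on a content-free input, removing its built-in label bias. All names are hypothetical.

```python
# Illustrative sketch of prior-based calibration for ICL predictions.
# Not the paper's exact algorithm -- a generic example of dividing out
# the model's prior label bias and renormalizing.

def calibrate(label_probs, prior_probs):
    """Divide each label probability by the model's prior for that label,
    then renormalize so the result sums to 1."""
    adjusted = [p / q for p, q in zip(label_probs, prior_probs)]
    total = sum(adjusted)
    return [a / total for a in adjusted]

# Suppose the PLM assigns 0.7/0.3 to (positive, negative), but on a
# content-free input (e.g., "N/A") it already favors "positive" with
# 0.6/0.4 -- a label bias that calibration partially removes.
raw = [0.7, 0.3]
prior = [0.6, 0.4]
print(calibrate(raw, prior))
```

After calibration, the probability of the over-favored label shrinks relative to the raw model output, which is the effect such a correction is meant to achieve.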
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Publicly available software and/or pre-trained models
Languages Studied: English
