Evaluating In-Context Learning for Computational Literary Studies: A Case Study Based on the Automatic Recognition of Knowledge Transfer in German Drama
Abstract: In this paper, we evaluate two different natural language processing (NLP) approaches to a paradigmatic task for computational literary studies (CLS): the recognition of knowledge transfer in literary texts. We focus on the question of how adequately large language models capture the transfer of knowledge about family relations in German drama texts when this transfer is treated as a classification or textual entailment task using in-context learning (ICL). We find that a 13-billion-parameter LLAMA 2 model performs best on the former task, while GPT-4 performs best on the latter. However, all models achieve relatively low scores compared to standard NLP benchmark results, suffer from inconsistencies under small changes to prompts, and are often unable to make simple inferences beyond the textual surface, which is why an unreflected, generic use of ICL in CLS still seems inadvisable.