Abstract: Metaphors are everywhere. They appear extensively across all domains of natural language, from the
most sophisticated poetry to seemingly dry academic prose. A significant body of research in the cogni-
tive science of language argues for the existence of conceptual metaphors, the systematic structuring of
one domain of experience in the language of another. Conceptual metaphors are not simply rhetorical
flourishes but are crucial evidence of the role of analogical reasoning in human cognition. In this paper,
we ask whether Large Language Models (LLMs) can accurately identify and explain the presence of such
conceptual metaphors in natural language data. Using a novel prompting technique based on metaphor
annotation guidelines, we demonstrate that LLMs are a promising tool for large-scale computational
research on conceptual metaphors. Further, we show that LLMs are able to apply procedural guidelines
designed for human annotators, displaying a surprising depth of linguistic knowledge.