Can Language Models Serve as Analogy Annotators?

ACL ARR 2025 February Submission1908 Authors

14 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Conceptual abstraction and analogy-making are crucial for human learning, reasoning, and adapting to unfamiliar domains. Recently, large language models (LLMs) have made it possible to synthesize analogical data, but annotating such data still relies heavily on extensive human effort. This paper empirically examines LLMs' capability to annotate story-level analogical data. Specifically, we propose $\texttt{A3E}$ (Automated Analogy Annotation Expert), a novel multi-stage progressive reasoning prompting framework grounded in structure-mapping theory from cognitive psychology, which efficiently annotates candidate story pairs across six fine-grained categories. We use $\texttt{A3E}$ to evaluate how well state-of-the-art LLMs can serve as analogy annotators. Experimental results demonstrate that $\texttt{A3E}$ achieves an average performance gain of +73\% across a range of prompting baselines and base LLMs. The code and data will be available at https://anonymous.4open.science/r/A3E-3064.
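To give a rough sense of what a multi-stage progressive reasoning annotation pipeline of this kind might look like, the sketch below walks a candidate story pair through staged prompts (relational abstraction, structure alignment, category assignment). It is only an illustrative sketch: the stage wording, the placeholder category labels, and the `call_llm` helper are assumptions, not the paper's actual $\texttt{A3E}$ prompts or taxonomy.

```python
# Minimal sketch of a multi-stage progressive reasoning prompt pipeline for
# analogy annotation. Stage wording, category labels, and `call_llm` are
# illustrative placeholders, not the paper's actual A3E design.

from typing import Callable

# Hypothetical placeholder labels; the paper defines its own six categories.
CATEGORIES = [
    "category_1", "category_2", "category_3",
    "category_4", "category_5", "category_6",
]


def annotate_story_pair(story_a: str, story_b: str,
                        call_llm: Callable[[str], str]) -> str:
    """Label a candidate story pair with one of six analogy categories
    via staged prompts inspired by structure-mapping theory."""
    # Stage 1: abstract the relational structure of each story.
    structure_a = call_llm(
        "Summarize the key entities and the relations among them in the "
        f"following story:\n{story_a}")
    structure_b = call_llm(
        "Summarize the key entities and the relations among them in the "
        f"following story:\n{story_b}")

    # Stage 2: align the two relational structures (structure mapping).
    mapping = call_llm(
        "Compare the two relational summaries below and describe which "
        "relations align and which do not.\n"
        f"Story A: {structure_a}\nStory B: {structure_b}")

    # Stage 3: assign a fine-grained category based on the alignment.
    label = call_llm(
        "Based on the alignment analysis below, choose exactly one label "
        f"from {CATEGORIES}.\nAnalysis: {mapping}")
    return label.strip()
```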
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: prompting, applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 1908