A Systematic Review of Analogy Generation and Evaluation: Methods, Metrics, and Challenges

ACL ARR 2025 May Submission 4017 Authors

19 May 2025 (modified: 03 Jul 2025), CC BY 4.0
Abstract: Analogy, a quintessential human cognitive capability, has long been studied for its role in transferring knowledge across domains, from generating novel analogies to evaluating their quality. The field of artificial intelligence (AI) has likewise sought to model the analogical reasoning process computationally, from logical representations to connectionist methods. More recently, the rapidly improving capabilities of large language models (LLMs) have given rise to new families of LLM-powered analogy generation systems, creating a need for a comprehensive review that situates these developments within their broader historical context. Following the PRISMA framework, we systematically reviewed computational analogy research across computer science (CS), AI, and natural language processing (NLP), focusing on methods for analogy generation and evaluation. We categorized existing approaches along several dimensions, from symbolic and embedding-based to LLM-driven methods, and identified core challenges, including difficulty in generating novel analogies, conflation of relational and literal similarity, and limitations of current evaluation metrics and datasets. Based on this analysis, we propose future directions aimed at improving both the generation process and the quality of outputs in analogy generation and evaluation systems.
Paper Type: Long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Research Area Keywords: Linguistic Theories, Cognitive Modeling and Psycholinguistics, NLP Applications
Contribution Types: Surveys
Languages Studied: English
Submission Number: 4017