MuseRAG: Idea Originality Scoring At Scale

ACL ARR 2025 May Submission 5648 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Assessing the originality of creative ideas often relies on their statistical infrequency within a population---an approach long used in creativity research but difficult to automate at scale. Human annotation via manual bucketing of idea rephrasings is labor-intensive, subjective, and brittle under large corpora. We introduce a fully automated, psychometrically validated pipeline for frequency-based originality scoring. Our method, MuseRAG, combines large language models (LLMs) with an externally orchestrated retrieval-augmented generation (RAG) framework. Given a new idea, the system retrieves semantically similar prior idea buckets and zero-shot prompts the LLM to judge whether the new idea belongs to an existing bucket or forms a new one. The resulting buckets enable computation of frequency-based originality metrics. MuseRAG matches human annotators in both idea clustering (AMI = 0.59) and participant-level originality scores (r = 0.89), while exhibiting strong convergent and external validity. Our work enables intent-sensitive, human-aligned originality scoring, aiding creativity research at scale.
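The abstract's pipeline (retrieve similar prior buckets, let a judge decide membership or open a new bucket, then score originality by bucket infrequency) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the bag-of-words embedding, the similarity-threshold `judge` (which stands in for the zero-shot LLM judge), and all function names are assumptions for demonstration.

```python
# Hypothetical sketch of a MuseRAG-style bucketing loop. The toy embedding
# and threshold judge are illustrative stand-ins for a real encoder and an
# LLM-as-a-judge prompt.
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a neural encoder.
    return Counter(text.lower().split())

def similarity(a, b):
    # Cosine similarity over sparse word counts.
    dot = sum(a[w] * b[w] for w in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def judge(idea, candidates):
    # Stand-in for the zero-shot LLM judge: return the index of the bucket
    # the idea belongs to, or None to open a new bucket.
    for i, representative in candidates:
        if similarity(embed(idea), embed(representative)) > 0.5:
            return i
    return None

def bucket_ideas(ideas, k=3):
    buckets = []      # each bucket: list of member ideas
    assignments = []  # bucket index per idea, in input order
    for idea in ideas:
        # Retrieval step: rank existing buckets by similarity to the new
        # idea (using the first member as the bucket representative) and
        # pass only the top-k candidates to the judge.
        ranked = sorted(
            ((i, b[0]) for i, b in enumerate(buckets)),
            key=lambda t: similarity(embed(idea), embed(t[1])),
            reverse=True,
        )[:k]
        hit = judge(idea, ranked)
        if hit is None:
            buckets.append([idea])          # new bucket
            assignments.append(len(buckets) - 1)
        else:
            buckets[hit].append(idea)       # existing bucket
            assignments.append(hit)
    return buckets, assignments

def originality(buckets, assignments):
    # Frequency-based originality: ideas in rarer buckets score higher.
    n = len(assignments)
    return [1.0 - len(buckets[b]) / n for b in assignments]
```

For example, two rephrasings of "use a brick as a paperweight" land in one bucket while "grind the brick into pigment" opens a second, so the pigment idea receives the higher originality score; participant-level scores would then aggregate these per-idea values.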
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Computational Creativity Assessment, Originality Scoring, Retrieval-Augmented Generation, LLM-as-a-Judge, Idea Clustering
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: English
Submission Number: 5648