What Is Novel? A Knowledge-Driven Framework for Bias-Aware Literature Originality Evaluation

Authors: ACL ARR 2026 January Submission10555 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Automated Novelty Assessment, Large Language Models, Idea-Level Plagiarism Detection, Knowledge Graphs, Information Retrieval
Abstract: Assessing research novelty is a core yet highly subjective aspect of peer review, typically based on implicit judgment and incomplete comparison to prior work. We introduce a literature-aware novelty assessment framework that explicitly learns how humans judge novelty from peer-review reports and grounds these judgments in structured comparison to existing research. Using nearly 80K novelty-annotated reviews from top-tier AI conferences, we fine-tune a large language model to capture reviewer-aligned novelty evaluation behavior. For a given manuscript, the system extracts structured representations of its ideas, methods, and claims, retrieves semantically related papers, and constructs a similarity graph that enables fine-grained, concept-level comparison to prior work. Conditioning on this structured evidence, the model produces calibrated novelty scores and human-like explanatory assessments, reducing overestimation and improving consistency relative to existing approaches.
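The abstract describes a pipeline that retrieves related papers, builds a concept-level similarity graph against the manuscript, and derives a novelty signal. The sketch below is only an illustration of that general idea under stated assumptions, not the authors' implementation: the embedding model ("all-MiniLM-L6-v2"), the cosine-similarity threshold, and the toy novelty proxy are all assumptions chosen for demonstration.

```python
# Illustrative sketch only: the paper's actual extraction, retrieval, and scoring
# components are not specified here; the embedding model, threshold, and novelty
# proxy below are assumptions for demonstration purposes.
import numpy as np
import networkx as nx
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def build_similarity_graph(manuscript_concepts, prior_work_concepts, threshold=0.6):
    """Link manuscript concepts to prior-work concepts above a cosine-similarity threshold."""
    m_emb = encoder.encode(manuscript_concepts, normalize_embeddings=True)
    p_emb = encoder.encode(prior_work_concepts, normalize_embeddings=True)
    sims = m_emb @ p_emb.T  # cosine similarity, since embeddings are unit-normalized
    graph = nx.Graph()
    for i, concept in enumerate(manuscript_concepts):
        graph.add_node(("manuscript", concept))
        for j, prior in enumerate(prior_work_concepts):
            if sims[i, j] >= threshold:
                graph.add_edge(("manuscript", concept), ("prior", prior),
                               weight=float(sims[i, j]))
    return graph, sims

def novelty_signal(sims):
    """Toy novelty proxy: one minus the mean best-match similarity per manuscript concept."""
    return 1.0 - float(sims.max(axis=1).mean())

# Example usage with toy concept strings
ms = ["retrieval-augmented novelty scoring", "reviewer-aligned fine-tuning"]
prior = ["retrieval-augmented generation", "citation-based novelty metrics"]
graph, sims = build_similarity_graph(ms, prior)
print(round(novelty_signal(sims), 3))
```

In the paper's framework, such structured evidence (graph edges and similarity scores) would condition the fine-tuned language model rather than feed a hand-crafted score as in this toy example.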
Paper Type: Short
Research Area: Retrieval-Augmented Language Models
Research Area Keywords: Resources and Evaluation, Human-Centered NLP, Interpretability and Analysis of Models for NLP, Information Retrieval and Text Mining, Language Modeling, Machine Learning for NLP, Ethics, Bias, and Fairness
Contribution Types: Model analysis & interpretability, Publicly available software and/or pre-trained models, Data resources, Data analysis
Languages Studied: English
Submission Number: 10555