GraphMind: Unveiling Scientific Reasoning through Contextual Graphs for Novelty Assessment

ACL ARR 2025 May Submission4529 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) have shown promise in scientific discovery, but their ability to assess scientific novelty remains underexplored. Understanding novelty requires more than surface-level comparisons: it requires reconstructing the scientific reasoning process from claims, methods, experiments, and results. To bridge this gap, we introduce a new benchmark, SciNova, that captures hierarchical scientific reasoning from papers and their related works to enhance novelty assessment. It contains 3,063 papers from ICLR 2022-2025 and NeurIPS 2022-2024, each with its full content, a hierarchical graph representing its key elements (claims, methods, experiments, and results), and related papers identified by citation and semantic similarity. Furthermore, we propose GraphMind, a method that integrates these structured elements into a prompting-based novelty assessment framework. Experimental results demonstrate the benefits of this enriched representation, improving novelty assessment accuracy. Additionally, our analysis of LLM-generated reviews reveals strong faithfulness and factuality.
Paper Type: Long
Research Area: Information Extraction
Research Area Keywords: large language model, information extraction, AI for science, novelty evaluation
Contribution Types: Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 4529