Keywords: Scientific Impact Prediction, Scientific NLP, Corpus Creation, Benchmarking, NLP Datasets
Abstract: The rapid growth of scientific literature calls for automated methods to assess and predict research impact.
Prior work has largely focused on citation-based metrics, leaving models' ability to reason about other impact dimensions largely unevaluated.
To this end, we introduce SciImpact, a large-scale, multi-dimensional benchmark for scientific impact prediction spanning 19 fields.
SciImpact captures various forms of scientific influence, ranging from citation counts to award recognition, media attention, patent references, and artifact adoption, by integrating heterogeneous data sources with targeted web crawling.
It comprises 215,928 contrastive paper pairs reflecting meaningful impact differences in both short- (e.g., Best Paper Award) and long-term settings (e.g., Nobel Prize).
We evaluate 11 widely used large language models (LLMs) on SciImpact.
Results show that off-the-shelf models exhibit substantial variability across dimensions and fields, while multi-task supervised fine-tuning consistently enables smaller LLMs (e.g., 4B) to markedly outperform much larger models (e.g., 30B) and surpass powerful closed-source LLMs (e.g., o4-mini).
These results establish SciImpact as a challenging benchmark and demonstrate its value for multi-dimensional, multi-field scientific impact prediction.
Our benchmark and code are available at https://gitlab.com/user-paper-review/SciImpact.git.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Information Retrieval and Text Mining, Question Answering, Resources and Evaluation
Contribution Types: Data resources, Data analysis
Languages Studied: English
Submission Number: 7107