Keywords: Scientific instruction following, Benchmark and dataset, Scientific reasoning
Abstract: As Large Language Models (LLMs) transition from general knowledge retrieval to complex scientific discovery, evaluation standards must evolve to enforce the rigorous norms of scientific inquiry.
Existing benchmarks leave a critical blind spot: general instruction-following metrics focus on superficial formatting, while domain-specific scientific benchmarks assess only final-answer correctness, often rewarding models that arrive at the right result for the wrong reasons.
We address this gap by defining scientific instruction following: the capability to solve problems while strictly adhering to the constraints that establish scientific validity.
We introduce SciIF, a multi-discipline benchmark that evaluates this capability by pairing university-level problems with a fixed catalog of constraints across three pillars: scientific conditions (e.g., boundary checks and assumptions), semantic stability (e.g., unit and symbol conventions), and specific processes (e.g., required numerical methods). Uniquely, SciIF emphasizes auditability, requiring models to provide explicit evidence of constraint satisfaction rather than implicit compliance.
By measuring both solution correctness and multi-constraint adherence, SciIF enables fine-grained diagnosis of compositional reasoning failures and tests whether LLMs can function as reliable agents within the strict logical frameworks of science.
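To make the evaluation shape concrete, the sketch below shows one plausible form a SciIF item and its joint score could take. This is a minimal illustration under our own assumptions: the abstract does not specify an item schema or scoring rule, and the field names (`problem`, `constraints`, `evidence`) and the per-constraint evidence check are hypothetical, chosen only to reflect the stated pairing of problems with cataloged constraints and the emphasis on auditable rather than implicit compliance.

```python
# Hypothetical sketch of a SciIF-style item and joint scorer.
# Not the authors' implementation; schema and scoring rule are assumptions.
from dataclasses import dataclass


@dataclass
class Constraint:
    pillar: str       # "scientific_conditions" | "semantic_stability" | "specific_processes"
    description: str  # e.g., "report the result in SI units"


@dataclass
class SciIFItem:
    problem: str
    constraints: list[Constraint]
    reference_answer: str


def score(item: SciIFItem, answer: str, evidence: dict[int, str]) -> dict:
    """Toy joint score: final-answer correctness plus constraint adherence.

    `evidence` maps a constraint index to the model's explicit justification;
    a constraint counts as satisfied only if evidence was supplied, mirroring
    the benchmark's requirement of explicit, auditable compliance.
    """
    correct = answer.strip() == item.reference_answer.strip()
    satisfied = [i in evidence and bool(evidence[i].strip())
                 for i in range(len(item.constraints))]
    return {
        "answer_correct": correct,
        "constraint_adherence": sum(satisfied) / max(len(satisfied), 1),
    }


item = SciIFItem(
    problem="Estimate the terminal velocity of a 2 mm raindrop.",
    constraints=[
        Constraint("scientific_conditions", "state the drag regime assumed"),
        Constraint("semantic_stability", "report the result in SI units (m/s)"),
    ],
    reference_answer="about 6.5 m/s",
)
print(score(item, "about 6.5 m/s",
            {0: "Assumed high-Reynolds-number quadratic drag.",
             1: "Result reported in m/s."}))
```

Under this toy rule, a model that reaches the right answer but supplies evidence for only one of two constraints would score `answer_correct=True` with `constraint_adherence=0.5`, which is exactly the kind of compositional failure the benchmark is designed to surface.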
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: AI / LLM Agents, Discourse, Pragmatics, and Reasoning, Generation, Language Modeling, Question Answering, Resources and Evaluation
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English
Submission Number: 9435