Keywords: AI-Generated Academic Image Forensics, Benchmark and Evaluation, Multimodal Reasoning
Abstract: We introduce **AEGIS**, **A** holistic benchmark for **E**valuating forensic analysis of AI-**G**enerated academic **I**mage**S**. Compared to existing benchmarks, AEGIS features three key advances:
1. **Domain-Specific Complexity**: covering 7 academic categories with 39 fine-grained subtypes, exposing intrinsic forensic difficulty, where even GPT-5.1 reaches 48.80% overall performance and expert models achieve only limited localization accuracy (IoU 30.09%).
2. **Diverse Forgery Simulations**: modeling four prevalent academic forgery strategies across 25 generative models, 11 of which yield average forensic accuracy below 50%, showing that forensic methods lag behind generative advances.
3. **Multi-Dimensional Forensic Evaluation**: jointly assessing detection, reasoning, and localization, revealing complementary strengths across model families: multimodal large language models (MLLMs) reach 84.74% accuracy in texture artifact recognition, while expert detectors peak at 79.54% accuracy in binary authenticity detection.
By evaluating 25 leading MLLMs, 9 expert models, and one unified multimodal understanding and generation model, AEGIS serves as a diagnostic testbed exposing fundamental limitations in academic image forensics. Data and code are available at https://anonymous.4open.science/r/AEGIS-2E31.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation, metrics, reproducibility
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English
Submission Number: 2043