IR3D-Bench: Evaluating Vision-Language Model
Scene Understanding as Agentic Inverse Rendering

What I cannot create, I do not understand. ---Richard Feynman

Anonymous Author(s)

Motivation


Humans demonstrate true understanding through creation: we can recreate observed scenes because we genuinely comprehend their spatial relationships and physical attributes. In contrast, current Vision-Language Agents (VLAs) are primarily evaluated on recognition tasks such as captioning or QA, which fail to probe this deeper understanding. Can VLAs truly understand what they see? IR3D-Bench tests this by asking them to recreate their observations.

Abstract

Vision-language models (VLMs) excel at descriptive tasks, but whether they truly understand scenes from visual observations remains uncertain. We introduce IR3D-Bench, a benchmark challenging VLMs to demonstrate understanding through active creation rather than passive recognition. Grounded in the analysis-by-synthesis paradigm, IR3D-Bench tasks Vision-Language Agents (VLAs) with actively using programming and rendering tools to recreate the underlying 3D structure of an input image, achieving agentic inverse rendering through tool use. This "understanding-by-creating" approach probes the tool-using generative capacity of VLAs, moving beyond the descriptive or conversational capacity measured by traditional scene understanding benchmarks. We provide a comprehensive suite of metrics to evaluate geometric accuracy, spatial relations, appearance attributes, and overall plausibility. Initial experiments on agentic inverse rendering powered by various state-of-the-art VLMs highlight current limitations, particularly in visual precision rather than basic tool usage. IR3D-Bench, including data and evaluation protocols, is released to facilitate systematic study and development of tool-using VLAs towards genuine scene understanding by creating.

Pipeline Overview

[Figure: IR3D-Bench pipeline overview]

Overview of the IR3D-Bench Pipeline:

  1. Stage 1: Inverse Rendering: Given a raw image and camera parameters, the agent is prompted to infer a structured scene representation in JSON format. The predicted objects are rendered in Blender and matched to ground-truth (GT) annotations using geometric alignment and per-object mask comparisons (see the sketch after this list).
  2. Stage 2: Benchmark Evaluation: The following metrics provide a comprehensive view of the VLA's internal world model and generative precision (illustrative sketches follow this list):
    • Localization: Object count, spatial alignment, and relation consistency
    • Visual Appearance: Shape and material accuracy via mask- and attribute-level scores
    • Language-Aligned Semantics: Layout fidelity and object plausibility assessed via GPT-4o
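To make Stage 1 concrete, here is a minimal sketch of what an agent-predicted scene description and its Blender reconstruction might look like. The JSON schema (shape, location, size, color, material) and the helper below are illustrative assumptions, not the benchmark's exact format or rendering script.

# Illustrative sketch only: the JSON schema below is an assumption, not
# IR3D-Bench's exact format. Requires Blender's bundled Python (bpy).
import json
import bpy

scene_json = """
{
  "objects": [
    {"shape": "cube",   "location": [0.0, 0.0, 0.5],  "size": 1.0,
     "color": [0.8, 0.1, 0.1], "material": "rubber"},
    {"shape": "sphere", "location": [1.5, -0.5, 0.5], "size": 0.7,
     "color": [0.1, 0.3, 0.8], "material": "metal"}
  ]
}
"""

def build_scene(spec):
    """Instantiate each predicted object as a Blender primitive."""
    for obj_spec in spec["objects"]:
        if obj_spec["shape"] == "cube":
            bpy.ops.mesh.primitive_cube_add(size=obj_spec["size"],
                                            location=obj_spec["location"])
        elif obj_spec["shape"] == "sphere":
            bpy.ops.mesh.primitive_uv_sphere_add(radius=obj_spec["size"] / 2,
                                                 location=obj_spec["location"])
        obj = bpy.context.active_object
        mat = bpy.data.materials.new(name="pred_material")
        r, g, b = obj_spec["color"]
        mat.diffuse_color = (r, g, b, 1.0)  # Blender expects RGBA
        obj.data.materials.append(mat)

build_scene(json.loads(scene_json))
bpy.ops.render.render(write_still=True)  # writes to the scene's configured output path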
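Before the localization and appearance scores can be computed, predicted objects must be matched to ground-truth objects. The sketch below shows one standard way to do this, Hungarian matching on a mask-IoU cost matrix with scipy.optimize.linear_sum_assignment; the benchmark's exact matching rule and metric definitions follow the paper, so treat this only as an illustration.

# Illustrative sketch of per-object mask matching; the benchmark's exact
# matching rule and metric weights may differ.
import numpy as np
from scipy.optimize import linear_sum_assignment

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two binary masks of identical shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def match_objects(pred_masks, gt_masks):
    """Hungarian matching on (1 - IoU) cost; returns pairs, mean IoU, count error."""
    cost = np.zeros((len(pred_masks), len(gt_masks)))
    for i, p in enumerate(pred_masks):
        for j, g in enumerate(gt_masks):
            cost[i, j] = 1.0 - mask_iou(p, g)
    rows, cols = linear_sum_assignment(cost)
    pairs = list(zip(rows.tolist(), cols.tolist()))
    mean_iou = float(np.mean([1.0 - cost[i, j] for i, j in pairs])) if pairs else 0.0
    count_error = abs(len(pred_masks) - len(gt_masks))
    return pairs, mean_iou, count_error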
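For the language-aligned semantic scores, GPT-4o is used as a judge. A hedged sketch of such a call with the OpenAI Python client is shown below; the prompt wording and 1-10 rubric are placeholders rather than the benchmark's actual evaluation protocol.

# Sketch of a GPT-4o-as-judge call; the prompt and rubric here are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_plausibility(gt_image_path: str, rendered_image_path: str) -> str:
    def encode(path):
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Rate from 1-10 how faithfully the second image "
                         "reproduces the layout and objects of the first. "
                         "Reply with a single number."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encode(gt_image_path)}"}},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encode(rendered_image_path)}"}},
            ],
        }],
    )
    return response.choices[0].message.content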

Evaluation Results

Holistic Comparison over the Metrics Suite

[Figure: holistic comparison over the metrics suite]

Visual Results

[Figure: qualitative visual results]

Conclusion:

  • Gemini-2.5-pro demonstrates strong understanding of object spatial positions and relative layouts.
  • Grok-3 excels at modeling fine-grained details such as material and color.
  • Qwen2.5-VL-72B struggles in more complex scenarios.

Iterative Refinements

[Figure: iterative refinement results]

Conclusion:

As the number of refinement rounds increases, the cases on which GPT-4o initially performed poorly gradually improve, eventually even outperforming Gemini-2.5-pro.
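For reference, a minimal sketch of what such a render-compare-refine loop might look like is given below; query_vlm, render_scene, and score_scene are hypothetical callables supplied by the caller, not functions from the released code.

# Hypothetical refinement loop; the callables are placeholders supplied by the
# caller and are not part of the IR3D-Bench release.
from typing import Any, Callable, Optional

def refine_scene(query_vlm: Callable[[Any, Optional[dict], Optional[float]], dict],
                 render_scene: Callable[[dict], Any],
                 score_scene: Callable[[Any, Any], float],
                 target_image: Any,
                 rounds: int = 3) -> dict:
    """Iteratively re-prompt the VLA with its previous prediction and score."""
    best_spec, best_score = None, float("-inf")
    prev_spec, prev_score = None, None
    for _ in range(rounds):
        spec = query_vlm(target_image, prev_spec, prev_score)  # propose / revise scene JSON
        rendering = render_scene(spec)                          # re-render in Blender
        score = score_scene(rendering, target_image)            # compare with the observation
        if score > best_score:
            best_spec, best_score = spec, score
        prev_spec, prev_score = spec, score
    return best_spec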

BibTeX

If you find our work useful, please consider citing our paper:

@inproceedings{ir3dbench2025,
  title={IR3D-Bench: Evaluating Vision-Language Model Scene Understanding as Agentic Inverse Rendering},
  author={Anonymous Authors},
  booktitle={NeurIPS 2025 submission},
  year={2025}
}