Keywords: Code agent, Multi-modal Large Language Model, Large Language Model
Abstract: Large language models (LLMs) have recently shown strong potential for Automated Program Repair (APR), yet most existing approaches remain unimodal and fail to leverage the rich diagnostic signals contained in visual artifacts such as screenshots and control-flow graphs. In practice, many bug reports convey critical information visually (e.g., layout breakage or missing widgets), but directly feeding such dense visual inputs to multimodal large language models (MLLMs) often causes context loss and noise, making it difficult to ground visual observations in precise fault localization and executable patches.
To bridge this semantic gap, we propose SVRepair, a multimodal APR framework with structured visual reasoning. SVRepair first fine-tunes a vision-language model, Structured Visual Representation (SVR), that uniformly transforms heterogeneous visual artifacts into a semantic scene graph capturing GUI elements and their structural relations (e.g., hierarchy), providing normalized, code-relevant context for downstream repair. Building on this graph, SVRepair drives a coding agent to localize faults and synthesize patches, and further introduces an iterative visual-artifact segmentation strategy that progressively narrows the input to bug-centered regions, suppressing irrelevant context and reducing hallucinations.
Extensive experiments across multiple benchmarks demonstrate state-of-the-art performance: SVRepair achieves 36.47% accuracy on SWE-bench M, 38.02% on MMCode, and 95.12% on CodeVision, validating its effectiveness for multimodal program repair. Code is available at \url{https://anonymous.4open.science/r/SVRepair-5D0B/}.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: Code Agent
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 732