Keywords: Video Game Quality Assurance, Video Game Testing, Vision Language Models
TL;DR: A new benchmark for assessing VLMs’ capabilities in real-world video game quality assurance tasks.
Abstract: With video games now generating the highest revenues in the entertainment industry, optimizing game development workflows has become essential for the sector’s sustained growth. Recent advancements in Vision-Language Models (VLMs) offer considerable potential to automate and enhance various aspects of game development, particularly Quality Assurance (QA), which remains one of the industry’s most labor-intensive processes with limited automation options. Accurately evaluating the performance of VLMs on video game QA tasks and determining their effectiveness in real-world scenarios requires standardized benchmarks, and existing benchmarks are insufficient to address the specific requirements of this domain. To bridge this gap, we introduce VideoGameQA-Bench, a comprehensive benchmark that covers a wide array of game QA activities, including visual unit testing, visual regression testing, needle-in-a-haystack tasks, glitch detection, and bug report generation for both images and videos of various games.
Code and data are available at: https://asgaardlab.github.io/videogameqa-bench/.
Croissant File: json
Dataset URL: https://huggingface.co/datasets/taesiri/VideoGameQA-Bench
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Submission Number: 451