Abstract: Automated data visualization plays a crucial role in simplifying data interpretation, enhancing decision-making, and improving efficiency. While large language models (LLMs) have shown promise in generating (code to produce) visualizations from natural language, the absence of comprehensive benchmarks limits the rigorous evaluation of their capabilities. We introduce Text2Vis, a benchmark designed to assess text-to-visualization models, covering 20+ chart types and diverse data science queries, including trend analysis, correlation, outlier detection, and predictive analytics. It comprises 1,985 samples, each with a data table, natural language query, short answer, visualization code, and annotated charts. The queries involve complex reasoning, conversational turns, and dynamic data retrieval. We benchmark 10+ open-source and closed-source models, revealing significant performance gaps, highlighting key challenges, and offering insights for future advancements. We then propose an actor-critic agentic inference framework, where feedback from a critic model refines the generator's output, increasing GPT-4o's pass rate from 26% to 42% over the direct approach and improving chart quality. Finally, we introduce an automated LLM-based assessment framework for scalable evaluation that measures answer correctness, code execution success, visualization readability, and chart accuracy. We release Text2Vis at <redacted>.
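The abstract's actor-critic agentic inference can be pictured as a generate-critique-refine loop. Below is a minimal sketch of that idea, not the paper's implementation: the function names (call_llm, actor_critic_loop), the MAX_ROUNDS cap, and the prompt wording are all illustrative assumptions, and call_llm is a placeholder to be wired to whatever chat-completion client is used.

```python
# Minimal sketch of an actor-critic refinement loop for text-to-visualization.
# All names and prompts here are illustrative assumptions, not from the paper.

MAX_ROUNDS = 3  # assumed cap on critique/refine iterations


def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("wire this to your model client")


def generate_candidate(table: str, query: str) -> str:
    """Actor: propose plotting code and a short answer for the query."""
    prompt = (
        "You are given a data table and a question.\n"
        f"Table:\n{table}\n\nQuestion: {query}\n"
        "Return Python plotting code and a short textual answer."
    )
    return call_llm(prompt)


def critique(table: str, query: str, candidate: str) -> str:
    """Critic: flag problems with answer correctness, code, and chart quality."""
    prompt = (
        "Review the following answer and visualization code for the given table "
        "and question. List concrete problems, or reply 'OK' if none.\n"
        f"Table:\n{table}\n\nQuestion: {query}\n\nCandidate:\n{candidate}"
    )
    return call_llm(prompt)


def actor_critic_loop(table: str, query: str) -> str:
    """Generate, critique, and refine until the critic is satisfied or rounds run out."""
    candidate = generate_candidate(table, query)
    for _ in range(MAX_ROUNDS):
        feedback = critique(table, query, candidate)
        if feedback.strip().upper().startswith("OK"):
            break
        # Feed the critic's feedback back to the actor for a refined attempt.
        candidate = call_llm(
            "Revise this answer using the feedback.\n"
            f"Table:\n{table}\n\nQuestion: {query}\n"
            f"Previous attempt:\n{candidate}\n\nFeedback:\n{feedback}"
        )
    return candidate
```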
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: NLP datasets, automatic evaluation of datasets, benchmarking
Contribution Types: Data resources, Data analysis
Languages Studied: English
Submission Number: 7381