Prompt4Vis: Prompting Large Language Models with Example Mining for Tabular Data Visualization

Published: 2025, Last Modified: 21 Jan 2026, VLDB J. 2025, CC BY-SA 4.0
Abstract: We are in the era of Large Language Models (LLMs), which have transformed numerous technological domains within the database community. In this paper, we examine the application of LLMs to text-to-visualization (text-to-vis). Advances in natural language processing have made natural language interfaces an accessible and intuitive way to visualize tabular data. However, despite employing advanced neural network architectures, current methods for transforming natural language questions into data visualization (DV) queries, such as Seq2Vis, ncNet, and RGVisNet, still underperform, leaving significant room for improvement. We introduce Prompt4Vis, a novel framework that leverages LLMs and in-context learning to generate data visualizations from natural language. Because the effectiveness of in-context learning depends heavily on the choice of examples, selecting them well is critical; moreover, encoding a query's full database schema is not only costly but can also introduce inaccuracies. The framework therefore comprises two main components: (1) an example mining module that identifies highly effective examples to strengthen in-context learning for text-to-vis, and (2) a schema filtering module that simplifies database schemas. Comprehensive experiments on the NVBench dataset show that Prompt4Vis significantly outperforms the current state-of-the-art model, RGVisNet, by approximately 35.9% on the development set and 71.3% on the test set. To the best of our knowledge, Prompt4Vis is the first framework to incorporate in-context learning for text-to-vis, marking a pioneering step in the domain.
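To make the abstract's pipeline concrete, the sketch below shows where the two modules sit around an in-context-learning prompt: mine similar examples, filter the schema, then assemble the prompt text that would be sent to an LLM. Everything here is an illustrative assumption rather than the paper's method: the lexical-overlap heuristics, the function names (mine_examples, filter_schema, build_prompt), and the DV-query strings merely stand in for the learned components Prompt4Vis actually uses.

```python
from collections import Counter
from math import sqrt


def bow(text: str) -> Counter:
    """Bag-of-words vector for a piece of text (toy tokenizer)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def mine_examples(query: str, pool: list[dict], k: int = 2) -> list[dict]:
    """Stand-in for the example mining module: rank candidate
    (question, DV query) pairs by lexical similarity to the input
    question and keep the top k. The paper learns this selection."""
    return sorted(pool, key=lambda ex: cosine(bow(query), bow(ex["nl"])),
                  reverse=True)[:k]


def filter_schema(query: str, schema: dict[str, list[str]]) -> dict[str, list[str]]:
    """Stand-in for the schema filtering module: keep only tables whose
    name or column names overlap with the question's tokens, so the
    prompt need not encode the full database schema."""
    tokens = set(query.lower().split())
    return {t: cols for t, cols in schema.items()
            if t.lower() in tokens or tokens & {c.lower() for c in cols}}


def build_prompt(query: str, schema: dict[str, list[str]], pool: list[dict]) -> str:
    """Assemble an in-context-learning prompt from the mined examples
    and the filtered schema; the result would be sent to an LLM."""
    lines = ["Translate natural language questions into DV queries.", ""]
    for ex in mine_examples(query, pool):
        lines += [f"Question: {ex['nl']}", f"DV query: {ex['dv']}", ""]
    kept = filter_schema(query, schema)
    lines.append("Schema: " + "; ".join(f"{t}({', '.join(c)})" for t, c in kept.items()))
    lines += [f"Question: {query}", "DV query:"]
    return "\n".join(lines)


if __name__ == "__main__":
    pool = [
        {"nl": "Show the number of employees per department as a bar chart",
         "dv": "Visualize BAR SELECT department, COUNT(*) FROM employees GROUP BY department"},
        {"nl": "Plot total sales by year as a line chart",
         "dv": "Visualize LINE SELECT year, SUM(amount) FROM sales GROUP BY year"},
    ]
    schema = {"employees": ["id", "name", "department"],
              "sales": ["id", "year", "amount"]}
    print(build_prompt("Draw a bar chart of employees in each department",
                       schema=schema, pool=pool))
```

The similarity ranking and token-overlap filtering above only illustrate where example mining and schema filtering plug into the prompt; in the paper both are substantially more sophisticated than these lexical heuristics.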