Can LLMs Really Help Query Understanding In Web Search? A Practical Perspective

Published: 2025 · Last Modified: 09 Jan 2026 · CIKM 2025 · License: CC BY-SA 4.0
Abstract: As a core module of web search, query understanding aims to bridge the semantic gap between user queries and web documents, thereby improving the system's ability to deliver relevant results. Recently, Large Language Models (LLMs) have achieved significant breakthroughs that have fundamentally altered the workflow of existing search ranking tasks. However, few researchers have explored integrating LLMs into query understanding. In this paper, we investigate the potential of LLMs for query understanding through a comprehensive evaluation along three dimensions: term, structure, and topic. The evaluation covers several representative tasks, including segmentation, term weighting, error correction, query expansion, and intent recognition. The experimental results reveal that LLMs are particularly effective for query expansion and intent recognition but offer limited improvement elsewhere. This limitation may be attributed to LLMs primarily modeling the semantics of entire queries while lacking the capacity to capture finer-grained, token-level information. Additionally, we explore potential practical applications of LLMs in query understanding, such as combining LLMs with the evaluation and training of smaller models and constructing unsupervised training samples. Based on comprehensive empirical results, collaborative training emerges as a promising approach to leveraging LLMs for query understanding. We hope this research will advance the practical application of LLMs in query understanding and contribute to the development of this field.
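To make the two tasks where the abstract reports LLMs performing well more concrete, below is a minimal sketch (not taken from the paper) of how an LLM could be prompted for query expansion and intent recognition. The model name ("gpt-4o-mini"), prompts, intent taxonomy, and JSON output format are illustrative assumptions, not details from this work.

```python
# Hypothetical sketch of LLM-based query expansion and intent recognition.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the paper does not prescribe any particular provider, model, or prompt.
import json
from openai import OpenAI

client = OpenAI()

def expand_query(query: str, n_terms: int = 5) -> list[str]:
    """Ask the LLM for expansion terms likely to retrieve relevant web pages."""
    prompt = (
        f"Suggest {n_terms} expansion terms for the web search query below. "
        f"Return a JSON list of strings only.\n\nQuery: {query}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    # Parsing assumes the model follows the JSON-only instruction.
    return json.loads(resp.choices[0].message.content)

def classify_intent(query: str, intents: list[str]) -> str:
    """Ask the LLM to pick one intent label from a fixed taxonomy."""
    prompt = (
        f"Classify the search query into exactly one of these intents: "
        f"{', '.join(intents)}. Answer with the label only.\n\nQuery: {query}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    print(expand_query("jaguar top speed"))
    print(classify_intent(
        "jaguar top speed",
        ["informational", "navigational", "transactional"],
    ))
```

Both tasks operate on the whole query as a unit, which is consistent with the abstract's observation that LLMs handle query-level semantics better than finer-grained, token-level tasks such as segmentation or term weighting.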