Keywords: SEO, Citation Ranking, RAG, conversational search engines, LLM
TL;DR: We introduce C-SEO Bench, the first benchmark to evaluate conversational search engine optimization (C-SEO) methods across tasks, domains, and numbers of adopting actors. We find that, in contrast to traditional SEO, C-SEO is largely ineffective.
Abstract: Large Language Models (LLMs) are transforming search engines into Conversational Search Engines (CSE). Consequently, Search Engine Optimization (SEO) is shifting toward Conversational Search Engine Optimization (C-SEO). Dedicated C-SEO methods for modifying web documents to increase their visibility in CSE responses are beginning to emerge. However, these methods are often tested on only a narrow range of application domains, so we do not know whether a given C-SEO method would remain effective across a broad range of domains. Moreover, existing evaluations consider only a single-actor scenario in which one web document adopts a C-SEO method; in reality, multiple players are likely to competitively adopt cutting-edge C-SEO techniques, by analogy with the dynamics observed in traditional SEO. We present C-SEO Bench, the first benchmark designed to evaluate C-SEO methods across multiple tasks, domains, and numbers of actors. We consider two search tasks, question answering and product recommendation, with three domains each. We also formalize a new evaluation protocol with varying adoption rates among the involved actors. Our experiments reveal that most current C-SEO methods are not only largely ineffective but frequently have a negative impact on document ranking, the opposite of their intended effect. Instead, traditional SEO strategies, i.e., those aiming to improve the ranking of the source within the LLM context, are significantly more effective. We also observe that as the number of C-SEO adopters increases, the overall gains decrease, revealing the congested, zero-sum nature of the problem.
Croissant File: json
Dataset URL: https://huggingface.co/datasets/parameterlab/c-seo-bench
Code URL: https://github.com/parameterlab/c-seo-bench
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Submission Number: 285