Keywords: Large Language Models, Generative Engines, Optimization
TL;DR: In this work, we propose CORE, an optimization method that \textbf{C}ontrols \textbf{O}utput \textbf{R}ankings in g\textbf{E}nerative Engines for LLM-based search.
Abstract: The way customers search for products, compare options, and decide what to buy is rapidly changing with the introduction of large language models (LLMs). In particular, LLM-based search, also known as generative engines, enables shoppers to obtain direct recommendations instead of performing a conventional Google search. However, the ranking of these recommendations is heavily influenced by the initial order of products retrieved by the LLM. This dependency risks disadvantaging small businesses and independent creators by reducing their visibility and limiting their competitiveness online.
To address this risk, we propose CORE, an optimization method that \textbf{C}ontrols \textbf{O}utput \textbf{R}ankings in g\textbf{E}nerative Engines for LLM-based search. Since the choice of which search engine to query is determined by model developers and cannot be altered by end users, CORE instead targets the content returned by search engines: it appends strategically optimized content to the retrieved results to influence the ranking of outputs generated by the LLM. We introduce three representative forms of optimization content: string-based, reasoning-based, and review-based, and demonstrate their effectiveness in shaping output rankings. To evaluate CORE in realistic settings, we construct AmazonCOREBench, a large-scale benchmark comprising 15 product categories with 200 products each, where the top 10 recommendations per product are collected from Amazon’s search interface.
Extensive experiments on four LLMs with search capabilities (GPT-4o, Gemini-2.5, Claude-3.7, and Grok-3) demonstrate that, under its best optimization strategy, CORE achieves an average Promotion Success Rate of \textbf{91.4\% @Top-5}, \textbf{86.6\% @Top-3}, and \textbf{80.3\% @Top-1} across the 15 product categories, while preserving fluency in the optimized content and outperforming existing ranking manipulation methods.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 10034