Keywords: ad hoc retrieval; game theory; ranking competition
Abstract: In competitive retrieval settings, document publishers (authors)
modify their documents in response to induced rankings so as to
potentially improve their future rankings. Previous work has focused
on analyzing ranking-incentivized document modifications for a {\em single
query}. We present a novel theoretical and empirical study of
document modification strategies applied to potentially improve rankings for {\em multiple queries}, e.g., queries representing the same information need. Using game-theoretic analysis, we
show that, in contrast to the single-query setting, an equilibrium does
not necessarily exist; we provide a full characterization of when it
does for a basic family of ranking functions. We empirically study document modification strategies in the
multiple-queries setting by organizing ranking competitions. In
contrast to previous ranking competitions devised for the single-query
setting, we also used a neural ranker and, in some competitions, allowed
the use of generative AI tools to modify documents. We found that
publishers tend to mimic content from documents highly ranked in the
past, as in the single-query setting, although this trend was somewhat less pronounced when generative AI tools were allowed. We also found that it was much more
difficult with neural rankers to promote a document to the highest
rank simultaneously for multiple queries than it was with a feature-based learning-to-rank method. In addition, we demonstrate the merits of using
information induced from multiple queries to predict which document might
be the highest ranked in the next ranking for a given query.
Submission Number: 20