Abstract: Automatically rewriting a user’s query using traditional pseudo-relevance feedback (PRF) mechanisms typically increases a search system’s effectiveness in retrieving relevant documents. With recent advances in the generative capabilities of Large Language Models (LLMs), the effectiveness of PRF mechanisms that leverage LLMs has improved significantly. However, little prior work has explored the impact of generative relevance feedback on the fairness of the search results. In this work, we investigate how generative PRF for query rewriting influences the fair allocation of exposure in the search results. We propose a novel generative PRF mechanism for fairness, called Fair Generative Query Expansion (FGQE), that uses automatically generated query expansion terms. We investigate four prompting strategies and show that FGQE can be applied using zero-shot long-text generation to create effective new query terms. Our experiments on the TREC 2021 and TREC 2022 Fair Ranking Track collections demonstrate that all of our prompting strategies improve exposure allocation compared to both traditional and dense PRF baselines, achieving gains of up to approximately 8% in terms of the Attention Weighted Ranked Fairness (AWRF) metric. At the same time, FGQE improves fairness without degrading the relevance of the search results.
External IDs: dblp:conf/ecir/JanichMO25