Abstract: Many click models have been proposed to interpret logs of natural interactions with search engines and extract unbiased information for evaluation or learning. The experimental setup used to evaluate them typically involves measuring two metrics, namely the test perplexity for click prediction and the normalized discounted cumulative gain (nDCG) for relevance estimation. In both cases, the data used for training and testing is assumed to be collected using the same ranking policy. We question this assumption. Important downstream tasks based on click models involve evaluating a different policy than the training policy; that is, click models need to operate under policy distributional shift (PDS). We show that click models are sensitive to PDS. This can severely hinder their performance on the targeted task: conventional evaluation metrics cannot guarantee that a click model will perform equally well under distributional shift. To more reliably predict click model performance under PDS, we propose a new evaluation protocol. It allows us to compare the relative robustness of six types of click models under various shifts, training configurations, and downstream tasks. We obtain insights into the factors that worsen sensitivity to PDS and formulate guidelines to mitigate the risks of deploying policies based on click models.
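For concreteness, below is a minimal sketch of the two conventional evaluation metrics mentioned in the abstract, per-rank click perplexity and nDCG of the ranking induced by a click model's relevance estimates. It assumes NumPy and log data stored as per-rank arrays; the function names, array shapes, and gain formulation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def click_perplexity(predicted_click_probs, observed_clicks):
    """Per-rank perplexity of predicted click probabilities (lower is better).

    Both inputs are assumed to be arrays of shape (n_sessions, n_ranks),
    with predictions in (0, 1) and clicks in {0, 1}.
    """
    p = np.clip(predicted_click_probs, 1e-10, 1 - 1e-10)
    c = np.asarray(observed_clicks)
    # Log-likelihood of the observed click/skip at each rank.
    log_likelihood = c * np.log2(p) + (1 - c) * np.log2(1 - p)
    # One perplexity value per rank, averaged over sessions.
    return 2.0 ** (-log_likelihood.mean(axis=0))

def ndcg_at_k(estimated_relevance, true_relevance, k=10):
    """nDCG@k of the ranking obtained by sorting documents by the
    click model's estimated relevance, scored against true labels."""
    est = np.asarray(estimated_relevance, dtype=float)
    true = np.asarray(true_relevance, dtype=float)
    order = np.argsort(-est)[:k]
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))
    dcg = np.sum((2.0 ** true[order] - 1.0) * discounts)
    ideal = np.sort(true)[::-1][:len(order)]
    idcg = np.sum((2.0 ** ideal - 1.0) * discounts)
    return dcg / idcg if idcg > 0 else 0.0
```

Under the standard protocol questioned in the paper, both quantities would be computed on held-out data gathered by the same ranking policy that produced the training logs; the proposed protocol instead evaluates them on logs from a different (shifted) policy.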