Abstract: Current approaches to interpreting complex ranking models rely on local approximations of the ranking model by a simple ranker in the locality of the query. Since rankings depend on multiple relevance factors and aggregate multiple predictions, existing approaches that use a single simple ranker may be insufficient to approximate a complex model, resulting in low local fidelity. In this paper, we overcome this problem by using multiple simple rankers to better approximate the black-box ranking model. We pose the problem of local approximation as a Generalized Preference Coverage (GPC) problem that incorporates multiple simple rankers towards the post-hoc interpretability of ranking models. Our method, Multiplex, uses linear programming to judiciously select the explanation terms. We conduct extensive experiments on a variety of ranking models and report fidelity improvements of $37\% - 54\%$ over existing baselines and competitors. Finally, we qualitatively compare modern neural ranking models in terms of their explanations to better understand the differences between them, showcasing the practical utility of our explainer.
Paper Type: long
Consent To Share Data: yes