Abstract: We tackle the problem of machine unlearning within
neural information retrieval, termed Neural Machine UnRanking
NuMuR) for short. Many of the mainstream task- or model-agnostic approaches for machine unlearning were designed for
classification tasks. First, we demonstrate that these methods
perform poorly on NuMuR tasks due to the unique challenges
posed by neural information retrieval. Then, we develop a
methodology for NuMuR named Contrastive and Consistent
Loss (CoCoL), which effectively balances the objectives of data
forgetting and model performance retention. Experimental results
demonstrate that CoCoL facilitates more effective and controllable data removal while maintaining model performance.
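At a high level, the abstract describes CoCoL as balancing two objectives: forgetting the designated query-document pairs while retaining ranking quality on the remaining data. The paper gives the exact formulation; the PyTorch sketch below only illustrates that balancing idea, and the names `unlearning_loss`, `forget_batch`, `retain_batch`, `ref_scores`, and the weight `lam` are hypothetical, not the paper's notation.

```python
import torch
import torch.nn.functional as F

def unlearning_loss(model, forget_batch, retain_batch, ref_scores, lam=0.5):
    """Illustrative forgetting-vs-retention trade-off (not the paper's exact CoCoL).

    model: a scorer called as model(queries, docs) -> relevance logits (assumed).
    forget_batch / retain_batch: (queries, docs) tensors for the two data splits.
    ref_scores: the original (pre-unlearning) model's scores on retain_batch,
        used as a consistency target.
    """
    q_f, d_f = forget_batch
    q_r, d_r = retain_batch

    # Forgetting term: push relevance scores on the forget set toward
    # non-relevance (gradient-ascent-style objectives are another common choice).
    s_f = model(q_f, d_f)
    forget_loss = F.binary_cross_entropy_with_logits(s_f, torch.zeros_like(s_f))

    # Consistency term: keep scores on retained data close to the original
    # model's scores, preserving ranking performance.
    s_r = model(q_r, d_r)
    retain_loss = F.mse_loss(s_r, ref_scores)

    # lam trades off data forgetting against performance retention.
    return lam * forget_loss + (1.0 - lam) * retain_loss
```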