Abstract: News background linking is the problem of finding news articles that provide context or background for the news reported in a given article. It is challenging because, unlike news recommendation, the reader is assumed to be anonymous. To date, the most effective approach to this problem has been a brute-force one, in which the entire news article is issued as an ad-hoc search query to retrieve the background links; however, its effectiveness remains far from optimal. Motivated by the success of Large Language Models (LLMs) on several tasks, in particular text reranking, we explore in this work the potential of LLMs for reranking a candidate set of news articles retrieved by the full-article search approach. We propose a novel reranking approach that adopts prompt chaining: the LLM first analyzes the query article and its candidate links, then reranks a list of guided summaries of those candidates. Our findings show that aggregating the ranks obtained through our proposed approach, using the GPT-4 Turbo LLM, with the original ranks of the candidates yields a statistically significant improvement over the state-of-the-art (SOTA) baseline, establishing a new SOTA performance for the task.
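The abstract describes a three-step pipeline: analyze the query article, produce guided summaries of the candidates, and rerank those summaries, with a final rank-aggregation step against the original retrieval ranks. The following is a minimal sketch of that flow under stated assumptions: `llm_complete` is a placeholder for any chat-completion call (e.g., to GPT-4 Turbo), the prompt wordings are illustrative, and the averaged-rank fusion shown is one plausible aggregation scheme, not necessarily the paper's exact method.

```python
# Hypothetical sketch of the described prompt-chaining reranker.
# All prompts and the aggregation scheme are illustrative assumptions.

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM chat-completion API (e.g., GPT-4 Turbo)."""
    raise NotImplementedError

def rerank_with_prompt_chain(query_article: str, candidates: list[str]) -> list[int]:
    # Step 1: analyze the query article to identify the background context a reader needs.
    analysis = llm_complete(
        "Analyze this news article and list the background context a reader "
        f"would need to understand it:\n\n{query_article}"
    )

    # Step 2: produce a guided summary of each candidate, conditioned on that analysis.
    summaries = [
        llm_complete(
            f"Given these context needs:\n{analysis}\n\n"
            f"Summarize how this candidate article addresses them:\n\n{cand}"
        )
        for cand in candidates
    ]

    # Step 3: rerank the guided summaries by usefulness as background for the query.
    listing = "\n".join(f"[{i}] {s}" for i, s in enumerate(summaries))
    order = llm_complete(
        "Rank these candidate summaries by how well they provide background for "
        f"the query article. Return the indices, best first, comma-separated:\n\n{listing}"
    )
    return [int(i.strip()) for i in order.split(",")]  # naive parse; a sketch only

def aggregate_ranks(llm_order: list[int], original_order: list[int]) -> list[int]:
    # One plausible fusion: average each candidate's rank position across the
    # LLM ranking and the original retrieval ranking, then sort by the average.
    original_pos = {cand: rank for rank, cand in enumerate(original_order)}
    avg_score = {cand: (rank + original_pos[cand]) / 2
                 for rank, cand in enumerate(llm_order)}
    return sorted(avg_score, key=avg_score.get)
```

In practice the aggregation step is what the abstract credits with the statistically significant gain: neither the LLM ranking nor the original full-article-search ranking is used alone.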
DOI: 10.1007/978-3-031-88711-6_21