Efficient and practical approximation algorithms for advertising in content feeds

Published: 29 Jan 2025 · Last Modified: 29 Jan 2025 · WWW 2025 Poster · CC BY 4.0
Track: Economics, online markets and human computation
Keywords: Newsfeed Advertising, Ad Allocation, Approximation Algorithms, Matching, Externalities
TL;DR: We devise fast 2-approximation algorithms for an ad allocation problem in content feeds that requires ensuring contextual coherence and addressing users' decaying attention.
Abstract: Information feeds provided by platforms such as X (formerly Twitter) and TikTok are consumed by users on a daily basis. In this paper, we revisit the native advertising problem in feeds, initiated by Ieong et al. Given a sequence of organic items (e.g., videos or posts) relevant to a user's interests or information search, the goal is to design an algorithm that maximizes the reward (e.g., clicks) by placing advertisements interleaved with the organic content under two considerations: (1) an advertisement can only be inserted after a relevant content item; (2) the user's attention decays after consuming content or advertisements. These considerations provide a natural model for capturing both advertisement effectiveness and user experience. We design fast and practical 2-approximation greedy algorithms for the associated optimization problem, in contrast to the best-known practical algorithm, which only achieves an approximation factor of 4. Our algorithms exploit a counter-intuitive structural property of the problem: while top items seem more important due to the user's decaying attention, taking good care of the bottom items is key to obtaining improved approximation guarantees. We then provide the first comprehensive empirical evaluation of the studied problem, demonstrating the strong empirical performance of our algorithms.
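To make the problem setup concrete, the following is a minimal sketch of a naive greedy baseline for the allocation task the abstract describes. All names, the geometric decay factor `gamma`, and the single-pass greedy rule are illustrative assumptions; this is not the paper's 2-approximation algorithm, only a toy model of the constraints (relevance-restricted insertion slots and attention that decays with every consumed item).

```python
def greedy_ad_allocation(organic, ads, gamma=0.9):
    """Toy greedy baseline (not the paper's algorithm).

    organic: list of content items, shown in order.
    ads: dict mapping ad id -> (base_value, set of organic items
         the ad is relevant to).
    gamma: assumed per-item attention decay factor.

    An ad may be inserted only immediately after a relevant
    organic item; each insertion consumes attention, so it also
    discounts everything that follows.
    """
    used = set()
    feed = []          # final interleaved feed
    t = 0              # number of items consumed so far
    reward = 0.0
    for item in organic:
        feed.append(("organic", item))
        t += 1
        # Greedily pick the best unused ad relevant to this item.
        best = None
        for ad_id, (value, relevant) in ads.items():
            if ad_id in used or item not in relevant:
                continue
            gain = value * gamma ** t  # discounted click value
            if best is None or gain > best[1]:
                best = (ad_id, gain)
        if best is not None:
            ad_id, gain = best
            used.add(ad_id)
            feed.append(("ad", ad_id))
            reward += gain
            t += 1  # the ad itself consumes attention
    return feed, reward
```

For example, with two organic items and two ads where `a2` is both more valuable and more broadly relevant, the greedy rule places `a2` after the first item and leaves `a1` unplaced, since `a1` is not relevant to the second item.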
Submission Number: 672