Invariant Deep Uplift Modeling for Incentive Assignment in Online Marketing via Probability of Necessity and Sufficiency
TL;DR: This paper proposes an invariant-learning-based uplift modeling method that addresses the out-of-distribution generalization problem in online marketing.
Abstract: In online platforms, incentives (\textit{e.g.}, discounts, coupons) are used to boost user engagement and revenue. Uplift modeling methods are developed to estimate user responses from observational data, often incorporating distribution balancing to address selection bias. However, these methods are limited to in-distribution testing data, which mirrors the training data distribution. In reality, user features change continuously due to time, geography, and other factors, especially on complex online marketing platforms. Thus, an uplift modeling method that remains effective on out-of-distribution data is crucial. To address this, we propose a novel uplift modeling method, \textbf{I}nvariant \textbf{D}eep \textbf{U}plift \textbf{M}odeling (\textbf{IDUM}), which uses invariant learning to enhance out-of-distribution generalization by identifying causal factors that remain consistent across domains. IDUM further refines these features into necessary and sufficient factors and employs a masking component that reduces computational costs by selecting the most informative invariant features. A balancing discrepancy component is also introduced to mitigate selection bias in observational data. We conduct extensive experiments on public and real-world datasets to demonstrate IDUM's effectiveness in both in-distribution and out-of-distribution scenarios in online marketing. Furthermore, we provide theoretical analysis and related proofs to support IDUM's generalizability.
Lay Summary: Online platforms leverage incentives (e.g., discounts, coupons) to enhance user engagement and revenue. Uplift modeling methods estimate user responses from observational data, using distribution balancing to address selection bias. However, these methods are constrained by in-distribution testing and fail to adapt to real-world scenarios where user features shift dynamically due to time, geography, and platform complexity. To tackle this, we propose \textbf{I}nvariant \textbf{D}eep \textbf{U}plift \textbf{M}odeling (IDUM), which enhances out-of-distribution generalization by identifying domain-invariant causal features. IDUM disentangles these features into necessary (directly influencing behavior) and sufficient (indirectly related) factors, employs a masking mechanism that prioritizes informative invariant features for efficiency, and integrates a balancing discrepancy component to mitigate selection bias. Experiments on public and real-world datasets validate IDUM's effectiveness in both in-distribution and out-of-distribution settings, supported by theoretical analysis of generalization error bounds.
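To make the "balancing discrepancy" idea concrete: a common way to mitigate selection bias in observational uplift data is to penalize the distance between the representation distributions of treated and control users. Below is a minimal numpy sketch of a linear-kernel MMD (distance between mean embeddings) used as such a penalty. This is an illustrative, standard technique, not the paper's exact component; the function name `linear_mmd` and the choice of the linear kernel are assumptions for the sake of the example.

```python
import numpy as np

def linear_mmd(phi_treated: np.ndarray, phi_control: np.ndarray) -> float:
    """Squared linear-kernel MMD between two sets of representations.

    A standard balancing discrepancy: minimizing this term during training
    pushes the treated and control representation distributions together,
    which mitigates selection bias in observational data.
    """
    mu_t = phi_treated.mean(axis=0)  # mean embedding of treated users
    mu_c = phi_control.mean(axis=0)  # mean embedding of control users
    diff = mu_t - mu_c
    return float(diff @ diff)        # squared distance between the means

# Toy check: identical samples have zero discrepancy; a shifted
# distribution has a strictly larger one.
rng = np.random.default_rng(0)
phi_a = rng.normal(size=(1000, 8))
phi_b = rng.normal(size=(1000, 8)) + 1.0
print(linear_mmd(phi_a, phi_a))  # 0.0
print(linear_mmd(phi_a, phi_b) > 0.0)
```

In a full uplift model this term would be added to the factual outcome loss with a trade-off weight, so the shared encoder learns representations that are both predictive and balanced across treatment groups.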
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Primary Area: Applications
Keywords: Uplift modeling, Invariant learning, Incentive assignment, Online marketing
Submission Number: 3824