Track: long paper (up to 10 pages)
Keywords: Large Language Model, Causal Reasoning, Causal Effect
Abstract: In recommendation environments with per-user resource constraints, allocating a limited display slot to an item that a user would have purchased without intervention leads to promotional waste. While traditional uplift modeling estimates the incremental impact of a recommendation, it struggles to operate directly on the unstructured textual data common in modern user histories and item profiles. Furthermore, directly prompting Large Language Models (LLMs) for causal effect estimation often results in unstable numerical reasoning. To address this, we propose CURA-CLM, a lightweight end-to-end framework that bridges textual data and strict per-user slot constraints. Our approach first uses LLMs to extract structured causal features from unstructured text over a computationally manageable candidate set. We then apply robust statistical estimators to estimate the Conditional Average Treatment Effect (CATE). Finally, we introduce a stratum-aware allocation objective that maximizes net revenue by explicitly penalizing resource allocation to non-persuadable user groups. Experiments on synthetic datasets demonstrate that CURA-CLM outperforms direct LLM prompting and standard uplift ranking under per-user capacity constraints.
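The stratum-aware allocation idea in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual objective: the function name, the linear penalty form, and the `p_sure_thing` probability (chance the user is a non-persuadable "sure thing" who would buy anyway) are all assumptions introduced here for clarity.

```python
# Hypothetical sketch: allocate up to k display slots per user by
# ranking items on estimated CATE, penalized by the probability that
# the allocation would go to a non-persuadable (sure-thing) stratum.
# The penalty form and all names are illustrative assumptions.

def allocate_slots(candidates, k, penalty=0.5):
    """candidates: list of (item_id, cate, p_sure_thing) tuples.
    Returns up to k item_ids with positive penalized uplift,
    best first."""
    scored = [
        (item_id, cate - penalty * p_sure_thing)
        for item_id, cate, p_sure_thing in candidates
    ]
    # Drop items whose adjusted uplift is non-positive (promotional waste).
    scored = [s for s in scored if s[1] > 0]
    scored.sort(key=lambda s: s[1], reverse=True)
    return [item_id for item_id, _ in scored[:k]]

candidates = [
    ("item_a", 0.30, 0.10),  # persuadable: high uplift, unlikely to buy anyway
    ("item_b", 0.05, 0.90),  # sure thing: would likely purchase without the slot
    ("item_c", 0.20, 0.20),
]
print(allocate_slots(candidates, k=2))  # -> ['item_a', 'item_c']
```

Under this toy scoring, the sure-thing item is filtered out even though its raw CATE is positive, which is the behavior the abstract's penalty term is meant to induce.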
Anonymization: This submission has been anonymized for double-blind review by removing identifying information such as names, affiliations, and identifying URLs.
Submission Number: 219