LU-500: A Logo Benchmark for Concept Unlearning

ICLR 2026 Conference Submission6718 Authors

16 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Benchmark, Concept Unlearning
Abstract: Current concept unlearning approaches for copyright protection have made notable progress in handling artistic styles and portrait-like representations. However, the task of unlearning company logos remains largely unexplored. This challenge stems from logos' simplicity, omnipresence, and strong associations with branded products, while often occupying only a minimal area within an image. To bridge this gap, we introduce LU-500, a comprehensive benchmark for logo unlearning, consisting of 10 prompts for generating images of logos from Fortune Global 500 companies. Our benchmark features two tracks: LUex-500, with explicit prompts, and LUim-500, which requires implicit reasoning to cover real-world scenarios such as standard usage and adversarial attacks. We further propose five novel, multi-grained evaluation metrics, ranging from local logo regions to global image attributes and spanning both pixel and latent spaces, enabling robust quantitative analysis of complex visual scenes. Experimental results reveal that existing inference-time unlearning techniques such as NP, SLD, and SEGA, as well as fine-tuning-based methods such as ESD and Forget-Me-Not, all fall short on logo unlearning. To investigate this limitation, we propose a prompt-based baseline using large language models, which demonstrates significant improvements, highlighting the potential of unlearning in semantic space. Additionally, we analyze the correlation between an image's unlearning performance and its characteristics, such as logo area, location, and fractal dimension. We find that SSIM may serve as a useful control metric for logo unlearning.
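The abstract does not specify how SSIM is applied as a control metric, but a plausible reading is that it quantifies how much of the overall image is preserved when only the logo concept should be removed. The sketch below, under that assumption, computes a simplified single-window SSIM (whole-image statistics rather than the standard sliding Gaussian window) between an original image and its post-unlearning counterpart; all names here are illustrative, not from the paper.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified global SSIM between two grayscale images.

    Statistics are computed over the whole image instead of the
    standard 11x11 Gaussian window, so this is a coarse variant
    intended only to illustrate the metric's structure.
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # standard SSIM stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Stand-in images: "original" vs. a noise-perturbed "unlearned" output.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64))
unlearned = np.clip(original + rng.normal(0, 25, size=original.shape), 0, 255)

score_same = global_ssim(original, original)   # identical images -> 1.0
score_diff = global_ssim(original, unlearned)  # perturbed image -> below 1.0
print(round(score_same, 4), score_diff < 1.0)
```

In the unlearning setting, a high SSIM outside the logo region paired with a low logo-detection score would indicate that the concept was removed without collateral damage to the rest of the scene.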
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 6718