Exemplar-Free Incremental Deepfake Detection

Published: 01 Jan 2024 · Last Modified: 16 Apr 2025 · ECAI 2024 · License: CC BY-SA 4.0
Abstract: Incremental Deepfake Detection (IDD) aims to continuously update models with new domain data, adapting to evolving forgery techniques. Existing works require extra buffers to store old exemplars in order to maintain previously learned knowledge. However, this is infeasible when previous data is unavailable due to storage and privacy constraints. This paper focuses on a more challenging but practical exemplar-free IDD problem, which requires no old exemplars when updating the model. To address this problem, we design a domain-adaptive module that uses independent adapters to learn domain-specific knowledge for each domain, avoiding the use of old exemplars. In addition, we introduce an uncertainty optimization strategy to optimize the adapters more efficiently. With excellent scalability, our method can be easily deployed to various models. To simulate practical scenarios, we design two new protocols based on diverse deepfake datasets. Extensive experimental results demonstrate that our method outperforms state-of-the-art methods by a large margin. The code is available at https://github.com/woody-panda/EF-IDD.
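
For intuition, the sketch below shows one way the described design could look in PyTorch: a frozen shared backbone with one independent adapter appended at each incremental session, while previously learned adapters are frozen, so no old exemplars need to be stored. The class and method names (DomainAdapter, ExemplarFreeIDD, add_domain) and the bottleneck size are illustrative assumptions, not the authors' implementation, and the uncertainty optimization strategy is not sketched here; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn


class DomainAdapter(nn.Module):
    """Lightweight residual bottleneck adapter holding domain-specific knowledge.
    (Hypothetical structure; the paper's adapters may differ.)"""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        # Residual connection keeps the shared representation intact.
        return x + self.up(self.act(self.down(x)))


class ExemplarFreeIDD(nn.Module):
    """Frozen shared backbone plus one independent adapter per seen domain."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int = 2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # shared knowledge stays fixed
        self.adapters = nn.ModuleList()      # grows by one per new domain
        self.head = nn.Linear(feat_dim, num_classes)
        self.feat_dim = feat_dim

    def add_domain(self):
        """Call at the start of each incremental session: freeze existing
        adapters and append a fresh trainable one -- no old exemplars needed."""
        for adapter in self.adapters:
            for p in adapter.parameters():
                p.requires_grad = False
        self.adapters.append(DomainAdapter(self.feat_dim))

    def forward(self, x, domain_id: int = -1):
        feats = self.backbone(x)
        feats = self.adapters[domain_id](feats)  # route through one adapter
        return self.head(feats)


if __name__ == "__main__":
    # Stand-in backbone for demonstration only.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 512))
    model = ExemplarFreeIDD(backbone, feat_dim=512)
    model.add_domain()                        # incremental session 1
    logits = model(torch.randn(4, 3, 224, 224))
    print(logits.shape)                       # torch.Size([4, 2])
```

Because each session only trains the newly added adapter, updating the model for a new forgery domain never touches data or parameters from earlier domains, which is the property the exemplar-free setting requires.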