The rapid advancement of deepfake technology poses significant threats to social trust. Although recent deepfake detectors have exhibited promising results on deepfakes of the same type as those seen during training, their effectiveness degrades significantly on novel deepfakes crafted by unseen algorithms, owing to the gap in forgery patterns. Some studies have enhanced detectors by adapting to continuously emerging deepfakes through incremental learning. Despite this progress, they overlook the scarcity of novel samples, which easily leads to insufficient learning of new forgery patterns. To mitigate this issue, we introduce the Dynamic Mixed-Prototype (DMP) model, which dynamically increases the number of prototypes to adapt efficiently to novel deepfakes. Specifically, DMP represents both the real and fake classes with multiple prototypes, learning novel patterns by expanding the prototype set while jointly retaining the knowledge encoded in previous prototypes. Furthermore, we propose a Prototype-Guided Replay strategy and a Prototype Representation Distillation loss, both of which effectively prevent forgetting of learned knowledge based on the prototypical representation of samples. Our method surpasses existing incremental deepfake detectors across four datasets and exhibits superior generalizability to novel deepfakes learned from limited samples.
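For intuition, the following is a minimal PyTorch-style sketch of how a mixed-prototype head with dynamic expansion might look. The class and method names, the cosine-similarity scoring, and the optional freezing of previously learned prototypes are illustrative assumptions rather than the paper's exact formulation, which additionally relies on Prototype-Guided Replay and the Prototype Representation Distillation loss to retain old knowledge.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedPrototypeHead(nn.Module):
    """Sketch: each class (0 = real, 1 = fake) owns several learnable prototypes.
    A sample is scored against every prototype, and the class logit is the
    maximum cosine similarity over that class's prototypes."""

    def __init__(self, feat_dim: int, protos_per_class: int = 2, num_classes: int = 2):
        super().__init__()
        self.feat_dim = feat_dim
        # One ParameterList per class so prototypes can be expanded independently.
        self.prototypes = nn.ModuleList(
            nn.ParameterList(
                nn.Parameter(torch.randn(feat_dim)) for _ in range(protos_per_class)
            )
            for _ in range(num_classes)
        )

    def expand(self, class_idx: int, num_new: int = 1, freeze_old: bool = True):
        """Append prototypes for a newly encountered deepfake type.
        Freezing old prototypes is one simple way to preserve earlier knowledge;
        it stands in here for the paper's replay and distillation mechanisms."""
        if freeze_old:
            for p in self.prototypes[class_idx]:
                p.requires_grad_(False)
        for _ in range(num_new):
            self.prototypes[class_idx].append(nn.Parameter(torch.randn(self.feat_dim)))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        feats = F.normalize(feats, dim=-1)                                  # (B, D)
        logits = []
        for class_protos in self.prototypes:
            protos = F.normalize(torch.stack(list(class_protos)), dim=-1)   # (P, D)
            sims = feats @ protos.t()                                       # (B, P)
            logits.append(sims.max(dim=-1).values)                          # best-matching prototype
        return torch.stack(logits, dim=-1)                                  # (B, num_classes)


# Usage: expand the fake class when a new forgery method appears,
# then score backbone features against all prototypes.
head = MixedPrototypeHead(feat_dim=512)
head.expand(class_idx=1, num_new=1)      # add a prototype for the novel deepfake type
scores = head(torch.randn(4, 512))       # (4, 2) class logits
```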