Abstract: Although deepfake detection models have made significant progress, performance degradation on unseen datasets remains a challenge. To address this, we introduce a data generalization approach that uses style transfer to generate images across diverse domains. With style transfer, we create a new domain in which domain-specific information is eliminated and subsequently train our model on it. Our approach enhances the generalization performance of the detector by adding the style-transferred images to the training data of the deepfake detector. Through experiments, we confirm that performance on the training dataset remains unchanged while achieving an improvement of 8.8% on an unseen dataset. We thereby verify the effectiveness of style-transferred images in generalizing performance to unseen datasets.
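Below is a minimal sketch of the augmentation idea described above: re-styling training images so that domain-specific cues are suppressed while real/fake labels are kept. This is not the paper's implementation; the `StyleAugmentedDataset` wrapper and the simple channel-statistic `transfer_style` function (a pixel-space stand-in for a learned style-transfer network such as AdaIN with a VGG encoder) are illustrative assumptions, and the base dataset is assumed to yield PyTorch image tensors of shape (C, H, W) with values in [0, 1].

```python
import torch
from torch.utils.data import Dataset


def transfer_style(content: torch.Tensor, style: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Match the per-channel mean/std of `content` to those of `style`.

    Both tensors are (C, H, W). This simplified pixel-space transfer is a
    stand-in for a learned style-transfer model.
    """
    c_mean = content.mean(dim=(1, 2), keepdim=True)
    c_std = content.std(dim=(1, 2), keepdim=True) + eps
    s_mean = style.mean(dim=(1, 2), keepdim=True)
    s_std = style.std(dim=(1, 2), keepdim=True) + eps
    return (content - c_mean) / c_std * s_std + s_mean


class StyleAugmentedDataset(Dataset):
    """Wraps an (image, label) dataset and randomly re-styles each image
    with the style of another sample, weakening domain-specific cues while
    leaving the real/fake label unchanged."""

    def __init__(self, base: Dataset, p: float = 0.5):
        self.base = base  # base deepfake dataset returning (image, label)
        self.p = p        # probability of applying style transfer

    def __len__(self) -> int:
        return len(self.base)

    def __getitem__(self, idx: int):
        image, label = self.base[idx]
        if torch.rand(1).item() < self.p:
            # Borrow the style of a randomly chosen sample.
            style_idx = torch.randint(len(self.base), (1,)).item()
            style_image, _ = self.base[style_idx]
            image = transfer_style(image, style_image).clamp(0.0, 1.0)
        return image, label
```

The detector is then trained as usual on `StyleAugmentedDataset(base_dataset)`, so style-transferred images are mixed into the training stream rather than replacing the original data.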