Mark My Words: Dangers of Watermarked Images in ImageNet

Published: 04 Mar 2023, Last Modified: 14 Oct 2024
Venue: ICLR 2023 Workshop on Trustworthy ML (Poster)
Readers: Everyone
Keywords: Explainable AI, Trustworthy AI, Machine Learning
TL;DR: We examined the impact of watermarks on popular pre-trained Computer Vision architectures, and found that many ImageNet classes, such as "monitor", "broom", "apron", and "safe" rely on spurious correlations due to watermark imprints.
Abstract: The use of pre-trained networks, especially those trained on ImageNet, has become common practice in Computer Vision. However, prior research has indicated that a significant number of images in the ImageNet dataset contain watermarks, making pre-trained networks susceptible to learning artifacts such as watermark patterns within their latent spaces. In this paper, we assess the extent to which popular pre-trained architectures display such behavior and determine which classes are most affected. Additionally, we examine the impact of watermarks on the extracted features. Contrary to the popular belief that Chinese logographic watermarks impact only the "carton" class, our analysis reveals that a variety of ImageNet classes, such as "monitor", "broom", "apron" and "safe", rely on spurious correlations. Finally, we propose a simple approach to mitigate this issue in fine-tuned networks: ignoring the encodings from the feature-extractor layer of ImageNet pre-trained networks that are most susceptible to watermark imprints.
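The mitigation described in the last sentence lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of the general idea: mask out watermark-prone channels of a frozen ImageNet feature extractor before fine-tuning a classification head. The channel indices, the `MaskedFeatureClassifier` helper, and the class count are placeholders chosen for illustration, not values or code from the paper.

```python
import torch
import torchvision.models as models

# Hypothetical indices of feature channels most susceptible to watermark
# imprints; the paper identifies such encodings empirically, these numbers
# are placeholders only.
WATERMARK_SENSITIVE_CHANNELS = [12, 87, 301]

# ImageNet pre-trained backbone with the classifier removed, exposing the
# 2048-dimensional penultimate features.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

class MaskedFeatureClassifier(torch.nn.Module):
    """Fine-tuned head that ignores watermark-prone feature channels."""

    def __init__(self, backbone, masked_channels, num_classes, feat_dim=2048):
        super().__init__()
        self.backbone = backbone
        mask = torch.ones(feat_dim)
        mask[masked_channels] = 0.0  # zero out the susceptible encodings
        self.register_buffer("mask", mask)
        self.head = torch.nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        with torch.no_grad():  # keep the feature extractor frozen
            feats = self.backbone(x)
        return self.head(feats * self.mask)

# Usage: fine-tune only the head on the downstream task.
model = MaskedFeatureClassifier(backbone, WATERMARK_SENSITIVE_CHANNELS, num_classes=10)
logits = model(torch.randn(1, 3, 224, 224))
```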
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/mark-my-words-dangers-of-watermarked-images/code) (via CatalyzeX)