Deep Perceptual Similarity is Adaptable to Ambiguous Contexts

Published: 03 Nov 2023, Last Modified: 23 Dec 2023 · NLDL 2024
Keywords: deep features, perceptual similarity, similarity metrics, computer vision, image similarity, image quality assessment, ambiguity
TL;DR: Evaluating whether deep perceptual similarity metrics can be trained to improve performance in specific contexts that may contradict the contexts in which they are known to perform well.
Abstract: This work examines the adaptability of Deep Perceptual Similarity (DPS) metrics to contexts beyond those that align with average human perception, i.e., the contexts in which the standard metrics are known to perform well. Prior work has shown that DPS metrics are good at estimating human perception of similarity, so-called perceptual similarity. However, it remains unknown whether such metrics can be adapted to other contexts. In this work, DPS metrics are evaluated for their adaptability to different, contradictory similarity contexts. Such contexts are created by randomly ranking six image distortions. The metrics are adapted to consider distortions more or less disruptive to similarity depending on their place in the random ranking. This is done by training pretrained CNNs to measure similarity according to a given context. The adapted metrics are also evaluated on a perceptual similarity dataset to determine whether adapting to a ranking degrades their prior performance. The findings show that DPS metrics can be adapted with high performance. While the adapted metrics struggle with the same contexts as the baselines, performance is improved in 99% of cases. Finally, it is shown that the adaptation is not significantly detrimental to prior performance on perceptual similarity. The implementation of this work is available online.
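DPS metrics of the kind discussed in the abstract are commonly computed as an LPIPS-style distance over pretrained CNN feature maps: channel-normalize the features at each layer, take the squared difference, and average spatially. The sketch below illustrates that computation with NumPy, using random arrays as stand-ins for real network activations; the function names and uniform layer weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def normalize_channels(feat, eps=1e-8):
    # Unit-normalize each spatial position's feature vector along the
    # channel axis (axis 0), as done in LPIPS-style similarity metrics.
    norm = np.sqrt((feat ** 2).sum(axis=0, keepdims=True)) + eps
    return feat / norm

def dps_distance(feats_x, feats_y, weights=None):
    # Deep perceptual distance between two images, given lists of
    # per-layer feature maps (each of shape [channels, height, width]):
    # sum over layers of the spatially averaged squared difference
    # between channel-normalized features.
    if weights is None:
        weights = [1.0] * len(feats_x)
    total = 0.0
    for fx, fy, w in zip(feats_x, feats_y, weights):
        diff = (normalize_channels(fx) - normalize_channels(fy)) ** 2
        total += w * diff.sum(axis=0).mean()
    return total

# Stand-in "features" for two images from a single hypothetical layer.
rng = np.random.default_rng(0)
fa = rng.normal(size=(16, 8, 8))
fb = rng.normal(size=(16, 8, 8))
print(dps_distance([fa], [fa]))  # identical inputs: distance is 0
print(dps_distance([fa], [fb]))  # distinct inputs: positive distance
```

Adapting such a metric to a contradictory context, as described above, would then amount to fine-tuning the feature extractor (or the layer weights) so that distortions ranked as less disruptive in the given context yield smaller distances.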
Git: https://github.com/LTU-Machine-Learning/Analysis-of-Deep-Perceptual-Loss-Networks
Permission: pdf
Submission Number: 20