Keywords: Semantic segmentation, Fourier transform, Partial supervision
Abstract: In this short paper, we experiment with the Fourier transform as a loss for neural networks, in place of more common losses (cross-entropy, Dice, Boundary loss). We start from a simple assumption: the annotator bias and noise in the annotation are located in the high frequencies of the annotation, whereas the low-frequency information contains the core of what we want to learn. Experiments on two different datasets show that a very small fraction of the low frequencies is enough to successfully train a neural network, even though those frequencies are not sufficient to reconstruct the segmentation directly.
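The core idea — comparing prediction and annotation only on their low-frequency Fourier components — can be sketched as follows. This is a minimal illustration in NumPy, not the authors' exact formulation; the window size `keep` and the squared-magnitude penalty are assumptions for illustration.

```python
import numpy as np

def low_freq_fourier_loss(pred, target, keep=4):
    """Penalize differences between pred and target only in a small
    central (low-frequency) window of their 2-D Fourier spectra.

    pred, target: 2-D arrays of the same shape (e.g. soft segmentation maps).
    keep: half-width of the retained low-frequency window (an assumption,
          standing in for the paper's 'small fraction of low frequencies').
    """
    # 2-D FFT of both maps, with the zero frequency shifted to the center
    Fp = np.fft.fftshift(np.fft.fft2(pred))
    Ft = np.fft.fftshift(np.fft.fft2(target))
    # Crop a small central window: only the lowest spatial frequencies
    h, w = pred.shape
    ch, cw = h // 2, w // 2
    win_p = Fp[ch - keep:ch + keep + 1, cw - keep:cw + keep + 1]
    win_t = Ft[ch - keep:ch + keep + 1, cw - keep:cw + keep + 1]
    # Mean squared magnitude of the spectral difference on that window
    return float(np.mean(np.abs(win_p - win_t) ** 2))
```

High-frequency annotation noise (e.g. jagged object boundaries) falls outside the retained window, so it does not contribute to this loss.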
Submission Number: 121