Abstract: Image-to-image translation is a popular task in deep learning. When a paired dataset of examples is not available, one of the most effective and widely used approaches is the cycle consistency loss: an inverse mapping is learned that translates the output of the network back to the source domain, which reduces the space of possible mappings. Nevertheless, the network can learn to take shortcuts, applying the target domain only weakly so that the reverse translation becomes easier, and therefore produce unsatisfactory results. For this reason, this paper introduces an additional constraint during the training of an unpaired image-to-image translation network: the model is forced to attend to the same regions both when applying the target domain and when reversing the translation. The approach has been tested on several datasets, showing a consistent improvement in the generated results.
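To make the setting concrete, the cycle consistency loss referenced above is, in its standard form (CycleGAN, Zhu et al. 2017), defined for mappings G : X → Y and F : Y → X as

\[ \mathcal{L}_{\mathrm{cyc}}(G,F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[ \lVert F(G(x)) - x \rVert_1 \right] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[ \lVert G(F(y)) - y \rVert_1 \right]. \]

The attention constraint described above could then take a form such as

\[ \mathcal{L}_{\mathrm{att}}(G,F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[ \lVert A_G(x) - A_F(G(x)) \rVert_1 \right], \]

where A_G and A_F denote the attention maps produced by the forward and inverse generators. Note that this particular form, and the choice of the L1 distance, are illustrative assumptions; the abstract does not specify the exact loss used.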