Domain Adaptation with Morphologic Segmentation
Abstract
We present a novel domain adaptation
framework that uses morphologic segmentation to translate images from
arbitrary input domains (real and synthetic) into a uniform output
domain. Our framework builds on an established image-to-image
translation pipeline: we first transform the input image into a
generalized representation that encodes morphology and semantics, the
edge-plus-segmentation map (EPS), and then translate this
representation into the output domain. Images translated into the
output domain are photo-realistic and free of artifacts that are
commonly present in real data sets (e.g., lens flare or motion blur)
and synthetic data sets (e.g., unrealistic textures or simplified
geometry). Our goal is to establish a preprocessing step that unifies
data from multiple sources into a common representation, thereby
facilitating the training of downstream tasks in computer vision. In
this way, neural networks for existing tasks can be trained on a larger
variety of data while being less prone to overfitting to specific data
sets. We demonstrate the effectiveness of our approach by qualitatively
and quantitatively evaluating our method on four data sets of simulated
and real urban scenes.
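
To make the two-stage pipeline concrete, the following is a minimal sketch in PyTorch. It assumes an encoder that predicts the EPS map (one edge channel plus per-class segmentation channels) and a generator that translates the EPS map into the output domain; the module names, layer choices, and class count are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of the two-stage pipeline described above, in PyTorch.
# Module names (EPSEncoder, OutputGenerator) and layers are illustrative only.
import torch
import torch.nn as nn

NUM_CLASSES = 8  # assumed number of semantic classes


class EPSEncoder(nn.Module):
    """Stage 1: map an RGB image to an edge-plus-segmentation (EPS) map,
    i.e., one edge channel plus per-class segmentation channels."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.edge_head = nn.Conv2d(32, 1, kernel_size=1)           # edge map
        self.seg_head = nn.Conv2d(32, num_classes, kernel_size=1)  # segmentation

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        features = self.backbone(image)
        edges = torch.sigmoid(self.edge_head(features))
        segmentation = torch.softmax(self.seg_head(features), dim=1)
        # Concatenate into the generalized EPS representation.
        return torch.cat([edges, segmentation], dim=1)


class OutputGenerator(nn.Module):
    """Stage 2: translate the domain-agnostic EPS map into the
    photo-realistic output domain."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + num_classes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, eps_map: torch.Tensor) -> torch.Tensor:
        return self.net(eps_map)


# Usage: any input image (real or synthetic) passes through the same two
# stages, yielding an image in the uniform output domain.
image = torch.rand(1, 3, 128, 128)   # dummy input
eps = EPSEncoder()(image)            # stage 1: image -> EPS map
unified = OutputGenerator()(eps)     # stage 2: EPS map -> output domain
print(eps.shape, unified.shape)
```

The key design point the sketch illustrates is that only the first stage sees domain-specific appearance; once an image is reduced to the EPS map, the second stage operates on a representation shared by all input domains.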