Instance Semantic Segmentation Benefits from Generative Adversarial Networks

27 Sept 2021, 17:50 (modified: 26 Nov 2021, 02:02) · DGMs and Applications @ NeurIPS 2021 Poster · Readers: Everyone
Keywords: GANs, Instance Semantic Segmentation, Computer Vision, Deep Learning
TL;DR: We redefine mask prediction as generation in a GAN with effective training. We test on various data domains, with regular or irregular and coalescing shapes, without custom settings. Generated masks account for crisper boundaries, clutter, and details.
Abstract: In the design of instance segmentation networks that reconstruct masks, segmentation is often taken at its literal definition -- assigning each pixel a label. This has led to thinking of the problem as a template-matching one, with the goal of minimizing the loss between the reconstructed and the ground-truth pixels. Rethinking reconstruction networks as generators, we define mask prediction as a GAN game: a segmentation network generates the masks, and a discriminator network decides on the quality of the masks. To demonstrate this game, we show effective modifications on the general segmentation framework in Mask R-CNN. We find that playing the game in feature space is more effective than in pixel space, leading to stable training between the discriminator and the generator; that predicting object coordinates should be replaced by predicting contextual regions for objects; and that, overall, the adversarial loss improves performance and removes the need for any custom settings per data domain. We test our framework in various domains and report on cellphone recycling, autonomous driving, large-scale object detection, and medical glands. We observe that, in general, GANs yield masks that account for crisper boundaries, clutter, small objects, and details, whether in domains of regular shapes or of heterogeneous and coalescing shapes. Our code for reproducing the results is publicly available.
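The abstract describes training the segmentation network with a reconstruction objective plus an adversarial term from a discriminator that judges mask quality. As a minimal sketch of that combined-loss idea (not the paper's implementation): assume a per-pixel binary cross-entropy reconstruction term, a scalar discriminator score in [0, 1] for the generated mask, and a hypothetical weighting parameter `lam` that balances the two terms.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy, with clipping for numerical stability."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def generator_loss(pred_mask, gt_mask, disc_score_fake, lam=0.1):
    """Combined generator objective (illustrative only).

    pred_mask / gt_mask: arrays of per-pixel mask probabilities and labels.
    disc_score_fake: discriminator's scalar score for the generated mask;
                     the generator is rewarded when the discriminator is
                     fooled into scoring it as real (target = 1).
    lam: hypothetical weight on the adversarial term (not from the paper).
    """
    recon = bce(pred_mask, gt_mask)
    adv = bce(np.array([disc_score_fake]), np.array([1.0]))
    return recon + lam * adv
```

For example, a near-perfect mask that also fools the discriminator yields a loss near zero, while a blurry mask the discriminator rejects yields a much larger one; in the paper's setting the discriminator operates on features rather than raw pixels, which this scalar-score sketch abstracts away.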