Learning Segmentation from Object Color

Published: 01 Jan 2020 (MIPR 2020), Last Modified: 11 May 2023
Abstract: We propose Color Shift GAN (CSGAN), a method for learning to segment an object class without pixel-wise annotations. We exploit a single textual annotation of the object's basic color per image to learn the semantics of an object class. Using only this per-image color label, we drastically reduce labeling effort. We created a dataset of 29,910 car images and annotated the basic color of each car's bodywork. Our model reaches 61.4% IoU on our test data; trained with an additional 128 pixel-wise annotations, CSGAN reaches 62.0%. Adding 45,150 unlabeled images to the training set raises IoU to 65.0% without a single pixel-wise annotation, confirming that our weak objective is sufficient for learning segmentation.
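The abstract does not spell out the architecture, but the core intuition is that a mask which covers exactly the object is the only mask that lets a generator recolor the object convincingly. Below is a minimal, hypothetical sketch of that idea: the mask network, the alpha-blend recoloring, and the network sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the color-shift idea described in the abstract.
# MaskNet, the recoloring operation, and all sizes below are assumptions
# for illustration; they are not the CSGAN architecture from the paper.
import torch
import torch.nn as nn


class MaskNet(nn.Module):
    """Tiny stand-in for a segmentation generator: predicts a soft mask."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)  # (B, 1, H, W) soft object mask


def color_shift(image, mask, target_rgb):
    """Recolor only the masked region toward a target basic color.

    A simple alpha blend stands in for whatever recoloring CSGAN uses.
    """
    target = target_rgb.view(1, 3, 1, 1).expand_as(image)
    return mask * target + (1.0 - mask) * image


if __name__ == "__main__":
    images = torch.rand(4, 3, 64, 64)            # batch of car images
    target_rgb = torch.tensor([1.0, 0.0, 0.0])   # annotated basic color, e.g. "red"
    masks = MaskNet()(images)
    shifted = color_shift(images, masks, target_rgb)
    print(shifted.shape)  # torch.Size([4, 3, 64, 64])
```

If the predicted mask covers exactly the car body, the recolored image remains a plausible photo of a car in the target color; a discriminator conditioned on the cheap textual color label can therefore supply a training signal for the mask without any pixel-wise ground truth.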