Conditional Networks for Few-Shot Semantic Segmentation
Kate Rakelly, Evan Shelhamer, Trevor Darrell, Alyosha Efros, Sergey Levine
Feb 12, 2018 (modified: Jun 04, 2018) · ICLR 2018 Workshop Submission
Abstract: Few-shot learning methods aim for good performance in the low-data regime. Structured output tasks such as segmentation present difficulties for few-shot learning because of their high dimensionality and the statistical dependencies among outputs. To tackle this problem, we propose the co-FCN, a conditional network learned by end-to-end optimization to perform fast, accurate few-shot segmentation. The network conditions on an annotated support set of images via feature fusion to do inference on an unannotated query image. Once learned, our conditioning approach requires no further optimization for new data. Annotations are instead conditioned on in a single forward pass, making our method suitable for interactive use. We evaluate our co-FCN with dense and sparse annotations, and it achieves competitive accuracy even when given only one positive pixel and one negative pixel, reducing the annotation burden for segmenting new concepts.
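The abstract describes conditioning on an annotated support set via feature fusion with the query features. A minimal numpy sketch of one plausible such scheme — masked global pooling of support features into a task embedding, then tile-and-concatenate fusion with the query feature map — is below. The pooling and fusion choices here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def support_embedding(support_feat, support_mask):
    """Pool support features over annotated pixels into a task embedding.
    support_feat: (C, H, W) feature map; support_mask: (H, W) binary annotation.
    Masked average pooling is an assumption for illustration."""
    masked = support_feat * support_mask[None]            # zero out unannotated pixels
    z = masked.sum(axis=(1, 2)) / max(support_mask.sum(), 1.0)
    return z                                              # (C,)

def fuse(query_feat, z):
    """Tile the task embedding spatially and concatenate with query features.
    The fused map would then feed a segmentation head (omitted)."""
    _, H, W = query_feat.shape
    tiled = np.broadcast_to(z[:, None, None], (z.shape[0], H, W))
    return np.concatenate([query_feat, tiled], axis=0)    # (C + C_z, H, W)

# Toy shapes: 16-channel features on an 8x8 grid.
rng = np.random.default_rng(0)
query = rng.standard_normal((16, 8, 8))
support = rng.standard_normal((16, 8, 8))
mask = (rng.random((8, 8)) > 0.5).astype(np.float64)

fused = fuse(query, support_embedding(support, mask))
print(fused.shape)  # (32, 8, 8)
```

Because conditioning is a single forward pass through functions like these (no per-task gradient steps), new support annotations can be incorporated interactively, as the abstract notes.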
Keywords: semantic segmentation, few-shot learning
TL;DR: We propose a conditional network learned end-to-end to perform few-shot semantic segmentation.