Abstract: We evaluate first-order model-agnostic meta-learning algorithms (including FOMAML and Reptile) on few-shot image segmentation, present a novel neural network architecture built for fast learning, which we call EfficientLab, and leverage a formal definition of the test error of meta-learning algorithms to decrease
error on out-of-distribution tasks. We show state-of-the-art results on the FSS-1000 dataset by meta-training EfficientLab with FOMAML and using Bayesian
optimization to infer the optimal hyperparameters of the test-time adaptation routine.
We also construct a benchmark dataset, binary PASCAL, for the empirical study
of how image segmentation meta-learning systems improve as a function of the
number of labeled examples. On the binary PASCAL dataset, we show that
when generalizing out of the meta-training distribution, meta-learned initializations provide
only a small improvement in accuracy over joint training but require significantly fewer gradient updates. Our code and meta-learned model are available
at https://github.com/ml4ai/mliis.
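The abstract refers to a formal definition of the test error of meta-learning algorithms. As a hedged illustration only, using notation common in the first-order MAML/Reptile literature (the paper's exact formulation and symbols may differ), the quantity of interest at meta-test time can be written as the expected held-out loss after the adaptation routine U is applied to the initialization θ on each sampled task:

```latex
% Illustrative formalization (notation assumed, not taken verbatim from the paper):
% \theta  : meta-learned initialization
% \tau    : a task drawn from the task distribution p(\tau)
% U_\tau  : the test-time adaptation routine (e.g., k steps of SGD on \tau's support set)
% \mathcal{L}_\tau : loss on \tau's held-out (query) examples
\mathbb{E}_{\tau \sim p(\tau)}\!\left[\, \mathcal{L}_{\tau}\big( U_{\tau}(\theta) \big) \,\right]
```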
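For readers unfamiliar with the first-order methods named above, the following is a minimal sketch of a Reptile-style outer update on one sampled task, assuming a PyTorch segmentation model and a per-pixel binary cross-entropy loss. The function name, loss choice, and hyperparameter values are illustrative assumptions, not the paper's implementation (the released code at the repository above is authoritative).

```python
import copy

import torch
import torch.nn.functional as F


def reptile_outer_step(model, task_loader, inner_lr=1e-3, inner_steps=5, meta_lr=0.1):
    """One first-order (Reptile-style) meta-update on a single sampled task.

    Sketch only: names and hyperparameters are illustrative, not the paper's.
    """
    # Adapt a copy of the current initialization to the task with plain SGD.
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _, (images, masks) in zip(range(inner_steps), task_loader):
        opt.zero_grad()
        # Binary segmentation loss on the predicted per-pixel logits.
        loss = F.binary_cross_entropy_with_logits(adapted(images), masks)
        loss.backward()
        opt.step()

    # Move the initialization toward the task-adapted weights. No second-order
    # derivatives are computed, which is what makes the method first-order.
    with torch.no_grad():
        for p, p_task in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_task - p))
```

Meta-training repeats this step over tasks sampled from the meta-training set; FOMAML differs mainly in that its outer step applies the gradient from the final inner step rather than the weight difference.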