Abstract: Meta-learning is a technique to transfer learning from a pre-built
model on known tasks to build a model for unknown tasks. Gradient-based meta-learning algorithms are one such family that uses gradient descent for model updates. These meta-learning
technique of gradient descent for model updates. These meta-learning
architectures are hierarchical in nature and hence incur large training times, which are prohibitive for industries relying on models
trained using the most recent data to make relevant predictions.
To address these issues, we propose MetaFaaS, a function-as-a-service (FaaS) paradigm on public cloud to build a scalable and cost-performance-optimal deployment framework for gradient-based
meta-learning architectures. We propose an analytical model to
predict the cost and training time on cloud for a given workload.
We validate our approach on multiple meta-learning architectures
(MAML, ANIL, and ALFA) and attain a speed-up of over 5× in training
time on FaaS. We also propose eALFA, a compute-efficient meta-learning architecture, which achieves a speed-up of over 9× compared with ALFA. We present our results with four quasi-benchmark
datasets in meta-learning, namely, Omniglot, Mini-Imagenet (Imagenet), FC100 (CIFAR), and CUBirds200.
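The gradient-based family referenced in the abstract (e.g., MAML) trains a shared initialization through nested gradient loops: an inner loop adapts to each task, and an outer loop updates the initialization so that adaptation works well. As a minimal sketch only, assuming a linear model, squared-error loss, and the common first-order approximation (all helper names here are illustrative, not from the paper):

```python
import numpy as np

def loss_and_grad(w, X, y):
    """Squared-error loss for a linear model y ~ X @ w, with its gradient."""
    resid = X @ w - y
    return 0.5 * np.mean(resid ** 2), X.T @ resid / len(y)

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.01):
    """One first-order MAML-style outer update: adapt to each task with a
    single inner gradient step, then average the post-adaptation gradients
    to update the shared initialization w."""
    meta_grad = np.zeros_like(w)
    for X, y in tasks:
        _, g = loss_and_grad(w, X, y)
        w_adapted = w - inner_lr * g              # inner-loop adaptation
        _, g_post = loss_and_grad(w_adapted, X, y)
        meta_grad += g_post                        # first-order meta-gradient
    return w - outer_lr * meta_grad / len(tasks)
```

The nested structure is what makes these architectures hierarchical and expensive to train: every outer step requires a full pass of inner-loop adaptation over the task batch, which is the cost that a FaaS deployment can parallelize across function invocations.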