Abstract: The constant introduction of standardized benchmarks in the literature has helped accelerate recent advances in meta-learning research. These benchmarks offer a way to compare different algorithms fairly, and the wide range of available datasets allows full control over the complexity of this evaluation. However, in a large majority of the code available online, the data pipeline is specific to one dataset, and testing on another dataset requires significant
rework. We introduce Torchmeta, a library built on top of PyTorch that enables seamless and consistent evaluation of meta-learning algorithms on multiple datasets, by providing data-loaders for most of the standard benchmarks
in few-shot classification and regression, with a new meta-dataset abstraction.
It also features some extensions for PyTorch to simplify the development of
models compatible with meta-learning algorithms. The code is available here:
https://github.com/tristandeleu/pytorch-meta.
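As a brief illustration of the data-loader interface described above, the sketch below follows the usage pattern from the repository's README: a few-shot Omniglot meta-training set is created with a helper function, and tasks are batched with a meta data loader. The `omniglot` helper and `BatchMetaDataLoader` are part of Torchmeta's public API, but exact argument names and defaults should be checked against the repository.

```python
# Minimal sketch: a 5-way 5-shot Omniglot meta-training set,
# following the usage example in the Torchmeta repository.
from torchmeta.datasets.helpers import omniglot
from torchmeta.utils.data import BatchMetaDataLoader

# Each element of the dataset is a task: a small classification problem
# with its own support ("train") and query ("test") sets.
dataset = omniglot("data",
                   ways=5,          # number of classes per task
                   shots=5,         # support examples per class
                   test_shots=15,   # query examples per class
                   meta_train=True,
                   download=True)

# BatchMetaDataLoader batches tasks together, analogous to PyTorch's DataLoader.
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)

for batch in dataloader:
    train_inputs, train_targets = batch["train"]  # e.g. (16, 5*5, 1, 28, 28)
    test_inputs, test_targets = batch["test"]     # e.g. (16, 5*15, 1, 28, 28)
    # ... run the inner/outer loop of a meta-learning algorithm here
    break
```

The "extensions for PyTorch" mentioned in the abstract refer to meta-modules, whose `forward` accepts an explicit dictionary of parameters so that adapted weights (for example, after a MAML inner-loop update) can be used without modifying the model in place. The sketch below is a hypothetical model assembled from these modules, again mirroring the README; the class name `MetaMLP` and its sizes are illustrative only.

```python
import torch.nn as nn
from torchmeta.modules import MetaModule, MetaSequential, MetaLinear

class MetaMLP(MetaModule):
    """Toy classifier built from Torchmeta's meta-modules (illustrative sketch)."""
    def __init__(self, in_features, num_classes, hidden=64):
        super().__init__()
        self.classifier = MetaSequential(
            MetaLinear(in_features, hidden),
            nn.ReLU(),
            MetaLinear(hidden, num_classes))

    def forward(self, inputs, params=None):
        # When `params` is None, the module's own parameters are used;
        # otherwise the provided (e.g. adapted) parameters are substituted.
        return self.classifier(inputs,
                               params=self.get_subdict(params, "classifier"))
```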