Keywords: learning curve, deep network, analysis, asymptotic error, learning efficiency, power law
Abstract: A learning curve models a classifier’s test error as a function of the number of training samples. Prior work shows that learning curves can be used to select model parameters and extrapolate performance. We investigate how to use learning curves to analyze the impact of design choices, such as pre-training, architecture, and data augmentation. We propose a method to robustly estimate learning curves, abstract their parameters into error and data-reliance, and evaluate the effectiveness of different parameterizations. We also provide several interesting observations based on learning curves for a variety of image classification models.
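The keywords mention a power-law parameterization, and the abstract describes estimating a learning curve's parameters (asymptotic error and data-reliance). A minimal sketch of this idea, assuming the common power-law form `e(n) = e_inf + beta * n^(-alpha)` (the exact parameterization and fitting procedure in the paper may differ); the function names and synthetic data below are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical power-law learning curve: test error as a function of
# training-set size n. e_inf is the asymptotic error, beta scales the
# data-reliant term, and alpha controls how fast error decays with data.
def learning_curve(n, e_inf, beta, alpha):
    return e_inf + beta * n ** (-alpha)

# Synthetic observations: test errors measured at log-spaced training sizes.
sizes = np.array([100, 200, 400, 800, 1600, 3200, 6400], dtype=float)
true_params = (0.08, 2.0, 0.5)  # e_inf, beta, alpha (made up for this sketch)
rng = np.random.default_rng(0)
errors = learning_curve(sizes, *true_params) + rng.normal(0.0, 0.002, sizes.size)

# Fit the curve with positivity bounds to keep the estimate stable.
params, _ = curve_fit(
    learning_curve, sizes, errors,
    p0=(0.1, 1.0, 0.5),
    bounds=([0.0, 0.0, 0.0], [1.0, 10.0, 2.0]),
)
e_inf_hat, beta_hat, alpha_hat = params
print(f"asymptotic error ~ {e_inf_hat:.3f}, decay exponent ~ {alpha_hat:.2f}")
```

Once fitted, the asymptotic error summarizes performance in the large-data limit, while the remaining terms capture how strongly the model relies on additional training data.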
One-sentence Summary: We revisit learning curves as a tool for analyzing the impact of deep network design on performance.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2010.11029/code)
Reviewed Version (pdf): https://openreview.net/references/pdf?id=Ls_zaNBHfB