ON ANYTIME LEARNING AT MACROSCALE
Abstract: In many practical applications of machine learning, data arrives sequentially over time in large chunks.
Practitioners must then decide how to allocate their computational budget in order to obtain the
best performance at any point in time. Online learning theory for convex optimization suggests that
the best strategy is to use data as soon as it arrives. However, this might not be the best strategy when
using deep non-linear networks, particularly when these perform multiple passes over each chunk
of data, rendering the overall training distribution non-i.i.d. In this paper, we formalize this learning setting
in the simplest scenario in which each data chunk is drawn from the same underlying distribution,
and make a first attempt at empirically answering the following questions: How long should the
learner wait before training on the newly arrived chunks? What architecture should the learner adopt?
Should the learner increase capacity over time as more data is observed? We probe this learning
setting using convolutional neural networks trained on classic computer vision benchmarks as well
as a large transformer model trained on a large-scale language modeling task.
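To make the setting concrete, below is a minimal sketch (not taken from the paper) of macroscale anytime learning: data arrives in large chunks from a fixed distribution, and the learner chooses how many chunks to accumulate, i.e. how long to wait, before retraining. The chunk generator, the toy linear model, and the `wait` parameter are illustrative assumptions, not the authors' experimental protocol.

```python
# Illustrative sketch of the chunk-based anytime-learning setting.
# All names (make_chunk, train, anytime_learning, wait) are hypothetical.
import random

def make_chunk(n=1000, dim=8):
    """Draw one chunk of (x, y) pairs from a fixed linear-Gaussian task."""
    w_true = [0.5] * dim
    chunk = []
    for _ in range(n):
        x = [random.gauss(0, 1) for _ in range(dim)]
        y = sum(wi * xi for wi, xi in zip(w_true, x)) + random.gauss(0, 0.1)
        chunk.append((x, y))
    return chunk

def train(model, data, epochs=3, lr=0.01):
    """A few SGD passes over the accumulated data; multiple passes per chunk
    are what make the overall training stream non-i.i.d."""
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            pred = sum(wi * xi for wi, xi in zip(model, x))
            err = pred - y
            for i in range(len(model)):
                model[i] -= lr * err * x[i]
    return model

def anytime_learning(num_chunks=10, wait=2, dim=8):
    """Accumulate `wait` chunks before each retraining round."""
    model = [0.0] * dim
    buffer = []
    for t in range(num_chunks):
        buffer.extend(make_chunk(dim=dim))
        # Trade-off: retrain now for better anytime performance,
        # or keep waiting to train once on a larger dataset.
        if (t + 1) % wait == 0:
            model = train(model, list(buffer))
    return model

# wait=1 recovers the "use data as soon as it arrives" strategy.
model = anytime_learning(wait=2)
```

Setting `wait=1` corresponds to the greedy strategy suggested by online convex optimization, while larger values of `wait` trade worse intermediate (anytime) performance for fewer, larger training rounds.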