FLUID: A Unified Evaluation Framework for Flexible Sequential Data

Published: 23 Mar 2023, Last Modified: 23 Mar 2023
Accepted by: TMLR
Abstract: Modern machine learning methods excel when training data is IID, large-scale, and well labeled. Learning in less ideal conditions remains an open challenge. The sub-fields of few-shot, continual, transfer, and representation learning have made substantial strides in learning under adverse conditions, each affording distinct advantages through its methods and insights. These methods address different challenges, such as data arriving sequentially or scarce training examples; however, the difficult conditions an ML system will face over its lifetime often cannot be anticipated prior to deployment. Therefore, general ML systems that can handle the many challenges of learning in practical settings are needed. To foster research toward the goal of general ML methods, we introduce a new unified evaluation framework, FLUID (Flexible Sequential Data). FLUID integrates the objectives of few-shot, continual, transfer, and representation learning while enabling the comparison and integration of techniques across these subfields. In FLUID, a learner faces a stream of data and must make sequential predictions while choosing how to update itself, adapting quickly to novel classes, and handling changing data distributions, all while accounting for the total amount of compute. We conduct experiments on a broad set of methods, which shed new insight on the advantages and limitations of current techniques and point to new research problems. As a starting point toward more general methods, we present two new baselines which outperform the other evaluated methods on FLUID.
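The streaming protocol described in the abstract — predict each incoming sample before its label is revealed, then let the learner decide how to update itself — can be sketched as a minimal evaluation loop. All names below (`MajorityClassLearner`, `evaluate_stream`, the stream format) are illustrative assumptions, not the actual FLUID API; see the linked repository for the real implementation.

```python
class MajorityClassLearner:
    """Toy learner: predicts the most frequently seen label so far."""

    def __init__(self):
        self.counts = {}

    def predict(self, x):
        # Before any labels have been seen, fall back to a default.
        if not self.counts:
            return None
        return max(self.counts, key=self.counts.get)

    def update(self, x, y):
        # In FLUID the learner chooses how (and whether) to update;
        # this toy version simply tracks label frequencies.
        self.counts[y] = self.counts.get(y, 0) + 1


def evaluate_stream(learner, stream):
    """Sequentially predict each sample, revealing the label only afterward."""
    correct = total = 0
    for x, y in stream:
        pred = learner.predict(x)   # prediction happens before the label is seen
        correct += int(pred == y)
        total += 1
        learner.update(x, y)        # label revealed; learner may adapt
    return correct / total
```

A fuller version in the spirit of the paper would also meter the compute spent inside `update`, since FLUID accounts for the total amount of compute.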
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: De-anonymized and trimmed to 12 pages for the camera-ready version.
Code: https://github.com/RAIVNLab/FLUID
Assigned Action Editor: ~Laurent_Charlin1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 565