Keywords: SSL, Self-Supervised Learning, Representation Learning, Kernels, Probabilistic Methods
Abstract: The grand goal of AI research, and particularly of Self-Supervised Learning (SSL), is to produce systems that can successfully solve any possible task. In contrast, current evaluation methods available to AI researchers typically rely on a fixed collection of hand-picked downstream benchmarks. Hence, a large amount of effort is put into designing and searching for large collections of evaluation tasks that can serve as a proxy for our grand goal. We argue that such a rigid evaluation protocol creates a silent bottleneck in AI research. To remedy this, we define a probabilistic space of downstream tasks, obtained by adopting a distribution over tasks and by defining Task Priors. Under this view, one can evaluate a model's performance over the set of all possible downstream tasks. Beyond establishing a new standard for evaluation, we believe that Task Priors will accelerate the pace of research in SSL, where downstream task evaluation is generally the sole signal available to researchers.
Submission Number: 151