Abstract: Understanding how agents learn to generalize — and, in particular, to extrapolate — in
high-dimensional, naturalistic environments remains a challenge for both machine learning
and the study of biological agents. One approach to this has been the use of function
learning paradigms, which allow agents’ empirical patterns of generalization for smooth
scalar functions to be described precisely. However, to date, such work has not succeeded
in identifying mechanisms that acquire the kinds of general-purpose representations over
which function learning can operate to exhibit the patterns of generalization observed in
human empirical studies. Here, we present a framework for how a learner may acquire
such representations, which then support generalization (and extrapolation in particular) in a few-shot fashion in the domain of scalar function learning. Taking inspiration from a classic theory of visual processing, we
construct a self-supervised encoder that implements the basic inductive bias of invariance
under topological distortions. We show that the resulting representations outperform those from
other models for unsupervised time series learning in several downstream function learning
tasks, including extrapolation.
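The released code implements this contrastive approach; as a rough illustration only, the following is a minimal sketch (not the authors' implementation) of how invariance under topological distortions of the input axis could be instilled contrastively: a function sampled on a grid and a monotonically rewarped resampling of the same function are treated as a positive pair under an InfoNCE objective. All names (sample_function, warp), the architecture, and the hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch: contrastive invariance to monotone distortions of the x-axis.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_function(n=64):
    # A smooth scalar function on [0, 1]: a random mixture of sinusoids.
    x = np.linspace(0.0, 1.0, n)
    freqs = np.random.uniform(0.5, 3.0, size=3)
    phases = np.random.uniform(0.0, 2 * np.pi, size=3)
    return np.sin(2 * np.pi * np.outer(x, freqs) + phases).sum(axis=1)

def warp(y):
    # Positive view: resample the same function under a random monotone
    # distortion (a homeomorphism) of the input axis.
    n = len(y)
    t = np.sort(np.random.uniform(0.0, 1.0, size=n))
    t = (t - t[0]) / (t[-1] - t[0])  # anchor the endpoints at 0 and 1
    return np.interp(t, np.linspace(0.0, 1.0, n), y)

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(1000):
    batch = [sample_function() for _ in range(32)]
    a = torch.tensor(np.stack(batch), dtype=torch.float32)
    b = torch.tensor(np.stack([warp(y) for y in batch]), dtype=torch.float32)
    za = F.normalize(encoder(a), dim=1)
    zb = F.normalize(encoder(b), dim=1)
    logits = za @ zb.T / 0.1  # InfoNCE with temperature 0.1
    loss = F.cross_entropy(logits, torch.arange(len(batch)))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under this objective, matched (original, warped) pairs are pulled together in embedding space while mismatched pairs in the batch are pushed apart, so the encoder is rewarded for representing what is stable under reparameterizations of the input axis.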
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Added space after table titles.
Code: https://github.com/SimonSegert/functionlearning-contrastive-tmlr
Assigned Action Editor: ~Kevin_Swersky1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 9