Masked Prediction: A Parameter Identifiability View

Published: 31 Oct 2022, Last Modified: 07 Jan 2023
NeurIPS 2022 Accept
Readers: Everyone
Keywords: masked prediction, self-supervised learning, parameter identifiability, tensor decomposition
Abstract: The vast majority of work in self-supervised learning has focused on assessing recovered features via a chosen set of downstream tasks. While there are several commonly used benchmark datasets, this lens of feature learning requires assumptions about the downstream tasks that are not inherent to the data distribution itself. In this paper, we present an alternative lens, one of parameter identifiability: assuming data comes from a parametric probabilistic model, we train a self-supervised learning predictor with a suitable parametric form, and ask whether the parameters of the optimal predictor can be used to extract the parameters of the ground-truth generative model. Specifically, we focus on latent-variable models capturing sequential structure, namely Hidden Markov Models with both discrete and conditionally Gaussian observations. We adopt masked prediction as the self-supervised learning task and study the optimal masked predictor. We show that parameter identifiability is governed by the task difficulty, which is determined by the choice of data model and the number of tokens to predict. On the technical side, we uncover close connections with the uniqueness of tensor rank decompositions, a widely used tool in studying identifiability through the lens of the method of moments.
TL;DR: This work offers a new lens for understanding self-supervised learning: parameter identifiability. We show that, with proper choices of parametric form and prediction task, masked prediction can recover the parameters of HMMs.
Supplementary Material: pdf
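
As a hedged illustration of the tensor connection the abstract alludes to: for a discrete HMM, conditioning on the middle hidden state renders three consecutive observations mutually independent, so their joint-probability tensor admits a rank-k CP decomposition whose middle factor has columns proportional to the emission distributions. The sketch below is not the paper's construction; the toy sizes, variable names, and the use of tensorly's CP routine are all illustrative assumptions.

```python
# Minimal sketch (assumptions: toy sizes, tensorly's non-negative CP solver;
# not the paper's estimator). Recovers a discrete HMM's emission matrix, up
# to column permutation, from the third-order co-occurrence tensor of three
# consecutive observations.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)
k, d, n_windows = 3, 6, 100_000   # hidden states, observation symbols, samples

# Random ground-truth HMM; columns are probability distributions.
T = rng.dirichlet(np.ones(k), size=k).T    # T[i, j] = P(h_{t+1} = i | h_t = j)
O = rng.dirichlet(np.ones(d), size=k).T    # O[x, h] = P(x_t = x | h_t = h)
pi = rng.dirichlet(np.ones(k))             # initial state distribution

def sample_window():
    """Sample three consecutive observations (x1, x2, x3) from the HMM."""
    h = rng.choice(k, p=pi)
    xs = []
    for _ in range(3):
        xs.append(rng.choice(d, p=O[:, h]))
        h = rng.choice(k, p=T[:, h])
    return xs

# Empirical tensor M3[i, j, l] ~= P(x1 = i, x2 = j, x3 = l). Conditioned on
# the middle hidden state h2, the three observations are independent, so M3
# is (approximately) a rank-k CP tensor whose middle factor stacks the
# emission distributions O[:, h] as columns.
M3 = np.zeros((d, d, d))
for _ in range(n_windows):
    i, j, l = sample_window()
    M3[i, j, l] += 1.0
M3 /= n_windows

# CP uniqueness (Kruskal's condition) is what makes the emission parameters
# identifiable from these moments.
weights, factors = non_negative_parafac(tl.tensor(M3), rank=k)
O_hat = factors[1] / factors[1].sum(axis=0, keepdims=True)  # normalize columns
print("Estimated emission columns (up to permutation):")
print(O_hat.round(2))
```

Given enough samples and a generic HMM with k <= d, the normalized columns of the middle factor match the true emission distributions up to permutation; this CP-uniqueness machinery is the same tool the paper connects to identifiability from the optimal masked predictor.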