Why Size Matters: Feature Coding as Nystrom Sampling

Oriol Vinyals, Yangqing Jia, Trevor Darrell

Jan 24, 2013 (modified: Jan 24, 2013) ICLR 2013 submission
  • Abstract: Recently, the computer vision and machine learning communities have favored feature extraction pipelines that rely on a coding step followed by a linear classifier, owing to their overall simplicity, the well-understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points. We view this as an approximation to the function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step that selects a subset of all data points. Since bounds are known on the approximation power of Nystrom sampling as a function of the number of samples (i.e. the dictionary size), we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use them as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as the dictionary grows. This model may help explain the positive effect of codebook size and justify the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity.
  • Decision: conferenceOral-iclr2013-workshop
  • Authorids: oriol18@gmail.com, jiayq84@gmail.com, trevordarrell@gmail.com
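The Nystrom view described in the abstract can be sketched as follows: choose m landmark points (playing the role of the dictionary), and approximate the full n x n kernel matrix from the n x m cross-kernel and the m x m landmark kernel. This is a minimal illustration, not the paper's implementation; the RBF kernel, the uniform landmark sampling, and all function names here are assumptions made for the sketch.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom_approx(X, m, gamma=1.0, seed=0):
    # Sample m landmarks (the "dictionary") and form the Nystrom
    # approximation K ~= C W^+ C^T of the full kernel matrix.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)       # n x m cross-kernel
    W = rbf_kernel(X[idx], X[idx], gamma)  # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 5))
K = rbf_kernel(X, X)  # exact (but in general expensive) kernel matrix

# Approximation error shrinks as the dictionary grows, mirroring the
# accuracy-vs-codebook-size behavior the abstract describes.
errs = [np.linalg.norm(K - nystrom_approx(X, m)) for m in (10, 50, 150)]
```

The approximation bounds referenced in the abstract control exactly this kind of error as a function of m, which is what lets dictionary size stand in as a proxy for achievable accuracy.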