Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers

Published: 31 Oct 2022, Last Modified: 21 Oct 2022 · NeurIPS 2022 Accept
Keywords: approximation theory, generalization bounds, sample complexities, learning theory
TL;DR: We propose a new notion of "statistically meaningful" approximation and show that neural nets can statistically-meaningfully approximate Boolean circuits and Turing machines.
Abstract: A common lens to theoretically study neural net architectures is to analyze the functions they can approximate. However, the constructions from approximation theory often have unrealistic aspects, for example, reliance on infinite precision to memorize target function values. To address this issue, we propose a formal definition of statistically meaningful approximation, which requires the approximating network to exhibit good statistical learnability. We present case studies on statistically meaningful approximation for two classes of functions: Boolean circuits and Turing machines. We show that overparameterized feedforward neural nets can statistically meaningfully approximate Boolean circuits with sample complexity depending only polynomially on the circuit size, not the size of the approximating network. In addition, we show that transformers can statistically meaningfully approximate Turing machines with computation time bounded by T, requiring sample complexity polynomial in the alphabet size, state space size, and log(T). Our analysis introduces new tools for generalization bounds that provide much tighter sample complexity guarantees than the typical VC-dimension or norm-based bounds, which may be of independent interest.
Supplementary Material: pdf