Simplicity bias in the parameter-function map of deep neural networks

28 May 2019 (modified: 05 May 2023) · Submitted to ICML Deep Phenomena 2019
Keywords: simplicity bias, parameter-function map, generalization
TL;DR: A very strong bias towards simple outputs is observed in many simple input-output maps. The parameter-function map of deep networks is found to be biased in the same way.
Abstract: The idea that neural networks may exhibit a bias towards simplicity has a long history. Simplicity bias provides a way to quantify this intuition. It predicts, for a broad class of input-output maps which can describe many systems in science and engineering, that simple outputs are exponentially more likely to occur upon uniform random sampling of inputs than complex outputs are. This simplicity bias behaviour has been observed for systems ranging from the RNA sequence-to-secondary-structure map, to systems of coupled differential equations, to models of plant growth. Deep neural networks can be viewed as a mapping from the space of parameters (the weights) to the space of functions (how inputs get transformed to outputs by the network). We show that this parameter-function map obeys the necessary conditions for simplicity bias, and numerically show that it is hugely biased towards functions with low descriptional complexity. We also demonstrate a Zipf-like power-law relation between a function's probability and its rank. A bias towards simplicity may help explain why neural nets generalize so well.
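The sampling experiment the abstract describes can be reproduced in miniature. The sketch below is not the authors' code; the architecture (one ReLU hidden layer of width 40), the i.i.d. Gaussian parameter distribution, the choice of 7 Boolean inputs, and the sample count are all illustrative assumptions. It draws network parameters uniformly at random from that distribution, records the Boolean function each draw implements on all 2^7 inputs as a 128-bit string, and tabulates the empirical probability of each function, whose sorted values can be inspected for the Zipf-like probability-rank relation.

    # Minimal sketch of random sampling of the parameter-function map.
    # Assumptions (not from the paper's code): 1 hidden ReLU layer of
    # width 40, i.i.d. Gaussian weights, 7 Boolean inputs, 100k samples.
    import itertools
    from collections import Counter

    import numpy as np

    rng = np.random.default_rng(0)

    n_inputs = 7  # functions over {0,1}^7, i.e. 128-bit output strings
    X = np.array(list(itertools.product([0, 1], repeat=n_inputs)), dtype=float)

    def random_function(hidden=40, sigma=1.0):
        """One draw from the parameter-function map: sample all weights
        i.i.d. Gaussian and return the induced Boolean function as a
        bit string over every possible input."""
        W1 = rng.normal(0.0, sigma, (n_inputs, hidden))
        b1 = rng.normal(0.0, sigma, hidden)
        w2 = rng.normal(0.0, sigma, hidden)
        b2 = rng.normal(0.0, sigma)
        h = np.maximum(X @ W1 + b1, 0.0)   # ReLU hidden layer
        out = h @ w2 + b2
        return "".join("1" if o > 0 else "0" for o in out)

    # Estimate P(f) by uniform random sampling of parameters.
    n_samples = 100_000
    counts = Counter(random_function() for _ in range(n_samples))
    ranked = counts.most_common()

    # Probability-rank relation: print a few ranks to eyeball the decay.
    for rank in (1, 10, 100, 1000):
        if rank <= len(ranked):
            f, c = ranked[rank - 1]
            print(f"rank {rank:>4}: empirical P(f) ~ {c / n_samples:.2e}")

Under these assumptions, plotting log empirical probability against log rank should show an approximate power law, with very simple functions (e.g. the all-0s and all-1s strings) occupying the top ranks; comparing a descriptional-complexity estimate of each string (for instance a Lempel-Ziv-style measure) against log-probability is one way to make the simplicity bias visible directly.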