Abstract: In statistical learning, the Vapnik–Chervonenkis (VC) dimension has been widely used to analyze single-layer neural networks such as the Perceptron and the Support Vector Machine, while its use for multilayer networks has rarely been explored. This motivates us to introduce the VC-dimension method to the autoencoder, an important class of multilayer networks. This paper presents several theoretical observations analyzing the relationships among network architecture, activation functions, and the learning capacity and effectiveness of autoencoders. We also provide a theoretical VC-limitation result that quantifies a bound on the number of hidden neurons in an autoencoder.