Gaussian Process Neurons
Sebastian Urban, Patrick van der Smagt
Feb 15, 2018 (modified: Feb 15, 2018) · ICLR 2018 Conference Blind Submission
Abstract: We propose a method to learn stochastic activation functions for use in probabilistic neural networks.
First, we develop a framework to embed stochastic activation functions based on Gaussian processes in probabilistic neural networks.
Second, we analytically derive expressions for the propagation of means and covariances in such a network, thus allowing for an efficient implementation and training without the need for sampling.
Third, we show how to apply variational Bayesian inference to regularize and efficiently train this model.
The resulting model can deal with uncertain inputs and implicitly provides an estimate of the confidence of its predictions.
Like a conventional neural network, it can scale to datasets of arbitrary size and can be extended with convolutional and recurrent connections if desired.
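To illustrate the first step of the abstract, here is a minimal sketch (not the authors' code) of what a stochastic activation function drawn from a Gaussian process prior looks like, using a squared-exponential kernel; all function names and parameter values are illustrative assumptions:

```python
# Hypothetical sketch: sampling a stochastic activation function
# from a zero-mean Gaussian process prior with a squared-exponential
# kernel. The actual model in the paper learns these functions with
# variational inference; this only shows the GP prior over activations.
import numpy as np

def se_kernel(x, y, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance k(x, y) between two point sets."""
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_activation(xs, lengthscale=1.0, seed=0):
    """Draw one activation function f, evaluated at the points xs."""
    rng = np.random.default_rng(seed)
    # Small jitter on the diagonal keeps the covariance positive definite.
    K = se_kernel(xs, xs, lengthscale) + 1e-8 * np.eye(len(xs))
    return rng.multivariate_normal(np.zeros(len(xs)), K)

xs = np.linspace(-3.0, 3.0, 50)   # grid of pre-activation values
f = sample_gp_activation(xs)      # one random draw of an activation function
```

Each call with a different seed yields a different smooth random function, which is what makes the neuron's activation stochastic rather than a fixed nonlinearity like tanh.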
TL;DR: We model the activation function of each neuron as a Gaussian process and learn it alongside the weights with variational inference.
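The sampling-free training mentioned in the abstract rests on propagating means and covariances analytically. A minimal sketch of the standard Gaussian identity for a linear layer (not the paper's full derivation, which also covers the GP activations): if x ~ N(mu, Sigma), then Wx + b ~ N(W mu + b, W Sigma W^T).

```python
# Illustrative moment propagation through a linear layer only;
# the paper additionally derives moments through the GP activations.
import numpy as np

def propagate_linear(mu, Sigma, W, b):
    """Propagate a Gaussian N(mu, Sigma) through y = W x + b."""
    return W @ mu + b, W @ Sigma @ W.T

mu = np.array([0.5, -1.0])                      # input mean
Sigma = np.array([[0.2, 0.05], [0.05, 0.1]])    # input covariance
W = np.array([[1.0, 2.0], [0.0, 1.0], [-1.0, 1.0]])
b = np.zeros(3)
m_out, S_out = propagate_linear(mu, Sigma, W, b)
```

Because the output moments are exact in closed form, no Monte Carlo samples are needed for this layer, which is what enables efficient deterministic training.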
Keywords: gaussian process, neuron, activation function, stochastic transfer function, learning, variational bayes, probabilistic