Keywords: Predictive Coding Networks, Predictive Coding, Artificial Neural Networks
TL;DR: We formally prove that PCGs generalize FNNs to structures untrainable by backprop, including loops and non-hierarchical graphs.
Abstract: Predictive coding graphs (PCGs) are a recently introduced generalization of predictive coding networks (PCNs), a neuroscience-inspired probabilistic latent variable model. Here, we prove that PCGs define a mathematical superset of feedforward artificial neural networks (multilayer perceptrons). This positions PCNs more firmly within contemporary machine learning, and reinforces earlier proposals to study non-hierarchical neural networks for learning tasks and, more generally, the notion of topology in neural networks.
Submission Number: 8