Efficient Learning Losses for Deep Hinge-Loss Markov Random Fields

Published: 26 Jul 2022, Last Modified: 17 May 2023. Venue: TPM 2022
Keywords: Neural Symbolic, Probabilistic Graphical Models
TL;DR: In this work, we examine the learning process for Neural Probabilistic Soft Logic (NeuPSL).
Abstract: In this work, we examine the learning process for Neural Probabilistic Soft Logic (NeuPSL). NeuPSL is a novel neuro-symbolic (NeSy) framework that unites state-of-the-art symbolic reasoning with the low-level perception of deep neural networks to create a tractable probabilistic model that supports end-to-end learning via back-propagation. We investigate two common learning losses: energy-based and structured perceptron. We provide formal definitions, and we identify and propose principled fixes for degenerate solutions. We then perform an extensive evaluation on a canonical NeSy task.
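To make the distinction between the two losses concrete, here is a minimal sketch in NumPy. It assumes a toy linear energy over a small discrete label set; the names (`energy`, `energy_loss`, `structured_perceptron_loss`) and the linear form are illustrative assumptions, not the paper's actual NeuPSL formulation.

```python
import numpy as np

# Hypothetical toy model: linear energy E(y) = -w . f(y) over a small
# discrete label set. Lower energy means a more compatible label.
# This is a sketch of the two generic loss families the abstract names,
# not the paper's NeuPSL implementation.

def energy(w, feats, y):
    """Energy of label y under parameters w (assumed linear form)."""
    return -float(np.dot(w, feats[y]))

def energy_loss(w, feats, y_true):
    # Energy-based loss: simply the energy of the ground-truth label;
    # minimizing it pushes the true configuration's energy down.
    return energy(w, feats, y_true)

def structured_perceptron_loss(w, feats, y_true):
    # Structured perceptron loss: true energy minus the minimum energy
    # over all labels; it is zero exactly when the ground truth is
    # already the energy minimizer.
    e_min = min(energy(w, feats, y) for y in feats)
    return energy(w, feats, y_true) - e_min

# Toy instance with two labels and two-dimensional features.
w = np.array([1.0, 0.0])
feats = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
# Label 0 has energy -1.0, label 1 has energy 0.0, so label 0 is the
# minimizer and its structured perceptron loss is 0.
```

Note that the structured perceptron loss is nonnegative by construction, whereas the raw energy-based loss can be driven arbitrarily low by rescaling the energy, one of the degenerate behaviors that motivates the principled fixes the abstract mentions.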