Enhancing Neuro-Symbolic Integration with Focal Loss: A Study on Logic Tensor Networks

Published: 01 Jan 2024 · Last Modified: 03 Nov 2024 · NeSy 2024 · CC BY-SA 4.0
Abstract: Neuro-symbolic techniques such as Logic Tensor Networks (LTNs) enable the integration of symbolic knowledge to improve the learning capabilities of deep neural networks. LTNs in particular ground first-order logic languages into differentiable tensor operations, recasting learning as the maximization of the satisfiability of a grounded theory. Despite the promising results achieved so far, the optimization task is highly sensitive to the choice of functions used to ground logical operators and aggregators, which limits practical adoption. The present study focuses on learning in the presence of class imbalance (in object detection tasks, for instance, class imbalance arises between background and foreground samples). In particular, we combine the recently proposed logLTN with the weighting scheme introduced by focal loss as an enhancement of the standard cross-entropy loss. Preliminary experiments on an object detection benchmark show that the resulting focal logLTN aggregator achieves higher performance and stability than its standard counterpart, with potential applications in many other practical scenarios.
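To make the combination concrete, below is a minimal, hypothetical PyTorch sketch of how focal-loss-style weighting could modulate a logLTN-style universal aggregator. It assumes the logLTN convention of aggregating log truth values by their mean, and Lin et al.'s focal modulating factor (1 - p)^gamma; the function names, the gamma default, and the exact form of the combination are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def forall_log_mean(log_truths: torch.Tensor) -> torch.Tensor:
    """Standard logLTN-style universal aggregator: mean of log truth values."""
    return log_truths.mean(dim=-1)

def forall_focal(log_truths: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Hypothetical focal variant of the aggregator.

    Each term log(p_i) is scaled by (1 - p_i)^gamma, so instances the
    theory already satisfies (p_i close to 1) contribute little, while
    poorly satisfied instances dominate, mirroring how focal loss
    re-weights the cross-entropy of easy vs. hard samples.
    """
    p = log_truths.exp()                # recover truth values in [0, 1]
    weights = (1.0 - p).pow(gamma)      # focal modulating factor
    return (weights * log_truths).mean(dim=-1)

# Usage: truth values of a predicate over a batch dominated by easy negatives
p = torch.tensor([0.99, 0.95, 0.10, 0.05])
log_p = p.clamp_min(1e-7).log()         # clamp to avoid log(0)
print(forall_log_mean(log_p))           # all instances weighted equally
print(forall_focal(log_p))              # hard instances drive the objective
```

Under this sketch, maximizing the focal aggregate concentrates gradients on the hard, typically minority-class instances, which is the intended effect when foreground objects are vastly outnumbered by background samples.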