Random Linear Projections Loss for Hyperplane-Based Optimization in Neural Networks

Published: 26 Apr 2024 · Last Modified: 15 Jul 2024 · UAI 2024 poster · CC BY 4.0
Keywords: loss functions, deep learning, optimization
TL;DR: This work presents Random Linear Projections (RLP) loss, a novel loss function that improves training efficiency by leveraging non-local geometric properties within the data.
Abstract: Advancing loss function design is pivotal for optimizing neural network training and performance. This work introduces Random Linear Projections (RLP) loss, a novel approach that enhances training efficiency by leveraging geometric relationships within the data. Unlike traditional loss functions that aim to minimize pointwise errors, RLP loss operates by minimizing the distance between sets of hyperplanes connecting fixed-size subsets of feature-prediction pairs and feature-label pairs. Our empirical evaluations, conducted across benchmark datasets and synthetic examples, demonstrate that neural networks trained with RLP loss outperform those trained with traditional loss functions, achieving improved performance with fewer data samples and exhibiting greater robustness to additive noise. We provide theoretical analysis supporting our empirical findings.
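To make the hyperplane-based objective concrete, below is a minimal PyTorch sketch of an RLP-style loss, not the authors' reference implementation (see the Code Url below for that). For each randomly sampled subset of the batch, it fits least-squares hyperplanes through the feature-label pairs and through the feature-prediction pairs, then penalizes the squared distance between the two coefficient vectors. The subset size d+1, the number of subsets, and the small ridge term are illustrative assumptions.

```python
import torch

def rlp_loss(features, predictions, labels, num_subsets=64):
    """Sketch of a Random Linear Projections (RLP)-style loss.

    Assumes features of shape (n, d) and 1-D predictions/labels of
    shape (n,), with batch size n > d + 1. Hyperparameters and the
    normal-equations fit are illustrative, not the paper's exact recipe.
    """
    n, d = features.shape
    k = d + 1  # subset size: enough points to pin down a hyperplane with bias
    # Append a bias column so each fitted hyperplane has an intercept.
    X = torch.cat([features, torch.ones(n, 1, device=features.device)], dim=1)
    loss = features.new_zeros(())
    for _ in range(num_subsets):
        idx = torch.randperm(n, device=features.device)[:k]
        Xs = X[idx]  # (k, d + 1) design matrix for this subset
        # Ridge-regularized normal equations keep the solve well-posed
        # even if a sampled subset is (nearly) rank-deficient.
        G = Xs.T @ Xs + 1e-6 * torch.eye(d + 1, device=features.device)
        # Least-squares hyperplane through the feature-label pairs.
        w_true = torch.linalg.solve(G, Xs.T @ labels[idx])
        # Least-squares hyperplane through the feature-prediction pairs;
        # gradients flow into the model through predictions[idx].
        w_pred = torch.linalg.solve(G, Xs.T @ predictions[idx])
        loss = loss + ((w_pred - w_true) ** 2).sum()
    return loss / num_subsets
```

A training step under these assumptions would look like loss = rlp_loss(x, model(x).squeeze(-1), y) followed by loss.backward(); because the fitted coefficients are linear in the targets, the loss is differentiable in the network outputs. The paper's exact construction and its theoretical guarantees should be taken from the linked source.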
Supplementary Material: zip
List Of Authors: Venkatasubramanian, Shyam and Aloui, Ahmed and Tarokh, Vahid
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/AhmedAloui1997/RandomLinearProjections
Submission Number: 341