Tilted Losses in Training Quantum Neural Networks

24 Sept 2024 (modified: 13 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Quantum machine learning, exponential tilting, empirical risk minimization
TL;DR: We explore tilted empirical risk minimization for training a class of quantum neural networks, specifically on classification tasks.
Abstract:

Empirical risk minimization is a fundamental paradigm in the optimization of machine learning (ML) models. Several techniques extend this idea by introducing hyperparameters that further regularize the training objective. One such paradigm is tilted empirical risk minimization (TERM), which uses a tilt hyperparameter to penalize the influence of outliers, i.e., data samples that differ significantly from the rest of the dataset. Quantum machine learning (QML) models have been studied and benchmarked against various criteria stemming from classical ML, including their training via the parameter-shift rule. It is therefore natural to extend TERM to the training of QML models, in particular the class of models known as quantum neural networks (QNNs). In this work, we examine the impact of a tilted loss function on training a class of QNNs, specifically for binary classification tasks on two datasets with induced class imbalance. On the first, the Iris dataset, we show that varying the tilt hyperparameter modifies the decision boundary, reducing the importance of outliers and improving training accuracy, which highlights the value of tilted risk minimization. On a synthetic dataset, we further validate that training accuracy can be improved by tuning the tilt parameter. Analytically, we extend the parameter-shift training method to accommodate the weighted inputs induced by the tilt hyperparameter when training QNNs. These results highlight the significance of incorporating regularization techniques from classical ML into QML models.
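The abstract's two technical ingredients, the tilted objective and a tilt-weighted parameter-shift gradient, can be sketched concisely. The following is a minimal NumPy illustration, not the authors' implementation: `expval` is a hypothetical callable standing in for a QNN circuit evaluation, each per-sample loss is assumed to be an expectation value of a circuit built from standard Pauli rotations (so the two-term shift rule applies directly), and the tilt weights follow from differentiating the TERM objective R_t(theta) = (1/t) log((1/N) sum_i exp(t * l_i(theta))).

```python
import numpy as np

def tilted_risk(losses, t):
    """Tilted empirical risk: R_t = (1/t) * log(mean(exp(t * losses))).

    t > 0 magnifies high-loss samples (outliers), t < 0 suppresses them,
    and t -> 0 recovers the ordinary empirical risk. A log-sum-exp shift
    keeps the computation numerically stable.
    """
    losses = np.asarray(losses, dtype=float)
    if abs(t) < 1e-12:
        return losses.mean()
    m = (t * losses).max()
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t

def tilt_weights(losses, t):
    """Per-sample weights from differentiating R_t: w_i proportional to exp(t * l_i)."""
    z = t * np.asarray(losses, dtype=float)
    z -= z.max()  # stability shift; cancels in the normalization
    w = np.exp(z)
    return w / w.sum()

def tilted_parameter_shift_grad(expval, theta, t, n_samples, shift=np.pi / 2):
    """Gradient of the tilted risk via the parameter-shift rule.

    `expval(theta, i)` is a hypothetical stand-in returning the loss
    l_i(theta) of sample i as a circuit expectation value. Then
    dR_t/dtheta_k = sum_i w_i * dl_i/dtheta_k, with each per-sample
    derivative given by the two-term shift rule.
    """
    losses = np.array([expval(theta, i) for i in range(n_samples)])
    w = tilt_weights(losses, t)
    grad = np.zeros_like(theta)
    for k in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[k] += shift
        minus[k] -= shift
        for i in range(n_samples):
            grad[k] += w[i] * (expval(plus, i) - expval(minus, i)) / 2.0
    return grad
```

As t approaches 0 the weights become uniform and the update reduces to the standard averaged parameter-shift gradient, which is the sense in which the tilted objective generalizes plain empirical risk minimization here.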

Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3667