On Learning with a Concurrent Verifier: Convexity, Improving Bounds, and Complex Requirements

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Generalization analysis, verifier, learning theory
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Machine learning models are deployed in a wide range of practical applications, where it is often desirable to guarantee that a model's input-output pairs satisfy given requirements. The recently proposed concurrent verifier (CV) is a module combined with a machine learning model to guarantee that the model's input-output pairs satisfy such requirements. Previous work provided a generalization analysis of learning with a CV, showing how a model's learnability changes when a CV is used. Although that work established basic learnability results, many properties of CVs remain unexplored. Moreover, it assumed that the CV always works correctly and that requirements are imposed on a single input-output pair, which limits the situations in which a CV can be used. We identify learning algorithms that preserve convexity when used with a CV, and give conditions under which using a CV improves the generalization error bound. We further analyze learnability when the CV can be incorrect or when requirements are imposed on combinations of multiple input-output pairs.
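To make the concurrent-verifier idea concrete, the following is a minimal sketch (not the paper's implementation) of how a CV might operate at prediction time: given a model's class scores and a requirement expressed as a validity mask over outputs for the current input, the verifier restricts the prediction to outputs that satisfy the requirement. The function name `cv_predict` and the mask-based encoding of requirements are illustrative assumptions.

```python
import numpy as np

def cv_predict(scores: np.ndarray, valid_mask: np.ndarray) -> int:
    """Return the highest-scoring output label that satisfies the requirement.

    scores: model scores for each candidate output label.
    valid_mask: True where the (input, output) pair satisfies the requirement
                (the mask encodes what the concurrent verifier checks).
    """
    if not valid_mask.any():
        raise ValueError("no output satisfies the requirement for this input")
    # Mask out invalid outputs so argmax can only select a verified label.
    masked = np.where(valid_mask, scores, -np.inf)
    return int(np.argmax(masked))

# Example: label 1 has the top score but violates the requirement,
# so the verifier forces the next-best valid label.
scores = np.array([2.0, 5.0, 1.0])
valid = np.array([True, False, True])
print(cv_predict(scores, valid))  # → 0
```

This illustrates the guarantee the abstract describes: regardless of the model's raw scores, the returned pair always satisfies the requirement, because invalid outputs are excluded before the final decision.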
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4681