Keywords: Generalization analysis, verifier, learning theory
TL;DR: We give a generalization analysis of learning with a verifier
Abstract: Machine learning technologies have been used in a wide range of practical systems.
In practical situations, it is natural to expect the input-output pairs of a machine learning model to satisfy some requirements.
However, it is difficult to obtain a model that satisfies such requirements just by learning from examples.
A simple solution is to add a module that checks whether each input-output pair meets the requirements and, if necessary, modifies the model's output. Such a module, which we call a {\em concurrent verifier} (CV), can certify the model's behavior, although it is unclear how the generalizability of the machine learning model changes when a CV is used. This paper gives a generalization analysis of learning with a CV. We analyze how the learnability of a machine learning model changes with a CV and give a condition under which a guaranteed hypothesis can be obtained by using the verifier only at inference time.
We also show that, in multi-class classification and structured prediction settings, typical error bounds based on Rademacher complexity with a CV are no larger than those of the original model.
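To make the concurrent-verifier idea concrete, here is a minimal sketch of an inference-time CV wrapper: it checks a model's candidate outputs against a requirement and modifies the final output when the check fails. All names (`concurrent_verifier`, `requirement`, `fallback`) are hypothetical illustrations, not the paper's actual construction.

```python
from typing import Callable, Iterable, TypeVar

X = TypeVar("X")
Y = TypeVar("Y")

def concurrent_verifier(
    model: Callable[[X], Iterable[Y]],    # returns candidate outputs, best first
    requirement: Callable[[X, Y], bool],  # checks whether the pair (x, y) meets the requirement
    fallback: Y,                          # known-valid output used if every candidate fails
) -> Callable[[X], Y]:
    """Wrap a model so every returned input-output pair satisfies the requirement."""
    def verified_model(x: X) -> Y:
        for y in model(x):                # scan candidates in the model's preference order
            if requirement(x, y):
                return y                  # first candidate that passes the check
        return fallback                   # modify the output when all candidates fail
    return verified_model

# Usage: a toy predictor whose outputs are required to be non-negative
base = lambda x: [x - 1, 0]               # candidates: raw score, then a safe default
safe = concurrent_verifier(base, lambda x, y: y >= 0, fallback=0)
print(safe(5))  # 4  (raw score passes the requirement)
print(safe(0))  # 0  (raw score -1 is rejected; the default 0 is accepted)
```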
Supplementary Material: zip