Towards Predicate-powered Learning

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Learning theory, Learning using statistical invariants
Abstract: The traditional approach to data-driven learning has become increasingly demanding in terms of training data and computational resources. This work further develops a new paradigm of learning that uses predicates to reduce the need for data. Among many recent efforts in this direction, learning using statistical invariants (LUSI) has been proposed as a new paradigm of learning. Building on LUSI, and aiming to break the ``brute force'' learning trend, we work towards a generalized theory of predicates and their invariants. The primary objective of this work is to propose an Extended Structural Risk Minimization (ESRM) paradigm with predicates, and to provide a theoretical justification of the need for predicates in learning problems from both the data-complexity and model-complexity perspectives. We show that predicates not only reduce the amount of data needed for training but are also imperative for building highly efficient models. Our primary contributions are: I) proposing an extension to the structural risk minimization paradigm of learning, and II) proving the efficacy of predicates in reducing both data complexity and model complexity.
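To make the LUSI idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' method): an ordinary least-squares objective is augmented with a quadratic penalty that pushes the model to satisfy a statistical invariant of the form sum_i phi(x_i) f(x_i) ≈ sum_i phi(x_i) y_i for a chosen predicate phi. All names, the example predicate, and the penalty weight are illustrative assumptions.

```python
import numpy as np

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # bias + one feature
w_true = np.array([0.5, 2.0])
y = X @ w_true + rng.normal(scale=0.3, size=n)

# Example predicate: indicator that the feature is positive.
phi = (X[:, 1] > 0).astype(float)

def fit(lam):
    """Closed-form minimizer of ||Xw - y||^2 + lam * (phi^T (Xw - y))^2.

    Setting the gradient to zero gives the normal equations
    (X^T X + lam * (X^T phi)(X^T phi)^T) w = X^T y + lam * (phi^T y) X^T phi.
    """
    v = X.T @ phi
    A = X.T @ X + lam * np.outer(v, v)
    b = X.T @ y + lam * (phi @ y) * v
    return np.linalg.solve(A, b)

def invariant_gap(w):
    """How far the fitted model is from satisfying the invariant."""
    return abs(phi @ (X @ w - y))

w_plain = fit(0.0)   # ordinary least squares
w_lusi = fit(10.0)   # least squares + invariant penalty

print("gap without invariant:", invariant_gap(w_plain))
print("gap with invariant:   ", invariant_gap(w_lusi))
```

The penalized fit drives the invariant gap toward zero while leaving the squared-error term to resolve the remaining degrees of freedom, which is one informal reading of how predicate information can substitute for additional training data.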
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6969