Big Learning

20 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Foundation models, generative modeling, big learning, generative adversarial nets
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We reveal a new learning dimension for machine learning.
Abstract: Recent advances in foundation models reveal a promising direction for deep learning, with the roadmap steadily moving from big data to big models/neural-nets and now to the big learning presented here. Specifically, big learning exhaustively exploits the information inherent in large-scale \emph{complete/incomplete} training data by simultaneously modeling many/all joint, conditional, and marginal data distributions across potentially diverse domains with one universal foundation model. We reveal that the big-learning principle ($i$) underlies most foundation models, ($ii$) offers extraordinary flexibility with respect to complete/incomplete training data and various data-generation tasks, ($iii$) potentially delivers all joint, conditional, and marginal data-sampling capabilities with one universal model, and ($iv$) constitutes a new dimension for upgrading conventional machine-learning paradigms. We leverage the big-learning principle to upgrade generative adversarial nets (in this paper), the expectation-maximization algorithm (in the supplementary material), and variational auto-encoders (in the supplementary material) to their big-learning variants, with diverse experiments conducted to demonstrate its effectiveness.
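As a minimal sketch (an illustrative reading of the abstract, not necessarily the paper's exact formulation), the big-learning objective for data $\mathbf{x} = (x_1, \cdots, x_D)$ can be written as training one universal model $p_{\boldsymbol{\theta}}$ to match many/all conditionals of the underlying data distribution $q$:

$$\min_{\boldsymbol{\theta}} \; \mathbb{E}_{(\mathbb{S},\mathbb{T})} \, \mathbb{E}_{q(\mathbf{x}_{\mathbb{S}})} \Big[ \mathrm{D}\big( q(\mathbf{x}_{\mathbb{T}} \mid \mathbf{x}_{\mathbb{S}}) \,\big\|\, p_{\boldsymbol{\theta}}(\mathbf{x}_{\mathbb{T}} \mid \mathbf{x}_{\mathbb{S}}) \big) \Big],$$

where $\mathbb{S}$ and $\mathbb{T}$ index the source (conditioned-on) and target (modeled) dimensions, respectively, and $\mathrm{D}[\cdot \| \cdot]$ is a divergence (e.g., the Jensen-Shannon divergence implicitly minimized by GANs). Setting $\mathbb{S} = \emptyset$ recovers joint/marginal modeling, non-empty $\mathbb{S}$ yields conditional generation, and an incomplete training sample contributes only the $(\mathbb{S},\mathbb{T})$ pairs its observed dimensions support.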
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2225