ENSEMBLES OF INFORMATIVE REPRESENTATIONS FOR SELF-SUPERVISED LEARNING

Published: 23 Jun 2025, Last Modified: 23 Jun 2025. Venue: Greeks in AI 2025 Poster. License: CC BY 4.0
Keywords: Self-supervised learning, representation learning, ensemble learning, Gaussian processes
Abstract: The requirement of large labeled training datasets often prohibits deploying supervised learning models in applications with high acquisition costs or privacy concerns. To alleviate the burden of obtaining labels, self-supervised learning aims to identify informative data representations using auxiliary tasks that do not require external labels. These representations serve as refined inputs to the main learning task, with the goal of improving sample efficiency. Nonetheless, selecting individual auxiliary tasks and combining the corresponding extracted representations constitutes a nontrivial design problem. Agnostic to the approach used for extracting individual representations per auxiliary task, this paper develops a weighted ensemble approach for obtaining a unified representation. The weights signify the relative dominance of individual representations in informing predictions for the main task. The representation ensemble is further augmented with the input data to improve accuracy and avoid information loss. Numerical tests on real datasets showcase the merits of the advocated approach.
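
A minimal illustrative sketch (not the authors' implementation) of the idea summarized in the abstract: representations extracted by several auxiliary tasks are combined through learnable, softmax-normalized ensemble weights and then concatenated with the raw input before the downstream predictor. All names, dimensions, and the choice of linear projections are assumptions made for illustration only.

```python
# Hypothetical sketch of a weighted representation ensemble augmented with the input.
import torch
import torch.nn as nn


class WeightedRepresentationEnsemble(nn.Module):
    def __init__(self, rep_dims, input_dim, num_classes):
        super().__init__()
        # One learnable logit per auxiliary representation; softmax yields the
        # relative "dominance" weights of the individual representations.
        self.logits = nn.Parameter(torch.zeros(len(rep_dims)))
        # Project each representation to a common dimension so they can be
        # combined (an assumption; the paper may combine them differently).
        common = min(rep_dims)
        self.proj = nn.ModuleList(nn.Linear(d, common) for d in rep_dims)
        # Downstream head takes the ensemble concatenated with the raw input,
        # so no information in the original features is discarded.
        self.head = nn.Linear(common + input_dim, num_classes)

    def forward(self, x, reps):
        w = torch.softmax(self.logits, dim=0)           # ensemble weights
        z = sum(w_i * p(r) for w_i, p, r in zip(w, self.proj, reps))
        return self.head(torch.cat([z, x], dim=-1))     # augment with input


if __name__ == "__main__":
    model = WeightedRepresentationEnsemble(rep_dims=[16, 32], input_dim=8, num_classes=3)
    x = torch.randn(4, 8)                               # raw inputs
    reps = [torch.randn(4, 16), torch.randn(4, 32)]     # per-auxiliary-task representations
    print(model(x, reps).shape)                         # torch.Size([4, 3])
```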
Submission Number: 128