Is Training Necessary for Representation Learning?

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: unsupervised representation learning, universal encoder, finite element method, multi-scale mesh, multivariate Lagrange interpolation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: The field of neural network-based encoders is growing rapidly. However, in the pursuit of higher performance, models are becoming increasingly complex and specialized for specific datasets and tasks, at the cost of generality. In response to this trend, we explore the finite element method (FEM) as a general approach to feature extraction and introduce LagrangeEmbedding, an untrainable encoder with a single universal architecture across diverse types of raw data and recognition tasks. Our experimental results demonstrate its successful application and strong performance across domains, including data fitting, computer vision, and natural language processing. LagrangeEmbedding is explainable: it adheres to the FEM error-bound formula, which governs the relationship between mean absolute error (MAE) and the number of model parameters. Because the encoder has no trainable parameters, neural networks built on it need only train a linear layer, which reduces gradient computation and significantly accelerates training convergence. Our research promises to advance machine learning by opening new avenues for unsupervised representation learning.
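To make the idea concrete, here is a minimal sketch of what an untrainable Lagrange-interpolation encoder plus a trained linear layer could look like in the simplest (1D, piecewise-linear) case. This is an illustration of the general FEM recipe the abstract describes, not the paper's actual LagrangeEmbedding; the function and parameter names (`hat_features`, `n_nodes`) are hypothetical, and the paper's multi-scale, multivariate construction is not reproduced here.

```python
import numpy as np

def hat_features(x, n_nodes=17):
    """Encode scalars x in [0, 1] as P1 Lagrange "hat" basis values.

    Returns an array of shape (len(x), n_nodes). The encoder has no
    trainable parameters: the mesh nodes are fixed in advance.
    """
    nodes = np.linspace(0.0, 1.0, n_nodes)
    h = nodes[1] - nodes[0]  # uniform mesh width
    # phi_i(x) = max(0, 1 - |x - node_i| / h): the classic FEM hat function,
    # equal to 1 at node i and decaying linearly to 0 at neighboring nodes.
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - nodes[None, :]) / h)

# Toy regression task: only the linear readout is fit (least squares),
# mirroring the claim that networks using the encoder train a linear layer.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=512)
y_train = np.sin(2.0 * np.pi * x_train)

Phi = hat_features(x_train)                          # fixed, untrainable encoding
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)    # "training" = one linear solve

x_test = np.linspace(0.0, 1.0, 256)
mae = np.mean(np.abs(hat_features(x_test) @ w - np.sin(2.0 * np.pi * x_test)))
print(f"MAE with {Phi.shape[1]} fixed basis functions: {mae:.4f}")
```

In this 1D piecewise-linear setting, the textbook FEM interpolation bound gives error O(h^2), i.e. MAE shrinking roughly like N^{-2} as the number of nodes N grows; doubling `n_nodes` should roughly quarter the MAE. The abstract's error-bound formula presumably generalizes this kind of parameter-count/error relationship, but its exact form is not stated here.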
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5143