When Dimensionality Hurts: The Role of LLM Embedding Compression for Noisy Regression Tasks

ACL ARR 2025 May Submission3477 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: LLMs have shown remarkable success in language modelling, driven by scaling laws in model size and in the hidden dimension of the model's text representation. Yet we demonstrate that compressed representations of text can yield better performance in LLM-based regression tasks. In this paper, we compare the relative performance of embedding compression across four signal-to-noise contexts: financial return prediction, health outcome prediction, writing quality assessment, and review scoring. Our results show that compressing embeddings, in a minimally supervised manner using an autoencoder's hidden representation, can mitigate overfitting and improve performance on noisy tasks such as financial return prediction, but that compression reduces performance on tasks with strong causal dependencies between the input and target data. Our results suggest that the success of interpretable compressed representations, such as sentiment, may be due to a regularising effect.
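As a rough illustration of the approach described in the abstract (not the authors' actual code), the sketch below trains a small autoencoder on LLM embeddings for reconstruction only, then fits a ridge regressor on the bottleneck representation rather than the raw high-dimensional embeddings. All dimensions, the synthetic data, and the class name `EmbeddingAutoencoder` are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch: autoencoder bottleneck compression of
# LLM embeddings, followed by linear regression on the compressed features.
import torch
import torch.nn as nn

class EmbeddingAutoencoder(nn.Module):
    def __init__(self, input_dim: int, bottleneck_dim: int):
        super().__init__()
        self.encoder = nn.Linear(input_dim, bottleneck_dim)
        self.decoder = nn.Linear(bottleneck_dim, input_dim)

    def forward(self, x):
        z = torch.tanh(self.encoder(x))      # compressed representation
        return self.decoder(z), z

# Illustrative stand-ins for LLM embeddings and a noisy regression target.
torch.manual_seed(0)
X = torch.randn(512, 1024)                   # 512 texts, 1024-d embeddings
y = X[:, :4].sum(dim=1, keepdim=True) + 2.0 * torch.randn(512, 1)

model = EmbeddingAutoencoder(input_dim=1024, bottleneck_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on reconstruction loss only ("minimally supervised": the regression
# target plays no role in learning the compression).
for _ in range(200):
    opt.zero_grad()
    recon, _ = model(X)
    loss = nn.functional.mse_loss(recon, X)
    loss.backward()
    opt.step()

# Fit ridge regression on the 16-d bottleneck instead of the raw 1024-d
# embeddings; the reduced dimensionality is where a regularising effect
# on noisy targets would come from.
with torch.no_grad():
    _, Z = model(X)
Z = torch.cat([Z, torch.ones(Z.size(0), 1)], dim=1)  # append bias column
lam = 1e-2
w = torch.linalg.solve(Z.T @ Z + lam * torch.eye(Z.size(1)), Z.T @ y)
pred = Z @ w
print("train MSE on compressed features:", nn.functional.mse_loss(pred, y).item())
```

In practice the columns of `X` would be embeddings taken from an LLM, and the bottleneck width would be tuned per task; the paper's claim is that smaller bottlenecks help most when the input-target relationship is noisy.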
Paper Type: Short
Research Area: NLP Applications
Research Area Keywords: Regression, Compression, Autoencoder
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Data resources
Languages Studied: English
Submission Number: 3477