Statistical Depth for Ranking and Characterizing Transformer-Based Text Embeddings

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Interpretability, Interactivity, and Analysis of Models for NLP
Submission Track 2: Resources and Evaluation
Keywords: text embeddings, transformers, statistical inference, corpus analysis, statistical depth, in-context learning, synthetic data augmentation
TL;DR: We adopt a statistical depth to measure distributions of transformer-based text embeddings and introduce the practical use of this depth for both modeling and distributional inference in NLP pipelines.
Abstract: The popularity of transformer-based text embeddings calls for better statistical tools for measuring distributions of such embeddings. One such tool would be a method for ranking texts within a corpus by centrality, i.e. assigning each text a number signifying how representative that text is of the corpus as a whole. However, an intrinsic center-outward ordering of high-dimensional text representations is not trivial. A $\textit{statistical depth}$ is a function for ranking $k$-dimensional objects by measuring centrality with respect to some observed $k$-dimensional distribution. We adopt a statistical depth to measure distributions of transformer-based text embeddings, $\textit{transformer-based text embedding (TTE) depth}$, and introduce the practical use of this depth for both modeling and distributional inference in NLP pipelines. We first define TTE depth and an associated rank sum test for determining whether two corpora differ significantly in embedding space. We then use TTE depth for the task of in-context learning prompt selection, showing that this approach reliably improves performance over statistical baseline approaches across six text classification tasks. Finally, we use TTE depth and the associated rank sum test to characterize the distributions of synthesized and human-generated corpora, showing that five recent synthetic data augmentation processes cause a measurable distributional shift away from associated human-generated text.
Submission Number: 1997
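
To make the abstract's pipeline concrete, here is a minimal sketch of how a depth-based centrality ranking and the associated rank-sum test might be wired up. It is illustrative only and rests on assumptions: the mean-cosine-similarity score used as a stand-in depth, the all-MiniLM-L6-v2 encoder, and the toy texts are not the paper's exact TTE depth definition or data.

```python
# Illustrative sketch: the mean-cosine-similarity "depth" and the encoder choice
# below are assumptions for exposition, not the paper's exact TTE depth.
import numpy as np
from scipy.stats import ranksums
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def embed(texts):
    """Encode texts and unit-normalize so dot products are cosine similarities."""
    vecs = model.encode(texts, convert_to_numpy=True)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)


def depths(query_vecs, reference_vecs):
    """Centrality of each query embedding w.r.t. a reference corpus:
    mean cosine similarity to the reference embeddings (a simple proxy depth)."""
    return (query_vecs @ reference_vecs.T).mean(axis=1)


# 1) Rank a corpus by centrality: higher depth = more representative text.
corpus = ["example text one", "example text two", "example text three"]
corpus_vecs = embed(corpus)
ranking = sorted(zip(depths(corpus_vecs, corpus_vecs), corpus), reverse=True)

# 2) Depth-based rank-sum test: do two corpora occupy the same region of
#    embedding space? Score both groups against the human-written reference
#    corpus and compare the two samples of depth scores.
human_vecs = embed(["human-written example a", "human-written example b"])
synth_vecs = embed(["synthetic example a", "synthetic example b"])
stat, p_value = ranksums(depths(human_vecs, human_vecs),
                         depths(synth_vecs, human_vecs))
print(ranking[0], stat, p_value)
```

In this framing, a significant rank-sum statistic indicates that one group's texts sit systematically closer to (or farther from) the center of the reference distribution in embedding space, which is how the abstract describes the comparison between synthesized and human-generated corpora.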