Track: Full paper
Keywords: sentence embeddings, transformer models, probing, chunk structure
TL;DR: We investigate whether information about (a variable number of) chunks -- noun, verb, and prepositional phrases -- and their task-relevant properties can be detected in sentence embeddings from a pretrained transformer model.
Abstract: Sentence embeddings from transformer models encode much linguistic information in a fixed-length vector. We investigate whether structural information -- specifically, information about chunks and their structural and semantic properties -- can be detected in these representations. We use a dataset consisting of sentences with known chunk structure, and two linguistic intelligence datasets whose solutions rely on detecting chunks and, respectively, their grammatical number and their semantic roles. Through an approach involving indirect supervision, and through analyses of task performance and of the internal representations built during learning, we show that information about chunks and their properties can be obtained from sentence embeddings.
Submission Number: 5