Inducing Meaningful Units from Character Sequences with Dynamic Capacity Slot Attention

Published: 01 Nov 2023, Last Modified: 01 Nov 2023. Accepted by TMLR.
Abstract: Characters do not convey meaning, but sequences of characters do. We propose an unsupervised distributional method to learn the abstract meaning-bearing units in a sequence of characters. Rather than segmenting the sequence, our Dynamic Capacity Slot Attention model discovers continuous representations of the objects in the sequence, extending an architecture for object discovery in images. We train our model on different languages and evaluate the quality of the obtained representations with forward and reverse probing classifiers. These experiments show that our model succeeds in discovering units which are similar to those proposed previously in form, content, and level of abstraction, and which show promise for capturing meaningful information at a higher level of abstraction.
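The abstract builds on the slot-attention mechanism for object discovery, in which a fixed set of slot vectors iteratively compete for input elements via a softmax normalized over slots. The following is a minimal NumPy sketch of that competition step only; it is illustrative, not the paper's model: learned projections, the GRU/MLP slot update, and the dynamic-capacity mechanism the paper introduces are all omitted, and every name here is an assumption.

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention_sketch(inputs, n_slots=4, n_iters=3, seed=0):
    """Simplified slot-attention pass (in the style of Locatello et al., 2020).

    inputs: (n, d) array of input-element embeddings
            (e.g. character embeddings in a sequence).
    Returns (n_slots, d) slot vectors after n_iters rounds of competition.
    """
    rng = np.random.default_rng(seed)
    n, d = inputs.shape
    # slots start as random vectors (the real model samples from a learned Gaussian)
    slots = rng.normal(size=(n_slots, d))
    for _ in range(n_iters):
        # attention logits between every input element and every slot
        logits = inputs @ slots.T / np.sqrt(d)            # (n, n_slots)
        # softmax over SLOTS: slots compete for each input element
        attn = softmax(logits, axis=1)
        # renormalize over inputs so each slot takes a weighted mean
        attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)
        slots = attn.T @ inputs                            # (n_slots, d)
    return slots
```

In the full architecture each slot would then serve as a continuous representation of one discovered unit; the sketch only shows why normalizing the softmax over slots (rather than over inputs) makes the slots partition the sequence.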
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Minor edits to the text for the camera-ready version.
Video: https://www.youtube.com/watch?v=KjAc20Co1Nk
Assigned Action Editor: ~Yale_Song1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1107