Context matters: evaluation of target and context features on variation of object naming

Published: 21 Sept 2023 · Last Modified: 04 Aug 2024
The 1st Workshop on Linguistic Insights from and for Multimodal Language Processing
License: CC BY 4.0
Abstract: Semantic underspecification in language poses significant difficulties for models in the field of referring expression generation. This challenge becomes particularly pronounced in setups where models need to learn from multiple modalities and their combinations. Because different contexts call for different degrees of language adaptability, models struggle to capture the varying levels of specificity. To address this issue, we focus on the task of object naming and evaluate various context representations to identify the ones that enable a computational model to effectively capture human variation in object naming. Once we identify the set of useful features, we combine them in search of the optimal combination that yields a higher correlation with humans and brings us closer to a standard referring expression generation model that is aware of naming variation. The results of our study demonstrate that achieving human-like naming variation requires the model to possess extensive knowledge about the target object from multiple modalities, as well as scene-level context representations. We believe that our findings contribute to the development of more sophisticated models of referring expression generation that aim to replicate human-like behaviour and performance. Our code is available at https://github.com/GU-CLASP/object-naming-in-context.
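To make the feature-combination search described in the abstract concrete, the sketch below shows one plausible way to score combinations of target and context feature blocks by how well a simple model's predictions correlate with human naming variation. This is a minimal illustration under assumptions, not the authors' actual pipeline: the feature-block names, the synthetic data, the ridge regressor, and the use of Spearman correlation are all illustrative placeholders.

```python
"""Hypothetical sketch: rank combinations of target/context feature blocks
by how well a simple regressor's predictions correlate with a per-object
human naming-variation score (e.g. the entropy of annotators' names).
All data and feature names here are placeholders, not the paper's setup."""
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Placeholder data: per-object feature blocks and a human variation score.
rng = np.random.default_rng(0)
n_objects = 200
feature_blocks = {
    "target_visual": rng.normal(size=(n_objects, 64)),
    "target_text": rng.normal(size=(n_objects, 32)),
    "scene_context": rng.normal(size=(n_objects, 48)),
}
human_variation = rng.random(n_objects)  # e.g. naming entropy per object

def score_combination(block_names):
    """Concatenate the chosen blocks, fit a ridge regressor, and return the
    Spearman correlation between cross-validated predictions and the
    human naming-variation scores."""
    X = np.hstack([feature_blocks[name] for name in block_names])
    preds = cross_val_predict(Ridge(alpha=1.0), X, human_variation, cv=5)
    rho, _ = spearmanr(preds, human_variation)
    return rho

# Exhaustive search over all non-empty feature-block combinations.
results = {
    combo: score_combination(combo)
    for r in range(1, len(feature_blocks) + 1)
    for combo in combinations(feature_blocks, r)
}
best = max(results, key=results.get)
print(f"best combination: {best}, Spearman rho = {results[best]:.3f}")
```

With only a handful of feature blocks, exhaustive search over combinations is cheap; with many blocks, a greedy forward-selection loop over the same scoring function would be the natural substitute.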