Abstract: Word embeddings play a crucial role in many NLP downstream tasks by mapping words into a vector space determined primarily by their co-occurrences and similarities within a given context. The objective is to capture the meaning of words from their context and their relationships with other words. In practice, however, these representations often fall short, because the underlying semantics are intricately tied to human perception and are accompanied by ambiguity and vagueness. In essence, identifying a word requires not only its context, its similarity to nearby words, or its co-occurrences, but also the actual understandings that individuals bring to those contexts. This paper addresses the limitations of existing, well-known word embedding techniques, which overlook such nuanced competencies. The discussion introduces fuzzy uncertainties as a means of enhancing word embeddings, aiming at more practical NLP solutions built on richer, higher-quality embeddings. The paper also works through a relevant example to motivate the proposed enhancements.