Improving Event Representation with Supervision from Available Semantic Resources

Published: 01 Jan 2023, Last Modified: 30 Jun 2023 · DASFAA (3) 2023
Abstract: Learning distributed representations of events is an indispensable but challenging task for event understanding. Existing studies address this problem either by composing the embeddings of event arguments and their attributes, or by exploiting relations between events, such as co-occurrence and discourse relations. In this paper, we argue that knowledge learned from sentence embeddings and word-level semantics can be leveraged to produce superior event embeddings. Specifically, we utilize natural language inference datasets for learning sentence embeddings and the WordNet knowledge base for word semantics. We propose a Multi-Level Supervised Contrastive Learning model (MLSCL) for learning event representations. Our model fuses these diverse semantic resources at the sentence, event, and word levels in an end-to-end manner. We conduct comprehensive experiments on three similarity tasks and one script prediction task. Experimental results show that MLSCL consistently achieves new state-of-the-art performance on all tasks, with higher training efficiency than the prior competitive model SWCC.
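The abstract does not include an implementation, but the supervised contrastive objective that MLSCL builds on can be illustrated with a minimal sketch. Everything below (the function name, the NumPy formulation, the temperature value) is an assumption for illustration, not the paper's actual code: embeddings that share a supervision label are pulled together, while all other in-batch items act as negatives.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative SupCon-style loss (not the paper's implementation).

    embeddings: (n, d) array of representations.
    labels:     length-n array; items with equal labels are positives.
    """
    labels = np.asarray(labels)
    # Normalize so dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    # Positives: same label, excluding the anchor itself.
    pos_mask = (labels[None, :] == labels[:, None]) & ~eye
    # Exclude self-similarity from the softmax denominator.
    sim = np.where(eye, -np.inf, sim)
    # Numerically stable row-wise log-softmax.
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    # Average negative log-likelihood of positives per anchor,
    # skipping anchors that have no positive in the batch.
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)[valid]
    return (per_anchor / pos_counts[valid]).mean()
```

In MLSCL's setting, one could imagine applying such a term separately at the sentence, event, and word levels with supervision drawn from NLI data and WordNet respectively, then summing the losses; how the paper actually combines the levels is specified in the full text, not here.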