Analyzing the Role of Semantic Representations in the Era of Large Language Models

Anonymous

16 Oct 2023 · ACL ARR 2023 October Blind Submission · Readers: Everyone
Abstract: Traditionally, natural language processing (NLP) models have often relied on a rich set of features created with linguistic expertise. A typical example is the semantic representation, which turns a piece of text into a structured graph representing the relations among the concepts and entities mentioned in the text. However, in the era of large language models (LLMs), more and more tasks are cast as generic, end-to-end sequence generation problems. In this paper, we investigate the question: are linguistically grounded semantic representations of text still needed for NLP tasks? Specifically, we explore five diverse NLP tasks and provide a comprehensive analysis of the cases where semantic representations are or are not needed for task performance. We incorporate extensive text feature analyses to understand both cases, and conduct case studies to inspect the underlying reasons. Our study provides insights and suggestions for future NLP researchers regarding the role of traditional semantic representations in the era of LLMs.
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.