Using Relational and Causality Context for Tasks with Specialized Vocabularies that are Challenging for LLMs

Published: 10 Oct 2024, Last Modified: 07 Dec 2024 · CaLM @ NeurIPS 2024 (Oral) · CC BY 4.0
Keywords: Linguistic Causality, Graph Neural Network, LLM, Short Report Classification, Specialized Vocabulary
Abstract: Short text is typical for reports such as incident synopses and product feedback, where brevity serves efficiency and convenience. However, classifying short reports can be very challenging due to incomplete information, limited labeled data, and, in some cases, many domain-specific terms. To address these issues, we examine the use of causality, as represented by linguistic cause and effect, in models for short report classification. We propose two augmentations of a hierarchical graph attention network to represent latent causes and effects. We also investigate the effectiveness of using the pretrained language model SBERT versus more traditional tf-idf representations for reports with general and specialized vocabularies. Experiments on five public report datasets verify that including causality when modeling short report datasets with many domain-specific terms improves classification performance.
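To make the tf-idf baseline mentioned in the abstract concrete, the sketch below computes tf-idf weights for a few hypothetical short reports in pure Python. The example reports, the whitespace tokenizer, and the smoothed idf formula (as used by common libraries such as scikit-learn) are illustrative assumptions, not the paper's actual pipeline; the point is only that rare domain-specific terms receive higher weights than common ones.

```python
import math
from collections import Counter

def tfidf(reports):
    """Return one {term: weight} dict per report (illustrative sketch)."""
    # Naive whitespace tokenization; a real pipeline would normalize further.
    docs = [r.lower().split() for r in reports]
    n = len(docs)
    # Document frequency: number of reports containing each term.
    df = Counter(t for d in docs for t in set(d))
    vectors = []
    for d in docs:
        tf = Counter(d)
        # Smoothed idf = ln((1 + n) / (1 + df)) + 1 keeps weights positive
        # and finite even for terms that occur in every report.
        vectors.append({
            t: (c / len(d)) * (math.log((1 + n) / (1 + df[t])) + 1)
            for t, c in tf.items()
        })
    return vectors

# Hypothetical short incident reports with domain-specific terms.
reports = [
    "engine stalled on takeoff",
    "engine fire on landing",
    "customer likes the product",
]
vecs = tfidf(reports)
```

In the first report, the rarer domain term "stalled" outweighs the more common "engine", which is the behavior that makes tf-idf a reasonable baseline for specialized vocabularies.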
Submission Number: 14
