Compression and Abstraction Using Graph-Based Meaning Representations

Anonymous

17 Feb 2023 (modified: 05 May 2023) · ACL ARR 2023 February Blind Submission · Readers: Everyone
Abstract: Graph-based meaning representations are widely used in NLP, where their abstraction level is determined once by dataset curators. Humans, however, often use different levels of abstraction to adjust to different audience traits, such as age or expertise. We develop methods to automatically adjust the abstraction level of graph-based meaning representations to be more abstract or more granular. To obtain more abstract graphs, we develop an unsupervised pattern-finding and lossless graph-compression algorithm. We use this approach to compress the Process Execution Graph (PEG) dataset, and find semantically meaningful, cognitively plausible patterns, leading to improved parsing precision (at the cost of recall). Finally, we present a case study for making representations of procedural texts more granular. We employ macro expansion to produce a challenging text-to-code dataset over the PEG graphs, decomposing predicates into their granular implementations. Taken together, we hope that this work will spur future research into better-suited abstraction levels for different settings and scenarios.
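To make the compression idea concrete, below is a minimal, self-contained sketch; it is not the paper's algorithm, and the names `bigram_occurrences`, `compress`, `expand`, and the triple-based edge encoding are illustrative assumptions. It greedily replaces the most frequent repeated label pattern (a length-2 labeled path) with a single "macro" edge and logs each substitution, so that expansion restores the original graph exactly, i.e. the compression is lossless. The `expand` function also mirrors, in miniature, the macro-expansion direction used for the granular case study.

```python
# Illustrative sketch of lossless graph compression by macro substitution.
# Graphs are sets of directed labeled edges: (source, label, target).
# NOTE: hypothetical names and encoding, not the paper's implementation.

def bigram_occurrences(edges):
    """All length-2 paths u -a-> v -b-> w, grouped by their label pair (a, b)."""
    occ = {}
    for e1 in edges:
        u, a, v = e1
        for e2 in edges:
            v2, b, w = e2
            if v2 == v and e1 != e2:
                occ.setdefault((a, b), []).append((e1, e2))
    return occ

def compress(edges, min_count=2):
    """Greedily replace the most frequent label bigram with a macro edge.
    Every substitution is logged so expand() can restore the graph exactly."""
    edges = set(edges)
    log = []  # (macro_edge, (original_edge_1, original_edge_2)) records
    while True:
        occ = bigram_occurrences(edges)
        if not occ:
            break
        (a, b), paths = max(occ.items(), key=lambda kv: len(kv[1]))
        if len(paths) < min_count:
            break
        macro = f"<{a}+{b}>"
        for e1, e2 in paths:
            if e1 in edges and e2 in edges:  # skip occurrences consumed earlier
                u = e1[0]
                w = e2[2]
                m = (u, macro, w)
                edges -= {e1, e2}
                edges.add(m)
                log.append((m, (e1, e2)))
    return edges, log

def expand(edges, log):
    """Undo substitutions in reverse order, restoring the original graph."""
    edges = set(edges)
    for m, (e1, e2) in reversed(log):
        edges.discard(m)
        edges.update({e1, e2})
    return edges
```

A round trip on a toy procedural graph (hypothetical labels, loosely PEG-flavored) shows the lossless property:

```python
peg_like = {
    ("mix", "arg", "bowl"), ("bowl", "loc", "table"),
    ("pour", "arg", "cup"), ("cup", "loc", "table"),
}
small, log = compress(peg_like)   # two arg/loc paths collapse to macro edges
assert expand(small, log) == peg_like  # lossless round trip
```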
Paper Type: long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics