SAFT: Structure-Aware Fine-Tuning of LLMs for AMR-to-Text Generation

08 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Abstract Meaning Representation, Graph-to-Text Generation, Large Language Models, Magnetic Laplacian, Fine-Tuning, Graph Positional Encodings
TL;DR: We introduce a structure-aware fine-tuning method for AMR-to-text generation that injects graph topology into LLMs using Magnetic Laplacian-based positional encodings, achieving new state-of-the-art results on AMR 3.0.
Abstract: Large Language Models (LLMs) are increasingly applied to tasks involving structured inputs such as Abstract Meaning Representations (AMRs). However, common approaches either linearize graphs, discarding crucial structural cues, or rely on specialized architectures that are incompatible with standard pretrained LLMs. We present SAFT, a structure-aware fine-tuning method that augments LLMs with graph-sensitive positional encodings derived from the magnetic Laplacian of AMRs. These encodings are projected into the LLM embedding space, introducing relational inductive bias without modifying the model architecture. Although SAFT is designed to apply broadly to tasks with graph-structured inputs, we demonstrate its effectiveness on AMR-to-text generation, where it establishes a new state of the art on AMR 3.0 with a +3.5 BLEU improvement over prior baselines. Gains grow with graph complexity, highlighting the value of structure-aware representations for enhancing LLM performance.
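The abstract describes computing positional encodings from the magnetic Laplacian of an AMR graph and projecting them into the LLM embedding space. Below is a minimal, hypothetical sketch of that idea; the phase parameter `q`, the number of eigenvectors `k`, and the projection dimension `llm_dim` are illustrative assumptions, not the paper's reported settings.

```python
# Hedged sketch: magnetic-Laplacian positional encodings for a small directed
# graph (e.g. an AMR skeleton), projected into an LLM embedding space.
import numpy as np
import torch

def magnetic_laplacian_pe(adj: np.ndarray, q: float = 0.25, k: int = 8) -> np.ndarray:
    """Return eigenvector-based positional encodings (real and imaginary parts)."""
    a_sym = (adj + adj.T) / 2.0                      # symmetrized adjacency
    deg = np.diag(a_sym.sum(axis=1))                 # symmetrized degree matrix
    theta = 2.0 * np.pi * q * (adj - adj.T)          # phase term encodes edge direction
    lap = deg - a_sym * np.exp(1j * theta)           # Hermitian magnetic Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)           # real eigenvalues, complex eigenvectors
    vecs = eigvecs[:, :k]                            # k lowest-frequency modes
    return np.concatenate([vecs.real, vecs.imag], axis=1)

# Toy 4-node directed graph standing in for an AMR.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)

pe = torch.tensor(magnetic_laplacian_pe(adj, k=4), dtype=torch.float32)  # (4, 8)

# Project the encodings into the LLM embedding space; they would then be added
# to the token embeddings of the corresponding graph tokens. `llm_dim` is assumed.
llm_dim = 4096
proj = torch.nn.Linear(pe.shape[1], llm_dim)
structural_bias = proj(pe)                           # (4, llm_dim)
```

Because the magnetic Laplacian is Hermitian, its eigenvalues are real even though the graph is directed, and the complex phase preserves edge directionality that a plain symmetric Laplacian would discard; this is what makes it a natural fit for directed AMR graphs.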
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 3152