Keywords: Figurative expression, Topological data analysis, Attention analysis
Abstract: Figurative expressions remain challenging for language models, which often default to literal interpretations rather than capturing implicit meaning. This vulnerability hinders the understanding of everyday dialogue and increases exposure to adversarial prompts that exploit figurative or indirect phrasing. We integrate a topology-based algorithm into encoder-only architectures to strengthen signals relevant to figurative meaning and observe consistent improvements across multiple benchmarks. We further propose SATS, which achieves low latency and matches or exceeds most open-source LLMs while using 9.6× fewer parameters (within 0.8%p of Qwen3). Our approach is lightweight and model-agnostic, and complements instruction-tuned LLMs by improving robustness in detecting and interpreting figurative and implicit meaning.
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 4380