Keywords: soft prompts, parameter-efficient fine-tuning, large language models
Abstract: Soft prompt tuning achieves strong performance on few-shot tasks, but it lacks interpretability, and conventional prompt tuning methods neither analyze the internal structure of soft prompts nor optimize from that perspective. To address this limitation, we propose a topology-aware optimization method that focuses on the internal structure of soft prompts. Using persistent homology from topological data analysis (TDA), we characterize how the structure of soft prompts evolves during training and find that changes in connectivity persistence and redundancy affect tuning performance. When structural connectivity and persistent homology entropy converge together, soft prompts more readily guide models to output correct reasoning chains. Building on this observation, we develop a new loss function grounded in topological structure analysis, the TDA for Softprompt Loss Function (TSLoss), which uses topological measures from TDA to quantify connectivity and redundancy between semantic units and steers the prompt's topological structure toward stability. Extensive experiments show that TSLoss significantly accelerates the convergence of prompt tuning, outperforms traditional prompt tuning methods, and offers a new, interpretable perspective on soft prompt tuning.
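The abstract does not spell out TSLoss's exact formulation, so the following is only a minimal, illustrative sketch (not the authors' implementation) of how a persistence-entropy term over the soft prompt embeddings could be computed and added to the task loss. It assumes H0 bars from a Vietoris-Rips filtration, whose finite death times equal the minimum spanning tree edge lengths of the point cloud; the names `h0_persistence_entropy`, `tsloss_sketch`, and the weight `lam` are hypothetical.

```python
import torch
from scipy.sparse.csgraph import minimum_spanning_tree


def h0_persistence_entropy(prompt: torch.Tensor) -> torch.Tensor:
    """prompt: (num_virtual_tokens, hidden_dim) soft prompt embedding matrix."""
    dist = torch.cdist(prompt, prompt)  # pairwise Euclidean distances
    # Finite H0 death times of a Vietoris-Rips filtration equal the MST edge lengths.
    # The pairing (which edges belong to the MST) is non-differentiable; gradients
    # flow through the distance values themselves.
    mst = minimum_spanning_tree(dist.detach().cpu().numpy())
    rows, cols = mst.nonzero()
    idx_r = torch.as_tensor(rows, dtype=torch.long, device=prompt.device)
    idx_c = torch.as_tensor(cols, dtype=torch.long, device=prompt.device)
    bars = dist[idx_r, idx_c]                 # H0 bar lengths
    p = bars / (bars.sum() + 1e-12)           # normalized bar lengths
    return -(p * torch.log(p + 1e-12)).sum()  # persistence entropy


def tsloss_sketch(task_loss: torch.Tensor, prompt: torch.Tensor,
                  lam: float = 0.1) -> torch.Tensor:
    """Hypothetical combination: task loss plus a topology-aware entropy term."""
    return task_loss + lam * h0_persistence_entropy(prompt)
```

In this reading, the entropy term summarizes how evenly connectivity persistence is distributed across the prompt tokens; how the paper actually weights or schedules such a term is not stated in the abstract.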
Primary Area: foundation or frontier models, including LLMs
Submission Number: 24762