Keywords: Prefix Conditioning, Prefix Tuning, Supervised Fine-Tuning, Reasoning Models, Token-Level Loss, Model Evaluation
Abstract: Recent alignment studies commonly remove introductory boilerplate phrases from supervised fine-tuning (SFT) datasets. This work challenges that practice. We hypothesize that safety- and reasoning-oriented prefix sentences serve as lightweight alignment signals that can guide model decoding toward safer and more coherent responses. To test this, we fine-tune three R1-series models on three core capabilities: reasoning (mathematics, coding), safety, and factuality, systematically varying the prefix inclusion rate from 0% to 100%.
Results show that prefix-conditioned SFT improves both safety and reasoning performance, yielding up to +6% higher Safe@1 accuracy on adversarial benchmarks (WildJailbreak, StrongReject) and a +7% improvement on GSM8K. However, factuality and coding tasks show marginal or negative effects, indicating that the prefix-induced narrowing of the search space primarily benefits structured reasoning. Token-level loss analysis further reveals that prefix tokens such as “revised” and “logically” incur higher gradient magnitudes, acting as alignment anchors that stabilize reasoning trajectories. Our findings suggest that prefix conditioning offers a scalable and interpretable mechanism for improving reasoning safety, serving as an implicit form of alignment that complements traditional reward-based methods.
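For concreteness, below is a minimal sketch of the prefix-conditioning data construction described in the abstract, assuming a simple prompt/response SFT record format. The prefix pool, field names, and sampling scheme are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import random

# Hypothetical prefix pool: the paper's actual prefix sentences are not
# given here, so these are illustrative placeholders.
PREFIXES = [
    "Let me reason through this logically. ",
    "Here is a revised, safety-checked answer. ",
]

def apply_prefix_conditioning(examples, inclusion_rate, seed=0):
    """Prepend an alignment prefix to each target response with
    probability `inclusion_rate` (the 0%-100% knob varied in the paper).

    `examples` is assumed to be a list of {"prompt": ..., "response": ...}
    dicts; the real SFT data format may differ.
    """
    rng = random.Random(seed)
    conditioned = []
    for ex in examples:
        response = ex["response"]
        if rng.random() < inclusion_rate:
            # Sample one prefix sentence and prepend it to the target,
            # so the loss is also computed over the prefix tokens.
            response = rng.choice(PREFIXES) + response
        conditioned.append({"prompt": ex["prompt"], "response": response})
    return conditioned

# e.g. a 50% inclusion-rate variant of a training set:
# train_50 = apply_prefix_conditioning(train_examples, inclusion_rate=0.5)
```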
Paper Type: Long
Research Area: Language Models
Research Area Keywords: chain-of-thought, fine-tuning, safety and alignment, prompting, robustness, security and privacy
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 3753