Keywords: Large Language Models, Reasoning Models, Distillation Data Detection, Reasoning Distillation
TL;DR: We first present the problem of distillation data detection and highlight its unique challenge: the partial availability of distillation data. We then propose Token Probability Deviation, a novel and effective method for detecting distillation data.
Abstract: Reasoning distillation has emerged as an efficient and powerful paradigm for enhancing the reasoning capabilities of large language models. However, reasoning distillation may inadvertently cause benchmark contamination, where evaluation data included in distillation datasets can inflate the performance metrics of distilled models. In this work, we formally define the task of distillation data detection, which is uniquely challenging due to the partial availability of distillation data. We then propose $\textit{Token Probability Deviation}~(\textit{TBD})$, a novel and effective method that leverages the probability patterns of the generated $\textit{output}$ tokens. Our method is motivated by the observation that distilled models tend to generate near-deterministic tokens for seen questions, while often producing more low-probability tokens for unseen questions. The key idea behind TBD is to quantify how far the generated tokens' probabilities deviate from a high reference probability. In effect, our method achieves competitive detection performance by producing lower scores for seen questions than for unseen questions. Extensive experiments demonstrate the effectiveness of our method, achieving an AUC of 0.918 and a TPR@1\% FPR of 0.470 on the S1 dataset.
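The abstract describes scoring a question by how far the distilled model's generated token probabilities fall below a high reference probability. The following is a minimal sketch of that idea, not the paper's exact formulation: the function name `tbd_score`, the reference value `0.95`, the clipping of deviations at zero, and the mean aggregation are all assumptions made for illustration.

```python
import torch


def tbd_score(token_probs: torch.Tensor, reference_prob: float = 0.95) -> float:
    """Sketch of a Token Probability Deviation (TBD)-style score.

    token_probs: 1-D tensor of probabilities the distilled model assigned
        to each token of its own generated answer for a given question.
    reference_prob: high reference probability that near-deterministic
        tokens are expected to approach (value assumed for illustration).

    Returns the mean deviation below the reference probability. Seen
    questions should yield mostly near-deterministic tokens and thus a low
    score; unseen questions should yield more low-probability tokens and
    thus a higher score.
    """
    # Only deviations below the reference contribute; tokens at or above
    # the reference count as zero deviation (an assumed design choice).
    deviation = torch.clamp(reference_prob - token_probs, min=0.0)
    return deviation.mean().item()


# Hypothetical per-token probabilities gathered from a greedy decode.
probs_seen = torch.tensor([0.99, 0.98, 0.97, 0.99])    # near-deterministic
probs_unseen = torch.tensor([0.99, 0.42, 0.88, 0.31])  # more low-probability tokens

print(tbd_score(probs_seen))    # low score  -> likely in the distillation data
print(tbd_score(probs_unseen))  # high score -> likely unseen
```

In practice, the score would be thresholded (or used directly) to separate seen from unseen questions, consistent with the abstract's claim that seen questions receive lower scores than unseen ones.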
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 9008