Beyond Raw Detection Scores: Markov-Informed Calibration for Boosting Machine-Generated Text Detection
Keywords: Machine-generated Text Detection, Markov-aware Calibration, Raw Detection Score
TL;DR: Markov-Informed Calibration for Boosting Machine-Generated Text Detection
Abstract: While machine-generated texts (MGTs) offer great convenience, they also pose risks such as disinformation and phishing, highlighting the need for reliable detection. Metric-based methods, which extract statistically distinguishable features of MGTs, are often more practical than complex model-based methods that are prone to overfitting. Given their diverse designs, we first place representative metric-based methods within a unified framework, enabling a clear assessment of their advantages and limitations. Our analysis identifies a core challenge shared by these methods: the token-level detection score is easily biased by the inherent randomness of the MGT generation process. To address this, we theoretically and empirically reveal two relationships among context detection scores that can aid calibration: Neighbor Similarity and Initial Instability. We then propose a Markov-informed score calibration strategy that models these relationships using Markov random fields and implement it as a lightweight component via a mean-field approximation, allowing our method to be seamlessly integrated into existing detectors. Extensive experiments across various real-world scenarios, such as cross-LLM detection and paraphrasing attacks, demonstrate significant gains over baselines with negligible computational overhead. The code is available at \url{https://anonymous.4open.science/r/MRF-Enhance}.
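The calibration idea in the abstract can be illustrated with a minimal sketch: treat per-token detection scores as unary potentials in a chain-structured Markov random field, run a few mean-field fixed-point updates so neighboring tokens pull their beliefs toward agreement (Neighbor Similarity), and down-weight the first few tokens when pooling to a document-level score (Initial Instability). The function name, the `coupling` and `burn_in` parameters, and the log-odds reading of the raw scores are all illustrative assumptions, not the paper's actual formulation.

```python
import math

def mean_field_calibrate(raw_scores, coupling=0.5, burn_in=5, n_iters=20):
    """Smooth token-level detection scores with a chain-MRF mean-field pass.

    This is a hypothetical sketch, not the paper's implementation.

    raw_scores : per-token detection scores (higher = more machine-like),
                 treated here as unary log-odds (an assumption).
    coupling   : pairwise strength pulling neighboring beliefs together
                 (illustrating "Neighbor Similarity").
    burn_in    : number of leading tokens to down-weight when pooling
                 (illustrating "Initial Instability").
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    q = [sigmoid(s) for s in raw_scores]          # initial per-token marginals
    for _ in range(n_iters):                      # mean-field fixed-point updates
        new_q = []
        for i, s in enumerate(raw_scores):
            neigh = 0.0
            if i > 0:
                neigh += 2.0 * q[i - 1] - 1.0     # left-neighbor belief in [-1, 1]
            if i < len(q) - 1:
                neigh += 2.0 * q[i + 1] - 1.0     # right-neighbor belief in [-1, 1]
            new_q.append(sigmoid(s + coupling * neigh))
        q = new_q
    # down-weight the unstable initial tokens when pooling to a document score
    weights = [min(1.0, (i + 1) / burn_in) for i in range(len(q))]
    return sum(w * p for w, p in zip(weights, q)) / sum(weights)
```

Because each update only touches a token's two neighbors, the pass is linear in sequence length, which is consistent with the abstract's claim of negligible computational overhead.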
Primary Area: foundation or frontier models, including LLMs
Submission Number: 16501