COFT: Counterfactual–Conformal Decoding for Fair Chain‑of‑Thought Reasoning in Large Language Models
Keywords: LLM, Reasoning, Chain-of-Thought, Fairness, Bias, Counterfactual, Conformal, Decoding
Abstract: Large language models (LLMs) can reveal and amplify societal biases during chain-of-thought (CoT) generation. We present COFT (Chain of Fair Thought), a training-free decoding method that provides instance-level fairness control with statistical guarantees for any frozen causal language model.
COFT operates in three stages. First, it creates a masked counterfactual prompt by replacing sensitive spans with neutral tokens. Second, it compares the factual and masked logit distributions through lightweight logit fusion to attenuate attribute-driven biases. Third, it uses dual-branch split-conformal calibration to certify per-step candidate token sets at a user-chosen risk level.
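The three stages can be illustrated with a minimal pure-Python sketch. Everything here is an assumption for exposition, not the paper's implementation: the function names, the fusion weight `lam`, and the use of `1 - p` as the nonconformity score are all illustrative choices.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_logits(factual, masked, lam=0.5):
    # Stage 2 (sketch): convex combination of the factual logits and the
    # logits from the masked counterfactual prompt. `lam` is a hypothetical
    # fusion weight, not a value from the paper.
    return [(1 - lam) * f + lam * m for f, m in zip(factual, masked)]

def conformal_threshold(cal_scores, risk=0.1):
    # Stage 3 (sketch): split-conformal quantile of held-out nonconformity
    # scores at user-chosen risk level; ceil((n+1)(1-risk))-th order statistic.
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - risk))
    return sorted(cal_scores)[min(k, n) - 1]

def candidate_set(fused_logits, q):
    # Per-step certified candidate set: keep tokens whose nonconformity
    # score (here, 1 - probability) does not exceed the threshold q.
    probs = softmax(fused_logits)
    return [i for i, p in enumerate(probs) if 1 - p <= q]

# Toy example: three vocabulary tokens, opposed factual/masked preferences.
fused = fuse_logits([2.0, 1.0, 0.0], [0.0, 1.0, 2.0], lam=0.5)
q = conformal_threshold([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], risk=0.1)
tokens = candidate_set(fused, q)
```

In an actual decoder, `factual` and `masked` would be the model's next-token logits under the original and masked prompts (one extra forward pass, matching the overhead figure reported below), and the calibration scores would come from a held-out split for each branch.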
We evaluate COFT across six models and multiple bias benchmarks. Our method reduces standard bias metrics by 30–55% (median 38%) while preserving task utility and language quality. Reasoning accuracies remain unchanged within run-to-run noise margins. The computational overhead is modest, equivalent to one additional forward pass (≤11%).
COFT offers a clear, auditable path to safer CoT generation with significant bias reduction, negligible utility loss, and no requirement for retraining, auxiliary classifiers, or weight access.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 20857