Abstract: Chain-of-thought (CoT) prompting instructs large language models (LLMs) to generate intermediate steps before producing the final answer, and has proven effective in helping LLMs solve complex reasoning tasks.
However, the inner mechanism of CoT remains largely unclear.
In this paper, we empirically study the role of CoT tokens in LLMs on two compositional tasks: multi-digit multiplication and dynamic programming.
While CoT is essential for solving these problems, we find that preserving only the tokens that store intermediate results achieves comparable performance.
Furthermore, we observe that storing intermediate results in an alternative latent form does not affect model performance.
We also randomly intervene on some values in the CoT, and observe that subsequent CoT tokens and the final answer change correspondingly.
These findings suggest that CoT tokens function like variables in computer programs, but with potential drawbacks such as unintended shortcuts and limits on the computational complexity between tokens.
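To make the "CoT tokens as program variables" analogy concrete, below is a minimal illustrative sketch (not the authors' code): multi-digit multiplication decomposed into per-digit partial products, where each intermediate value plays the role of a CoT token. The function name, decomposition, and the `intervene` argument are assumptions introduced for illustration; the intervention mirrors the abstract's experiment, in which overwriting an intermediate value changes downstream steps and the final answer.

```python
def multiply_with_cot(a: int, b: int, intervene: dict | None = None):
    """Multiply a by b digit-by-digit, recording intermediate results.

    `intervene` maps a step index to a value that overwrites the partial
    product at that step, mimicking an intervention on a CoT token:
    later steps read the overwritten value, so the final answer shifts.
    """
    intervene = intervene or {}
    cot = []     # intermediate "CoT tokens": one partial product per digit of b
    total = 0
    for i, digit in enumerate(str(b)[::-1]):   # least-significant digit first
        partial = a * int(digit) * 10 ** i     # partial product for this digit
        partial = intervene.get(i, partial)    # intervention overwrites the value
        cot.append(partial)
        total += partial                       # later computation reads it back
    return cot, total

# Unperturbed run: partial products sum to the correct answer.
print(multiply_with_cot(123, 45))            # ([615, 4920], 5535)

# Intervening on step 0 propagates to the final answer.
print(multiply_with_cot(123, 45, {0: 600}))  # ([600, 4920], 5520)
```

As in a program, the downstream computation depends only on the stored value of each intermediate result, not on how it was produced, which is consistent with the observation that an alternative latent form of the same value leaves performance unchanged.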
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: chain-of-thought
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 211