Abstract: Commercial LLM services often conceal internal reasoning traces while still charging users for every generated token, including those from hidden intermediate steps, raising concerns of token inflation and potential overbilling. This gap underscores the urgent need for reliable token auditing, yet achieving it is far from straightforward: cryptographic verification (e.g., hash-based signatures) offers little assurance when providers control the entire execution pipeline, while user-side prediction struggles with the inherent variance of reasoning LLMs, whose token usage fluctuates across domains and prompt styles.
To bridge this gap, we present PALACE (Predictive Auditing of LLM APIs via Reasoning Token Count Estimation), a user-side framework that estimates hidden reasoning token counts from prompt–answer pairs without access to internal traces. PALACE introduces a GRPO-augmented adaptation module with a lightweight domain router, enabling dynamic calibration across diverse reasoning tasks and mitigating variance in token usage patterns. Experiments on math, coding, medical, and general reasoning benchmarks show that PALACE achieves low relative error and strong prediction accuracy, supporting both fine-grained cost auditing and inflation detection. Taken together, PALACE represents an important first step toward standardized predictive auditing, offering a practical path to greater transparency, accountability, and user trust.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Language Modeling, Interpretability and Analysis of Models for NLP, Resources and Evaluation
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Discussed in limitations.
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Discussed in experimental settings.
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: Discussed in experimental settings.
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Discussed in experimental settings.
B4 Data Contains Personally Identifying Info Or Offensive Content: Yes
B4 Elaboration: Discussed in experimental settings.
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Discussed in experimental settings.
B6 Statistics For Data: Yes
B6 Elaboration: Discussed in experimental settings.
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Discussed in experimental settings.
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Discussed in experimental settings and appendix.
C3 Descriptive Statistics: Yes
C3 Elaboration: Discussed in experimental settings and appendix.
C4 Parameters For Packages: Yes
C4 Elaboration: Discussed in experimental settings and appendix.
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: Yes
D4 Elaboration: Discussed in experimental settings and appendix.
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: Yes
E1 Elaboration: AI assistants were used to assist with the writing.
Author Submission Checklist: Yes
Submission Number: 388