Information Fidelity in Tool-Using LLM Agents: A Martingale Analysis of the Model Context Protocol

Published: 19 Dec 2025, Last Modified: 05 Jan 2026. AAMAS 2026 Extended Abstract. License: CC BY 4.0
Keywords: Large Language Models, Model Context Protocol, Tool-Augmented Agents, Semantic Distortion, Martingale Concentration
TL;DR: Modeling MCP tool-use as a martingale, we prove sublinear distortion deviation and derive a periodic re-grounding rule; validated on Qwen2, Mistral, and Llama-3.
Abstract: As large language models (LLMs) increasingly integrate external tools via the Model Context Protocol (MCP), they face a new challenge in *Information Fidelity*: tool errors can accumulate across long interaction chains, threatening reliability in domains like finance and healthcare. We introduce the first theoretical framework for MCP-mediated, tool-augmented LLM agents, deriving high-probability bounds on cumulative semantic distortion by modeling interactions as a bounded-difference martingale. To achieve this, we develop a novel semantic distortion metric that combines discrete fact matching with continuous semantic similarity, and establish martingale concentration bounds that quantify how the deviation of cumulative distortion grows as $O(\sqrt{T})$ across sequential queries with exponentially decaying dependencies. Experiments with Qwen2-7B-Instruct, Mistral-7B-Instruct-v0.3, and Llama-3-8B-Instruct under MCP confirm sublinear deviation predicted by our framework. Together, these results provide both theoretical guarantees for system reliability and practical design principles that practitioners can apply to tool-using LLM agents.
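The abstract's two core ingredients can be sketched concretely. Below is a minimal, hypothetical illustration (not the paper's actual implementation): a toy distortion metric mixing a discrete fact-match score with a continuous similarity score via an assumed weight `lam`, and the standard Azuma-Hoeffding deviation bound for a bounded-difference martingale, which exhibits the claimed $O(\sqrt{T})$ growth.

```python
import math

def semantic_distortion(fact_match, cosine_sim, lam=0.5):
    """Hypothetical combined distortion metric: a weighted mix of
    discrete fact-matching error and continuous semantic dissimilarity.
    Both inputs are assumed to lie in [0, 1]; `lam` is an assumed weight."""
    return lam * (1.0 - fact_match) + (1.0 - lam) * (1.0 - cosine_sim)

def azuma_deviation_bound(T, c=1.0, delta=0.05):
    """Azuma-Hoeffding: for a martingale with differences bounded by c,
    with probability >= 1 - delta the deviation of the cumulative sum
    after T steps satisfies |S_T - E[S_T]| <= c * sqrt(2 T ln(2/delta))."""
    return c * math.sqrt(2.0 * T * math.log(2.0 / delta))

# Sublinear growth: doubling T scales the bound by sqrt(2), not 2.
ratio = azuma_deviation_bound(200) / azuma_deviation_bound(100)
print(ratio)  # ~ 1.414
```

The sqrt(2) ratio is what "sublinear deviation" means operationally: each doubling of the interaction horizon adds only a ~41% increase to the worst-case deviation, which is what motivates a periodic (rather than per-step) re-grounding rule.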
Area: Generative and Agentic AI (GAAI)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 624