AgentTrace: A Structured Logging Framework for Agent System Observability

Published: 10 Jan 2026, Last Modified: 10 Jan 2026
Venue: LaMAS 2026 Poster
License: CC BY 4.0
Keywords: Large Language Model Agents, Structured Logging, Agent Observability, Cognitive Traceability, Multi-agent systems, Trust and Accountability, Safety and Governance, Agent Evaluation, AI Transparency
TL;DR: AgentTrace is a structured observability framework that captures cognitive, operational, and contextual traces of LLM agents for transparency, safety, and accountability.
Abstract: Despite the growing capabilities of autonomous agents powered by large language models (LLMs), their adoption in high-stakes domains remains limited. A key barrier is security assurance: the inherently nondeterministic behavior of LLM agents defies the static auditing approaches that have historically underpinned software assurance. Existing security methods, such as proxy-level input filtering and model glassboxing, fail to provide sufficient transparency and traceability into agent reasoning, state changes, and environmental interactions. In this work, we introduce AgentTrace, a dynamic observability and telemetry framework designed to fill this gap. AgentTrace instruments agents at runtime with minimal overhead, capturing a rich stream of structured logs across three surfaces: operational, cognitive, and contextual. Unlike traditional logging systems, AgentTrace emphasizes continuous, introspectable trace capture, designed not only for debugging and benchmarking but as a foundational layer for agent security, accountability, and real-time monitoring. Our research highlights how AgentTrace can enable more reliable agent deployment, fine-grained risk analysis, and informed trust calibration, thereby addressing critical concerns that have so far limited the use of LLM agents in sensitive environments.
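The abstract does not spell out a log schema, but a minimal sketch of what a structured trace record spanning the three surfaces (operational, cognitive, contextual) might look like is given below. Every name in the sketch (Surface, TraceEvent, emit, the field names, the output file) is an illustrative assumption, not AgentTrace's actual API.

```python
# Illustrative sketch only: AgentTrace's real schema and API are not described in
# the abstract, so every name below (Surface, TraceEvent, emit, ...) is hypothetical.
import json
import time
import uuid
from dataclasses import dataclass, field, asdict
from enum import Enum


class Surface(str, Enum):
    OPERATIONAL = "operational"   # tool calls, latencies, errors
    COGNITIVE = "cognitive"       # reasoning steps, plans, self-critique
    CONTEXTUAL = "contextual"     # environment state, inputs, permissions


@dataclass
class TraceEvent:
    surface: Surface
    event_type: str               # e.g. "tool_call", "plan_step", "env_snapshot"
    payload: dict
    agent_id: str
    session_id: str
    event_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)

    def to_json(self) -> str:
        # Serialize the record as one JSON object per line (JSON Lines).
        return json.dumps(asdict(self), default=str)


def emit(event: TraceEvent, path: str = "agent_trace.jsonl") -> None:
    """Append one structured record to a JSON-lines trace stream."""
    with open(path, "a") as sink:
        sink.write(event.to_json() + "\n")


if __name__ == "__main__":
    # One event per surface for a single hypothetical agent step.
    emit(TraceEvent(Surface.COGNITIVE, "plan_step",
                    {"thought": "Search docs before answering"},
                    agent_id="agent-1", session_id="sess-42"))
    emit(TraceEvent(Surface.OPERATIONAL, "tool_call",
                    {"tool": "web_search", "latency_ms": 182, "status": "ok"},
                    agent_id="agent-1", session_id="sess-42"))
    emit(TraceEvent(Surface.CONTEXTUAL, "env_snapshot",
                    {"cwd": "/workspace", "permissions": ["read_only"]},
                    agent_id="agent-1", session_id="sess-42"))
```

Keeping each record flat and append-only, as in this sketch, is one plausible way to support the continuous, introspectable trace capture the abstract describes, since downstream monitors can consume the stream without reconstructing agent state.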
Submission Number: 50