Keywords: TRiSM, Agentic AI, Trust, Risk, and Security, LLM-based Multi-Agent Systems
TL;DR: TRiSM for Agentic AI
Abstract: Agentic AI systems built on large language models (LLMs) and multi-agent architectures are enabling unprecedented autonomy and collaboration, but also introduce unique risks in trustworthiness and security. This paper provides a concise review of Trust, Risk, and Security Management (TRiSM) for LLM-driven multi-agent systems. We outline the distinctive challenges of Agentic AI, where multiple LLM-based agents with tools and memory pursue complex goals, and motivate the need for robust governance. We then present the TRiSM framework adapted to Agentic AI, organized around five key pillars: Explainability, ModelOps, Application Security, Model Privacy, and Governance. A taxonomy of novel threats (e.g., prompt injection, collusive agent behavior, and memory poisoning) is summarized, highlighting how emergent risks arise from inter-agent interactions. To facilitate evaluation, we describe two new metrics: Component Synergy Score (CSS) and Tool Utilization Efficacy (TUE), which quantify inter-agent collaboration quality and effective tool use. Overall, adopting the TRiSM framework in LLM-based multi-agent systems is crucial to ensure these advanced AI agents remain safe, transparent, and accountable in high-stakes applications.
Submission Type: Position/Review Paper (4-9 Pages)
Submission Number: 23