The Triad of Identity, Trust and Responsibility in Multi-Agent Systems

Published: 19 Dec 2025, Last Modified: 05 Jan 2026. AAMAS 2026 Full Paper. License: CC BY 4.0
Keywords: Identity, Coordination, Responsibility, Trust, Ethics, Normative Behaviour
Abstract: The design of autonomous AI agents that behave responsibly and foster trust in open multi-agent systems remains a fundamental challenge. Traditional game-theoretic approaches largely assume self-interested behaviour, yet real-world collaboration among humans often relies on prosocial considerations that extend beyond individual utility. To address this, we investigate, for the first time, the triad of identity, responsibility, and trust as core elements shaping responsible multi-agent behaviour. We propose a novel agent model, building on the notion of Computational Transcendence, which equips agents with an elastic sense of identity, enabling them to incorporate the welfare of others into their decision-making. Our framework integrates subjective (identity-based) and objective (experience-based and reputation-based) components of trust, thereby linking individual responsibility with cooperative behaviour in repeated interactions. Using Iterated Prisoner's Dilemma (IPD) simulations on different network structures, we analyse how varying levels of identity and trust affect responsible behaviour. Results demonstrate that the interplay of these three concepts can promote emergent responsibility, mitigate exploitation, and sustain long-term cooperation in dynamic multi-agent environments. We argue that this triadic perspective provides a principled foundation for designing trustworthy, responsible, and identity- and value-aware agents, with implications for future human–AI collaboration.
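The "elastic identity" idea from the abstract can be illustrated with a minimal sketch: an agent's effective utility blends its own IPD payoff with its partner's, weighted by an elasticity parameter. The payoff values and the decision rule below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: one IPD decision under an "elastic identity"
# (inspired by Computational Transcendence). All numbers and the
# best-response rule are assumptions for illustration.

# Standard IPD payoff matrix: (my_payoff, partner_payoff), indexed by
# (my_move, partner_move); "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward)
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment)
}

def effective_utility(my_move, partner_move, elasticity):
    """Own payoff plus the partner's payoff scaled by identity elasticity
    (0 = purely self-interested, 1 = fully inclusive of the other)."""
    mine, theirs = PAYOFFS[(my_move, partner_move)]
    return mine + elasticity * theirs

def best_response(expected_partner_move, elasticity):
    """Move that maximises effective utility against the expected move."""
    return max(("C", "D"),
               key=lambda m: effective_utility(m, expected_partner_move, elasticity))

# A purely self-interested agent defects against an expected cooperator
# (5 > 3), while a sufficiently "transcendent" one cooperates
# (3 + 0.8*3 = 5.4 > 5 + 0.8*0 = 5).
print(best_response("C", elasticity=0.0))  # → D
print(best_response("C", elasticity=0.8))  # → C
```

In this toy setting, cooperation against an expected cooperator becomes the best response once the elasticity exceeds 2/3, which is the kind of threshold behaviour the IPD simulations on networks would probe.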
Area: Coordination, Organisations, Institutions, Norms and Ethics (COINE)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 743