Measuring Trust and Automatic Verification in Multi-Agent Systems

Published: 01 Jan 2021, Last Modified: 06 May 2023, DSA 2021
Abstract: Due to the shortage of resources and services, agents often compete with one another, and excessive competition can lead to a social dilemma. With the aim of breaking such social dilemmas, we present a novel trust-based logic framework, Trust Computation Logic (TCL), which provides a measurement method for finding the best partners to collaborate with and automatically verifies trust in Multi-Agent Systems (MASs). TCL starts by defining trust states in Multi-Agent Systems, based on the contrast between behavior recorded in a trust behavior library and observed behavior. In particular, a set of reasoning postulates, together with formal proofs, is put forward to support our measurement process. Moreover, we introduce symbolic model checking algorithms to formally and automatically verify the system. Finally, the trust measurement method and the reported experimental results are evaluated using DeepMind's Sequential Social Dilemma (SSD) multi-agent game-theoretic environments.
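The abstract describes trust states as derived from contrasting an agent's observed behavior with entries in a trust behavior library. Below is a minimal, hypothetical Python sketch of that comparison idea; the names (TrustLibrary, trust_score) and the frequency-based scoring are illustrative assumptions, not the paper's actual TCL semantics or model checking procedure.

```python
from collections import Counter

# Hypothetical sketch: score an agent's trustworthiness by comparing its
# observed action trace against behaviors recorded in a trust behavior
# library. Illustrative assumption only; not the paper's TCL definition.

class TrustLibrary:
    def __init__(self, trusted_behaviors):
        # trusted_behaviors: actions considered trustworthy,
        # e.g. {"cooperate", "share"} in a sequential social dilemma.
        self.trusted = set(trusted_behaviors)

    def trust_score(self, observed_actions):
        """Fraction of observed actions that match the trust library."""
        if not observed_actions:
            return 0.0
        counts = Counter(observed_actions)
        trusted_count = sum(c for a, c in counts.items() if a in self.trusted)
        return trusted_count / len(observed_actions)


if __name__ == "__main__":
    library = TrustLibrary({"cooperate", "share"})
    # Observed trace of a potential partner agent in a social-dilemma game.
    trace = ["cooperate", "defect", "share", "cooperate"]
    print(f"trust score: {library.trust_score(trace):.2f}")  # prints 0.75
```

In this sketch, a higher score would mark an agent as a better candidate partner for collaboration; the paper itself formalizes this judgment through TCL postulates and verifies it with symbolic model checking.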