Keywords: Decentralised trust, transparency logs, data provenance, AI auditing, verifiable credentials, web governance
TL;DR: This paper introduces a decentralised trust layer that makes AI-powered web platforms transparently verifiable.
Abstract: Artificial Intelligence (AI) increasingly mediates web experiences—ranking, recommendation, advertising, and moderation—yet independent verification of AI behaviour and data provenance remains rare. We present a Decentralised Trust Layer (DTL) that turns transparency from policy into protocol. DTL anchors provenance claims, binds deployed model versions to signed metadata, and produces privacy-preserving inference receipts in an append-only transparency log inspired by certificate transparency. We formalise a threat model, provide a deployable protocol suite (provenance anchoring, model lineage registry, inference transparency, and decentralised audit sampling), and specify a fully reproducible evaluation protocol. To remain faithful to what can be executed without proprietary dependencies, our measurements focus on cryptographic and logging overheads and proof-verification costs; results indicate microsecond-scale per-request costs for receipt generation and logarithmic proof sizes consistent with transparency-log theory. We discuss governance, privacy, and regulatory alignment and outline how DTL can be integrated into existing web standards and platform architectures.
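As a rough illustration of the transparency-log mechanics the abstract refers to, here is a minimal Python sketch (all names, such as `TransparencyLog` and `verify_inclusion`, are hypothetical and not from the paper): it anchors receipt digests in an RFC 6962-style Merkle tree and verifies an inclusion proof whose size grows logarithmically with the log length. Receipt signing, the lineage registry, and audit sampling are omitted, and the tree is assumed to have a power-of-two number of leaves for brevity.

```python
# Minimal append-only Merkle log sketch (hypothetical; not the paper's
# implementation). Illustrates per-receipt hashing cost and O(log n)
# inclusion-proof sizes, as in certificate-transparency-style logs.
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # RFC 6962-style domain separation: 0x00 prefix for leaf hashes.
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # 0x01 prefix for interior-node hashes.
    return hashlib.sha256(b"\x01" + left + right).digest()

class TransparencyLog:
    """Append-only log over receipt digests; power-of-two sizes assumed."""

    def __init__(self):
        self.leaves: list[bytes] = []

    def append(self, receipt: bytes) -> int:
        """Anchor a receipt digest; returns its leaf index."""
        self.leaves.append(leaf_hash(receipt))
        return len(self.leaves) - 1

    def root(self) -> bytes:
        """Fold the leaves pairwise up to the Merkle root."""
        level = list(self.leaves)
        while len(level) > 1:
            level = [node_hash(level[i], level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    def inclusion_proof(self, index: int) -> list[bytes]:
        """Audit path: one sibling hash per tree level, hence O(log n) size."""
        proof, level, i = [], list(self.leaves), index
        while len(level) > 1:
            proof.append(level[i ^ 1])  # sibling at this level
            level = [node_hash(level[j], level[j + 1])
                     for j in range(0, len(level), 2)]
            i //= 2
        return proof

def verify_inclusion(receipt: bytes, index: int,
                     proof: list[bytes], root: bytes) -> bool:
    """Recompute the root from the leaf and its audit path."""
    h, i = leaf_hash(receipt), index
    for sibling in proof:
        h = node_hash(h, sibling) if i % 2 == 0 else node_hash(sibling, h)
        i //= 2
    return h == root

if __name__ == "__main__":
    log = TransparencyLog()
    for n in range(8):  # eight privacy-preserving receipt digests
        log.append(f"receipt-{n}".encode())
    proof = log.inclusion_proof(5)
    print(len(proof))  # 3 == log2(8): proof size grows logarithmically
    print(verify_inclusion(b"receipt-5", 5, proof, log.root()))  # True
```

In this sketch a verifier needs only the receipt, its index, the audit path, and a trusted root, which is what lets auditing stay decentralised: no party has to replay or even hold the full log.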
Submission Number: 8