Keywords: AI governance, multi-agent systems, hierarchical governance, Regulatory-Graph PSO, continuous compliance
TL;DR: Always-on AI governance: Hierarchical RG-PSO tunes autonomy–alignment (α) and stability (λ) to keep institutional agents ethically compliant under a fair, seed-controlled evaluation protocol.
Abstract: As universities adopt AI for high-stakes decisions, episodic audits fail to match deployment tempo. This work addresses the problem of governing autonomous AI agents that must maintain ethical compliance while adapting to institutional contexts. We contribute a fair-baseline evaluation protocol and a multi-layer governance simulator with compute/seed hygiene, realistic frictions, and actionable policy levers $(\lambda, \alpha)$. Departments act as autonomous agents within regulatory constraints, coordinated through hierarchical governance (departments $\rightarrow$ universities $\rightarrow$ countries). Our Regulatory-Graph PSO uses the parameter $\alpha$ to interpolate between local autonomy ($\alpha=0$) and global alignment ($\alpha=0.6$). The protocol ensures rigor through fixed iterations per scale, 30-seed replication (seeds 100–129), and statistical corrections (Holm–Bonferroni, bootstrap CIs). Key results: $\lambda \in [0.05, 0.3]$ controls policy stability; $\alpha \in [0.30, 0.35]$ achieves the best balance (fitness $=0.9946\pm0.0002$, Gini $=0.0010\pm0.0002$) across 390 experiments; adversarial detection varies (static gaming AUC $\approx 0.50$, manipulation/oscillation AUC $\approx 1.00$). The framework scales to 72-department hierarchies with concrete KPIs and an EU AI Act mapping.
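The autonomy–alignment trade-off described above can be illustrated with a minimal sketch of an $\alpha$-blended particle-swarm velocity update. This is an assumption-laden illustration, not the paper's actual RG-PSO implementation: the function name `rgpso_velocity` and the coefficients `w` and `c` are hypothetical placeholders; the only element taken from the abstract is that $\alpha=0$ weights the local (department-level) attractor exclusively, while larger $\alpha$ shifts pull toward the global (hierarchy-level) best.

```python
import numpy as np

def rgpso_velocity(v, x, p_local, g_global, alpha, w=0.7, c=1.5, rng=None):
    """Sketch of an alpha-blended PSO velocity update (illustrative only).

    alpha = 0   -> purely local attraction (department autonomy);
    alpha -> 1  -> weight shifts to the global best (hierarchy alignment).
    w is an inertia weight, c an acceleration coefficient; both hypothetical.
    """
    rng = rng or np.random.default_rng(0)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)  # stochastic scaling
    local_pull = c * r1 * (p_local - x)    # attraction to local best
    global_pull = c * r2 * (g_global - x)  # attraction to global best
    return w * v + (1 - alpha) * local_pull + alpha * global_pull
```

If a particle already sits at both the local and the global best, the update reduces to pure inertia, so a zero velocity stays zero for any $\alpha$; this makes the blend easy to sanity-check in isolation.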
Submission Number: 24