Abstract: The proliferation of agentic artificial intelligence has outpaced the conceptual tools needed to characterize agency in computational systems. Prevailing definitions rely mainly on autonomy and goal-directedness. Here, we argue for a minimal notion of agency that is open to principled
inspection, given three criteria: intentionality as action grounded in beliefs and desires, rationality as normatively coherent action entailed by
a world model, and explainability as action causally traceable to internal states; we subsequently instantiate these as a partially observable
Markov decision process under a variational framework wherein posterior beliefs, prior preferences, and the minimization of expected free energy jointly constitute an agentic action chain. Using a canonical T-maze
paradigm, we show how empowerment, formulated as the channel capacity between actions and anticipated observations, serves as an operational metric that distinguishes zero-, intermediate-, and high-agency
phenotypes through structural manipulations of the generative model.
We conclude by arguing that as agents engage in epistemic foraging to
resolve ambiguity, the governance controls that remain effective must
shift systematically from external constraints to the internal modulation of prior preferences, offering a principled, variational bridge from
computational phenotyping to AI governance strategy.
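The empowerment metric invoked above is a channel capacity: the maximum, over action distributions, of the mutual information between actions and anticipated observations. A minimal sketch of how such a quantity could be computed for a discrete generative model follows; the function name, the row-stochastic likelihood matrix p(o|a), and the choice of a Blahut-Arimoto solver are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def empowerment_bits(p_o_given_a, n_iter=500):
    """Empowerment of a discrete action-observation channel, i.e. the
    channel capacity max_{p(a)} I(A; O), estimated by Blahut-Arimoto
    iteration. p_o_given_a: (n_actions, n_obs) row-stochastic matrix
    giving the anticipated observation distribution for each action."""
    p = np.asarray(p_o_given_a, dtype=float)
    n_a = p.shape[0]
    p_a = np.full(n_a, 1.0 / n_a)  # start from a uniform action prior

    def action_kl(p_a):
        # KL(p(o|a) || p(o)) per action, in nats; zero-probability
        # entries contribute nothing (0 * log 0 := 0).
        p_o = p_a @ p
        ratio = np.divide(p, p_o, out=np.ones_like(p), where=p > 0)
        return (p * np.log(ratio)).sum(axis=1)

    for _ in range(n_iter):
        p_a *= np.exp(action_kl(p_a))  # Blahut-Arimoto reweighting
        p_a /= p_a.sum()

    # Mutual information I(A; O) under the optimized p(a), in bits.
    return float((p_a * action_kl(p_a)).sum() / np.log(2))
```

For intuition: a deterministic two-action, two-observation channel (identity likelihood) yields 1 bit of empowerment, while a channel whose rows are identical yields 0 bits, matching the zero-agency phenotype in which actions carry no information about anticipated observations.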