Keywords: AI agent, Existential risk, Regulation, Inference-time computation, Reasoning, Planning, Safety
TL;DR: AI agents should be regulated according to the extent to which they operate autonomously, rather than by their training or inference compute.
Abstract: This position paper argues that AI agents should be regulated according to the extent to which they operate autonomously. AI agents with long-term planning and strategic capabilities can pose significant risks of human extinction and irreversible global catastrophe. While existing regulations often use computational scale as a proxy for potential harm, we argue that such measures are insufficient for assessing the risks posed by agents whose capabilities arise primarily from inference-time computation. To support our position, we review relevant regulations and recommendations from scientists regarding existential risk, and we discuss the advantages of action sequences, which reflect the degree of an agent's autonomy, as a more suitable measure of potential impact than existing metrics that rely on observing environmental states.
Submission Number: 309