Abstract: The mimesis of human traits exhibited by large language models (LLMs) has led some users to perceive these technical systems as agentic, capable of achieving reciprocal and seemingly human-like communication. These misperceptions have, in turn, been linked to documented harms in human-AI interactions (HAIs). This conceptual paper explores current interventions in response to interaction harms, taking AI companions as an illustrative example. We analyze documented cases of AI companion applications that have led to severe harms, including suicide, illustrating that current redressive approaches fail to account for the network of distributed human agents that collectively "animate" anthropomorphic features and encourage some users to regard AI systems as social "agents." By framing anthropomorphism as a social affordance that is reproduced by a broader distributed process spanning development, design, user interaction, socio-cultural contexts, and institutional forces, this paper demonstrates the need for distributed governance of anthropomorphic AI features across these diverse agentic forces. We proceed to discuss obstacles to appropriate governance, including power asymmetries between different agents, and outline existing models that could be adapted for more effective interventions.