Abstract: Modern AI progress has been organized around tasks, data, and metrics that make capabilities trainable, comparable, and scalable. This organization makes external task performance easy to train and evaluate, but it naturally leaves underdeveloped the question of how AI systems should understand people and use that understanding when they act. Human information already enters AI through feedback, preference data, personalization, evaluation, and deployment analysis, yet these uses often remain setting-specific or limited in depth. We introduce the \emph{human model} as a unifying perspective on such work: existing human-related AI methods can be viewed as partial forms of human-state modeling across cognition, affect, and behavior. From this perspective, we formulate the \emph{generalization hypothesis}, which asks whether, to what extent, and in what forms setting-specific human models can generalize. We argue that testing this hypothesis should become a central research agenda for AI, and we discuss what forms of data infrastructure could turn it into an empirical research problem and support the development of scalable human models. If human models can support a generalizable understanding of people, they may become a crucial component of trustworthy AGI.