Forward-Looking and Backward-Looking Responsibility Attribution in Multi-Agent Sequential Decision Making
Abstract: As AI systems gain ever greater agency in modern society, responsibility attribution in AI is no longer just a philosophically interesting problem but a practical one as well: with an increasing number of everyday tasks now handled by AI agents, addressing the conceptual and technical challenges of attributing responsibility for the failure of a multi-agent AI system has become urgent. These challenges are particularly prominent when the temporal dimension of decision making is taken into account. In general, the concept of responsibility attribution may have different meanings depending on the context; in my research, I consider the distinction between forward-looking and backward-looking responsibility. Forward-looking responsibility looks to the future and holds agents accountable for what is expected to happen, whereas backward-looking responsibility looks to the past and holds agents accountable for a specific realization of the system and an outcome of interest. This paper summarizes my contributions on forward- and backward-looking responsibility attribution in multi-agent sequential decision making and describes my future research plans.