Keywords: Formalisation, Responsibility, Agents
Abstract: The notion of `responsibility' as a higher-level construct that
dynamically impacts each agent’s goals, priorities and actions is very
appealing, especially as humans regularly use such concepts in
everyday reasoning. Our aim is to utilise `responsibility' to drive
proactive \emph{computational} agent behaviour and, importantly, to
highlight when an agent need \underline{not} do anything as well as
when it should.
In this work, we look at formalising responsibility and, especially, at how
the concept of responsibility gives rise to goals or actions within our
agents. We are also interested in hierarchies of responsibility: for
example, even though it is responsible for some aspect, our agent might
decide to do nothing if it believes some other agent is \emph{more}
responsible. We are further interested in the converse of responsibility
-- an agent \emph{not} being responsible -- and want to use this, too, to
drive agent behaviour. In particular, there may be different varieties of
this ``lack of responsibility'' -- not just \emph{irresponsibility} but
also \emph{recklessness} and even \emph{maliciousness}, which we also aim
to formalise.
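As a purely illustrative sketch (not the paper's formalisation), the hierarchy rule described above -- act on a responsibility only if no other agent is believed to be \emph{more} responsible -- could be read operationally as follows; all names (`Agent`, `should_act`, the numeric degrees) are hypothetical choices made for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent holding degrees of responsibility per aspect."""
    name: str
    responsibilities: dict[str, float] = field(default_factory=dict)

    def degree_for(self, aspect: str) -> float:
        # 0.0 means the agent holds no responsibility for this aspect.
        return self.responsibilities.get(aspect, 0.0)

def should_act(agent: Agent, aspect: str, others: list[Agent]) -> bool:
    """Adopt a goal for `aspect` only if this agent is responsible for it
    and believes no other agent is *more* responsible (the hierarchy check
    sketched in the abstract)."""
    own = agent.degree_for(aspect)
    if own <= 0.0:
        return False  # not responsible: no obligation to act
    return all(other.degree_for(aspect) <= own for other in others)

# Tiny usage example with made-up degrees.
a = Agent("a", {"safety": 0.4})
b = Agent("b", {"safety": 0.9})
print(should_act(a, "safety", [b]))  # False: b is believed more responsible
print(should_act(b, "safety", [a]))  # True
```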
Submission Number: 3