Abstract: Sociotechnical systems (STSs) incorporate artificial intelligence (AI) to make (semi-)automated decisions that impact our lives. The reasoning behind these decisions often remains unclear to the people interacting with such systems, who may also be harmed when the systems make flawed decisions. Users typically have no way to challenge automated decisions or to obtain proper restitution when necessary. Organizations may be willing to provide transparency about their decision-making processes, but answering every question people ask could be cumbersome. We propose a mediator agent framework that bridges the gap between organizations that employ AI and people who are harmed by its automated decisions. Our approach helps organizations adopt answerable data practices, and it empowers people to report harms and to request clarifications as well as remedies through dialogues. We implement a prototype to demonstrate the applicability of our approach through a real-world scenario.
External IDs: dblp:conf/ecai/KekulluogluRK24