Track: Type D (Master/Bachelor Thesis Abstracts)
Keywords: Reinforcement Learning, Multi-Agent, Explainability, Cooperation
Abstract: Artificial intelligence has a profound impact on numerous fields, including medicine, biology, and finance, and in recent years this impact has grown substantially. This growth can in no small part be attributed to important advances in computational power, which allow increasingly advanced and complex models to be developed. The result is rapidly improving performance of intelligent models, notably in Multi-Agent Reinforcement Learning (MARL), a branch of AI in which multiple intelligent agents learn to behave optimally in a shared environment, potentially by collaborating or competing. However, this steep increase in performance comes at a cost: powerful models often behave like black boxes, making it hard, if not near impossible, to understand why these AI systems make the decisions they do. This can lead to a lack of trust in, and concerns about the safety of, AI systems that are being deployed at a rapid pace in the real world.
In this work we seek to address these concerns in the context of MARL. Specifically, we implement techniques that give insight into whether agent cooperation is explicitly decided or merely a byproduct of agents pursuing a selfish agenda. Our contributions in this regard are the implementation of Reward Decomposition and Soft Decision Tree distillation in MARL.
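To make the reward-decomposition idea concrete, the sketch below shows one common tabular formulation: the agent maintains a separate Q-table per reward component and acts greedily on their sum, so the per-component Q-values of the chosen action can be inspected to see how much "cooperation" contributed to the decision. This is a minimal illustration under assumed tabular Q-learning; the component names, environment sizes, and helper functions are hypothetical and not taken from the thesis itself.

```python
import numpy as np

# Hypothetical discrete environment sizes and reward components.
N_STATES, N_ACTIONS = 16, 4
COMPONENTS = ["task", "cooperation"]

# One Q-table per reward component; the agent acts on their sum.
q_tables = {c: np.zeros((N_STATES, N_ACTIONS)) for c in COMPONENTS}

def q_total(state):
    """Aggregate Q-values: the quantity the agent acts greedily on."""
    return sum(q_tables[c][state] for c in COMPONENTS)

def decomposed_update(state, action, rewards, next_state,
                      alpha=0.1, gamma=0.99):
    """Q-learning update applied per component, using the greedy next
    action w.r.t. the *total* Q so the components stay consistent."""
    next_action = int(np.argmax(q_total(next_state)))
    for c in COMPONENTS:
        td_target = rewards[c] + gamma * q_tables[c][next_state, next_action]
        q_tables[c][state, action] += alpha * (td_target - q_tables[c][state, action])

def explain_action(state):
    """Per-component Q-value of the greedy action: a large 'cooperation'
    share suggests cooperation is explicitly valued rather than a
    byproduct of selfish behavior."""
    a = int(np.argmax(q_total(state)))
    return {c: float(q_tables[c][state, a]) for c in COMPONENTS}
```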
Serve As Reviewer: ~Tom_Lenaerts2
Submission Number: 27