Adversarial Policy Generation in Automated Parking

Published: 07 Sept 2024, Last Modified: 15 Sept 2024
Venue: ECCV 2024 W-CODA Workshop Abstract Paper Track
License: CC BY 4.0
Keywords: Adversarial Policy, Automated Driving, Deep Reinforcement Learning, Corner-Case Generation
Subject: Corner case mining and generation for autonomous driving
Confirmation: I have read and agree with the submission policies of ECCV 2024 and the W-CODA Workshop on behalf of myself and my co-authors.
Abstract: Automated driving (AD) systems require rigorous testing to ensure safety and robustness, especially in corner-case scenarios, before real-world deployment. Deep reinforcement learning (DRL) is a promising approach for decision-making in AD, enabling dynamic learning through trial and error. Adversarial agents can expose DRL systems to critical corner cases, but reward functions that solely oppose the AD agent's objectives can lead to unrealistic behaviors, such as overly incentivizing crashes. This paper explores an automated parking (AP) scenario in which an adversarial agent disrupts a parking agent exiting an adjacent slot, a common but under-explored corner case. We propose a more balanced adversary reward function that aims for realistic yet disruptive behavior compared to the baseline approach. The results show promising improvements in conformance with the operational design domain (ODD) of AP systems, encouraging further investigation into system performance after several victim-adversary training iterations.
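Note: the abstract does not specify the reward formulation. As a rough illustration only, the sketch below shows one plausible way a "balanced" adversary reward could combine a term opposing the victim's progress with penalties for unrealistic behavior (e.g., deliberately seeking collisions or leaving the drivable area), in contrast to a purely zero-sum baseline. All function names, weights, and signals (victim_progress, collision, off_drivable_area, speed_limit) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a "balanced" adversary reward (not the paper's actual formulation).
# A baseline adversary reward is often just the negative of the victim's reward,
# which can over-incentivize crashes; the balanced variant adds realism penalties.

def baseline_adversary_reward(victim_reward: float) -> float:
    # Purely zero-sum: the adversary gains whatever the victim loses.
    return -victim_reward

def balanced_adversary_reward(
    victim_progress: float,       # victim's progress toward exiting the slot, in [0, 1]
    collision: bool,              # did the adversary cause a collision this step?
    off_drivable_area: bool,      # did the adversary leave the drivable area?
    speed: float,                 # adversary speed in m/s
    speed_limit: float = 3.0,     # assumed low-speed ODD limit for a parking lot
    w_disrupt: float = 1.0,
    w_crash: float = 5.0,
    w_offroad: float = 2.0,
    w_speed: float = 0.5,
) -> float:
    """Reward disrupting the victim while penalizing unrealistic behavior."""
    reward = -w_disrupt * victim_progress               # slow the victim's exit
    reward -= w_crash * float(collision)                # do not reward causing crashes
    reward -= w_offroad * float(off_drivable_area)      # stay within the parking area
    reward -= w_speed * max(0.0, speed - speed_limit)   # respect the low-speed ODD
    return reward
```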
Submission Number: 12