Uncertainty-aware Deep Imitation Learning and Deployment for Autonomous Navigation through Crowded Intersections
Abstract: Navigating crowded intersections is a challenge for autonomous vehicles, where uncertainty arises from interactions with other road users, encounters with unseen scenes, changing weather conditions, and similar factors. Recent end-to-end deep control models learned from human drivers have shown promising driving performance, but they are not as transparent or safe as traditional rule-based systems. When facing situations they are unfamiliar with or uncertain about, deep models may produce unsafe and untrustworthy predictions. Without the ability to identify such situations and issue warnings beforehand, cascading errors of deep models can lead to catastrophes. This work therefore combines the strengths of data-driven and traditional rule-based approaches to achieve better driving quality and safety. We propose a heterogeneous uncertainty quantification method based on imitation learning that quantifies both the data and model uncertainties of the lateral and longitudinal control tasks. We also propose a policy deployment strategy in which a safety indicator, built on the estimated uncertainty, bridges the data-driven performance layer and the rule-based fallback layer. We learned from human driving demonstrations and conducted extensive closed-loop tests. The results demonstrate the effectiveness and importance of the proposed uncertainty quantification method and policy deployment strategy.
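The abstract describes quantifying both data (aleatoric) and model (epistemic) uncertainty, then using their sum as a safety indicator that switches between the learned policy and a rule-based fallback. The paper's exact architecture is not given here, so the following is only a minimal sketch under common assumptions: Monte Carlo dropout for model uncertainty, a heteroscedastic variance head for data uncertainty, and a simple threshold for the safety indicator. All function names, the toy linear "policy", and the threshold value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_forward(obs, dropout_mask):
    """Stand-in for one stochastic forward pass of a learned control policy.

    Returns (mean_control, log_variance): the control prediction plus a
    heteroscedastic variance head that captures data (aleatoric) uncertainty.
    The linear map below is a hypothetical toy model, not the paper's network.
    """
    w = np.array([0.8, -0.3, 0.5]) * dropout_mask
    mean = float(w @ obs)
    log_var = float(-2.0 + 0.1 * np.abs(obs).sum())
    return mean, log_var

def predict_with_uncertainty(obs, n_samples=20, p_drop=0.1):
    """Monte Carlo dropout: model (epistemic) uncertainty is the variance of
    the sampled means; data (aleatoric) uncertainty is the mean predicted
    variance from the variance head."""
    means, variances = [], []
    for _ in range(n_samples):
        # Inverted-dropout mask kept active at inference time (MC dropout).
        mask = rng.binomial(1, 1.0 - p_drop, size=3) / (1.0 - p_drop)
        m, lv = policy_forward(obs, mask)
        means.append(m)
        variances.append(np.exp(lv))
    means = np.asarray(means)
    model_unc = float(means.var())        # epistemic
    data_unc = float(np.mean(variances))  # aleatoric
    return float(means.mean()), model_unc, data_unc

def rule_based_fallback(obs):
    """Placeholder for the rule-based fallback layer: a conservative,
    hand-crafted control (here simply zero steering/throttle)."""
    return 0.0

def deploy(obs, threshold=0.05):
    """Safety indicator: total estimated uncertainty gates between the
    data-driven performance layer and the rule-based fallback layer."""
    control, model_unc, data_unc = predict_with_uncertainty(obs)
    if model_unc + data_unc > threshold:
        return rule_based_fallback(obs), "fallback"
    return control, "learned"
```

In this sketch the indicator is a fixed threshold on the summed uncertainties; in practice one would calibrate it (e.g., on held-out demonstrations) so that the fallback triggers on genuinely unfamiliar scenes rather than on routine noise.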
External IDs: dblp:conf/iros/ZhuWZ24