Abstract: Forecasting the future paths of vehicles surrounding a Connected Autonomous Vehicle (CAV) can enhance the connectivity and efficiency of vehicular networks, and accurate motion forecasting of nearby vulnerable road users can advance road safety and urban mobility. This task requires high-level situational awareness on the part of the CAV. Early methods rely solely on vehicle kinematics and overlook both the uncertainty in agents' behavior and the influence of the surrounding context on nearby agents, resulting in lower performance or infeasible predictions. In this work, we introduce a novel context-aware forecasting approach for CAVs that leverages inverse reinforcement learning (IRL) to condition the future motions of nearby agents on scene-based state sequences defined via a Markov Decision Process. More precisely, we map images of the surrounding context and the behavior history of agents into rewards and learn optimal expert behaviors using IRL. We validate the path forecasting performance of our model on two large motion prediction benchmarks with diverse scenes and achieve state-of-the-art results on the average displacement error (ADE) and final displacement error (FDE) metrics.
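The core IRL idea the abstract describes, inferring a reward function from observed expert behavior and using it to explain future motion, can be illustrated with a minimal maximum-entropy IRL sketch. This is not the paper's model: the 1-D chain MDP, the per-state reward parameterization, and all names below are illustrative assumptions, standing in for the scene-based states the abstract mentions.

```python
import math

# Illustrative sketch only (not the paper's architecture): maximum-entropy
# IRL on a toy 1-D chain MDP. An "expert" always drives right toward the
# last state; IRL recovers a per-state reward that explains this behavior
# by matching expert and policy state-visitation frequencies.

N_STATES, HORIZON = 5, 8
ACTIONS = (-1, +1)  # move left / move right

def step(s, a):
    """Deterministic chain dynamics with clamped endpoints."""
    return min(max(s + a, 0), N_STATES - 1)

def expert_visitation():
    """State-visit counts of one expert rollout (always moves right)."""
    counts, s = [0.0] * N_STATES, 0
    for _ in range(HORIZON):
        counts[s] += 1.0
        s = step(s, +1)
    return counts

def soft_policy(theta):
    """Finite-horizon soft value iteration; pi(a|s) ~ exp(r(s) + V(s'))."""
    V = [0.0] * N_STATES
    for _ in range(HORIZON):
        V = [theta[s] + math.log(sum(math.exp(V[step(s, a)]) for a in ACTIONS))
             for s in range(N_STATES)]
    pi = []
    for s in range(N_STATES):
        logits = [theta[s] + V[step(s, a)] for a in ACTIONS]
        m = max(logits)
        w = [math.exp(x - m) for x in logits]
        z = sum(w)
        pi.append([x / z for x in w])
    return pi

def policy_visitation(pi):
    """Expected state-visit counts of the soft policy, starting at state 0."""
    D = [1.0] + [0.0] * (N_STATES - 1)
    mu = [0.0] * N_STATES
    for _ in range(HORIZON):
        for s in range(N_STATES):
            mu[s] += D[s]
        nxt = [0.0] * N_STATES
        for s in range(N_STATES):
            for i, a in enumerate(ACTIONS):
                nxt[step(s, a)] += D[s] * pi[s][i]
        D = nxt
    return mu

# MaxEnt IRL gradient ascent: grad of r(s) = expert visits - policy visits.
theta = [0.0] * N_STATES
mu_E = expert_visitation()
for _ in range(200):
    mu = policy_visitation(soft_policy(theta))
    theta = [t + 0.05 * (e - m) for t, e, m in zip(theta, mu_E, mu)]

print([round(t, 2) for t in theta])  # learned reward peaks at the goal state
```

In the paper's setting, the hand-coded one-hot state reward above would be replaced by a reward learned from context images and agent behavior histories, but the underlying principle, explaining expert trajectories as (soft-)optimal under a learned reward, is the same.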