Learning Road Scene-level Representations via Semantic Region Prediction

16 Jun 2022, 10:45 (modified: 16 Nov 2022, 03:24) · CoRL 2022 Poster
Student First Author: yes
Keywords: Semantic Region Prediction, Egocentric Vision, Driver Intent, Risk Object Identification
TL;DR: We propose a novel task called Semantic Region Prediction to learn road scene-level representations for two vital tasks in automated driving systems.
Abstract: In this work, we tackle two vital tasks in automated driving systems, i.e., driver intent prediction and risk object identification from egocentric images. Specifically, we investigate the question: what would be good road scene-level representations for these two tasks? We contend that a scene-level representation must capture higher-level semantic and geometric properties of the traffic scene around the ego-vehicle as it acts toward its destination. To this end, we introduce the representation of semantic regions, which are areas that the ego-vehicle traverses while taking an afforded action (e.g., a left turn at a 4-way intersection). We propose to learn scene-level representations via a novel semantic region prediction task and an automatic semantic region labeling algorithm. Extensive evaluations are conducted on the HDD and nuScenes datasets, and the learned representations lead to state-of-the-art performance for driver intention prediction and risk object identification.
Supplementary Material: zip