Coopernaut: End-to-End Driving with Cooperative Perception for Networked Vehicles

CVPR 2022 (modified: 20 Nov 2022) · Readers: Everyone
Abstract: Optical sensors and learning algorithms for autonomous vehicles have dramatically advanced in the past few years. Nonetheless, the reliability of today's autonomous vehicles is hindered by the limited line-of-sight sensing capability and the brittleness of data-driven methods in handling extreme situations. With recent developments of telecommunication technologies, cooperative perception with vehicle-to-vehicle communications has become a promising paradigm to enhance autonomous driving in dangerous or emergency situations. We introduce Coopernaut, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving. Our model encodes LiDAR information into compact point-based representations that can be transmitted as messages between vehicles via realistic wireless channels. To evaluate our model, we develop AutoCastSim, a network-augmented driving simulation framework with example accident-prone scenarios. Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate over egocentric driving models in these challenging driving situations and a 5× smaller bandwidth requirement than prior work V2VNet. Coopernaut and AutoCastSim are available at https://ut-austin-rpl.github.io/Coopernaut/.