LVC: Augmenting Autonomous Driving via Language-based V2V Cooperation

ACL ARR 2026 January Submission 1378 Authors

29 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · License: CC BY 4.0
Keywords: autonomous driving, V2V cooperation, large language models, CARLA simulator, multi-agent coordination, safety-critical scenarios
Abstract: Although Vehicle-to-Vehicle (V2V) communication is a promising avenue for cooperative driving, existing approaches are largely confined to basic scenarios such as intersections and straight roads due to the lack of diverse benchmarks, and thus fail to fully exploit its potential. In this work, we propose Comm2Interact, a novel and comprehensive V2V benchmark built on the CARLA simulator. The benchmark is specially curated to challenge agents with complex traffic scenarios, such as view occlusions, roundabouts, and tactical overtaking, and requires diverse cooperative capabilities, ranging from perception sharing and right-of-way negotiation to maneuver coordination. To effectively navigate these intricate scenarios, we propose LVC, an LLM-based cooperative driving framework that transforms high-level intentions into precise control commands. LVC leverages a set of interaction primitives to decompose complex scenarios into atomic, manageable sub-tasks, and employs a Memory Module that handles long-tail edge cases via reflection, ensuring safe operation. Extensive experiments on both the proposed challenging benchmark and existing V2V benchmarks demonstrate that LVC performs favorably against state-of-the-art methods in terms of both safety and success rates, showing its effectiveness in handling diverse traffic interactions.
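The abstract's decomposition idea can be illustrated with a minimal sketch. All names here (the primitive set, the scenario table, the `run` dispatcher) are hypothetical illustrations of mapping a complex scenario to atomic cooperative sub-tasks; they are not taken from the paper's actual implementation.

```python
# Hypothetical interaction primitives; names are illustrative, not from the paper.
PRIMITIVES = {
    "share_perception": lambda scene: f"broadcast occluded objects: {scene['occluded']}",
    "negotiate_right_of_way": lambda scene: f"yield to vehicle {scene['conflict_vehicle']}",
    "coordinate_maneuver": lambda scene: f"overtake after vehicle {scene['conflict_vehicle']} passes",
}

def decompose(scenario: str) -> list[str]:
    # Toy lookup standing in for the LLM-driven decomposition of a
    # complex scenario into atomic, manageable sub-tasks.
    table = {
        "occluded_roundabout": ["share_perception", "negotiate_right_of_way"],
        "tactical_overtake": ["share_perception", "coordinate_maneuver"],
    }
    return table.get(scenario, [])

def run(scenario: str, scene: dict) -> list[str]:
    # Execute each primitive in order, producing high-level actions
    # that a downstream controller would turn into control commands.
    return [PRIMITIVES[p](scene) for p in decompose(scenario)]

scene = {"occluded": ["pedestrian@crosswalk"], "conflict_vehicle": "V2"}
print(run("occluded_roundabout", scene))
```

In a real system the lookup table would be replaced by the LLM's reasoning over V2V messages, but the sketch shows the shape of the pipeline: scenario in, ordered atomic sub-tasks out.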
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: benchmarking, multimodal applications
Contribution Types: Data resources, Position papers
Languages Studied: English
Submission Number: 1378