Keywords: Large Language Model, Multi-Agent Reinforcement Learning, Dynamic Trust Mechanism, Cooperative On-Ramp Merging Control
TL;DR: This study presents Trust-MA, a framework that enhances autonomous driving behavior by integrating reinforcement learning models with large language models.
Abstract: Intelligent transportation systems require connected and automated vehicles (CAVs) to cooperate safely and efficiently with human-driven vehicles (HDVs) in complex real-world traffic environments. However, the inherent unpredictability of human behavior, especially at bottlenecks such as highway on-ramp merging areas, often disrupts traffic flow and degrades system performance, since safe cooperation demands a deep understanding of the environment, the agents' intentions, and human driving styles across diverse scenarios. To address the challenge of cooperative on-ramp merging in heterogeneous traffic, this paper introduces a trust-based multi-agent (Trust-MA) framework that integrates trust-based reinforcement learning (RL) for individual interactions, a fine-tuned large language model (LLM) for regional cooperation, a reward function for global optimization, and a retrieval-augmented generation (RAG) mechanism to dynamically optimize decision-making across complex driving scenarios. Comparative experiments validate the effectiveness of the proposed Trust-MA approach, demonstrating significant improvements in safety, efficiency, comfort, and adaptability in multi-agent environments. These findings highlight the value of cascading coordinated communication and dynamic functional alignment for advanced, human-like multi-agent autonomous driving.
Submission Number: 38