Abstract: The integration of large language models (LLMs) with mobile edge computing (MEC) systems presents a novel approach to enhancing vehicle-to-everything (V2X) connected autonomous driving. This study addresses prevalent challenges in multi-modal sensor data fusion, such as latency, privacy preservation, and the need for dynamic adaptation to evolving environmental conditions, by leveraging real-time data from LiDAR sensors. We propose an LLM-based framework that improves the operational efficiency, safety, and reliability of V2X driving assistance systems, in which images and image-recognition outputs are fused with data from multiple sensors to train vehicle and lane detection models. By exploiting federated learning, i.e., optimising models locally at each MEC server and aggregating only their updates, the framework avoids the data privacy issues that arise when deploying V2X driving assistance. Applying generated test data significantly improves the success rates of lane detection and pedestrian detection, by 95% and 85%, respectively. The experimental results demonstrate that our proposed framework is effective and feasible.
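To make the federated scheme mentioned in the abstract concrete, the sketch below shows one communication round of federated averaging across simulated MEC servers: each server trains a copy of the global model on its private local data, and only the resulting parameters are aggregated. This is a minimal illustration under assumed details, not the paper's implementation; the model (`DetectionHead`), the helper names (`local_update`, `fed_avg`), the number of servers, and all hyperparameters are hypothetical.

```python
import copy
from typing import Dict, List

import torch
import torch.nn as nn

# NOTE: everything below is an illustrative assumption, not the paper's code.

class DetectionHead(nn.Module):
    """Toy stand-in for a vehicle/lane detection model."""
    def __init__(self, in_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def local_update(model: nn.Module, data: torch.Tensor, labels: torch.Tensor,
                 epochs: int = 1, lr: float = 1e-3) -> Dict[str, torch.Tensor]:
    """Train a copy of the global model on one MEC server's private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(local(data), labels).backward()
        opt.step()
    return local.state_dict()  # only parameters leave the server

def fed_avg(states: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Average parameters from all MEC servers (raw sensor data stays local)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        for s in states[1:]:
            avg[key] += s[key]
        avg[key] /= len(states)
    return avg

# One communication round over three simulated MEC servers.
global_model = DetectionHead()
client_states = []
for _ in range(3):
    x = torch.randn(32, 128)        # private local sensor features
    y = torch.randint(0, 2, (32,))  # private local labels
    client_states.append(local_update(global_model, x, y))
global_model.load_state_dict(fed_avg(client_states))
```

The privacy property the abstract relies on is visible in the round structure: `fed_avg` sees only model parameters, never the per-server sensor data used inside `local_update`.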