Deep Reinforcement Learning-Empowered Task Offloading for Efficient DNN Partition in Vehicular Edge Computing

Published: 2025 · Last Modified: 06 Jan 2026 · ICWS 2025 · CC BY-SA 4.0
Abstract: Deep neural networks (DNNs) have driven breakthroughs in autonomous driving through end-to-end methods, using their powerful learning capabilities to generate vehicle controls directly from sensor data. However, maximizing the satisfaction of DNN inference requirements under the limited computing and energy resources on the vehicle side has emerged as a critical challenge in Vehicular Edge Computing (VEC). To address this challenge, we propose RTD, a reinforcement learning-empowered task diversion scheduling algorithm. RTD intelligently offloads computationally intensive portions of the DNN to Roadside Units (RSUs), taking into account factors such as the battery coefficient and the type of DNN. First, we use the FLOPs method to model the data-flow structure and computational load distribution of the DNNs. We then formulate task offloading as an optimization problem that jointly considers latency, energy consumption, and the vehicle's remaining battery power. Finally, after simplifying the optimization problem with the diversion algorithm, we employ the SAC method to determine the optimal offloading strategy. Extensive experiments demonstrate that RTD significantly reduces overall task completion time, effectively handles time-sensitive tasks, properly protects low-battery vehicles, and adapts well to dynamic network environments.
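The abstract only outlines the pipeline; as a rough illustration of layer-wise DNN partitioning under a joint latency-energy objective, the sketch below enumerates candidate split points and scores each with a battery-weighted cost. All per-layer FLOP counts, link rates, and energy coefficients are hypothetical assumptions, not values from the paper, and the brute-force enumeration stands in for the paper's RTD/SAC policy, which learns the offloading decision rather than searching exhaustively.

```python
# Minimal sketch of layer-wise DNN partition scoring (illustrative values only;
# none of these constants come from the paper).
LAYER_FLOPS = [1.2e9, 0.8e9, 0.6e9, 0.3e9, 0.1e9]       # FLOPs per DNN layer
LAYER_OUT_BYTES = [6.4e6, 3.2e6, 1.6e6, 0.4e6, 0.1e6]   # activation size after each layer
INPUT_BYTES = 12.0e6                                     # raw sensor frame size

# Assumed platform parameters (hypothetical).
VEHICLE_FLOPS_PER_S = 0.5e12     # on-board compute rate
RSU_FLOPS_PER_S = 4.0e12         # roadside-unit compute rate
UPLINK_BYTES_PER_S = 12.5e6      # V2I uplink throughput
VEHICLE_JOULES_PER_FLOP = 2e-11  # on-board energy per FLOP
TX_JOULES_PER_BYTE = 5e-8        # transmission energy per byte


def partition_cost(split, battery_coeff, alpha=0.5):
    """Cost of running layers [0, split) on the vehicle and [split, n) on the RSU.

    battery_coeff in (0, 1]: larger values weight energy more heavily,
    e.g. for a low-battery vehicle.  alpha trades latency against energy.
    """
    n = len(LAYER_FLOPS)
    local_flops = sum(LAYER_FLOPS[:split])
    remote_flops = sum(LAYER_FLOPS[split:])
    if split == n:            # fully local: nothing is transmitted
        tx_bytes = 0.0
    elif split == 0:          # fully offloaded: send the raw sensor frame
        tx_bytes = INPUT_BYTES
    else:                     # partial offload: send the split-point activation
        tx_bytes = LAYER_OUT_BYTES[split - 1]

    latency = (local_flops / VEHICLE_FLOPS_PER_S
               + tx_bytes / UPLINK_BYTES_PER_S
               + remote_flops / RSU_FLOPS_PER_S)
    energy = (local_flops * VEHICLE_JOULES_PER_FLOP
              + tx_bytes * TX_JOULES_PER_BYTE)
    return alpha * latency + (1 - alpha) * battery_coeff * energy, latency, energy


if __name__ == "__main__":
    n = len(LAYER_FLOPS)
    for battery_coeff in (0.1, 0.9):
        best = min(range(n + 1), key=lambda s: partition_cost(s, battery_coeff)[0])
        _, lat, eng = partition_cost(best, battery_coeff)
        print(f"battery_coeff={battery_coeff}: split at layer {best}, "
              f"latency={lat * 1e3:.1f} ms, vehicle energy={eng:.3f} J")
```

Running the sketch shows the intended behavior: a higher battery coefficient pushes the chosen split point toward earlier layers, offloading more computation to the RSU to spare the vehicle's battery.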