Abstract: The rapid advancement of the Internet of Vehicles (IoV) has created significant challenges in large-scale data processing and low-latency decision-making. Multi-access edge computing substantially improves data processing capability and system response time by pushing computational resources to the network edge. Task offloading transfers vehicle computing tasks to edge servers, alleviating the computational burden on user devices. However, traditional offloading methods struggle to keep pace with the rapid growth of the IoV. With the advancement of deep learning, large models have emerged as powerful tools for handling complex tasks. In this paper, we propose a task offloading algorithm that integrates a Transformer-based large model with deep reinforcement learning, using the large model to replace the action generation network. The proposed method exploits extensive historical data and real-time task information to dynamically adjust the task allocation scheme, deriving offloading strategies that minimize energy consumption and latency. In addition, we propose a task classification framework that further reduces offloading delay and energy consumption by categorizing tasks according to key attributes such as computational complexity, data size, and time sensitivity. A reward function based on task adaptability guides the search for the offloading strategy that minimizes both offloading delay and energy consumption. Experimental results demonstrate that the proposed algorithm accurately and quickly derives offloading strategies that minimize energy consumption and latency.
DOI: 10.1109/TVT.2025.3579490
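The abstract names two concrete mechanisms: a Transformer that stands in for the DRL action generation network, and an adaptability-based reward that weighs delay against energy per task class. Since no code is available from this abstract page, the PyTorch sketch below is only a hypothetical illustration of those two ideas under assumed interfaces; every name (`TransformerActor`, `adaptability_reward`, `feat_dim`, `w_delay`, `w_energy`) and all hyperparameters are this editor's assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TransformerActor(nn.Module):
    """Transformer encoder playing the role of the DRL action-generation
    network: it maps a sequence of past and current task/edge-state features
    to a categorical distribution over offloading targets (assumed design)."""

    def __init__(self, feat_dim=8, d_model=64, n_heads=4, n_layers=2, n_actions=5):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.policy_head = nn.Linear(d_model, n_actions)

    def forward(self, state_seq):
        # state_seq: (batch, seq_len, feat_dim); the sequence holds historical
        # observations followed by the current task's features.
        h = self.encoder(self.embed(state_seq))
        # Read the offloading policy from the final position (the current task).
        return torch.softmax(self.policy_head(h[:, -1]), dim=-1)


def adaptability_reward(delay, energy, w_delay=0.5, w_energy=0.5):
    """Hypothetical adaptability-weighted reward: per-class weights (e.g. a
    larger w_delay for time-sensitive tasks) trade delay against energy."""
    return -(w_delay * delay + w_energy * energy)


# Example: pick an offloading target for one task given 10 steps of history.
actor = TransformerActor()
probs = actor(torch.randn(1, 10, 8))         # shape (1, n_actions)
action = torch.multinomial(probs, 1).item()  # sampled offloading target
print(action, adaptability_reward(delay=0.2, energy=0.8, w_delay=0.9, w_energy=0.1))
```

Sampling from the softmax output mirrors a stochastic DRL policy; a time-sensitive task class would, under this assumed reward, receive a larger `w_delay` so that low-latency placements score higher.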