Joint Optimization for Volatile Federated Learning in Vehicular Edge Computing: A Deep Reinforcement Learning Approach
Abstract: Federated learning (FL) is highly valued for its ability to reduce communication overhead and protect user privacy. However, implementing FL in vehicular edge computing (VEC) presents several challenges, such as the dropout effect, the straggler effect, and inefficient communication. Moreover, the dynamic and volatile nature of vehicles in VEC-enabled mobile volatile federated edge learning (VFEL) systems exacerbates these challenges. In this paper, we focus on the volatility of mobile VFEL systems, modeling the dropout problem with vehicles' local computation and communication volatility rates. We formulate an objective that jointly optimizes system reliability and learning cost, and convert it into a Markov decision process to account for environmental dynamics. To obtain the optimal vehicle-selection and resource-allocation scheme, we propose a reliability-aware twin delayed deep deterministic policy gradient (RA-TD3) scheme that combines the twin delayed deep deterministic policy gradient (TD3) algorithm with convex optimization. Experimental results demonstrate that the proposed RA-TD3 scheme improves the success rate and reduces the learning cost while maintaining higher learning accuracy.
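The TD3 algorithm underlying the proposed RA-TD3 scheme rests on three mechanisms: twin critics with a clipped double-Q target, target policy smoothing, and delayed actor/target updates. The sketch below illustrates those mechanisms on a toy 1-D control task with linear-in-features critics; the environment, feature map, and all hyperparameters are illustrative assumptions and do not reproduce the paper's VFEL formulation or its convex-optimization component.

```python
import numpy as np

rng = np.random.default_rng(0)

def feat(s, a):
    # Quadratic features so a linear critic can represent a concave Q(s, a)
    return np.array([s * s, s * a, a * a, 1.0])

# Linear-in-features twin critics and a linear actor a = w_a * s
w_q1 = rng.normal(size=4) * 0.01
w_q2 = rng.normal(size=4) * 0.01
w_a = 0.0
t_q1, t_q2, t_a = w_q1.copy(), w_q2.copy(), w_a  # target networks

gamma, tau = 0.9, 0.05          # discount, Polyak averaging rate
lr_c, lr_a = 1e-2, 1e-3         # critic / actor learning rates
policy_delay, sigma, noise_clip = 2, 0.2, 0.5

for step in range(5000):
    s = rng.uniform(-1.0, 1.0)
    a = float(np.clip(w_a * s + rng.normal(0.0, 0.1), -2.0, 2.0))  # exploration
    r = -(s + a) ** 2                       # reward: steer s + a toward 0
    s2 = float(np.clip(s + a, -1.0, 1.0))

    # Target policy smoothing: clipped noise on the target action
    eps = float(np.clip(rng.normal(0.0, sigma), -noise_clip, noise_clip))
    a2 = t_a * s2 + eps

    # Clipped double-Q learning: bootstrap from the smaller twin target
    y = r + gamma * min(t_q1 @ feat(s2, a2), t_q2 @ feat(s2, a2))

    # Independent TD updates for both critics
    for w in (w_q1, w_q2):
        f = feat(s, a)
        w += lr_c * (y - w @ f) * f

    # Delayed policy update: ascend Q1 along dQ1/dw_a = (dQ1/da) * s
    if step % policy_delay == 0:
        a_det = w_a * s
        dq_da = w_q1[1] * s + 2.0 * w_q1[2] * a_det
        w_a = float(np.clip(w_a + lr_a * dq_da * s, -3.0, 3.0))
        # Polyak-averaged target updates, also delayed
        t_a = (1 - tau) * t_a + tau * w_a
        t_q1 = (1 - tau) * t_q1 + tau * w_q1
        t_q2 = (1 - tau) * t_q2 + tau * w_q2

# The optimal policy for this toy reward is a = -s, so w_a should end
# well below zero.
print(round(w_a, 2))
```

Taking the minimum of the twin target critics counteracts the overestimation bias of a single bootstrapped critic, while the delayed, Polyak-averaged updates keep the actor from chasing a rapidly changing value estimate; both properties matter in a volatile environment like the one the paper targets.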
External IDs: dblp:journals/tvt/LiFMAS25