User Allocation in Mobile Edge Computing: A Deep Reinforcement Learning Approach

Published: 01 Jan 2021, Last Modified: 12 May 2023 · ICWS 2021
Abstract: In recent times, the need for low latency has made it necessary to deploy application services physically and logically close to users rather than hosting them in the cloud. This paradigm of computing, known as edge or fog computing, is becoming increasingly popular. An edge user allocation policy determines how to allocate service requests from mobile users to mobile edge computing (MEC) servers. Current state-of-the-art techniques assume that the total resource utilization on an edge server equals the sum of the individual resource utilizations of the services it provisions. However, the relationship between the resources utilized on an edge server and the number of service requests it serves is usually highly non-linear; hence, mathematically modelling the resource utilization is challenging. This is especially true in the case of an environment with CPU-GPU co-execution, as commonly observed in modern edge computing. In this work, we provide an on-device Deep Reinforcement Learning (DRL) framework to predict the resource utilization of incoming service requests from users, thereby estimating the number of users an edge server can accommodate for a given latency threshold. We further propose an algorithm to obtain the user allocation policy. We compare the performance of the proposed DRL framework with traditional allocation approaches and show that it outperforms deterministic approaches by at least 10% in terms of the number of users allocated.
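The allocation idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: `predict_capacity` is a hypothetical stand-in for the learned DRL utilization model (its shrinking, non-linear capacity curve is an assumption made purely for illustration), and the greedy loop simply shows how a capacity predictor could drive user-to-server assignment under a latency threshold, with unassigned users falling back to the cloud.

```python
def predict_capacity(server_load, latency_threshold_ms):
    """Hypothetical stand-in for the paper's DRL predictor: the number of
    users a server can still accommodate shrinks non-linearly as its
    current load grows (mimicking non-additive resource utilization)."""
    base = 10
    return max(0, int(base * latency_threshold_ms / (50 + 10 * server_load)))

def allocate_users(users, servers, latency_threshold_ms=100):
    """Greedily assign each user to the first reachable server whose
    predicted capacity is not yet exhausted; otherwise fall back to
    the cloud. `users` maps user id -> list of in-range servers."""
    load = {s: 0 for s in servers}
    allocation = {}
    for user, reachable in users.items():
        for s in reachable:
            if load[s] < predict_capacity(load[s], latency_threshold_ms):
                allocation[user] = s
                load[s] += 1
                break
        else:
            # No edge server in range has spare predicted capacity.
            allocation[user] = "cloud"
    return allocation
```

Because capacity is queried from the predictor at each step rather than computed as a linear sum of per-request utilizations, the same skeleton accommodates any non-linear (e.g. learned) utilization model.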