Abstract: Serverless edge computing has emerged as a new paradigm for running short-lived computations on edge devices. Considering the challenges posed by multiple edge servers and non-negligible cold start latency in serverless edge computing, we investigate the problem of function caching on multiple edge servers with relaying and bypassing. Our objective is to minimize the total latency of serving all function requests, each of which may be served by an idle container on the local server, trigger a new container on the local server, be relayed to another edge server, or be bypassed to the cloud server. We propose FunCa, a greedy-based algorithm, and FunCa+, an extended version that supports bypassing. Large-scale simulation experiments using the Azure trace and the Alibaba trace demonstrate that, compared to Camul, the state-of-the-art algorithm for handling requests on multiple edge servers, FunCa reduces latency by 52.2% and 73.27% on the two traces, respectively.
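The abstract enumerates four ways a request can be served, each with a different latency cost. As a rough illustration of that per-request choice (not the paper's actual FunCa algorithm, whose caching decisions span multiple servers over time), the following sketch picks the lowest-latency option for a single request; all names and latency parameters are illustrative assumptions.

```python
def serve_request(warm_local, cold_start_latency, relay_latencies,
                  cloud_latency, exec_latency):
    """Pick the lowest-latency option for one function request.

    warm_local: True if an idle (warm) container exists on the local server
    cold_start_latency: latency to initialize a new container locally
    relay_latencies: latencies to relay the request to other edge servers
    cloud_latency: latency to bypass the request to the cloud
    exec_latency: execution time of the function itself

    Hypothetical model: each option's cost is its overhead plus execution time.
    """
    options = {}
    if warm_local:
        options["local_warm"] = exec_latency              # no cold start needed
    options["local_cold"] = cold_start_latency + exec_latency
    for i, r in enumerate(relay_latencies):
        options[f"relay_{i}"] = r + exec_latency          # assumes remote warm container
    options["cloud"] = cloud_latency + exec_latency       # bypass to the cloud
    choice = min(options, key=options.get)
    return choice, options[choice]
```

For example, with no warm local container, a 5 ms cold start, a 2 ms relay path, and a 10 ms cloud round trip, relaying wins; with a warm local container it is served locally.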