A Performance Modelling Approach for SLA-Aware Resource Recommendation in Cloud Native Network Functions

Published: 01 Jan 2020, Last Modified: 14 Nov 2024. NetSoft 2020. License: CC BY-SA 4.0
Abstract: Network Function Virtualization (NFV) has become the primary driver for the evolution of 5G networks, and in recent years Network Function Cloudification (NFC) has proved to be an inevitable part of this evolution. Microservice architecture has also become the de facto choice for designing a modern Cloud Native Network Function (CNF) due to its ability to decouple the components of each CNF into multiple independently manageable microservices. Although adopting the microservice architecture in CNF design solves specific problems, this additional granularity makes estimating resource requirements for a Production Environment (PE) a complex task and can lead to an over-provisioned PE. Traditionally, performance engineers dimension each CNF within a Service Function Chain (SFC) in a smaller Performance Testing Environment (PTE) through a series of performance benchmarks. Then, considering the Quality of Service (QoS) constraints of a Service Provider (SP) that are guaranteed in the Service Level Agreement (SLA), they estimate the resources required to set up the PE. In this paper, we used a machine learning approach to model the impact of each microservice's resource configuration (i.e., CPU and memory) on the QoS metrics (i.e., serving throughput and latency) of each SFC in a PTE. Then, considering an SP's Service Level Objectives (SLOs), we proposed an algorithm to predict each microservice's resource capacity in a PE. We evaluated the accuracy of our predictions on a prototype of a cloud-native 5G Home Subscriber Server (HSS). Our model showed 95% to 78% accuracy in a PE that has 2 to 5 times more computing resources than the PTE.
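
The overall idea can be illustrated with a minimal sketch: fit a regression model on PTE benchmark data that maps per-microservice resource configurations to the observed QoS metrics, then search candidate PE configurations for the cheapest one whose predicted QoS satisfies the SLOs. The model choice (a random forest), the feature layout, the microservice names, the SLO thresholds, and all numbers below are illustrative assumptions, not the paper's actual model or measurements.

# Hypothetical sketch: learn QoS = f(per-microservice CPU, memory) from PTE
# benchmarks, then recommend the smallest PE configuration that meets the SLOs.
# Column layout, sample values, and SLO thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [cpu_millicores_svc1, mem_mib_svc1, cpu_millicores_svc2, mem_mib_svc2]
X_pte = np.array([
    [500, 512, 500, 512],
    [1000, 1024, 500, 512],
    [1000, 1024, 1000, 1024],
    [2000, 2048, 1000, 1024],
])
# Corresponding measured QoS: [throughput_rps, p95_latency_ms]
y_pte = np.array([
    [120, 80],
    [210, 55],
    [340, 40],
    [520, 30],
])

# Multi-output regression: predicts throughput and latency jointly.
qos_model = RandomForestRegressor(n_estimators=200, random_state=0)
qos_model.fit(X_pte, y_pte)

def meets_slo(config, min_throughput=400.0, max_latency=45.0):
    """Return True if the predicted QoS of a candidate configuration satisfies the SLOs."""
    throughput, latency = qos_model.predict([config])[0]
    return throughput >= min_throughput and latency <= max_latency

# Naive exhaustive search over a coarse grid of candidate PE configurations,
# picking the feasible one with the lowest total resource count.
candidates = [
    [c1, m1, c2, m2]
    for c1 in (1000, 2000, 4000) for m1 in (1024, 2048, 4096)
    for c2 in (1000, 2000, 4000) for m2 in (1024, 2048, 4096)
]
feasible = [c for c in candidates if meets_slo(c)]
best = min(feasible, key=sum) if feasible else None
print("Recommended PE configuration:", best)

In practice the search space and the cost function would follow the operator's actual pricing or capacity model; the grid and the sum-of-resources objective above are only placeholders for that step.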