Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf

Unai Elordi, Luis Unzueta, Jon Goenetxea, Sergio Sanchez-Carballido, Ignacio Arganda-Carreras, Oihana Otaegui

Published: 01 Jan 2021 · Last Modified: 10 Nov 2025 · IEEE Software · CC BY-SA 4.0
Abstract: We provide a novel methodology for decomposing the current MLPerf benchmark into the serverless function execution model. We have tested our approach on Amazon Lambda to benchmark the inference processing capabilities of the OpenCV and OpenVINO inference engines.
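The paper does not include its harness here, but the core idea of MLPerf-style latency measurement inside a serverless function can be sketched as follows. This is a minimal, hypothetical illustration: the handler name, the dummy workload, and the warmup/iteration counts are all assumptions, and in the authors' setting the `infer` callable would wrap an OpenCV or OpenVINO inference engine rather than the stand-in used below.

```python
import time
import statistics

def benchmark_single_stream(infer, sample, iterations=100, warmup=10):
    """Time repeated single-sample inference calls, in the spirit of the
    MLPerf single-stream scenario (which reports 90th-percentile latency).
    `infer` is any callable; here it is a stand-in for a real engine."""
    for _ in range(warmup):          # warm the function instance first
        infer(sample)
    latencies = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        infer(sample)
        latencies.append(time.perf_counter() - t0)
    latencies.sort()
    return {
        "mean_ms": 1000 * statistics.mean(latencies),
        "p90_ms": 1000 * latencies[int(0.9 * len(latencies)) - 1],
    }

def handler(event, context=None):
    """Lambda-style entry point: benchmark a dummy workload standing in
    for a DNN inference call (hypothetical, for illustration only)."""
    infer = lambda x: sum(v * v for v in x)
    return benchmark_single_stream(infer, [0.1] * 256)
```

In a serverless deployment, cold starts would inflate the first invocation's latency, which is why a warmup phase before timed iterations matters.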