Abstract: The growth and ubiquity of machine learning (ML) models in distributed applications means that ML inference can no longer be conducted in a centralized fashion. This has led to ML models being evaluated in a decentralized manner, so that personalization and recommendation decisions can be made closer to where the content is served. For instance, a model that predicts a user's next location may take the user's context as input and provide content for the predicted locations; this inference can be performed on the user's mobile phone.