Abstract: Federated learning (FL) has emerged as a promising distributed learning paradigm with the added advantage of data privacy. With growing interest in collaboration among data owners, FL has gained significant attention from organizations. The idea of FL is to enable collaborating participants to train machine learning (ML) models on decentralized data without breaching privacy. In simpler words, federated learning is the approach of “bringing the model to the data, instead of bringing the data to the model”. When applied to data that is partitioned vertically across participants, FL can build a complete ML model by combining local models trained only on the distinct features held at each local site. This architecture of FL is referred to as vertical federated learning (VFL), which differs from conventional FL on horizontally partitioned data. Because VFL differs from conventional FL, it comes with its own issues and challenges. Motivated by this comparatively less explored side of FL, this paper provides a comprehensive overview of existing methods and developments in VFL, covering various aspects such as communication, learning, privacy, and applications. We conclude by identifying gaps in the current literature and proposing potential future directions for research in VFL.
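The vertical-partitioning idea described above can be illustrated with a minimal sketch. This is not the paper's method; all names, weights, and the two-party setup are hypothetical, chosen only to show how each party computes a partial result on its own feature columns while raw features never leave their owner.

```python
# Minimal illustrative sketch of vertical partitioning (hypothetical setup,
# not taken from the paper): the *feature* space is split across parties,
# whereas horizontal FL would split the *samples*.

samples = [
    # each row: [f1, f2, f3, f4] for one sample ID shared by both parties
    [1.0, 2.0, 3.0, 4.0],
    [5.0, 6.0, 7.0, 8.0],
]

# Vertical split: party A holds features f1, f2; party B holds f3, f4.
party_a = [row[:2] for row in samples]
party_b = [row[2:] for row in samples]

# Hypothetical local weights for a linear model at each party.
w_a, w_b = [0.1, 0.2], [0.3, 0.4]

def partial_score(features, weights):
    """Each party's local contribution to the linear model output."""
    return sum(f * w for f, w in zip(features, weights))

# A coordinator combines only the partial scores (not the raw features)
# into the complete model output for each shared sample.
combined = [
    partial_score(a, w_a) + partial_score(b, w_b)
    for a, b in zip(party_a, party_b)
]
# combined[0] == 3.0, combined[1] == 7.0
```

In a real VFL system the combination step would be protected further (e.g. with encryption or secure aggregation), which is among the privacy aspects the survey covers.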
External IDs:dblp:journals/kais/KhanTW25