Abstract: We describe federated reconnaissance, a class of learning problems in which distributed clients learn
new concepts independently and communicate that knowledge efficiently. In particular, we propose
an evaluation framework and methodological baseline for a system in which each client is expected
to learn a growing set of classes and communicate knowledge of those classes efficiently with other
clients, such that, after knowledge merging, the clients should be able to accurately discriminate
between classes in the union of the classes observed by the set of clients. We compare a range
of learning algorithms for this problem and find that prototypical networks are a strong approach
in that they are robust to catastrophic forgetting while incorporating new information efficiently.
Furthermore, we show that the online averaging of prototype vectors is effective for client model
merging and requires only a small amount of communication overhead, memory, and update time
per class with no gradient-based learning or hyperparameter tuning. Additionally, to put our results
in context, we find that a simple prototypical network with four convolutional layers significantly
outperforms complex, state-of-the-art continual learning algorithms, increasing accuracy by over
22% after learning 600 Omniglot classes and by over 33% after learning 20 mini-ImageNet classes
incrementally. These results have important implications for federated reconnaissance and continual
learning more generally by demonstrating that communicating feature vectors is an efficient, robust,
and effective means for distributed, continual learning.
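To make the merging step concrete, the following is a minimal sketch of online, count-weighted averaging of per-class prototype vectors, as the abstract describes. The dict-based prototype store, the function names, and the NumPy implementation are illustrative assumptions on our part, not the paper's reference implementation; embeddings are assumed to come from a shared, frozen prototypical-network encoder.

```python
import numpy as np

def merge_prototypes(protos_a, counts_a, protos_b, counts_b):
    """Merge two clients' per-class prototype stores by online weighted averaging.

    protos_*: dict mapping class label -> prototype vector (np.ndarray),
              each the mean embedding of that client's examples for the class.
    counts_*: dict mapping class label -> number of examples behind the prototype.
    Returns the merged (protos, counts). No gradients or tuning are involved.
    """
    merged_protos = dict(protos_a)
    merged_counts = dict(counts_a)
    for label, proto_b in protos_b.items():
        n_b = counts_b[label]
        if label in merged_protos:
            n_a = merged_counts[label]
            # Count-weighted average: equals the mean over the pooled examples.
            merged_protos[label] = (n_a * merged_protos[label] + n_b * proto_b) / (n_a + n_b)
            merged_counts[label] = n_a + n_b
        else:
            # Class unseen by client A: adopt client B's prototype directly.
            merged_protos[label] = proto_b
            merged_counts[label] = n_b
    return merged_protos, merged_counts

def predict(embedding, protos):
    """Classify an embedded example by its nearest class prototype (Euclidean)."""
    labels = list(protos)
    dists = [np.linalg.norm(embedding - protos[label]) for label in labels]
    return labels[int(np.argmin(dists))]
```

Because the count-weighted average of two prototypes equals the prototype computed over the pooled examples, merging in this sketch is order-independent, and each client need only communicate one vector and one integer per class.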