Abstract: Federated Learning (FL) enables collaborative
model training across distributed clients without sharing raw
data, making it ideal for privacy-sensitive applications. However, FL models often suffer performance degradation due
to distribution shifts between training and deployment. Test-Time Adaptation (TTA) offers a promising solution by allowing
models to adapt using only test samples. Yet existing TTA
methods in FL face challenges such as computational overhead,
privacy risks from feature sharing, and scalability concerns
due to memory constraints. To address these limitations, we
propose Federated Continual Test-Time Adaptation (FedCTTA),
a privacy-preserving and computationally efficient framework
for federated adaptation. Unlike prior methods that rely on
sharing local feature statistics, FedCTTA avoids direct feature
exchange by leveraging similarity-aware aggregation based on
model output distributions over randomly generated noise samples. This approach ensures adaptive knowledge sharing while
preserving data privacy. Furthermore, FedCTTA minimizes the
entropy at each client for continual adaptation, enhancing the
model’s confidence in evolving target distributions. Our method
eliminates the need for server-side training during adaptation and
maintains a constant memory footprint, making it scalable even
as the number of clients or training rounds increases. Extensive
experiments show that FedCTTA surpasses existing methods
across diverse temporal and spatial heterogeneity scenarios.
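The core mechanism described above — clients evaluated on a shared batch of random noise, with aggregation weights derived from the similarity of their output distributions, plus entropy minimization on test samples — can be illustrated with a minimal sketch. This is not the paper's implementation; all function names, the cosine-similarity choice, and the softmax weighting are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def similarity_weights(client_logits):
    """Aggregation weights from output distributions on SHARED random noise.

    client_logits: list of (n_noise, n_classes) arrays — each client's logits
    on the same random noise batch. No raw data or features are exchanged.
    """
    probs = np.stack([softmax(l).ravel() for l in client_logits])
    # cosine similarity between clients' flattened output distributions
    unit = probs / np.linalg.norm(probs, axis=1, keepdims=True)
    sim = unit @ unit.T
    # softmax over rows turns similarities into per-client aggregation weights
    w = np.exp(sim)
    return w / w.sum(axis=1, keepdims=True)

def aggregate(params, weights):
    # params: list of per-client parameter vectors; each row of the result is
    # one client's similarity-weighted (personalized) aggregate
    return weights @ np.stack(params)

def entropy_loss(logits):
    # mean prediction entropy — minimized locally for continual adaptation
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())
```

In this sketch, two clients with identical outputs on the noise batch receive the largest mutual weights, so knowledge flows preferentially between clients facing similar target distributions, while the server only ever sees model outputs on synthetic noise.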