Abstract: We consider federated learning of linearly parameterized nonlinear systems. We establish theoretical guarantees on the effectiveness of federated nonlinear system identification compared to centralized approaches, demonstrating that the convergence rate improves as the number of clients increases. Although the convergence rates in the linear and nonlinear cases differ only by a constant, this constant depends on the feature map ϕ, which can be carefully chosen in the nonlinear setting to increase excitation and improve performance. We experimentally validate our theory in physical settings where client devices are driven by i.i.d. control inputs and by control policies with i.i.d. random perturbations, ensuring non-active exploration. Our experiments use trajectories from nonlinear dynamical systems characterized by real-analytic feature functions, including polynomial and trigonometric components, representative of physical systems such as pendulum and quadrotor dynamics. We analyze the convergence behavior of the proposed method under varying noise levels and data distributions. The results show that federated learning consistently improves the convergence of each individual client as the number of participating clients increases.