Accelerating Automatic Differentiation of Direct Form Digital Filters

Published: 21 Nov 2025 · Last Modified: 22 Nov 2025 · DiffSys 2025 · CC BY 4.0
Keywords: direct form filters, transposed direct form filters, differentiable DSP, state-space models, backpropagation through time, parallel scan, associative scan, CUDA, PyTorch
TL;DR: How to evaluate differentiable filters 1000 times faster in PyTorch.
Abstract: We introduce a general formulation for automatic differentiation through direct form filters, yielding a closed-form backpropagation that includes initial condition gradients. The result is a single expression that can represent both the filter and its gradient computation while supporting parallelism. C++/CUDA implementations in PyTorch achieve at least a 1000x speedup over naive Python implementations and consistently run fastest on the GPU. For the low-order filters commonly used in practice, exact time-domain filtering with analytical gradients outperforms the frequency-domain method in speed.
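For context, here is a minimal sketch (our own illustration, not the authors' code) of the kind of naive Python baseline the abstract benchmarks against: an IIR filter in transposed direct form II, implemented as an explicit sample-by-sample loop whose gradients come from PyTorch autograd, i.e. backpropagation through time.

```python
import torch
import torch.nn.functional as F

def naive_tdf2_filter(x, b, a, zi=None):
    """Transposed direct form II IIR filter via an explicit Python loop.

    Hypothetical baseline for illustration only; not the paper's kernel.
    x: (T,) input signal; b: feedforward coeffs; a: feedback coeffs
    with a[0] assumed normalized to 1. Gradients w.r.t. x, b, a, and
    the initial state zi all flow through autograd.
    """
    n_state = max(len(b), len(a)) - 1
    # Zero-pad both coefficient vectors to a common order.
    b = F.pad(b, (0, n_state + 1 - len(b)))
    a = F.pad(a, (0, n_state + 1 - len(a)))
    z = torch.zeros(n_state, dtype=x.dtype) if zi is None else zi
    y = []
    for n in range(len(x)):
        y_n = b[0] * x[n] + z[0]
        # State update: shift the delay line, then add the new terms.
        z = torch.cat([z[1:], z.new_zeros(1)]) + b[1:] * x[n] - a[1:] * y_n
        y.append(y_n)
    return torch.stack(y), z
```

Autograd must unroll this recurrence over the full sequence, so both the forward pass and backpropagation scale poorly with signal length; the closed-form gradients and parallel-scan formulation described in the abstract are what remove that bottleneck.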