Accelerating Automatic Differentiation of Direct Form Digital Filters
Published in the 1st Workshop on Differentiable Systems and Scientific Machine Learning @ EurIPS, 2025
We introduce a general formulation for automatic differentiation through direct form filters, yielding a closed-form backpropagation that includes initial condition gradients. The result is a single expression that can represent both the filter and its gradient computation while supporting parallelism. C++/CUDA implementations in PyTorch achieve at least a 1000x speedup over naive Python implementations and consistently run fastest on the GPU. For the low-order filters commonly used in practice, exact time-domain filtering with analytical gradients outperforms the frequency-domain method in speed. The source code is available here.
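For context, the naive baseline referenced above is an ordinary Python loop whose gradients come from autograd unrolling the direct form recursion sample by sample. Below is a minimal PyTorch sketch of such a baseline; the function name and coefficient layout are illustrative assumptions, not the paper's implementation:

```python
import torch

def naive_direct_form_filter(x, b, a):
    # Illustrative sketch only: the name and coefficient layout are
    # assumptions, not the paper's actual API.
    # Direct form recursion with zero initial conditions:
    #   y[n] = sum_i b[i] x[n-i] - sum_j a[j+1] y[n-1-j],  with a[0] = 1.
    # `a` holds the denominator coefficients a[1:], normalized by a[0].
    y = []
    for n in range(x.shape[-1]):
        # Feed-forward (FIR) part.
        acc = sum(b[i] * x[..., n - i] for i in range(len(b)) if n - i >= 0)
        # Recursive (IIR) part; autograd unrolls this loop sample by
        # sample, which is what makes the naive baseline slow.
        acc = acc - sum(a[j] * y[n - 1 - j]
                        for j in range(len(a)) if n - 1 - j >= 0)
        y.append(acc)
    return torch.stack(y, dim=-1)

# Gradients with respect to the signal and both coefficient sets flow
# through the unrolled graph.
x = torch.randn(1024, requires_grad=True)
b = torch.tensor([0.2, 0.3], requires_grad=True)
a = torch.tensor([-0.5], requires_grad=True)  # a[0] = 1 is implicit
y = naive_direct_form_filter(x, b, a)
y.sum().backward()
```

The closed-form backpropagation in the paper replaces this sample-by-sample autograd graph with an analytical gradient expression, which is where the reported speedup comes from.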
Recommended citation: Chin-Yun Yu and György Fazekas, "Accelerating Automatic Differentiation of Direct Form Digital Filters", 1st Workshop on Differentiable Systems and Scientific Machine Learning @ EurIPS 2025, December 2025.
Download Paper
