### Speaker

### Description

We provide rigorous theoretical bounds for Anderson acceleration (AA) that allow efficient approximate calculation of the residual, reducing communication time and storage space while maintaining convergence. Specifically, we propose a reduced variant of AA, which consists in projecting the least-squares problem used to compute the Anderson mixing onto a subspace of reduced dimension. We numerically assess the performance of reduced AA on: (i) linear deterministic fixed-point iterations arising from Richardson's scheme for solving linear systems with open-source benchmark matrices and various preconditioners, (ii) non-linear deterministic fixed-point iterations arising from non-linear time-dependent Boltzmann equations, and (iii) non-linear stochastic fixed-point iterations arising from the training of neural networks. The dimension of the subspace onto which the least-squares problem is projected adapts dynamically at each iteration, as prescribed by computable quantities derived from the theoretical error bounds. The results show a reduction in computational time without compromising the final accuracy.
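To make the idea concrete, below is a minimal sketch of Anderson acceleration for a fixed-point iteration, with an optional reduced least-squares step. The use of a random Gaussian sketch for the projection is an assumption made for illustration only; the actual work derives the subspace and its adaptive dimension from the theoretical error bounds rather than from random sketching.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, sketch_dim=None, max_iter=50, tol=1e-10, seed=0):
    """Anderson acceleration of the fixed-point iteration x <- g(x).

    If sketch_dim is given, the least-squares problem for the mixing
    coefficients is solved on a random projection of the residuals
    (an illustrative stand-in for the adaptive subspace reduction
    described in the abstract, which is driven by error bounds).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    X, F = [], []              # histories of iterates and residuals
    for _ in range(max_iter):
        fx = g(x) - x          # residual of the fixed-point map
        if np.linalg.norm(fx) < tol:
            break
        X.append(x.copy()); F.append(fx.copy())
        if len(F) > m + 1:     # keep a window of m + 1 past residuals
            X.pop(0); F.pop(0)
        if len(F) == 1:
            x = x + fx         # plain fixed-point step to start
            continue
        # Difference matrices over the memory window
        dF = np.stack([F[i + 1] - F[i] for i in range(len(F) - 1)], axis=1)
        dX = np.stack([X[i + 1] - X[i] for i in range(len(X) - 1)], axis=1)
        mat, rhs = dF, fx
        if sketch_dim is not None and sketch_dim < len(fx):
            # Reduced variant: project the least-squares problem onto a
            # lower-dimensional subspace before solving it
            S = rng.standard_normal((sketch_dim, len(fx))) / np.sqrt(sketch_dim)
            mat, rhs = S @ dF, S @ fx
        gamma, *_ = np.linalg.lstsq(mat, rhs, rcond=None)
        x = x + fx - (dX + dF) @ gamma   # Anderson mixing update
    return x
```

As a usage example under the same assumptions, for a linear system `A x = b` the Richardson map is `g(x) = x + b - A x`, and the accelerated iteration can be called with or without `sketch_dim` to compare the full and reduced variants.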

This work was supported in part by the Office of Science of the U.S. Department of Energy; by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration; and by the Laboratory Directed Research and Development (LDRD) Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725. This work used resources of the Oak Ridge Leadership Computing Facility and of the Edge Computing program at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
